Storm processes
We will start with Nimbus, the entry-point daemon in Storm. To draw a comparison with Hadoop, Nimbus plays the role of Storm's JobTracker. Nimbus's job is to distribute code to all supervisor daemons in the cluster, so when topology code is submitted, it reaches every physical machine in the cluster. Nimbus is also responsible for assigning tasks to supervisor nodes and for monitoring supervisor failures: if a supervisor keeps failing, Nimbus reassigns its workers' tasks to workers on a different physical machine. The current version of Storm allows only one instance of the Nimbus daemon to run. If you lose Nimbus, the workers will still continue to compute, and supervisors will continue to restart workers as and when they die. Without Nimbus, however, a failed worker's tasks won't be reassigned to a worker on another machine in the cluster.
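To make this flow concrete, the following is a minimal submission sketch in Java. It assumes the Storm 1.x package names (org.apache.storm; older releases used backtype.storm) and uses Storm's bundled TestWordSpout and TestWordCounter test components so the example stays self-contained; the topology name "word-count" and the class name NimbusSubmitExample are illustrative.

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.testing.TestWordCounter;
import org.apache.storm.testing.TestWordSpout;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

public class NimbusSubmitExample {
    public static void main(String[] args) throws Exception {
        // Wire a trivial word-count topology out of Storm's test components.
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("words", new TestWordSpout(), 1);
        builder.setBolt("counter", new TestWordCounter(), 2)
               .fieldsGrouping("words", new Fields("word"));

        Config conf = new Config();
        conf.setNumWorkers(2); // worker JVMs that supervisors will launch

        // submitTopology uploads this jar to Nimbus; Nimbus then distributes
        // the code to the supervisors and assigns the spout/bolt tasks.
        StormSubmitter.submitTopology("word-count", conf, builder.createTopology());
    }
}

Note that Nimbus is only involved at submission and scheduling time: once submitTopology returns and tasks are assigned, killing the Nimbus process leaves the running workers untouched, which is exactly the behavior described above.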
There is no alternative Storm process that will take over if Nimbus dies, and no process...