PySpark is an interactive shell that ships with Spark and provides a Python interface for processing large amounts of data, whether read from a single source or aggregated from multiple sources. It is one of the most widely used shells for interacting with data and has a large community, largely because writing data-processing applications in Python typically takes less effort than writing them in Scala, R, or Java.
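As a minimal sketch of this workflow, the following lines could be typed at the pyspark prompt, where the SparkSession is already available as spark. The file names, the order_id join key, and the region column are hypothetical placeholders, not part of any particular dataset.

# Read two different sources into DataFrames (paths are illustrative).
orders = spark.read.csv("orders.csv", header=True, inferSchema=True)
returns = spark.read.json("returns.json")

# Combine the sources and aggregate using plain DataFrame syntax.
combined = orders.join(returns, on="order_id", how="left")
combined.groupBy("region").count().show()

The same code also works in a standalone script, provided a SparkSession is created first with SparkSession.builder.getOrCreate().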
The pyspark executable can be found in the bin directory of the Spark binary installation. It can be run directly in local (or pseudo-distributed) mode, where all of the resources of a single instance are used. Because PySpark is an application shell for Spark, running it this way does not start a master or worker web UI; only the application web UI is accessible. Refer to the PySpark documentation for further information.
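The sketch below shows one way to reproduce this setup from a standalone Python script rather than the shell: it starts Spark in local mode using all cores of the machine and prints the address of the application web UI (by default on port 4040). The application name is an arbitrary example value.

from pyspark.sql import SparkSession

# Run Spark in local mode, using all available cores of this instance.
spark = (SparkSession.builder
         .master("local[*]")
         .appName("local-mode-example")
         .getOrCreate())

# Only the application web UI exists in this mode; print its address.
print(spark.sparkContext.uiWebUrl)

spark.stop()

When the shell is launched instead (for example with bin/pyspark), the SparkSession and SparkContext are created automatically, so the same uiWebUrl property can be read from the predefined sc object.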