HOW-TO: PySpark and Jupyter Notebooks

At this point, I have a working persistent terminal session and all the necessary components to run PySpark installed. What I need next is a development environment where I can put those pieces together to work in tandem.

For this I will use the Jupyter Notebook development environment. For those not familiar with Jupyter or what it does, the developers have put together fantastic documentation on the project site. The miniconda3 that was previously installed is perfect for this, as Jupyter integrates right into it. Simply invoke [conda install jupyter --yes] to install Jupyter Notebook. This pulls in a lot of Python libraries and will take a while, depending on your internet connection.
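To confirm the install completed, a quick version check is enough; the exact numbers reported will vary with your conda channel:

jupyter --version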

Again, as mentioned, the Jupyter documentation is fantastic. If you follow along with the quickstart guides, you will be up and running in no time. For this guide, let's jump ahead and generate a default configuration. Execute [jupyter notebook --generate-config] on the terminal.


jupyter notebook --generate-config

As the command output shows, the default configuration file is written to a newly created hidden directory, ".jupyter", inside the user's home directory. Inside, there will be only one file -- "jupyter_notebook_config.py". Modify this config file so that the Jupyter Notebook can be accessed by any remote computer on the network.
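To verify where the file landed, list the new hidden directory; the path below assumes the defaults were not changed:

ls ~/.jupyter
jupyter_notebook_config.py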

Insert the following lines of code (starting at line #2):

c = get_config()
# listen on all network interfaces, not just localhost
c.NotebookApp.ip = '*'
# serve on the default notebook port
c.NotebookApp.port = 8888
# the server is headless, so do not try to open a browser
c.NotebookApp.open_browser = False

After inserting the lines above, test the configuration by running [jupyter notebook] on a terminal.

jupyter notebook

Once the terminal output says "Copy/paste this URL into your browser when you connect..", the Jupyter configuration is good to go. There is one problem, however -- when you close the terminal, the Jupyter session dies with it. This is where "screen" comes in. Execute [jupyter notebook] inside a screen session and detach the session, as sketched below.
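A minimal sketch of that workflow, assuming a session named "jupyter" (the name is arbitrary):

screen -S jupyter        # start a named screen session
jupyter notebook         # launch the notebook server inside it
                         # press Ctrl-A then d to detach; the server keeps running
screen -r jupyter        # reattach later to check on the server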

Just so you have an idea of what PySpark can do: I tried sorting 8M rows of a timeseries PySpark dataframe and it took roughly 4 seconds to execute. Try the same with a regular pandas dataframe and it will take minutes to complete. And it doesn't stop there -- there is a lot more it can do.
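To make that concrete, below is a minimal sketch of that kind of sort. The synthetic data, column names, and app name are illustrative assumptions, not the dataset from the original test:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start (or reuse) a Spark session; "sort-demo" is a made-up app name
spark = SparkSession.builder.appName("sort-demo").getOrCreate()

# Build a synthetic 8M-row timeseries dataframe; "ts" and "value" are
# hypothetical column names
df = spark.range(8_000_000).select(
    (F.lit(1_500_000_000) + F.col("id")).alias("ts"),
    F.rand(seed=42).alias("value"),
)

# orderBy is lazy; an action such as show() forces the sort to execute
df.orderBy(F.col("ts").desc()).show(5)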


Having a working PySpark setup on a powerful server, accessible via the web, is one powerful tool in a data scientist's arsenal. Next time, let's heed the warning about the Jupyter Notebook being accessible by everyone -- how? Soon..
