Okay, that title is a bit of a mouthful, so let's crack out the Nerd Jargon Buster-
NGINX – Free web server software which pairs really nicely with Linux web servers. It's pretty similar to Apache, but seems to have more reverse proxy settings baked into it.
Reverse Proxy – A proxy server is simply a middleman between a person requesting a website and the web server that actually serves the content. Because of this middleman, the web server doesn't know anything about the client requesting the resource, making the user anonymous. A reverse proxy is the same idea, just flipped the other way around: clients requesting a resource don't know which server they are actually getting the data from, because the reverse proxy simply relays the resource to them. This is commonly used to speed up sites by adding proxy caching (which cuts down on repeat fetches to the web server) and makes load balancing easy, routing each visitor to whichever server in a communal pool is least busy.
MERN – MongoDB, Express, React and NodeJS, an acronym for the group of technologies used to create Full-Stack Web Apps.
For the past month I have been rewriting some of our internal tools at Jolly IT from a LAMP stack to a MERN stack. I picked NodeJS as our new backend over the old PHP server and adopted GraphQL for our queries instead of exposing REST APIs. I had a tonne of fun recreating this as a MERN app; however, the fun stopped as soon as I realised I was going to run into hosting issues…
The old Back-End was written entirely in PHP, which meant I could previously just FTP any changes back to our server and everything would update smoothly. Unfortunately, one of the downfalls of NodeJS is that it requires its own VPS, and if you're like me and haven't worked much with NGINX before, then you can be a little bit lost. I won't cover everything in detail from start to finish, as this blog is mainly just a couple of pointers and tips I wish I knew last week.
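The configuration discussed in the next paragraph looks roughly like this minimal sketch — the domain name, build path, and port number are all hypothetical placeholders that you should swap for your own values:

```nginx
server {
    listen 80;
    server_name example.com;              # hypothetical domain
    root /var/www/myapp/build;            # hypothetical React build folder

    # 1) Front-End: fall back to index.html for client-side routes
    location / {
        try_files $uri /index.html;
    }

    # 2) Static resources: bundled CSS/JS from the build folder
    location /static/ {
        expires 30d;
        add_header Cache-Control "public";
    }

    # 3) Back-End: relay GraphQL requests to the local NodeJS server
    location /graphql {
        proxy_pass http://127.0.0.1:4000;   # hypothetical Express port
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```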
The above NGINX configuration might look a little intense, but if you break it down into the three locations, you can see that all we are doing here is setting up the routes for our web server. The first location is the Front-End of our application and what visitors will see. The $uri variable is the path the client requested, which NGINX resolves against the root we defined on the second line; if no matching file exists, we fall back to index.html, the HTML file that the ReactDOM library injects our React code into. The second location is used to serve static resources such as CSS and JS files; if you don't add this location, you'll get a bunch of errors in the browser console, as none of our scripts will be found and consequently none will be served to our visitors. Finally, the last location is purely for the Back-End, where our GraphQL endpoint responds to requests from the Front-End. The proxy_pass value is the local address and port where your NodeJS server is running. Make sure you put the correct port number, the one you defined within your Express configuration, and also ensure nothing else is currently using that port, or NGINX will start throwing server errors (typically a 502 Bad Gateway).
At this point, if you have added your config and transferred your Back-End and Front-End files to the server, you should be able to start your Express app over SSH using the build/start command defined inside your Express app's package.json file. This is all well and good; however, we are running the app from the shell session, so if we were to close the terminal or lose our internet connection, the Back-End would immediately drop and no one would be able to use our app. Fortunately for us, there is a really great tool named pm2 ( http://pm2.keymetrics.io/ ) which allows you to run node apps without having to stay logged in. We can create a new pm2-powered app using the following command-
pm2 start npm --name <YourAppName> -- start
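The trailing start in that command tells pm2 which npm script to run, so it needs to exist in your package.json. A minimal sketch, assuming a hypothetical server.js entry file:

```json
{
  "name": "my-express-app",
  "scripts": {
    "start": "node server.js"
  }
}
```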
If you get no errors and see a message declaring the app has been configured successfully, you should be good to go. You can test that pm2 is working by closing your SSH session and seeing whether the Back-End remains online and connected to the Front-End. The command pm2 ls will display a list of all available apps and their current status.
You can manage your apps by using pm2 <ACTION> <ID or App Name>. The most common actions you will probably be using are: stop, start and restart.
Benefits of Using NGINX with NodeJS
You might be reading this and thinking "wait, why would I go through the bother of setting up NGINX? Can't NodeJS act as a web server in its own right?" and you'd be correct. However, what I love about NGINX is that it gives you the freedom to host all of your apps on the same server, or, on the flip side, to split sites across different servers and just use the reverse proxy to relay through to them. After carrying out the configuration above, I was able to host an additional NodeJS app on a separate domain within a matter of minutes. All I had to do was copy the new app to my Ubuntu apps folder, create a new config file (NGINX picks up any .conf files that are included from /etc/nginx, via the include directives in nginx.conf) and then simply run the app from SSH, or set up a new pm2 app if I wanted it to be persistent. If you are running everything on one server like me, then you only need one VPS and therefore one public IP address, which you can point all your domain names towards. Each separate site is defined as a "server" block within your conf file, and NGINX matches each incoming request against the server_name defined at the top of each .conf file before relaying it to the corresponding proxy_pass address.
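As an illustration, hosting a second app on another domain from the same server can be as little as one extra server block. A hedged sketch, where the file path, domain, and port are all placeholders:

```nginx
# /etc/nginx/conf.d/secondapp.conf  (hypothetical path and names)
server {
    listen 80;
    server_name secondapp.example.com;    # NGINX picks this block when the
                                          # request's Host header matches

    location / {
        proxy_pass http://127.0.0.1:5000; # the second NodeJS app's local port
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}
```

Because both domains point at the same public IP, it's only the server_name matching that decides which app answers.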
- You are more than likely going to need an SSL certificate if your app uses any kind of authentication or deals with sensitive data. Let's Encrypt certificates are a perfect free option if you are on a budget, a student, or both in my case. Certbot (https://certbot.eff.org/) makes this process dead easy and has some really intuitive docs. You may need to open TCP port 443 for incoming traffic to allow secure connections, if it isn't open already. You can check your existing firewall rules with the ufw status command.
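For reference, after Certbot's nginx plugin has done its thing, the relevant server block ends up listening on 443 with the certificate paths filled in — roughly like this sketch, where the domain is a placeholder:

```nginx
server {
    listen 443 ssl;
    server_name example.com;    # placeholder domain

    # standard Let's Encrypt certificate locations, written by Certbot
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # ...your existing location blocks stay as they were...
}
```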
- If you have your app hosted on GitHub, then you can simply push updates and pull them down to your server directly over SSH whenever you need to make changes. This brings back the ease of FTP'ing files that I was missing.