October 19, 2016

Install and Configure the ELK Stack on Ubuntu 14.04


The ELK stack is a combination of Elasticsearch, Logstash, and Kibana that is used to monitor logs from a central location. The ELK stack is very useful when you are trying to troubleshoot problems with your servers or applications, because it lets you search and analyze all of your logs in a single place.

The ELK stack consists of the following four components:

  1. Logstash : An open source tool that processes incoming logs and stores them for future use.
  2. Elasticsearch : Used to store all of the logs.
  3. Kibana : Used to search and view the logs that Logstash has indexed, through a web interface.
  4. Filebeat : Installed on each client server to send its logs to Logstash.

In this tutorial, we will install and configure all components of the ELK stack on Ubuntu 14.04.

Requirements

  • A server running Ubuntu 14.04.
  • A non-root user with sudo privileges set up on your server.

Installing Java

Before starting, you will need to install Java 8, because Elasticsearch and Logstash require Java. To install a recent version of Oracle Java 8, you will need to add the Oracle Java PPA to your Ubuntu repositories. You can add this repository with the following command:

sudo add-apt-repository -y ppa:webupd8team/java

Next, update the repository database using the following command:

sudo apt-get update

Once the repository is updated, you can install the latest stable version of Oracle Java 8 using the following command:

sudo apt-get -y install oracle-java8-installer

You can verify the Java installation with the following command:

java -version

You should see the Java version in the following output:

    java version "1.8.0_101"
    Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
    Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)

Installing Elasticsearch

Before starting, you will need to import the Elasticsearch public GPG key into apt.
You can do this with the following command:

wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Then, you will need to add Elastic’s package source list to apt.

To do this, open the sources.list file:

sudo nano /etc/apt/sources.list

Add the following line:

    deb http://packages.elastic.co/elasticsearch/2.x/debian stable main
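Alternatively, instead of editing sources.list directly, you can drop the entry into its own file under /etc/apt/sources.list.d, which mirrors how we add the Kibana repository later in this tutorial:

echo "deb http://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee /etc/apt/sources.list.d/elasticsearch-2.x.list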

Save the file and update the repository with the following command:

sudo apt-get update

Now, install Elasticsearch with the following command:

sudo apt-get -y install elasticsearch

The output will look something like this:

    Get:1 http://packages.elastic.co/elasticsearch/2.x/debian/ stable/main elasticsearch all 2.4.1 [27.3 MB]
    Fetched 27.3 MB in 7min 57s (57.0 kB/s)                                        
    Selecting previously unselected package elasticsearch.
    (Reading database ... 241682 files and directories currently installed.)
    Preparing to unpack .../elasticsearch_2.4.1_all.deb ...
    Unpacking elasticsearch (2.4.1) ...
    Processing triggers for ureadahead (0.100.0-16) ...
    ureadahead will be reprofiled on next reboot
    Setting up elasticsearch (2.4.1) ...
    Installing new version of config file /etc/default/elasticsearch ...
    Installing new version of config file /etc/elasticsearch/elasticsearch.yml ...
    Installing new version of config file /etc/init.d/elasticsearch ...
    Installing new version of config file /usr/lib/systemd/system/elasticsearch.service ...
    Processing triggers for ureadahead (0.100.0-16) ...


Once Elasticsearch is installed, you will need to restrict outside access to the Elasticsearch instance. You can do this by editing the elasticsearch.yml file:

sudo nano /etc/elasticsearch/elasticsearch.yml

Find the network.host line, uncomment it, and set its value to localhost:

        network.host: localhost
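If you prefer a non-interactive edit, a sed one-liner can make the same change (a sketch; it assumes the file still contains the default commented-out network.host line):

sudo sed -i 's/^# *network.host:.*/network.host: localhost/' /etc/elasticsearch/elasticsearch.yml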

Save the file and start the Elasticsearch service:

sudo /etc/init.d/elasticsearch start

Next, enable the Elasticsearch service to start at boot with the following command:

sudo update-rc.d elasticsearch defaults

You should see the following output:

     Adding system startup for /etc/init.d/elasticsearch ...
       /etc/rc0.d/K20elasticsearch -> ../init.d/elasticsearch
       /etc/rc1.d/K20elasticsearch -> ../init.d/elasticsearch
       /etc/rc6.d/K20elasticsearch -> ../init.d/elasticsearch
       /etc/rc2.d/S20elasticsearch -> ../init.d/elasticsearch
       /etc/rc3.d/S20elasticsearch -> ../init.d/elasticsearch
       /etc/rc4.d/S20elasticsearch -> ../init.d/elasticsearch
       /etc/rc5.d/S20elasticsearch -> ../init.d/elasticsearch

Now that Elasticsearch is up and running, it’s time to test it.

You can test Elasticsearch with the following curl command:

curl localhost:9200

You should see output similar to the following (the version numbers will reflect the release you installed):

    {
      "name" : "Giganto",
      "cluster_name" : "elasticsearch",
      "version" : {
        "number" : "2.3.5",
        "build_hash" : "90f439ff60a3c0f497f91663701e64ccd01edbb4",
        "build_timestamp" : "2016-07-27T10:36:52Z",
        "build_snapshot" : false,
        "lucene_version" : "5.5.0"
      },
      "tagline" : "You Know, for Search"
    }
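Beyond this banner, the cluster health API gives a quick go/no-go signal; a green or yellow status is fine for a single-node setup:

curl 'localhost:9200/_cluster/health?pretty'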

Installing Logstash

By default, Logstash is not available in the Ubuntu repositories, so you will need to add the Logstash source list to apt. Open the sources.list file:

sudo nano /etc/apt/sources.list

Add the following line:

    deb http://packages.elastic.co/logstash/2.2/debian stable main

Save the file and update the repository:

sudo apt-get update

Now, install Logstash with the following command:

sudo apt-get install logstash

Output:

    Selecting previously unselected package logstash.
    (Reading database ... 258343 files and directories currently installed.)
    Preparing to unpack .../logstash_1%3a2.2.4-1_all.deb ...
    Unpacking logstash (1:2.2.4-1) ...
    Processing triggers for ureadahead (0.100.0-16) ...
    Setting up logstash (1:2.2.4-1) ...
    Processing triggers for ureadahead (0.100.0-16) ...

Configure Logstash

Once Logstash is installed, you will need to create its configuration files in the /etc/logstash/conf.d directory. The configuration consists of three parts: inputs, filters, and outputs.

Before configuring Logstash, create the directories that will hold the certificate and private key for Logstash:

sudo mkdir -p /etc/pki/tls/certs
sudo mkdir /etc/pki/tls/private

Next, add the IP address of your ELK server to the OpenSSL configuration file:

sudo nano /etc/ssl/openssl.cnf

Find the [ v3_ca ] section and add the following line:

    subjectAltName = IP: 192.168.1.7

Save the file and generate the SSL certificate by running the following commands, where 192.168.1.7 is the IP address of your ELK server:

cd /etc/pki/tls
sudo openssl req -config /etc/ssl/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/filebeat.key -out certs/filebeat.crt

Note that you will need to copy this certificate to every client whose logs you want to send to the ELK server.
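Before distributing the certificate, you can confirm that the IP subject alternative name was embedded correctly:

openssl x509 -in /etc/pki/tls/certs/filebeat.crt -noout -text | grep -A1 "Subject Alternative Name"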

Now, create the Filebeat input configuration file with the following command:

sudo nano /etc/logstash/conf.d/beats-input.conf

Add the following lines:

    input {  
      beats {
        port => 5044
        type => "logs"
        ssl => true
        ssl_certificate => "/etc/pki/tls/certs/filebeat.crt"
        ssl_key => "/etc/pki/tls/private/filebeat.key"
      }
    }

Next, create the Logstash filter configuration file:

sudo nano /etc/logstash/conf.d/syslog-filter.conf

Add the following lines:

    filter {  
      if [type] == "syslog" {
        grok {
          match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
          add_field => [ "received_at", "%{@timestamp}" ]
          add_field => [ "received_from", "%{host}" ]
        }
        syslog_pri { }
        date {
          match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
      }
    }
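To see what this filter produces before wiring it into the live pipeline, you can feed a sample syslog line through the same grok pattern (a sketch; it assumes the Logstash 2.x binary at /opt/logstash/bin/logstash):

    # Parse one sample line and print the extracted fields with rubydebug.
    echo 'Oct 19 05:43:25 myhost sshd[963]: Server listening on :: port 22.' | \
      /opt/logstash/bin/logstash -e 'input { stdin { } }
        filter { grok { match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" } } }
        output { stdout { codec => rubydebug } }'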

Finally, create the Logstash output configuration file:

sudo nano /etc/logstash/conf.d/output.conf

Add the following lines:

    output {  
      elasticsearch {
        hosts => ["localhost:9200"]
      }
      stdout { codec => rubydebug }
    }

Save the file and test your Logstash configuration with the following command:

sudo service logstash configtest

The output will display Configuration OK if there are no errors. Otherwise, check the logstash log to troubleshoot problems.

Next, restart the Logstash service and enable it to start at boot:

sudo /etc/init.d/logstash restart
sudo update-rc.d logstash defaults

Output:

     Adding system startup for /etc/init.d/logstash ...
       /etc/rc0.d/K20logstash -> ../init.d/logstash
       /etc/rc1.d/K20logstash -> ../init.d/logstash
       /etc/rc6.d/K20logstash -> ../init.d/logstash
       /etc/rc2.d/S20logstash -> ../init.d/logstash
       /etc/rc3.d/S20logstash -> ../init.d/logstash
       /etc/rc4.d/S20logstash -> ../init.d/logstash
       /etc/rc5.d/S20logstash -> ../init.d/logstash

Installing Kibana

To install Kibana, you will need to add Elastic’s package source list to apt.

You can create the Kibana source list file with the following command:

echo "deb http://packages.elastic.co/kibana/4.4/debian stable main" | sudo tee -a /etc/apt/sources.list.d/kibana-4.4.x.list

Next, update the apt repository with the following command:

sudo apt-get update

Finally, install Kibana by running the following command:

sudo apt-get -y install kibana

Output:

    Selecting previously unselected package kibana.
    (Reading database ... 241755 files and directories currently installed.)
    Preparing to unpack .../kibana_4.4.2_amd64.deb ...
    Unpacking kibana (4.4.2) ...
    Processing triggers for ureadahead (0.100.0-16) ...
    Setting up kibana (4.4.2) ...
    Processing triggers for ureadahead (0.100.0-16) ...

Once Kibana is installed, you will need to configure it. You can do this by editing its configuration file:

sudo nano /opt/kibana/config/kibana.yml

Find the following lines, uncomment them if necessary, and make sure the host is set to localhost:

    server.port: 5601
    server.host: localhost

Now, start the Kibana service and enable it to start at boot:

sudo /etc/init.d/kibana start
sudo update-rc.d kibana defaults

Output:

     Adding system startup for /etc/init.d/kibana ...
       /etc/rc0.d/K20kibana -> ../init.d/kibana
       /etc/rc1.d/K20kibana -> ../init.d/kibana
       /etc/rc6.d/K20kibana -> ../init.d/kibana
       /etc/rc2.d/S20kibana -> ../init.d/kibana
       /etc/rc3.d/S20kibana -> ../init.d/kibana
       /etc/rc4.d/S20kibana -> ../init.d/kibana
       /etc/rc5.d/S20kibana -> ../init.d/kibana

You can verify whether Kibana is running with the following command:

sudo netstat -pltn

Output:

    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp        0      0 0.0.0.0:10000           0.0.0.0:*               LISTEN      2509/perl       
    tcp        0      0 0.0.0.0:21201           0.0.0.0:*               LISTEN      834/vsftpd      
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1056/sshd       
    tcp        0      0 127.0.0.1:51101         0.0.0.0:*               LISTEN      2685/irssi      
    tcp        0      0 127.0.0.1:51102         0.0.0.0:*               LISTEN      2614/rtorrent   
    tcp        0      0 0.0.0.0:51103           0.0.0.0:*               LISTEN      2614/rtorrent   
    tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      8183/node       
    tcp6       0      0 ::1:9200                :::*                    LISTEN      6314/java       
    tcp6       0      0 127.0.0.1:9200          :::*                    LISTEN      6314/java       
    tcp6       0      0 :::38192                :::*                    LISTEN      1053/java       
    tcp6       0      0 :::5044                 :::*                    LISTEN      7545/java       
    tcp6       0      0 ::1:9300                :::*                    LISTEN      6314/java       
    tcp6       0      0 127.0.0.1:9300          :::*                    LISTEN      6314/java       
    tcp6       0      0 :::22                   :::*                    LISTEN      1056/sshd       
    tcp6       0      0 :::2181                 :::*                    LISTEN      1053/java       
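For a more targeted check than scanning the netstat output, you can probe Kibana’s status page (Kibana 4 serves one at /status) and look for an HTTP 200 response code:

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5601/status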

Next, you will need to download the sample Kibana dashboards and Beats index patterns. You can download the sample dashboards with the following command:

curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.1.0.zip

Once the download is complete, unzip the downloaded file with the following command:

unzip beats-dashboards-1.1.0.zip

Now, load the sample dashboards, visualizations, and Beats index patterns into Elasticsearch by running the following commands:

cd beats-dashboards-1.1.0
./load.sh

You will find the following index patterns in the Kibana dashboard:

    packetbeat-*
    topbeat-*
    filebeat-*
    winlogbeat-*

Here, we will use only Filebeat to forward logs to Elasticsearch, so we will load the Filebeat index template into Elasticsearch.

To do this, download the Filebeat index template:

curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json

Output:

      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100   991  100   991    0     0    504      0  0:00:01  0:00:01 --:--:--   504

Now load the template by running the following command:

curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json

If the template loaded properly, you should see the following output:

    {
      "acknowledged" : true
    }
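You can also read the template back to confirm that Elasticsearch stored it:

curl -XGET 'http://localhost:9200/_template/filebeat?pretty'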

Installing Nginx

Because Kibana is configured to listen only on localhost, you will need to install Nginx and set it up as a reverse proxy to allow external access to the Kibana server.

To install Nginx, run the following command:

sudo apt-get install nginx

You will also need to install apache2-utils for the htpasswd utility:

sudo apt-get install apache2-utils

Now, create an admin user for the Kibana web interface using the htpasswd utility:

sudo htpasswd -c /etc/nginx/htpasswd.users admin

Enter a password of your choice; you will need it to access the Kibana web interface.

Next, open Nginx default configuration file:

sudo nano /etc/nginx/sites-available/default

Delete all the lines and add the following lines:

    server {
        listen 80;
        server_name 192.168.1.7;

        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/htpasswd.users;

        location / {
            proxy_pass http://localhost:5601;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }

Save and exit the file. Nginx will now direct your server’s traffic to Kibana, which is listening on localhost:5601.
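Before restarting, it’s a good idea to validate the configuration syntax; nginx -t parses the files and reports any errors:

sudo nginx -t

If the test passes, restart the Nginx service and enable it to start at boot: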

sudo /etc/init.d/nginx restart
sudo update-rc.d nginx defaults

Output:

     Adding system startup for /etc/init.d/nginx ...
       /etc/rc0.d/K20nginx -> ../init.d/nginx
       /etc/rc1.d/K20nginx -> ../init.d/nginx
       /etc/rc6.d/K20nginx -> ../init.d/nginx
       /etc/rc2.d/S20nginx -> ../init.d/nginx
       /etc/rc3.d/S20nginx -> ../init.d/nginx
       /etc/rc4.d/S20nginx -> ../init.d/nginx
       /etc/rc5.d/S20nginx -> ../init.d/nginx

Kibana is now accessible via the public IP address of your ELK server. The ELK server is ready to receive Filebeat data, so it’s time to set up Filebeat on each client server.

Set Up Filebeat on the Client Server

You will need to set up Filebeat on each Ubuntu server whose logs you want to send to Logstash on your ELK server.

Before setting up Filebeat on the client server, you will need to copy the SSL certificate from the ELK server to your client server.

On the ELK server, run the following command to copy the SSL certificate to the client server:

scp /etc/pki/tls/certs/filebeat.crt user@client-server-ip:/tmp/

Now, on the client server, copy the ELK server’s SSL certificate into the appropriate location.

First, create the directory structure for the SSL certificate:

sudo mkdir -p /etc/pki/tls/certs/

Then, copy the certificate into it:

sudo cp /tmp/filebeat.crt /etc/pki/tls/certs/
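A quick way to make sure the copy was not corrupted in transit is to compare checksums on the ELK server and the client; the two values should match:

sha256sum /etc/pki/tls/certs/filebeat.crt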

Now, it’s time to install the Filebeat package on the client server.

To do this, you will need to create a source list for Filebeat. You can do this with the following command:

echo "deb https://packages.elastic.co/beats/apt stable main" | sudo tee -a /etc/apt/sources.list.d/beats.list

Then, add the GPG key with the following command:

wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Next, update the repository with the following command:

sudo apt-get update

Finally, install Filebeat by running the following command:

sudo apt-get install filebeat

Output:

    Reading package lists... Done
    Building dependency tree        
    Reading state information... Done
    The following NEW packages will be installed:
      filebeat
    0 upgraded, 1 newly installed, 0 to remove and 781 not upgraded.
    Need to get 4,255 kB of archives.
    After this operation, 12.6 MB of additional disk space will be used.
    WARNING: The following packages cannot be authenticated!
      filebeat
    Install these packages without verification? [y/N] y
    Get:1 https://packages.elastic.co/beats/apt/ stable/main filebeat amd64 1.3.1 [4,255 kB]
    Fetched 4,255 kB in 34s (122 kB/s)                                             
    Selecting previously unselected package filebeat.
    (Reading database ... 241746 files and directories currently installed.)
    Preparing to unpack .../filebeat_1.3.1_amd64.deb ...
    Unpacking filebeat (1.3.1) ...
    Processing triggers for ureadahead (0.100.0-16) ...
    ureadahead will be reprofiled on next reboot
    Setting up filebeat (1.3.1) ...
    Processing triggers for ureadahead (0.100.0-16) ...

Once Filebeat is installed, start the Filebeat service and enable it to start at boot:

sudo /etc/init.d/filebeat start
sudo update-rc.d filebeat defaults

Output:

     Adding system startup for /etc/init.d/filebeat ...
       /etc/rc0.d/K20filebeat -> ../init.d/filebeat
       /etc/rc1.d/K20filebeat -> ../init.d/filebeat
       /etc/rc6.d/K20filebeat -> ../init.d/filebeat
       /etc/rc2.d/S20filebeat -> ../init.d/filebeat
       /etc/rc3.d/S20filebeat -> ../init.d/filebeat
       /etc/rc4.d/S20filebeat -> ../init.d/filebeat
       /etc/rc5.d/S20filebeat -> ../init.d/filebeat

Next, you will need to configure Filebeat to connect to Logstash on your ELK server.
You can do this by editing the Filebeat configuration file located at /etc/filebeat/filebeat.yml.

sudo nano /etc/filebeat/filebeat.yml

Change the file as shown below:

    filebeat:
      prospectors:
        -
          paths:
            - /var/log/auth.log
            - /var/log/syslog
          #  - /var/log/*.log

          input_type: log

          document_type: syslog

      registry_file: /var/lib/filebeat/registry

    output:
      logstash:
        hosts: ["192.168.1.7:5044"]
        bulk_max_size: 1024

        tls:
          certificate_authorities: ["/etc/pki/tls/certs/filebeat.crt"]

    shipper:

    logging:
      files:
        rotateeverybytes: 10485760 # = 10MB

Save the file and restart the Filebeat service:

sudo /etc/init.d/filebeat restart
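If the service fails to start, Filebeat 1.x supports a -configtest flag that parses the configuration without starting the shipper, which makes YAML mistakes easy to spot:

sudo filebeat -c /etc/filebeat/filebeat.yml -configtest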

Filebeat is now sending syslog and auth.log entries to Logstash on your ELK server.

Once everything is in place, verify that Filebeat on your client server is actually shipping logs to Logstash on your ELK server.

To do this, run the following command on your ELK server:

curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'

You should see output similar to the following:

    {
          "_index" : "filebeat-2016.01.29",
          "_type" : "log",
          "_id" : "AVKO98yuaHvsHQLa53HE",
          "_score" : 1.0,
          "_source":{"message":"Feb  3 14:34:00 rails sshd[963]: Server listening on :: port 22.","@version":"1","@timestamp":"2016-10-18T19:59:09.145Z","beat":{"hostname":"topbeat-u-03","name":"topbeat-u-03"},"count":1,"fields":null,"input_type":"log","offset":70,"source":"/var/log/auth.log","type":"log","host":"topbeat-u-03"}
        }

You can also test Filebeat by running the following command on the client server:

sudo filebeat -c /etc/filebeat/filebeat.yml -e -v

You should see the following output:

    2016/10/19 05:43:25.068025 geolite.go:24: INFO GeoIP disabled: No paths were set under output.geoip.paths
    2016/10/19 05:43:25.069519 logstash.go:106: INFO Max Retries set to: 3
    2016/10/19 05:43:25.355674 outputs.go:126: INFO Activated logstash as output plugin.
    2016/10/19 05:43:25.355863 publish.go:288: INFO Publisher name: Vyom-PC
    2016/10/19 05:43:25.356329 async.go:78: INFO Flush Interval set to: 1s
    2016/10/19 05:43:25.356381 async.go:84: INFO Max Bulk Size set to: 1024
    2016/10/19 05:43:25.356460 beat.go:168: INFO Init Beat: filebeat; Version: 1.3.1
    2016/10/19 05:43:25.357301 beat.go:194: INFO filebeat sucessfully setup. Start running.
    2016/10/19 05:43:25.357445 registrar.go:68: INFO Registry file set to: /var/lib/filebeat/registry
    2016/10/19 05:43:25.357526 registrar.go:80: INFO Loading registrar data from /var/lib/filebeat/registry
    2016/10/19 05:43:25.358161 prospector.go:133: INFO Set ignore_older duration to 0s
    2016/10/19 05:43:25.358213 prospector.go:133: INFO Set close_older duration to 1h0m0s
    2016/10/19 05:43:25.358271 prospector.go:133: INFO Set scan_frequency duration to 10s
    2016/10/19 05:43:25.358304 prospector.go:93: INFO Input type set to: log
    2016/10/19 05:43:25.358337 prospector.go:133: INFO Set backoff duration to 1s
    2016/10/19 05:43:25.358370 prospector.go:133: INFO Set max_backoff duration to 10s
    2016/10/19 05:43:25.358389 spooler.go:77: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
    2016/10/19 05:43:25.358406 prospector.go:113: INFO force_close_file is disabled
    2016/10/19 05:43:25.358507 prospector.go:143: INFO Starting prospector of type: log
    2016/10/19 05:43:25.358813 log.go:115: INFO Harvester started for file: /var/log/syslog
    2016/10/19 05:43:25.358952 crawler.go:78: INFO All prospectors initialised with 21 states to persist
    2016/10/19 05:43:25.359048 registrar.go:87: INFO Starting Registrar
    2016/10/19 05:43:25.358845 log.go:115: INFO Harvester started for file: /var/log/auth.log
    2016/10/19 05:43:25.359304 publish.go:88: INFO Start sending events to output
    2016/10/19 05:44:34.245076 single.go:161: INFO backoff retry: 2s
    2016/10/19 05:46:24.333060 publish.go:104: INFO Events sent: 2048
    2016/10/19 05:46:24.335025 registrar.go:162: INFO Registry file updated. 21 states written.

Allow ELK Through Your Firewall

Next, you will need to configure your firewall to allow traffic on the following ports.
You can do this by running the following commands:

sudo iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 5601 -j ACCEPT
sudo iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 9200 -j ACCEPT
sudo iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 5044 -j ACCEPT
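You can confirm that the rules are in place by listing the INPUT chain:

sudo iptables -L INPUT -n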

Now, save the iptables rules so they persist across reboots. Ubuntu 14.04 does not ship an iptables service, so install the iptables-persistent package, which offers to save the current rules during installation:

sudo apt-get install iptables-persistent

If you change the rules later, you can save them again with the following command:

sudo dpkg-reconfigure iptables-persistent

Access the Kibana Web Interface

When everything is up and running, it’s time to access the Kibana web interface.

On the client computer, open your web browser, go to http://your-elk-server-ip, and enter the admin credentials that you created earlier. You will then be taken to the Kibana welcome page.


Then, click filebeat-* in the top left sidebar to view the Filebeat logs arriving from your client servers:


Conclusion

You have now successfully configured your ELK server. You can now easily install the Filebeat client on any number of client servers and send their logs to the ELK server.
Feel free to ask questions if you have any doubts.

