Arabic NLP
Sentiment Analysis for Arabic Comments




Done by Mehdi CHEBBAH



In what follows we will try to build a model that classifies comments (in Arabic) on a given product into three classes: positive, negative, and mixed.

To perform the classification we will use the Naïve Bayes algorithm, a probabilistic classifier based on Bayes' theorem. This method is among the (classic) methods most used for sentiment analysis and, more generally, natural language processing.

Working environment


Anaconda is a Python distribution offering many features. For example, it makes it possible to install libraries and use them in your programs, and it also bundles software that helps developers set up a complete development environment quickly.


Spyder (named Pydee in its first versions) is a development environment for Python. Free (MIT license) and cross-platform, it integrates many libraries for scientific use: Matplotlib, NumPy, SciPy, and IPython.


Scikit-learn is a free Python library for machine learning. It is developed by many contributors, especially in the academic world by French higher education and research institutes like Inria and Télécom Paris. It includes functions for estimating random forests, logistic regressions, classification algorithms, and support vector machines. It is designed to harmonize with other free Python libraries, notably NumPy and SciPy.


Pandas is a library written for the Python programming language for data manipulation and analysis. In particular, it offers data structures and operations for manipulating numerical tables and time series. Pandas is a free software under BSD license.


Building the model

Phase I: Data-set preparation

The data-set used in this analysis is available here.

It consists of 100K (99,999) reviews of different products; the dataset combines reviews of hotels, books, movies, products, and some airlines.

It has three classes (mixed, negative, and positive). Most labels were mapped from the reviewers' ratings: a rating of 3 is mixed, above 3 is positive, and below 3 is negative. Each row contains a label and the review text separated by a tab (TSV format).

The text (review) was cleaned by removing Arabic numerals and non-Arabic characters. There are no duplicate reviews in the dataset.

First, we want to import the data-set:
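A minimal sketch of the import step, assuming a tab-separated file with a label column and a text column (the in-memory sample below stands in for the real file; the column names are assumptions):

```python
import io

import pandas as pd

# Hypothetical sample standing in for the real TSV file (label<TAB>review).
sample_tsv = (
    "Positive\tمنتج رائع أنصح به\n"
    "Negative\tخدمة سيئة جدا\n"
    "Mixed\tالمنتج جيد لكن التوصيل بطيء\n"
)

# With the real file, pass its path instead of the StringIO buffer.
dataset = pd.read_csv(
    io.StringIO(sample_tsv),
    sep="\t",
    header=None,
    names=["label", "text"],
)
print(dataset.shape)  # (3, 2)
```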

For the analysis to be of better quality, we need to do another round of data cleaning, where we eliminate the stop words that would otherwise skew our analysis (or degrade the results).

A stop word is an insignificant word in a text. It is so common that it is useless to include it in the analysis.

Examples of stop words: أنا, كان, منذ, حتى, غير, و

To remove these words we will use a list of Arabic stop words available here.
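A hedged sketch of the filtering step, assuming simple whitespace tokenization; only a small excerpt of the stop-word list is shown here, the real code would load the full list from the file mentioned above:

```python
# A handful of Arabic stop words for illustration only.
stop_words = {"أنا", "كان", "منذ", "حتى", "غير", "و"}

def remove_stop_words(text):
    """Keep only the tokens that are not stop words (whitespace split)."""
    return " ".join(word for word in text.split() if word not in stop_words)

print(remove_stop_words("أنا كان معجبا بهذا المنتج"))  # "معجبا بهذا المنتج"
```

In the real pipeline this function would be applied to the whole text column, e.g. with `dataset["text"].apply(remove_stop_words)`.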

Then we will split the data-set into Features (X) and Labels (y):
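The split itself might look like this (the column names `label` and `text` are assumptions based on the TSV layout described above):

```python
import pandas as pd

# Toy stand-in for the cleaned dataset loaded earlier.
dataset = pd.DataFrame({
    "label": ["Positive", "Negative", "Mixed"],
    "text": ["منتج رائع", "خدمة سيئة", "جيد لكن التوصيل بطيء"],
})

X = dataset["text"]   # features: the review text
y = dataset["label"]  # labels: Mixed / Negative / Positive
```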

Then we split our data-set into training and test sets.
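A sketch using scikit-learn's `train_test_split`; the 80/20 ratio and the random seed are assumptions, not values stated in the original:

```python
from sklearn.model_selection import train_test_split

# Toy features and labels standing in for the real X and y.
X = ["منتج رائع", "خدمة سيئة", "جيد جدا", "سيء جدا", "ممتاز", "رديء"]
y = ["Positive", "Negative", "Positive", "Negative", "Positive", "Negative"]

# Hold out 20% of the data for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
print(len(X_train), len(X_test))  # 4 2
```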


Phase II: Training

Now after the preparation of the data-set we can easily build our model by running the following code:
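A minimal training sketch, assuming bag-of-words counts capped at 5,000 features (the figure mentioned at the end of this article) fed to scikit-learn's multinomial Naïve Bayes; the toy corpus below is an illustration only:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy training data standing in for the real train split.
X_train = ["منتج رائع", "خدمة سيئة", "ممتاز جدا", "سيء جدا"]
y_train = ["Positive", "Negative", "Positive", "Negative"]

# Turn each review into a vector of word counts (at most 5000 features).
vectorizer = CountVectorizer(max_features=5000)
X_train_counts = vectorizer.fit_transform(X_train)

# Fit the multinomial Naïve Bayes classifier on the counts.
classifier = MultinomialNB()
classifier.fit(X_train_counts, y_train)
```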


We can try a different Naïve Bayes variant, for example the Gaussian one, but in practice the variant that gives the best accuracy in our case is the multinomial one.
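For comparison, a sketch of scikit-learn's Gaussian variant; note that `GaussianNB` does not accept sparse matrices, so the count matrix must be densified first (on word-count data it usually underperforms `MultinomialNB`):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import GaussianNB

# Tiny illustrative corpus.
X_train = ["منتج رائع", "خدمة سيئة"]
y_train = ["Positive", "Negative"]

vectorizer = CountVectorizer()
# .toarray() densifies the sparse count matrix, as GaussianNB requires.
X_dense = vectorizer.fit_transform(X_train).toarray()

gaussian = GaussianNB()
gaussian.fit(X_dense, y_train)
```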

Phase III: Validation

We can test our model on the held-out test-set by running the following code:
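A self-contained sketch of the prediction step; toy data stands in for the real train/test split, and the key point is that the test texts go through the *same fitted* vectorizer (via `transform`, not `fit_transform`):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy stand-ins for the real splits.
X_train = ["منتج رائع", "خدمة سيئة", "ممتاز جدا", "سيء للغاية"]
y_train = ["Positive", "Negative", "Positive", "Negative"]
X_test = ["منتج ممتاز", "خدمة سيئة للغاية"]

vectorizer = CountVectorizer(max_features=5000)
classifier = MultinomialNB().fit(vectorizer.fit_transform(X_train), y_train)

# Predict labels for the held-out texts.
y_pred = classifier.predict(vectorizer.transform(X_test))
print(y_pred)  # ['Positive' 'Negative']
```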

Then we can calculate the different measures of model performance for example:

The confusion matrix
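A sketch with `sklearn.metrics.confusion_matrix`; the label vectors below are illustrative, in the real run `y_test` and `y_pred` come from the previous step (rows are true classes, columns are predictions):

```python
from sklearn.metrics import confusion_matrix

# Illustrative true and predicted labels.
y_test = ["Positive", "Negative", "Mixed", "Positive"]
y_pred = ["Positive", "Negative", "Positive", "Positive"]

print(confusion_matrix(y_test, y_pred, labels=["Mixed", "Negative", "Positive"]))
# [[0 0 1]
#  [0 1 0]
#  [0 0 2]]
```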


The accuracy
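A sketch with `sklearn.metrics.accuracy_score`, using the same illustrative label vectors as for the confusion matrix:

```python
from sklearn.metrics import accuracy_score

# Illustrative true and predicted labels (3 of 4 correct).
y_test = ["Positive", "Negative", "Mixed", "Positive"]
y_pred = ["Positive", "Negative", "Positive", "Positive"]

print(accuracy_score(y_test, y_pred))  # 0.75
```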


Phase IV: Trial

You can also test the model manually on real comments for example:
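A self-contained sketch of a manual trial: train on a toy corpus, then classify a hand-written comment by pushing it through the same fitted vectorizer:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy training corpus standing in for the real one.
X_train = ["منتج رائع أنصح به", "خدمة سيئة جدا", "تجربة ممتازة", "منتج رديء"]
y_train = ["Positive", "Negative", "Positive", "Negative"]

vectorizer = CountVectorizer(max_features=5000)
classifier = MultinomialNB().fit(vectorizer.fit_transform(X_train), y_train)

# A new, hand-written comment to classify.
comment = "هذا المنتج رائع"
print(classifier.predict(vectorizer.transform([comment]))[0])  # Positive
```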



We can further increase the accuracy of our model by eliminating more stop words and by increasing the number of features used (we used 5,000 in this model).

For this model to be really useful, we would need to invest in dialectal Arabic, since most comments on social networks are written in dialect, which limits the model's applicability.

We can also try to build a model that analyzes the sentiment of comments containing emojis, since these are widely used in comments.
