Data Science Interview Questions with Answers for Freshers

There is a lot that a data scientist should know, so here is a list of data science questions I have encountered across several interviews. If you are just starting out, this is a good place to begin; if you have been in the field for a while, some of it may be familiar, but you will still find plenty here. I'll start with the basic data science interview questions and build up from there, so let's get started.

1. What is the difference between supervised and unsupervised machine learning?

Supervised Machine Learning – A supervised model requires labelled data for training.

Unsupervised Machine Learning – An unsupervised model does not require labelled data for training.

2. Explain how a ROC curve works?

The ROC curve is a graphical representation of the contrast between the true positive rate and the false positive rate at various classification thresholds. It is often used to visualise the trade-off between sensitivity (true positive rate) and the false positive rate.
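A rough sketch using scikit-learn, with an arbitrary synthetic data set and classifier standing in for whatever model is being evaluated:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]   # probability of the positive class

# fpr/tpr pairs at every threshold; the AUC summarises the whole curve
fpr, tpr, thresholds = roc_curve(y_test, scores)
print("AUC:", roc_auc_score(y_test, scores))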

3. What is selection Bias?

Selection bias occurs when the sample obtained is not representative of the population intended to be analysed.

4. Explain the SVM machine learning algorithm in detail

SVM stands for support vector machine, a supervised machine learning algorithm that can be used for both regression and classification. If your training data set has n features, SVM tries to plot each example as a point in n-dimensional space, with the value of each feature being the value of a particular coordinate. SVM then uses hyperplanes to separate the different classes based on the provided kernel function.
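A minimal scikit-learn sketch; the Iris data set and the hyperparameter values are arbitrary choices for illustration:

from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
clf = SVC(kernel="rbf", C=1.0)   # the kernel and C are the main knobs to tune
clf.fit(X, y)
print(clf.predict(X[:5]))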

5. What are the support vectors in SVM?

In a plot of an SVM decision boundary, thin lines mark the distance from the separating hyperplane to the closest data points, which are called support vectors (the dark data points). The distance between the two thin lines is called the margin.

6. What are the functions of the different kernels in SVM?

There are four types of kernels in SVM.

  • Linear kernel
  • Polynomial kernel
  • Radial basis function (RBF) kernel
  • Sigmoid kernel
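Each of these corresponds directly to the kernel parameter of scikit-learn's SVC. A quick comparison sketch on a toy data set (the data set and cross-validation setup are arbitrary):

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    score = cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean()
    print(f"{kernel}: {score:.3f}")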

7. Explain in detail about the Decision Tree algorithm

A decision tree is one of the most popular supervised machine learning algorithms, used primarily for regression and classification. The algorithm breaks the data set down into smaller and smaller subsets while incrementally growing the associated decision tree. The result is a tree with decision nodes and leaf nodes. A decision tree can handle both categorical and numerical data.
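A short illustrative sketch with scikit-learn; the data set and the max_depth value are arbitrary:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3)   # limiting depth curbs overfitting
tree.fit(X, y)
print(tree.predict(X[:5]))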

8. What is deep learning?

Deep learning is a subfield of machine learning inspired by the structure and function of the brain, by way of artificial neural networks. Machine learning covers many algorithms, such as linear regression, SVMs and neural networks, and deep learning is essentially an extension of neural networks. In a classical neural network we use a small number of hidden layers, whereas deep learning models use a large number of hidden layers to better capture the input-output relationship.
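One rough way to see the contrast, using scikit-learn's MLPClassifier with few versus many hidden layers (the layer sizes and data are made up; deep learning in practice is usually done with frameworks such as TensorFlow or PyTorch):

from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, random_state=0)

shallow = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
deep = MLPClassifier(hidden_layer_sizes=(64, 64, 64, 64), max_iter=2000, random_state=0)
print(shallow.fit(X, y).score(X, y), deep.fit(X, y).score(X, y))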

9. What is reinforcement learning?

Reinforcement learning is learning what to do and how to map situations to actions. The goal is to maximise a numerical reward signal. The learner is not told which action to take, but must instead discover which actions yield the maximum reward. Reinforcement learning is inspired by how human beings learn: it is based on a reward/penalty mechanism.

10. What is selection bias?

Selection bias is the bias introduced by the selection of individuals, groups or data for analysis in such a way that proper randomisation is not achieved, with the result that the sample obtained is not representative of the population intended to be analysed. It is sometimes referred to as the selection effect. The phrase "selection bias" most often refers to the distortion of statistical analysis resulting from the method of collecting samples. If the selection bias is not taken into account, then some conclusions of the study may not be accurate.

11. Explain what regularisation is and why it is useful

Regularisation is the process of adding a penalty term to a model to induce smoothness and prevent overfitting. This is most often done by adding a constant multiple of the weight vector's norm to the loss function; the penalty is typically the L1 norm (lasso) or the L2 norm (ridge). The model is then fit by minimising this regularised loss function on the training set.
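One way to see the effect, assuming scikit-learn's Ridge (L2) and Lasso (L1); the data and alpha values are arbitrary:

from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, LinearRegression, Ridge

X, y = make_regression(n_samples=100, n_features=20, noise=10.0, random_state=0)

# alpha controls the strength of the penalty; Lasso tends to zero out weights
for model in (LinearRegression(), Ridge(alpha=1.0), Lasso(alpha=1.0)):
    model.fit(X, y)
    print(type(model).__name__, "non-zero coefficients:", (model.coef_ != 0).sum())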

12. What is TF/IDF vectorization?

TF-IDF, short for term frequency-inverse document frequency, is a numerical statistic intended to reflect how important a word is to a document in a collection or corpus. It is often used as a weighting factor in information retrieval and text mining. The TF-IDF value increases proportionally with the number of times a word appears in a document, but is offset by the frequency of the word in the corpus, which helps adjust for the fact that some words appear more frequently in general.
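A minimal sketch with scikit-learn's TfidfVectorizer on a made-up toy corpus:

from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
]
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(corpus)   # sparse document-term matrix
print(vectorizer.get_feature_names_out())
print(tfidf.toarray().round(2))            # common words like "the" get low weights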

13. What are Recommender Systems?

Recommender systems are a subclass of information filtering systems designed to predict the preferences or ratings that a user would give to a product. They are widely used for movies, news, research articles, products, social tags, music, and more.

14. What is the difference between regression and classification of ML methods?

Regression and classification both come under the supervised machine learning algorithms: in each case we need labelled data to train our algorithms, we must provide the correct labels during training, and the algorithm tries to learn the mapping from input to output. The difference lies in the labels. If the labels are discrete values, it is a classification problem, e.g. A, B, etc. If the labels are continuous values, it is a regression problem, e.g. 1.23, 1.333, etc.

15. What is ‘Naive’ in a Naive Bayes?

The Naive Bayes Algorithm is based on the Bayes Theorem. Bayes’ theorem describes the probability of an event, based on prior knowledge of conditions that might be related to the event.

What is Naive?

The algorithm is 'naive' because it assumes that all features are independent of one another given the class, an assumption that may or may not hold in real data.
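A minimal Gaussian Naive Bayes sketch with scikit-learn; the wine data set is an arbitrary choice:

from sklearn.datasets import load_wine
from sklearn.naive_bayes import GaussianNB

X, y = load_wine(return_X_y=True)
nb = GaussianNB()   # treats every feature as independent given the class
nb.fit(X, y)
print(nb.predict(X[:3]))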

16. Why do we generally use the Softmax nonlinearity as the last operation in a network?

It is because it takes in a vector of real numbers and returns a probability distribution. Its definition is as follows. Let x be a vector of real numbers (positive, negative, whatever; there are no constraints). Then the ith component of Softmax(x) is:

Softmax(x)_i = exp(x_i) / Σ_j exp(x_j)

It should be clear that the output is a probability distribution: each element is non-negative and the sum over all components is 1.
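A small NumPy sketch of the definition above; subtracting the maximum is a standard numerical-stability trick that does not change the result:

import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))   # shift for numerical stability
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, -3.0]))
print(p, p.sum())               # non-negative components that sum to 1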

17. You are given a data set consisting of variables with more than 30 per cent missing values. How will you deal with them?

The following are ways to handle missing data values:

If the data set is large, we can simply remove the rows with missing values. It is the quickest way, and the model is then built on the remaining data.

For smaller data sets, we can substitute missing values with the mean of the rest of the data using a pandas DataFrame in Python. There are different ways to do this, for example computing df.mean() and filling with df.fillna(df.mean()).
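A minimal sketch of both approaches on a made-up DataFrame (the column names and values are invented for illustration):

import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, np.nan, 31, 29],
                   "salary": [50000, 60000, np.nan, 55000]})

dropped = df.dropna()            # option 1: remove rows with missing values
imputed = df.fillna(df.mean())   # option 2: impute with each column's mean
print(imputed)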

18. For the given points, how will you calculate the Euclidean distance in Python?

plot1 = [1,3]
plot2 = [2,5]

The Euclidean distance can be calculated as follows:

from math import sqrt

euclidean_distance = sqrt((plot1[0] - plot2[0])**2 + (plot1[1] - plot2[1])**2)
# sqrt((1 - 2)**2 + (3 - 5)**2) = sqrt(5) ≈ 2.236

19. What are recommender systems?

A recommender system predicts what a user would rate a specific product based on their preferences. It can be split into two different areas:

Collaborative filtering – As an example, Last.fm recommends tracks that other users with similar interests play often. This is also commonly seen on Amazon after making a purchase; customers may notice the following message accompanied by product recommendations: “Users who bought this also bought…”

Content-based filtering – As an example, Pandora uses the properties of a song to recommend music with similar properties. Here, we look at the content itself instead of at who else is listening.
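A toy user-based collaborative-filtering sketch; the ratings matrix and the use of cosine similarity are invented purely for illustration:

import numpy as np

ratings = np.array([    # rows = users, columns = items, 0 = not yet rated
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

target = 0
sims = [cosine(ratings[target], ratings[u]) for u in range(len(ratings))]
sims[target] = -1.0   # exclude the user themselves
neighbour = int(np.argmax(sims))

# recommend items the most similar user rated but the target has not
unseen = np.where((ratings[target] == 0) & (ratings[neighbour] > 0))[0]
print("most similar user:", neighbour, "recommend items:", unseen)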

20. How do you find RMSE and MSE in a linear regression model?

RMSE and MSE are two of the most common measures of accuracy for a linear regression model.

MSE is the Mean Squared Error: the average of the squared differences between the predicted and actual values, MSE = (1/n) Σ (y_i − ŷ_i)².

RMSE is the Root Mean Squared Error: the square root of the MSE, which puts the error back in the same units as the target variable.
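A minimal sketch with scikit-learn; the true and predicted values are made up:

import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.8, 5.4, 2.9, 6.6])

mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)   # RMSE is simply the square root of MSE
print("MSE:", mse, "RMSE:", rmse)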

21. How can you select k for k-means?

We use the elbow method to select k for k-means clustering. The idea of the elbow method is to run k-means clustering on the data set for a range of values of k and compute the within-cluster sum of squares (WSS) for each.

WSS is defined as the sum of the squared distances between each member of a cluster and its centroid. As k increases, WSS decreases; the 'elbow' point, where the rate of decrease sharply slows, is a good choice for k.
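A sketch of the elbow method with scikit-learn, where KMeans exposes the WSS as inertia_; the blob data is synthetic:

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

for k in range(1, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, round(km.inertia_, 1))   # look for the "elbow" where the drop flattens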

22. What is the significance of p-value?

The p-value tells you the probability of obtaining results at least as extreme as those observed, assuming the null hypothesis is true.

p-value ≤ 0.05 – This indicates strong evidence against the null hypothesis, so you reject the null hypothesis.
p-value > 0.05 – This indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis.
p-value at the 0.05 cutoff – This is considered marginal, meaning it could go either way.
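A small SciPy sketch: a one-sample t-test on synthetic data (the sample parameters are invented):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.3, scale=1.0, size=50)   # true mean is actually 0.3

t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
print(p_value)   # if p <= 0.05, reject the null hypothesis that the mean is 0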

23. How can outlier values be treated?

You can drop outliers only if it is a garbage value.

Example: height of an adult = ABC ft. This cannot be true, as the height cannot be a string value. In this case, outliers can be removed.

If the outliers have extreme values, they can be removed. For example, if all the data points are clustered between zero to 10, but one point lies at 100, then we can remove this point.

If you cannot drop outliers, you can try the following:

  • Try a different model. Data detected as outliers by linear models can be fit by nonlinear models. Therefore, be sure you are choosing the correct model.
  • Try normalizing the data. This way, the extreme data points are pulled to a similar range.
  • You can use algorithms that are less affected by outliers; an example would be random forests.
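One common detection rule not covered above, included here as an illustrative addition, is Tukey's IQR fences; a short NumPy sketch on made-up data:

import numpy as np

data = np.array([1, 2, 2, 3, 4, 5, 6, 7, 8, 100])   # 100 looks like an outlier

q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr       # Tukey's fences
print(data[(data >= lower) & (data <= upper)])      # keeps only the inliers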

24. How can time-series data be declared stationary?

A time series is stationary when its mean and variance are constant over time.

As a visual example, imagine two series plotted against time, where X is the time axis and Y is the variable.

In the first series, the mean and variance are constant over time: the values of Y fluctuate around the same level with the same spread throughout, so the series is stationary.

In the second series, the waves get bigger and bigger over time, which means the variance is changing with time; the series is non-stationary.
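Beyond visual inspection, a common statistical check (an addition to the answer above) is the augmented Dickey-Fuller test from statsmodels; here both series are synthetic:

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
noise = rng.normal(size=200)        # white noise: stationary
walk = np.cumsum(noise)             # random walk: non-stationary

for name, series in (("white noise", noise), ("random walk", walk)):
    p = adfuller(series)[1]
    print(name, "p-value:", round(p, 4))   # a small p-value suggests stationarity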

25. How can you calculate accuracy using a confusion matrix?

Consider a confusion matrix for a binary classifier evaluated on 650 observations, in which the model produced 262 true positives and 347 true negatives. From the matrix you can read off the total number of observations, the actual values, and the predicted values.

The formula for accuracy is:

Accuracy = (True Positives + True Negatives) / Total Observations
= (262 + 347) / 650
= 609 / 650
≈ 0.937

As a result, we get an accuracy of about 93.7%.
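A minimal sketch showing the same formula via scikit-learn; the labels below are made up and unrelated to the 650-observation example:

import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print((tp + tn) / (tp + tn + fp + fn))   # same value as accuracy_score below
print(accuracy_score(y_true, y_pred))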
