Data Science Interview Questions and Answers

1) What do you mean by the term Data Science? Ans: Data Science is the extraction of knowledge from large volumes of structured or unstructured data. It is a continuation of the fields of data mining and predictive analytics and is also known as knowledge discovery and data mining.
2) Explain the term botnet? Ans: A botnet is a network of bots, typically coordinated over an IRC network, that has been created by infecting machines with a Trojan.
3) What is Data Visualization? Ans: Data visualization is a common term that describes any effort to help people understand the significance of data by placing it in a visual context.
4) How can you define data cleaning as a critical part of the process? Ans: Cleaning up data to the point where you can work with it is a huge amount of work. If we’re trying to reconcile many sources of data that we don’t control, it can take up to 80% of our time.
5) Point out 7 ways in which Data Scientists use Statistics? Ans: 1. Design and interpret experiments to inform product decisions. 2. Build models that predict signal, not noise. 3. Turn big data into the big picture. 4. Understand user retention, engagement, conversion, and leads. 5. Give your users what they want. 6. Estimate intelligently. 7. Tell the story with the data.
6) Differentiate between Data Modeling and Database Design? Ans: Data Modeling – Data modeling (or modeling) in software engineering is the process of creating a data model for an information system by applying formal data modeling techniques. Database Design – Database design is the process of producing a detailed data model of a database. The term database design can be used to describe many different parts of the design of an overall database system.
7) Describe in brief the Data Science process flowchart? Ans: 1. Data is collected from sensors in the environment. 2. Data is “cleaned” or processed to produce a data set (typically a data table) usable for processing. 3. Exploratory data analysis and statistical modeling may be performed. 4. A data product is a program, such as one retailers use to inform new purchases based on purchase history, that may also generate data and feed it back into the environment.
8) What do you understand by the term hash table collisions? Ans: A hash table (hash map) is a kind of data structure used to implement an associative array, a structure that can map keys to values. Ideally, the hash function will assign each key to a unique bucket, but sometimes two keys will generate an identical hash, causing both keys to point to the same bucket. This is known as a hash collision.
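For illustration, here is a minimal Python sketch of collision handling via separate chaining (the bucket count and demo keys are illustrative assumptions, not taken from any particular library):

    # Minimal hash table that resolves collisions by separate chaining.
    class ChainedHashTable:
        def __init__(self, n_buckets=8):
            # Each bucket holds a list of (key, value) pairs.
            self.buckets = [[] for _ in range(n_buckets)]

        def _index(self, key):
            # Two different keys can map to the same bucket: a collision.
            return hash(key) % len(self.buckets)

        def put(self, key, value):
            bucket = self.buckets[self._index(key)]
            for i, (k, _) in enumerate(bucket):
                if k == key:               # key already present: overwrite
                    bucket[i] = (key, value)
                    return
            bucket.append((key, value))    # colliding keys coexist in the chain

        def get(self, key):
            for k, v in self.buckets[self._index(key)]:
                if k == key:
                    return v
            raise KeyError(key)

    table = ChainedHashTable()
    table.put("apple", 1)
    table.put("grape", 2)        # may share a bucket with "apple"; chaining keeps both
    print(table.get("grape"))    # -> 2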
9) Compare and contrast R and SAS? Ans: SAS is commercial software, whereas R is open source and can be downloaded by anyone. SAS is easy to learn and provides an easy option for people who already know SQL, whereas R is a programming language in which even simple procedures can require longer code.
10) What do you understand by the term ‘R’? Ans: R is a language and environment for statistical computing and graphics. It is a GNU project similar to the S language and environment, which was developed at Bell Laboratories.
11) What does the R environment include? Ans: 1. A suite of operators for calculations on arrays, in particular matrices. 2. An effective data handling and storage facility. 3. A large, coherent, integrated collection of intermediate tools for data analysis. 4. Graphical facilities for data analysis and display, either on-screen or on hardcopy. 5. A well-developed, simple and effective programming language which includes conditionals, loops, user-defined recursive functions, and input and output facilities.
12) What are the applied Machine Learning process steps? Ans: 1. Problem Definition: Understand and clearly describe the problem that is being solved. 2. Analyze Data: Understand the information available that will be used to develop a model. 3. Prepare Data: Define and expose the structure in the dataset. 4. Evaluate Algorithms: Develop a robust test harness and baseline accuracy from which to improve, and spot-check algorithms. 5. Improve Results: Improve results to develop more accurate models. 6. Present Results: Describe the problem and solution so that they can be understood by third parties.
13) Compare Multivariate, Univariate and Bivariate analysis? Ans: MULTIVARIATE: Multivariate analysis focuses on the results of observations of many different variables for a number of objects. UNIVARIATE: Univariate analysis is perhaps the simplest form of statistical analysis. Like other forms of statistics, it can be inferential or descriptive. The key fact is that only one variable is involved. BIVARIATE: Bivariate analysis is one of the simplest forms of quantitative (statistical) analysis. It involves the analysis of two variables (often denoted as X, Y), for the purpose of determining the empirical relationship between them.
14) What is a hypothesis in Machine Learning? Ans: The hypothesis space used by a machine learning system is the set of all hypotheses that might possibly be returned by it. It is typically defined by a hypothesis language, possibly in conjunction with a language bias.
15) Differentiate between Uniform and Skewed Distribution? Ans: UNIFORM DISTRIBUTION: A uniform distribution, sometimes also known as a rectangular distribution, is a distribution that has constant probability over its support. SKEWED DISTRIBUTION: In probability theory and statistics, skewness is a measure of the asymmetry of the probability distribution of a real-valued random variable about its mean. The skewness value can be positive or negative, or even undefined, and the qualitative interpretation of the skew is complicated.
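As a quick illustration (a sketch assuming NumPy and SciPy are available; the sample sizes are arbitrary), skewness separates the two: a uniform sample has skewness near 0, while an exponential sample is strongly right-skewed:

    import numpy as np
    from scipy.stats import skew

    rng = np.random.default_rng(0)
    uniform_sample = rng.uniform(0, 1, size=100_000)          # constant probability on [0, 1]
    skewed_sample = rng.exponential(scale=1.0, size=100_000)  # right-skewed

    print(skew(uniform_sample))  # close to 0: symmetric
    print(skew(skewed_sample))   # close to 2: positive (right) skew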
16) What do you understand by term Transformation in Data Acquisition? Ans: The transformation process allows you to consolidate, cleanse, and integrate data. We can semantically arrange the data from heterogeneous sources.
17) What do you understand by term Normal Distribution? Ans: It is a function which shows the distribution of many random variables as a symmetrical bell-shaped graph.
18) What is Data Acquisition? Ans: It is the process of measuring an electrical or physical phenomenon, such as voltage, current, temperature, pressure, or sound, with a computer. A DAQ system consists of sensors, DAQ measurement hardware, and a computer with programmable software.
19) What is Data Collection? Ans: Data collection is the process of gathering and measuring information on variables of interest in a systematic fashion that enables one to answer stated research questions, test hypotheses, and evaluate outcomes.
20) What do you understand by the term Use Case? Ans: A use case is a methodology used in system analysis to identify, clarify, and organize system requirements. A use case consists of a set of possible sequences of interactions between systems and users in a particular environment, related to a particular goal.
21) What is Sampling and Sampling Distribution? Ans: SAMPLING: Sampling is the process of choosing units (ex- people, organizations) from a population of interest so that by studying the sample we can fairly generalize our results back to the population from which they were chosen. SAMPLING DISTRIBUTION: The sampling distribution of a statistic is the distribution of that statistic, considered as a random variable, when derived from a random sample of size n. It may be considered as the distribution of the statistic for all possible samples from the same population of a given size.
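A small NumPy simulation can make the sampling distribution concrete (the population, n, and the number of repeated samples are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(42)
    population = rng.exponential(scale=2.0, size=100_000)

    n, n_samples = 50, 10_000
    sample_means = np.array([rng.choice(population, size=n).mean()
                             for _ in range(n_samples)])

    # The sampling distribution of the mean is centered on the population mean
    # and, by the central limit theorem, is approximately normal for large n.
    print(population.mean(), sample_means.mean(), sample_means.std())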
22) What is Linear Regression? Ans: In statistics, linear regression is an approach for modeling the relationship between a scalar dependent variable y and one or more explanatory variables (or independent variables) denoted by X. The case of one explanatory variable is known as simple linear regression.
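A minimal simple-linear-regression sketch in Python (the data is synthetic; scikit-learn is assumed to be available):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(100, 1))               # one explanatory variable
    y = 3.0 * X.ravel() + 5.0 + rng.normal(0, 1, 100)   # true relationship: y = 3x + 5 plus noise

    model = LinearRegression().fit(X, y)
    print(model.coef_[0], model.intercept_)             # estimates close to 3 and 5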
23) Differentiate between Extrapolation and Interpolation? Ans: Extrapolation is an estimate of a value based on extending a known sequence of values or facts beyond the range that is certainly known. Interpolation is an estimate of a value between two known values in a list of values.
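The difference in one NumPy snippet (the values are illustrative; note that np.interp only interpolates, so the extrapolated value below comes from a fitted line instead):

    import numpy as np

    x_known = np.array([1.0, 2.0, 3.0, 4.0])
    y_known = np.array([2.0, 4.0, 6.0, 8.0])   # underlying relationship: y = 2x

    # Interpolation: estimating a value *within* the known range.
    print(np.interp(2.5, x_known, y_known))    # -> 5.0

    # Extrapolation: extending *beyond* the known range with a fitted line (riskier).
    slope, intercept = np.polyfit(x_known, y_known, deg=1)
    print(slope * 6.0 + intercept)             # -> ~12.0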
24) How is expected value different from mean value? Ans: There is no difference; these are two names for the same thing. They are mostly used in different contexts, though: we talk about the expected value of a random variable and the mean of a sample, population, or probability distribution.
25) Differentiate between Systematic and Cluster Sampling? Ans: SYSTEMATIC SAMPLING: Systematic sampling is a statistical methodology involving the selection of elements from an ordered sampling frame. The most common form of systematic sampling is an equal-probability method. CLUSTER SAMPLING: A cluster sample is a probability sample in which each sampling unit is a collection, or cluster, of elements.
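A compact NumPy sketch contrasting the two (the frame size, sampling interval, and cluster layout are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(1)
    frame = np.arange(1000)                  # an ordered sampling frame

    # Systematic sampling: random start, then every k-th element.
    k = 10
    start = rng.integers(0, k)
    systematic_sample = frame[start::k]      # equal-probability, evenly spread

    # Cluster sampling: randomly pick whole clusters, keep every element in them.
    clusters = frame.reshape(100, 10)        # 100 clusters of 10 elements each
    chosen = rng.choice(100, size=10, replace=False)
    cluster_sample = clusters[chosen].ravel()

    print(len(systematic_sample), len(cluster_sample))   # 100 elements each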
26) What are the advantages of Systematic Sampling? Ans: 1. It is easier to perform in the field, especially if a proper sampling frame is not available. 2. It regularly provides more information per unit cost than simple random sampling, in the sense of smaller variances.
27) What do you understand by the term Threshold Limit Value? Ans: The threshold limit value (TLV) of a chemical substance is the level to which it is believed a worker can be exposed day after day for a working lifetime without adverse health effects.
28) Differentiate between Validation Set and Test set? Ans: Validation set: It is a set of examples used to tune the parameters [i.e., architecture, not weights] of a classifier, for example to choose the number of hidden units in a neural network. Test set: A set of examples used only to assess the performance [generalization] of a fully specified classifier.
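One common way to carve out both sets with scikit-learn (the 60/20/20 split is an illustrative choice):

    import numpy as np
    from sklearn.model_selection import train_test_split

    X, y = np.arange(1000).reshape(500, 2), np.arange(500)

    # First split off the test set, then split the remainder into train/validation.
    X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

    # Tune hyperparameters against (X_val, y_val); touch (X_test, y_test) only once, at the end.
    print(len(X_train), len(X_val), len(X_test))   # 300, 100, 100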
29) How can R and Hadoop be used together? Ans: The most common way to link R and Hadoop is to use HDFS (potentially managed by Hive or HBase) as the long-term store for all data, and use MapReduce jobs (potentially submitted from Hive, Pig, or Oozie) to encode, enrich, and sample data sets from HDFS into R. Data analysts can then perform complex modeling exercises on a subset of prepared data in R.
30) What do you understand by the term RImpala? Ans: The RImpala package contains the R functions required to connect to Impala, execute queries, and retrieve results. It uses the rJava package to create a JDBC connection to any of the Impala servers running on a Hadoop cluster.
31) What is Collaborative Filtering? Ans: Collaborative filtering (CF) is a method used by some recommender systems. The term has two senses, a narrow one and a more general one. In the general sense, collaborative filtering is the process of filtering for information or patterns using techniques involving collaboration among multiple agents, viewpoints, and data sources.
32) What are the challenges of Collaborative Filtering? Ans: 1. Scalability 2. Data sparsity 3. Synonyms 4. Grey sheep 5. Shilling attacks 6. Diversity and the long tail
33) What do you understand by Big Data? Ans: Big data is a buzzword, or catch-phrase, that describes a massive volume of both structured and unstructured data that is so large it is difficult to process using traditional database and software techniques.
34) What do you understand by Matrix Factorization? Ans: Matrix factorization is simply a mathematical tool for working with matrices, and is therefore applicable in many scenarios where one would like to discover something hidden in the data.
35) What do you understand by term Singular Value Decomposition? Ans: In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix. It has many useful applications in signal processing and statistics.
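A quick NumPy demonstration (the matrix shape and values are arbitrary):

    import numpy as np

    A = np.random.default_rng(0).normal(size=(4, 3))
    U, s, Vt = np.linalg.svd(A, full_matrices=False)   # A = U @ diag(s) @ Vt

    print(np.allclose(A, U @ np.diag(s) @ Vt))   # True: the factorization reproduces A
    print(s)                                     # singular values, in decreasing order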
36) What do you mean by Recommender systems? Ans: Recommender systems or recommendation systems (sometimes replacing “system” with a synonym such as platform or engine) are a subclass of information filtering system that seek to predict the ‘rating’ or ‘preference’ that a user would give to an item.
37) What are the applications of Recommender Systems? Ans: Recommender systems have become extremely common in recent years, and are applied in a variety of applications. The most popular ones are probably movies, music, news, books, research articles, search queries, social tags, and products in general.
38) What are the two ways a Recommender System produces recommendations? Ans: Recommender systems typically produce a list of recommendations in one of two ways: through collaborative or content-based filtering. Collaborative filtering approaches build a model from a user’s past behavior (items previously purchased or selected and/or numerical ratings given to those items) as well as similar decisions made by other users. This model is then used to predict items (or ratings for items) that the user may have an interest in. Content-based filtering approaches utilize a series of discrete characteristics of an item in order to recommend additional items with similar properties.
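A toy collaborative-filtering sketch (the ratings matrix and the choice of users and items are made up purely for illustration): score an unseen item for a user by a similarity-weighted average of other users’ ratings:

    import numpy as np

    # Rows = users, columns = items; 0 means "not rated".
    ratings = np.array([
        [5, 4, 0, 1],
        [4, 5, 1, 0],
        [1, 0, 5, 4],
    ], dtype=float)

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    target_user, target_item = 0, 2          # predict user 0's rating for item 2
    others = [1, 2]
    sims = np.array([cosine(ratings[target_user], ratings[u]) for u in others])
    neighbor_ratings = ratings[others, target_item]

    prediction = sims @ neighbor_ratings / sims.sum()   # similarity-weighted average
    print(round(prediction, 2))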
39) What are the factors used to find the most accurate recommendation algorithms? Ans: 1. Diversity 2. Recommender Persistence 3. Privacy 4. User Demographics 5. Robustness 6. Serendipity 7. Trust 8. Labeling
40) What is K-Nearest Neighbor? Ans: k-NN is a type of instance-based learning, or lazy learning, where the function is only approximated locally and all computation is deferred until classification. The k-NN algorithm is among the simplest of all machine learning algorithms.
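A minimal k-NN classification example with scikit-learn (the iris dataset and k = 5 are illustrative choices):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # "Lazy learning": fit() mostly just stores the training data;
    # the real work is deferred to prediction time.
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
    print(knn.score(X_test, y_test))   # accuracy on held-out data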
41) What is Horizontal Slicing? Ans: In horizontal slicing, projects are broken up roughly along architectural lines. That is, there would be one team for the UI, one team for business logic and services (SOA), and another team for data.
42) What are the advantages of vertical slicing? Ans: The advantage of slicing vertically is that you are more efficient. You don’t have the overhead and effort that come from trying to coordinate activities across multiple teams, and there is no need to negotiate for resources: you’re all on the same team.
43) What is null hypothesis? Ans: In inferential statistics the null hypothesis usually refers to a general statement or default position that there is no relationship between two measured phenomena, or no difference among groups.
44) What is Statistical hypothesis? Ans: In statistical hypothesis testing, the alternative hypothesis (or maintained hypothesis or research hypothesis) and the null hypothesis are the two rival hypotheses which are compared by a statistical hypothesis test.
45) What is performance measure? Ans: Performance measurement is the method of collecting, analyzing and/or reporting information regarding the performance of an individual, group, organization, system or component.
46) What is the use of the tree command? Ans: This command is used to list the contents of directories in a tree-like format.
47) What is the use of the uniq command? Ans: This command is used to report or omit repeated lines.
48) Which command is used to translate or delete characters? Ans: The tr command is used to translate or delete characters.
49) What is the use of the tapkee command? Ans: This command is used to reduce the dimensionality of a data set using various algorithms.
50) Which command is used to sort the lines of text files? Ans: The sort command is used to sort the lines of text files.
51) How can you check if a data set or time series is random? Ans: To check whether a data set is random, use a lag plot. If the lag plot for the given data set does not show any structure, then the data set is random.
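For example, with pandas (the two series are synthetic, chosen so that one is random and one is not):

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    from pandas.plotting import lag_plot

    rng = np.random.default_rng(0)
    random_series = pd.Series(rng.normal(size=500))                # no structure expected
    patterned_series = pd.Series(np.sin(np.linspace(0, 20, 500)))  # clear structure expected

    fig, axes = plt.subplots(1, 2)
    lag_plot(random_series, ax=axes[0])      # shapeless cloud -> random
    lag_plot(patterned_series, ax=axes[1])   # visible pattern -> not random
    plt.show()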
52) Write the code to sort an array in NumPy by the nth column? Ans: This can be achieved using the argsort() function. If there is an array x and you would like to sort it by the nth column, the code will be x[x[:, n].argsort()].
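A quick check of the snippet (the 3x3 array is illustrative; NumPy columns are 0-indexed, so the “nth column” here means index n):

    import numpy as np

    x = np.array([[9, 2, 3],
                  [4, 5, 6],
                  [7, 0, 5]])

    n = 1
    print(x[x[:, n].argsort()])   # rows reordered by the values in column n
    # [[7 0 5]
    #  [9 2 3]
    #  [4 5 6]]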
53) How would you create a taxonomy to identify key customer trends in unstructured data? Ans: The best way to approach this question is to mention that it is good to check with the business owner and understand their objectives before categorizing the data. Having done this, it is always good to follow an iterative approach: pull new data samples, improve the model accordingly, and validate it for accuracy by soliciting feedback from the stakeholders of the business. This helps ensure that your model produces actionable results and improves over time.
54) Python or R – which one would you prefer for text analytics? Ans: The best possible answer for this would be Python, because it has the Pandas library, which provides easy-to-use data structures and high-performance data analysis tools.
55) Which technique is used to predict categorical responses? Ans: Classification techniques are widely used in data mining to predict categorical responses.
56) What is logistic regression? Or state an example of when you have used logistic regression recently. Ans: Logistic regression, often referred to as the logit model, is a technique for predicting a binary outcome from a linear combination of predictor variables. For example, suppose you want to predict whether a particular political leader will win an election or not. In this case, the outcome of the prediction is binary, i.e., 0 or 1 (win/lose). The predictor variables here would be the amount of money spent on election campaigning for a particular candidate, the amount of time spent campaigning, etc.
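A compact scikit-learn sketch mirroring the election example (the two predictors and all numbers are synthetic assumptions):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    money = rng.uniform(0, 100, 200)       # campaign spend (arbitrary units)
    hours = rng.uniform(0, 50, 200)        # time spent campaigning (arbitrary units)
    X = np.column_stack([money, hours])

    # Synthetic rule: better-funded, harder-working campaigns tend to win (1) rather than lose (0).
    y = (0.03 * money + 0.05 * hours + rng.normal(0, 1, 200) > 2.5).astype(int)

    model = LogisticRegression().fit(X, y)
    print(model.predict_proba([[80.0, 40.0]])[0, 1])   # estimated probability of a win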
57) What are Recommender Systems? Ans: Recommender systems are a subclass of information filtering systems that are meant to predict the preferences or ratings that a user would give to a product.
58) Why is data cleaning a critical part of the analysis task? Ans: Cleaning data from multiple sources to transform it into a format that data analysts or data scientists can work with is a cumbersome process because, as the number of data sources increases, the time taken to clean the data increases exponentially due to the number of sources and the volume of data they generate. Cleaning data can take up to 80% of the total time, making it a critical part of the analysis task.