5 most challenging interview questions on TensorFlow

This article was published as a part of the Data Science Blogathon.


Introduction


TensorFlow is one of the most promising deep learning frameworks for developing state-of-the-art deep learning solutions. Given the popularity and widespread use of TensorFlow to automate processes and create new tools in industry, a clear understanding of this framework is imperative to succeed in a data science interview.

In this article, I have compiled a list of five challenging interview questions, with solutions, related to the TensorFlow framework.

TensorFlow-Related Interview Questions

Following are some questions and detailed answers.

Question 1: What types of tensors does TensorFlow support? Explain using examples.

Answer: Broadly speaking, TensorFlow supports three types of tensors:

1. Constant tensors
2. Variable tensors
3. Placeholder tensors

1. Constant tensor: A constant tensor is a type of tensor whose value cannot be changed while the graph is running. It creates a node that takes a fixed value at definition time and cannot be modified afterwards.

A constant tensor can be created using the tf.constant function.

Syntax:

tf.constant(value, dtype=None, shape=None, name="constant_tensor")

Example code:

import tensorflow as tf

constant_var1 = tf.constant(4)
constant_var2 = tf.constant(4.0)
constant_var3 = tf.constant("Hello Drishti")

print(constant_var1)
print(constant_var2)
print(constant_var3)

>> Output:

tf.Tensor(4, shape=(), dtype=int32)
tf.Tensor(4.0, shape=(), dtype=float32)
tf.Tensor(b'Hello Drishti', shape=(), dtype=string)

2. Variable tensor: Nodes that output their current value are called variable tensors. These tensors can retain/preserve their value across successive graph runs; during the computation of the graph, their values are updated by operations.

They are mainly used to represent trainable parameters in ML models.

Let’s take into account the equation for a linear model:

y = Wx + b

(Source: Jay Alammar)

In the above equation, “W” represents the weights and “b” represents the biases, which are trainable variable tensors.
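As a minimal sketch of how such trainable parameters look in TF2 code (the shapes here are illustrative assumptions, not from the original article):

import tensorflow as tf

# Trainable parameters of a linear model y = Wx + b
W = tf.Variable(tf.random.normal([3, 1]), name="weights")
b = tf.Variable(tf.zeros([1]), name="bias")

def linear_model(x):
    # x is expected to have shape (batch_size, 3)
    return tf.matmul(x, W) + b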

In practice, variable tensors can be created using the tf.Variable constructor. The initial value passed to Variable() can be a tensor of any dtype and shape. This initial value determines the dtype/shape of the variable, which does not change after creation; the stored value itself can be changed with the “assign” method.

Syntax:

tf.Variable(initial_value, dtype=None, shape=None, name="variable_tensor")

Example Case 1:

example_tensor1 = tf.Variable(2)   # dtype -> int32
example_tensor1.assign(4)          # assigning an int value succeeds

>> Output: <tf.Variable 'UnreadVariable' shape=() dtype=int32, numpy=4>

Example Case 2:

example_tensor2 = tf.Variable(2.0)   # dtype -> float32
example_tensor2.assign(4)            # the int 4 is converted to 4.0 (float32)

>> Output: <tf.Variable 'UnreadVariable' shape=() dtype=float32, numpy=4.0>

Example Case 3:

example_tensor3 = tf.Variable(2)   # dtype -> int32
example_tensor3.assign(4.0)        # assigning a float32 value raises an error

>> Output: TypeError: Cannot convert 4.0 to EagerTensor of dtype int32

So, from the above three examples, we can see that the initial value determines the dtype of the variable, which does not change after creation.

3. Placeholder tensor: Placeholder tensors are advantageous over regular variables in that they do not require initialization before use. They only require a datatype and a tensor shape, so even without any stored value the graph knows what to compute with; data can be fed in at a later stage. This type of tensor is useful when a neural network takes its input from an external source, and when we do not want the graph to depend on a concrete value while the graph is being built.

It can be created using the tf.compat.v1.placeholder function (placeholders belong to the TF1-style API, so eager execution must be disabled).

Syntax:

tf.compat.v1.placeholder(dtype, shape=None, name=None)

Example code:

# import the TF1 compatibility package and disable eager execution
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# create two placeholder nodes
x = tf.placeholder(tf.float32, name="x")
y = tf.placeholder(tf.float32, name="y")

# create a third node that adds them
z = tf.add(x, y, name="z")

# run the session, feeding values into the placeholders
sess = tf.Session()
print(sess.run(z, feed_dict={x: 1, y: 8}))

>> Output: 9.0

Question 2: What differentiates tf.Variable from tf.placeholder?

Answer: The differences between tf.Variable and tf.placeholder can be summarized as follows:

tf.Variable:
- Requires initialization at the time of declaration.
- Typically used to hold the values of weights and biases during session execution.
- Its values are changed during the execution of the program.
- Values that are needed throughout the program are stored using tf.Variable.

tf.placeholder:
- An empty variable that does not require initialization for use; the value can be defined at run time. It only requires the datatype and the tensor shape.
- Fed values during session execution (via feed_dict).
- Its values are not changed during the execution of the program.
- Used to handle external input data.

Question 3: What is the use of tf.is_tensor?

Answer: tf.is_tensor evaluates whether the given Python object (example_obj) is a TensorFlow-native type that can be directly consumed by TensorFlow ops without any type conversion. Objects that are not, for example Python scalars and NumPy arrays, need to be converted to tensors before being fed to TensorFlow ops.

Syntax:

tf.is_tensor(example_obj)
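For example (a small illustrative check; the inputs are arbitrary):

import numpy as np
import tensorflow as tf

print(tf.is_tensor(tf.constant([1, 2, 3])))  # True: already a tf.Tensor
print(tf.is_tensor(np.array([1, 2, 3])))     # False: needs conversion first
print(tf.is_tensor(tf.convert_to_tensor(np.array([1, 2, 3]))))  # True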

Question 4: What is post-training quantization, and what are its benefits?

Answer: Post-training quantization is a model compression approach that reduces the precision of the weight representation while improving CPU and hardware-accelerator latency, with only a slight decrease in model accuracy. A pre-trained float TensorFlow model can be quantized by converting it to TensorFlow Lite format using the TensorFlow Lite converter.
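A minimal sketch of this conversion (the saved-model path is a placeholder):

import tensorflow as tf

# Convert a pre-trained float model to TF Lite format
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
tflite_model = converter.convert()

# Save the converted model to disk
with open("model.tflite", "wb") as f:
    f.write(tflite_model)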

Some of the advantages of post-training quantization are as follows:

- Reduction in memory access costs: using fewer-bit quantized data means less data needs to be transferred both on- and off-chip, which increases compute efficiency, reduces memory bandwidth, and saves energy.

Question 5: What are the types of post-quantization techniques?

Answer: There are broadly three types of post-training quantization techniques:

1. Dynamic range quantization
2. Full integer quantization
3. Float16 quantization

Figure 1: Decision tree to determine which post-training quantization approach is optimal for a use case (Source: tensorflow.org)

1. Dynamic range quantization: It is recommended to start with dynamic range quantization, as it reduces memory usage and speeds up computation without requiring a representative dataset for calibration. Only the weights are statically quantized from floating point to integer at conversion time (which provides 8-bit precision).

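A minimal sketch of this conversion, following the pattern from the TensorFlow Lite documentation (the saved-model path is a placeholder):

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
# Dynamic range quantization: statically quantize only the weights to 8 bits
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()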

To further reduce latency during inference, activations are dynamically quantized to 8 bits, and the computations are performed with 8-bit weights and activations. This optimization provides latencies close to those of fully fixed-point inference. Nevertheless, since the outputs are still stored using floating point, the speedup of dynamic-range operations remains less than that of full fixed-point computation.

2. Full integer quantization: Additional latency improvements, reductions in peak memory usage, and compatibility with integer-only hardware or accelerators can be achieved by quantizing all of the model math to integers.

For this, we need to estimate/calibrate the range of all floating-point tensors in the model. Variable tensors such as model inputs, activations, and model outputs cannot be calibrated unless a few inference cycles are run; therefore, the converter requires a representative dataset to calibrate them.
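A minimal sketch, again following the TensorFlow Lite documentation (the input shape and calibration data are illustrative assumptions):

import numpy as np
import tensorflow as tf

def representative_dataset():
    # A few samples shaped like the model's input, used for calibration
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Optionally enforce integer-only ops for integer-only hardware
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_quant_model = converter.convert()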

3. Float16 quantization: By quantizing the weights to float16, we can compress a floating-point model. For float16 quantization of the weights, the following steps can be used:

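A minimal sketch (the saved-model path is a placeholder):

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Restrict the quantized weight type to float16
converter.target_spec.supported_types = [tf.float16]
tflite_fp16_model = converter.convert()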

The following are some of the advantages of float16 quantization:

It reduces the model size by up to half (since all weights become half their original size) with only a slight loss in accuracy. It supports some delegates, for example the GPU delegate, which can operate directly on float16 data, allowing faster execution than float32 computations.

Following are some of the disadvantages of float16 quantization:

It doesn’t reduce latency that much. A float16-quantized model will, by default, “dequantize” the weight values to float32 when run on the CPU. [Notably, the GPU delegate will not carry out this dequantization, since it can operate on float16 data.]


Conclusion

This article presents five challenging interview questions related to the TensorFlow framework that may be asked in a data science interview. By working through these questions, you can deepen your understanding of the underlying concepts, prepare effective responses, and present them convincingly to the interviewer.

In summary, the main points of this article are as follows:

- Broadly speaking, TensorFlow supports three types of tensors: constant tensors, variable tensors, and placeholder tensors.
- The main difference between tf.Variable and tf.placeholder is that tf.Variable needs to be initialized, whereas tf.placeholder does not.
- tf.is_tensor checks whether a given Python object (example_obj) is a type that can be directly consumed by TensorFlow ops without first being converted to a tensor.
- Post-training quantization is a model compression approach that reduces the precision of the weight representation while improving CPU and hardware-accelerator latency, with only a slight decrease in model accuracy.
- There are broadly three types of post-training quantization techniques: i) dynamic range quantization, ii) full integer quantization, and iii) float16 quantization.
