title

TensorFlow Tutorial | Deep Learning Using TensorFlow | TensorFlow Tutorial Python | Edureka

description

🔥 Edureka TensorFlow Training (Use Code "YOUTUBE20"): https://www.edureka.co/ai-deep-learning-with-tensorflow
This Edureka TensorFlow Tutorial video (Blog: https://goo.gl/4zxMfU) will help you understand the important basics of TensorFlow. It also includes a use case in which we will create a model that differentiates between a rock and a mine using TensorFlow.
Below are the topics covered in this tutorial:
1. What are Tensors?
2. What is TensorFlow?
3. TensorFlow Code-basics
4. Graph Visualization
5. TensorFlow Data structures
6. Use Case: Naval Mine Identifier (NMI)
Subscribe to our channel to get video updates. Hit the subscribe button above.
Check our complete Deep Learning With TensorFlow playlist here: https://goo.gl/cck4hE
PG in Artificial Intelligence and Machine Learning with NIT Warangal : https://www.edureka.co/post-graduate/machine-learning-and-ai
Post Graduate Certification in Data Science with IIT Guwahati - https://www.edureka.co/post-graduate/data-science-program
(450+ Hrs || 9 Months || 20+ Projects & 100+ Case studies)
- - - - - - - - - - - - - -
How it Works?
1. This is a 21-hour online live instructor-led course. Weekend class: 7 sessions of 3 hours each.
2. We provide 24x7 one-on-one LIVE technical support to help you with any problems you might face or any clarifications you may require during the course.
3. At the end of the training you will undergo a 2-hour LIVE practical exam, based on which we will provide you with a grade and a verifiable certificate!
- - - - - - - - - - - - - -
About the Course
Edureka's Deep Learning with TensorFlow course will help you learn the basic concepts of TensorFlow, its main functions, operations and execution pipeline. Starting with a simple "Hello World" example, throughout the course you will see how TensorFlow can be used in curve fitting, regression, classification and minimization of error functions. These concepts are then explored in the Deep Learning world. You will evaluate the common, and not so common, deep neural networks and see how these can be exploited in the real world with complex raw data using TensorFlow. In addition, you will learn how TensorFlow applies backpropagation to tune the weights and biases while the neural networks are being trained. Finally, the course covers different types of deep architectures, such as Convolutional Networks, Recurrent Networks and Autoencoders.
Delve into neural networks, implement Deep Learning algorithms, and explore layers of data abstraction with the help of this Deep Learning with TensorFlow course.
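The curve fitting and error minimization mentioned above can be sketched in a few lines without any framework. Below is a minimal, purely illustrative gradient-descent fit of a line y = W·x + b; the function name, learning rate and data points are hypothetical, not the course's actual code:

```python
# Minimal gradient-descent sketch: fit y = W*x + b to points that
# lie on y = -x + 1 by minimizing squared error. Pure Python,
# illustrative only (not Edureka course code).

def fit_linear(xs, ys, lr=0.01, epochs=2000):
    W, b = 0.0, 0.0  # trainable parameters, like TensorFlow variables
    for _ in range(epochs):
        # loss J = (1/2) * sum((W*x + b - y)^2); these are its gradients
        dW = sum((W * x + b - y) * x for x, y in zip(xs, ys))
        db = sum((W * x + b - y) for x, y in zip(xs, ys))
        # update each parameter against its gradient: w -= lr * dJ/dw
        W -= lr * dW
        b -= lr * db
    return W, b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [0.0, -1.0, -2.0, -3.0]   # generated from y = -x + 1
W, b = fit_linear(xs, ys)
print(W, b)  # approaches -1.0 and 1.0
```

With enough epochs the parameters converge to the generating line, which is the same "reduce the loss each iteration" loop the course builds with TensorFlow's optimizer.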
- - - - - - - - - - - - - -
Who should go for this course?
The following professionals can go for this course:
1. Developers aspiring to be a 'Data Scientist'
2. Analytics Managers who are leading a team of analysts
3. Business Analysts who want to understand Deep Learning techniques
4. Information Architects who want to gain expertise in Predictive Analytics
5. Professionals who want to capture and analyze Big Data
6. Analysts wanting to understand Data Science methodologies
However, Deep Learning is not limited to any one industry or skill set; it can be used by anyone to enhance their portfolio.
- - - - - - - - - - - - - -
Why Learn Deep Learning With TensorFlow?
TensorFlow is one of the best libraries for implementing Deep Learning. It is a software library for numerical computation of mathematical expressions using data flow graphs. Nodes in the graph represent mathematical operations, while the edges represent the multidimensional data arrays (tensors) that flow between them. Created by Google and tailored for machine learning, it is widely used to develop Deep Learning solutions.
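To make the graph idea concrete: a dataflow graph is built first (nodes record operations) and only computed later, which is how TensorFlow 1.x's build-then-run-in-a-Session style worked. Here is a tiny hypothetical pure-Python sketch of that separation; the class and function names are invented for illustration and are not the real TensorFlow API:

```python
# Tiny dataflow-graph sketch: building nodes records operations,
# nothing is computed until the graph is "run" (like a TF1 Session).
# Hypothetical illustration only.

class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

    def run(self):
        # Executing a node first executes the nodes on its input edges
        if self.op == "const":
            return self.value
        if self.op == "mul":
            a, b = (n.run() for n in self.inputs)
            return a * b

def constant(v):
    return Node("const", value=v)

def multiply(a, b):
    return Node("mul", inputs=(a, b))

# Two constant nodes feeding one multiplication node
node1, node2 = constant(5.0), constant(6.0)
product = multiply(node1, node2)
print(product.run())  # 30.0
```

Until `run()` is called, `product` is just a description of work to do; that deferred-execution split is exactly why the tutorial's graph shows nodes and edges but prints no output until a session executes it.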
Machine learning is one of the fastest-growing and most exciting fields out there, and Deep Learning represents its true bleeding edge. Deep Learning is primarily the study of multi-layered neural networks, spanning a vast range of model architectures. Traditional neural networks relied on shallow nets, composed of one input layer, one hidden layer and one output layer. Deep learning networks are distinguished from these ordinary neural networks by having more hidden layers, i.e. greater depth. These kinds of nets are capable of discovering hidden structures within unlabeled and unstructured data (i.e. images, sound, and text), which constitutes the vast majority of data in the world.
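The shallow-versus-deep distinction above is simply how many hidden layers are stacked between input and output. A minimal pure-Python forward pass makes that concrete; the weights here are arbitrary illustrative values, not trained parameters:

```python
# Forward pass through a tiny fully connected net in pure Python.
# A "shallow" net has one hidden layer; stacking more hidden layers
# (more depth) is what makes a network deep. Illustrative weights only.
import math

def dense(inputs, weights, biases):
    # One layer: weighted sum per neuron followed by a sigmoid
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

def forward(x, layers):
    # layers is a list of (weights, biases) pairs; the length of this
    # list is the network's depth
    for weights, biases in layers:
        x = dense(x, weights, biases)
    return x

shallow = [([[0.5, -0.2], [0.1, 0.4]], [0.0, 0.0]),  # one hidden layer
           ([[1.0, -1.0]], [0.0])]                   # output layer
deep = shallow[:1] * 3 + shallow[1:]                 # three hidden layers
print(forward([1.0, 2.0], shallow), forward([1.0, 2.0], deep))
```

Both nets map the same 2-feature input to a single sigmoid output; only the number of stacked hidden layers differs, which is the "depth" the passage describes.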
For more information, please write back to us at sales@edureka.co or call us at IND: 9606058406 / US: 18338555775 (toll-free).
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
Telegram: https://t.me/edurekaupdates

detail

{'title': 'TensorFlow Tutorial | Deep Learning Using TensorFlow | TensorFlow Tutorial Python | Edureka', 'heatmap': [{'end': 870.774, 'start': 808.369, 'weight': 0.775}, {'end': 1110.36, 'start': 1076.737, 'weight': 0.721}, {'end': 2038.675, 'start': 1969.662, 'weight': 0.769}], 'summary': 'Covers tensorflow fundamentals, including model implementation, computation graph visualization, optimizing model training with gradient descent, nmi use case implementation, tensorflow data processing, and model optimization. it achieves 85% test accuracy and an average mean squared error of 24.4505 after 1,000 epochs, demonstrating the practical application of tensorflow in developing a naval mine identifier (nmi) model.', 'chapters': [{'end': 74.896, 'segs': [{'end': 74.896, 'src': 'embed', 'start': 15.594, 'weight': 0, 'content': [{'end': 24.716, 'text': 'Nevel Mine Identifier is a pretty serious problem in which one needs to identify whether an obstacle is a rock or a mine on the basis of sonar signals bounced or reflected by it.', 'start': 15.594, 'duration': 9.122}, {'end': 27.197, 'text': "So let's get into the details of this use case.", 'start': 25.456, 'duration': 1.741}, {'end': 34.384, 'text': 'Imagine you are hired by the US Navy and your task is to create a model that can differentiate between a rock and a mine.', 'start': 28.16, 'duration': 6.224}, {'end': 38.567, 'text': 'We will call this model as a Naval Mine Identifier, NMI.', 'start': 35.005, 'duration': 3.562}, {'end': 45.392, 'text': 'A naval mine is a self-contained explosive device placed in water to damage or destroy surface ships or submarines.', 'start': 39.308, 'duration': 6.084}, {'end': 49.855, 'text': 'If you want a better picture, just consider the diagram that is there in front of your screen.', 'start': 46.092, 'duration': 3.763}, {'end': 55.619, 'text': 'So here we have three submarines, out of which one is broken down into small pieces when it passes through a naval mine.', 'start': 50.315, 
'duration': 5.304}, {'end': 60.992, 'text': 'The major use of these underwater mines or naval mines began in World War I.', 'start': 56.51, 'duration': 4.482}, {'end': 68.974, 'text': 'Similarly in World War II, nearly 700, 000 naval mines were laid, accounting for more ships sunk or damaged than any other weapon.', 'start': 60.992, 'duration': 7.982}, {'end': 72.075, 'text': 'So now you must have understood the importance of this use case.', 'start': 69.615, 'duration': 2.46}, {'end': 74.896, 'text': 'This model can actually save a lot of lives.', 'start': 72.656, 'duration': 2.24}], 'summary': 'Develop a naval mine identifier model to differentiate between rocks and mines, with 700,000 mines laid in world war ii causing significant damage.', 'duration': 59.302, 'max_score': 15.594, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo15594.jpg'}], 'start': 0.349, 'title': 'Tensorflow tutorial: naval mine identifier', 'summary': 'Covers the fundamentals of tensorflow to implement a naval mine identifier (nmi), a model that can differentiate between a rock and a mine based on sonar signals, with the potential to save lives by preventing damage to ships and submarines during war.', 'chapters': [{'end': 74.896, 'start': 0.349, 'title': 'Tensorflow tutorial: naval mine identifier', 'summary': 'Covers the fundamentals of tensorflow to implement a naval mine identifier (nmi), a model that can differentiate between a rock and a mine based on sonar signals, with the potential to save lives by preventing damage to ships and submarines during war.', 'duration': 74.547, 'highlights': ['Naval mines can cause significant damage, with nearly 700,000 laid in World War II, resulting in more ships being sunk or damaged than any other weapon. 
Highlighting the destructive impact of naval mines in World War II, with approximately 700,000 mines laid and causing the most damage to ships.', 'The tutorial focuses on implementing a Naval Mine Identifier (NMI) using TensorFlow to differentiate between rocks and mines based on sonar signals. Emphasizing the primary focus of the tutorial on utilizing TensorFlow to create a Naval Mine Identifier for distinguishing rocks and mines via sonar signals.', 'The potential of the NMI model to save lives by preventing damage to ships and submarines is highlighted. Stressing the life-saving potential of the Naval Mine Identifier model in preventing damage to ships and submarines, thereby emphasizing its significance.']}], 'duration': 74.547, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo349.jpg', 'highlights': ['The tutorial focuses on implementing a Naval Mine Identifier (NMI) using TensorFlow to differentiate between rocks and mines based on sonar signals.', 'The potential of the NMI model to save lives by preventing damage to ships and submarines is highlighted.', 'Naval mines can cause significant damage, with nearly 700,000 laid in World War II, resulting in more ships being sunk or damaged than any other weapon.']}, {'end': 864.812, 'segs': [{'end': 108.353, 'src': 'embed', 'start': 75.677, 'weight': 1, 'content': [{'end': 79.098, 'text': "Now let's look at the data set that we'll use to create this deep learning model.", 'start': 75.677, 'duration': 3.421}, {'end': 81.859, 'text': "Here we'll be using the sonar data set.", 'start': 80.097, 'duration': 1.762}, {'end': 90.988, 'text': 'This data set contains sonar signals which includes 111 patterns bounced off a metal cylinder and 97 patterns bounced off a rock.', 'start': 82.359, 'duration': 8.629}, {'end': 93.551, 'text': 'Both at various angles and conditions.', 'start': 91.489, 'duration': 2.062}, {'end': 97.475, 'text': 'Hence every record in the data set 
represents a pattern.', 'start': 94.312, 'duration': 3.163}, {'end': 99.317, 'text': 'Now let me show you the data set.', 'start': 98.116, 'duration': 1.201}, {'end': 108.353, 'text': 'So there are total 61 columns in the data set, in which the first 60 columns is a set of 60 numbers in the range from 0.0 to 1.0,', 'start': 100.029, 'duration': 8.324}], 'summary': 'Using sonar data set with 111 metal cylinder patterns and 97 rock patterns, totaling 208 patterns, with 61 columns of numerical data.', 'duration': 32.676, 'max_score': 75.677, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo75677.jpg'}, {'end': 230.748, 'src': 'embed', 'start': 195.683, 'weight': 0, 'content': [{'end': 197.083, 'text': "So I'll just go ahead and run this.", 'start': 195.683, 'duration': 1.4}, {'end': 202.884, 'text': 'So this is how the output looks like where the deep learning model is getting trained.', 'start': 199.643, 'duration': 3.241}, {'end': 207.985, 'text': 'If you observe with every iteration, the accuracy is increasing and the error is decreasing.', 'start': 203.384, 'duration': 4.601}, {'end': 209.445, 'text': "So we'll stop it right here.", 'start': 208.465, 'duration': 0.98}, {'end': 217.341, 'text': 'So guys any questions any doubts with respect to what is our use case and what is the data set about you can go ahead and ask me any questions.', 'start': 210.558, 'duration': 6.783}, {'end': 219.422, 'text': 'Emma has a question.', 'start': 218.742, 'duration': 0.68}, {'end': 221.203, 'text': "She's asking what is the size of the data set?", 'start': 219.442, 'duration': 1.761}, {'end': 230.748, 'text': 'So the size of the data set is 208 cross 61, which means we have 208 rows and 61 columns, out of which the last column is nothing but the label,', 'start': 221.563, 'duration': 9.185}], 'summary': 'Trained deep learning model with increasing accuracy and decreasing error, dataset size 208x61.', 'duration': 35.065, 
'max_score': 195.683, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo195683.jpg'}, {'end': 283.366, 'src': 'embed', 'start': 256.97, 'weight': 2, 'content': [{'end': 260.952, 'text': 'TensorFlow programs use a tensor data structure to represent all data.', 'start': 256.97, 'duration': 3.982}, {'end': 264.876, 'text': 'You can think of a tensor as an n-dimensional array or list.', 'start': 261.452, 'duration': 3.424}, {'end': 267.918, 'text': 'Now consider the example that is there in front of your screen.', 'start': 265.716, 'duration': 2.202}, {'end': 270.342, 'text': 'So the first tensor has dimension zero.', 'start': 268.462, 'duration': 1.88}, {'end': 273.723, 'text': 'The next tensor has two dimensions because it has rows as well as columns.', 'start': 270.703, 'duration': 3.02}, {'end': 277.164, 'text': 'And the last tensor has dimension three because it has one more field.', 'start': 274.043, 'duration': 3.121}, {'end': 282.185, 'text': 'Now in TensorFlow system, tensors are described by a unit of dimensionality known as rank.', 'start': 277.904, 'duration': 4.281}, {'end': 283.366, 'text': "So let's understand that.", 'start': 282.425, 'duration': 0.941}], 'summary': 'Tensorflow uses tensors to represent data, with dimensions and ranks.', 'duration': 26.396, 'max_score': 256.97, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo256970.jpg'}, {'end': 387.153, 'src': 'embed', 'start': 359.59, 'weight': 3, 'content': [{'end': 363.171, 'text': 'So guys, now is the time to understand what is TensorFlow.', 'start': 359.59, 'duration': 3.581}, {'end': 370.834, 'text': 'TensorFlow is an open source software library released in 2015 by Google to make it easier for developers to design,', 'start': 363.992, 'duration': 6.842}, {'end': 373.776, 'text': 'develop and train deep learning models on neural networks.', 'start': 370.834, 'duration': 2.942}, {'end': 
379.178, 'text': 'Now TensorFlow works by first defining and describing our model in abstract.', 'start': 374.516, 'duration': 4.662}, {'end': 383.57, 'text': 'and then we are ready, we make it a reality in a session.', 'start': 379.847, 'duration': 3.723}, {'end': 387.153, 'text': "Now don't worry guys, in the next slide I'll explain what exactly is session.", 'start': 384.151, 'duration': 3.002}], 'summary': 'Tensorflow is an open source software by google for deep learning models on neural networks.', 'duration': 27.563, 'max_score': 359.59, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo359590.jpg'}, {'end': 477.007, 'src': 'embed', 'start': 446.155, 'weight': 4, 'content': [{'end': 447.955, 'text': 'So this is how you build a computational graph.', 'start': 446.155, 'duration': 1.8}, {'end': 449.696, 'text': 'Here we have defined two constant nodes.', 'start': 447.995, 'duration': 1.701}, {'end': 453.537, 'text': 'Node 1 has a value of 3 and node 2 has value 4.', 'start': 450.056, 'duration': 3.481}, {'end': 457.377, 'text': 'Then what we do, we need to run it inside a session in order to execute the graph.', 'start': 453.537, 'duration': 3.84}, {'end': 460.538, 'text': 'Now let me go ahead and execute this practically in my PyCharm.', 'start': 457.897, 'duration': 2.641}, {'end': 462.458, 'text': 'This is my PyCharm again guys.', 'start': 461.178, 'duration': 1.28}, {'end': 466.159, 'text': 'First thing I need to do is import tensorflow as tf.', 'start': 462.538, 'duration': 3.621}, {'end': 468.38, 'text': "Then we'll define two constant nodes.", 'start': 466.899, 'duration': 1.481}, {'end': 477.007, 'text': 'Node 1 equal to tf.constant and the value will be stored inside it will be 3.0.', 'start': 468.6, 'duration': 8.407}], 'summary': 'The transcript discusses building a computational graph using tensorflow, defining constant nodes with values of 3 and 4, and executing the graph within a session in 
pycharm.', 'duration': 30.852, 'max_score': 446.155, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo446155.jpg'}], 'start': 75.677, 'title': 'Tensorflow fundamentals and model implementation', 'summary': 'Discusses the fundamentals of tensorflow, such as tensor representation and computation graph, with practical examples. it also covers implementing a tensorflow model using a sonar dataset with 208 rows and 61 columns to achieve high accuracy for metal cylinder and rock detection.', 'chapters': [{'end': 239.296, 'start': 75.677, 'title': 'Implementing tensorflow model', 'summary': 'Discusses the use of the sonar data set comprising 208 rows and 61 columns, with 111 patterns for metal cylinder and 97 patterns for a rock, to train a deep learning model using tensorflow, aiming for the highest possible accuracy.', 'duration': 163.619, 'highlights': ['The sonar data set contains 111 patterns bounced off a metal cylinder and 97 patterns bounced off a rock, with 208 rows and 61 columns. The data set contains 111 patterns bounced off a metal cylinder and 97 patterns bounced off a rock, with 208 rows and 61 columns.', 'The last column of every record in the data set represents a label indicating whether it is a rock or a mine, with the size of the data set being 208 rows and 61 columns. The last column of every record in the data set represents a label indicating whether it is a rock or a mine, with the size of the data set being 208 rows and 61 columns.', 'The ultimate goal is to achieve the highest possible accuracy, where the deep learning model is trained with increasing accuracy and decreasing error. 
The ultimate goal is to achieve the highest possible accuracy, where the deep learning model is trained with increasing accuracy and decreasing error.']}, {'end': 864.812, 'start': 239.636, 'title': 'Fundamentals of tensorflow', 'summary': 'Introduces the fundamentals of tensorflow, including the representation of data in the form of tensors, the concept of tensor rank and data types, and understanding the computation graph and session in tensorflow, with practical examples and code demonstrations.', 'duration': 625.176, 'highlights': ['Representation of Data in Tensors The chapter explains that data in TensorFlow is represented in the form of tensors, which are n-dimensional arrays or lists, and provides examples of tensors with different dimensions such as zero, two, and three.', 'Understanding Tensor Rank and Data Types The concept of tensor rank is elaborated, where rank two represents a matrix and rank one represents a vector, with examples provided for each. Additionally, various tensor data types such as integer, float, string, and Boolean are explained, emphasizing that data type specification is not mandatory in TensorFlow, but can help save memory.', 'Introduction to TensorFlow and Computation Graph The chapter introduces TensorFlow as an open-source software library released by Google in 2015 for designing, developing, and training deep learning models. 
It further explains the concept of computation graph in TensorFlow, where the model is first defined and described abstractly before being executed in a session, with emphasis on the flow of tensors through the computation graph.', 'Building and Executing Computation Graph in TensorFlow The process of building the computation graph in TensorFlow, where operations are defined without actual computations, and then executing the graph using a session is demonstrated with examples, showcasing the creation of constant nodes, launching the graph, and printing the results.', 'Visualizing TensorFlow with TensorBoard The use of TensorBoard, a suite of web applications for understanding TensorFlow graphs, is explained, along with the process of creating a FileWriter object for writing summaries and using command-line execution to run TensorBoard as a local web app for graph visualization.']}], 'duration': 789.135, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo75677.jpg', 'highlights': ['The ultimate goal is to achieve the highest possible accuracy, where the deep learning model is trained with increasing accuracy and decreasing error.', 'The sonar data set contains 111 patterns bounced off a metal cylinder and 97 patterns bounced off a rock, with 208 rows and 61 columns.', 'Representation of Data in Tensors The chapter explains that data in TensorFlow is represented in the form of tensors, which are n-dimensional arrays or lists, and provides examples of tensors with different dimensions such as zero, two, and three.', 'Introduction to TensorFlow and Computation Graph The chapter introduces TensorFlow as an open-source software library released by Google in 2015 for designing, developing, and training deep learning models.', 'Building and Executing Computation Graph in TensorFlow The process of building the computation graph in TensorFlow, where operations are defined without actual computations, and then executing 
the graph using a session is demonstrated with examples, showcasing the creation of constant nodes, launching the graph, and printing the results.']}, {'end': 1587.503, 'segs': [{'end': 894.117, 'src': 'embed', 'start': 866.472, 'weight': 0, 'content': [{'end': 870.774, 'text': 'So it says that TensorBoard runs as a local web app at port 6006.', 'start': 866.472, 'duration': 4.302}, {'end': 872.335, 'text': 'Let us see how our graph looks like now.', 'start': 870.774, 'duration': 1.561}, {'end': 883.607, 'text': 'So this is how our graph looks like it has two constant nodes here and one node for multiplication.', 'start': 878.803, 'duration': 4.804}, {'end': 887.851, 'text': 'So when I click on each of these notes, there will be some information displayed about that node.', 'start': 884.068, 'duration': 3.783}, {'end': 890.353, 'text': 'So if you notice here, it is a float type.', 'start': 888.171, 'duration': 2.182}, {'end': 894.117, 'text': 'It also contains the value that is stored inside this particular constant node, which is five.', 'start': 890.794, 'duration': 3.323}], 'summary': 'Tensorboard runs as a local web app at port 6006, displaying a graph with 2 constant nodes and 1 multiplication node.', 'duration': 27.645, 'max_score': 866.472, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo866472.jpg'}, {'end': 994.692, 'src': 'embed', 'start': 967.898, 'weight': 2, 'content': [{'end': 973.55, 'text': 'Now what if I want my graph to accept the external input? 
Now let me explain that with an example of our use case only.', 'start': 967.898, 'duration': 5.652}, {'end': 979.136, 'text': 'So here we want the features to be fed back to the graph that we cannot do with the help of constants.', 'start': 973.81, 'duration': 5.326}, {'end': 981.899, 'text': 'For that what we need, we need placeholders.', 'start': 979.737, 'duration': 2.162}, {'end': 985.763, 'text': 'Now a placeholder is nothing but a promise to provide a value later.', 'start': 982.56, 'duration': 3.203}, {'end': 987.885, 'text': 'Let us understand this with an example.', 'start': 986.244, 'duration': 1.641}, {'end': 992.09, 'text': 'So over here we have couple of placeholders A and B of float 32 bits.', 'start': 988.626, 'duration': 3.464}, {'end': 994.692, 'text': 'and notice that we have initialized no values.', 'start': 992.611, 'duration': 2.081}], 'summary': 'Using placeholders in a graph to accept external input for a use case, with examples of float 32-bit placeholders a and b.', 'duration': 26.794, 'max_score': 967.898, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo967898.jpg'}, {'end': 1110.36, 'src': 'heatmap', 'start': 1076.737, 'weight': 0.721, 'content': [{'end': 1082.221, 'text': 'So the first value one will be assigned to the placeholder A and value two will be assigned to the placeholder B.', 'start': 1076.737, 'duration': 5.484}, {'end': 1083.282, 'text': 'So their sum will be three.', 'start': 1082.221, 'duration': 1.061}, {'end': 1087.465, 'text': 'Similarly A will be assigned to the value three and B will be assigned with value four.', 'start': 1083.782, 'duration': 3.683}, {'end': 1089.366, 'text': 'Hence their sum will become seven.', 'start': 1087.985, 'duration': 1.381}, {'end': 1091.607, 'text': "Now let's get back to our slides once more.", 'start': 1089.946, 'duration': 1.661}, {'end': 1097.892, 'text': 'Now the question here is how to modify the graph if I want new output 
for the same input?', 'start': 1092.228, 'duration': 5.664}, {'end': 1101.258, 'text': 'I mean, if I want my model to become trainable,', 'start': 1098.757, 'duration': 2.501}, {'end': 1108.4, 'text': 'I need some parameters that can change after every iteration so that the model output can be as close as possible to the actual output.', 'start': 1101.258, 'duration': 7.142}, {'end': 1110.36, 'text': 'For that, we need variables.', 'start': 1109, 'duration': 1.36}], 'summary': 'Using placeholders a and b, their sum is calculated, yielding 3 and 7. variables are needed for model training.', 'duration': 33.623, 'max_score': 1076.737, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo1076737.jpg'}, {'end': 1248.721, 'src': 'embed', 'start': 1221.838, 'weight': 1, 'content': [{'end': 1226.859, 'text': 'Then again we check the loss and update the variables and this process keeps on repeating until the loss becomes minimum.', 'start': 1221.838, 'duration': 5.021}, {'end': 1230.26, 'text': 'And now is the time to understand how we calculate loss.', 'start': 1227.379, 'duration': 2.881}, {'end': 1238.579, 'text': 'So, in order to find out the loss, we should first define a placeholder and we have named it as y which will hold the desired output,', 'start': 1231.277, 'duration': 7.302}, {'end': 1240.739, 'text': 'or you can say the output that we already know.', 'start': 1238.579, 'duration': 2.16}, {'end': 1248.721, 'text': 'Then we will find the difference between the linear underscore model or the output of our model with that of the output that we already know.', 'start': 1241.639, 'duration': 7.082}], 'summary': 'Iteratively update variables to minimize loss in linear model.', 'duration': 26.883, 'max_score': 1221.838, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo1221838.jpg'}], 'start': 866.472, 'title': 'Tensorflow basics and computation graph 
visualization', 'summary': 'Covers the visual representation of a computation graph using tensorboard, demonstrating constant nodes and their values, and clarifying that the graph explains but does not execute operations. it also explains the concepts of constants, placeholders, and variables in tensorflow, with examples, and covers the calculation of loss and variable updating process, emphasizing the importance of placeholders and variable initialization.', 'chapters': [{'end': 949.145, 'start': 866.472, 'title': 'Tensorboard and computation graph', 'summary': 'Explains the visual representation of a computation graph through tensorboard, displaying constant nodes with their values and inputs, and clarifying that the graph explains operations specified in the code but does not execute them.', 'duration': 82.673, 'highlights': ['The graph displayed in TensorBoard consists of two constant nodes with values 5 and 6, and a multiplication node with inputs const and const underscore one.', 'TensorBoard serves as a local web app at port 6006, providing visual representation of the computation graph and details about each node, such as type, value, inputs, and outputs.', 'The computation graph in TensorBoard does not execute operations but rather explains the specified operations in the code, as clarified in response to a query from Michelle about the absence of output.']}, {'end': 1587.503, 'start': 950.186, 'title': 'Tensorflow basics: constants, placeholders, and variables', 'summary': 'Explains the concepts of constants, placeholders, and variables in tensorflow, demonstrating their usage with examples. 
it also covers the calculation of loss and the process of updating variables to reduce the loss, highlighting the importance of placeholders and the initialization of variables.', 'duration': 637.317, 'highlights': ['The chapter demonstrates the usage of constants, placeholders, and variables in TensorFlow, with examples showing how constants produce a constant result and how placeholders are used to accept external inputs. The chapter explains the usage of constant nodes to produce constant results and demonstrates how placeholders are used to accept external inputs, with examples illustrating their functionality.', 'It covers the concept of variables in TensorFlow, highlighting their role in holding and updating parameters when training a model, emphasizing the need for explicit initialization and their significance in making the model trainable. The chapter explains the concept of variables in TensorFlow, emphasizing their role in holding and updating parameters during model training, highlighting the need for explicit initialization and their significance in making the model trainable.', 'The chapter explains the process of calculating loss in TensorFlow, detailing the use of placeholders to hold desired outputs and the calculation of the difference between the model output and the actual output through the loss function. 
The chapter details the process of calculating loss in TensorFlow, highlighting the use of placeholders to hold desired outputs and the calculation of the difference between the model output and the actual output through the loss function.']}], 'duration': 721.031, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo866472.jpg', 'highlights': ['TensorBoard serves as a local web app at port 6006, providing visual representation of the computation graph and details about each node.', 'The chapter explains the process of calculating loss in TensorFlow, detailing the use of placeholders to hold desired outputs and the calculation of the difference between the model output and the actual output through the loss function.', 'The chapter demonstrates the usage of constants, placeholders, and variables in TensorFlow, with examples showing how constants produce a constant result and how placeholders are used to accept external inputs.']}, {'end': 2049.259, 'segs': [{'end': 1625.201, 'src': 'embed', 'start': 1601.072, 'weight': 5, 'content': [{'end': 1607.156, 'text': 'So when I was explaining variables where I told you that in order to make the model trainable we need to update the values of variables.', 'start': 1601.072, 'duration': 6.084}, {'end': 1608.877, 'text': 'This is what I was talking about.', 'start': 1607.737, 'duration': 1.14}, {'end': 1612.16, 'text': 'So by changing the value of variables we can actually reduce the loss.', 'start': 1609.398, 'duration': 2.762}, {'end': 1615.762, 'text': 'We can make the model output as close as possible to the actual output.', 'start': 1612.68, 'duration': 3.082}, {'end': 1625.201, 'text': 'Now if I make this as minus 1.0, and this as 1.0, we should get zero error.', 'start': 1616.243, 'duration': 8.958}], 'summary': 'Updating variables reduces loss, aims for zero error.', 'duration': 24.129, 'max_score': 1601.072, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo1601072.jpg'}, {'end': 1733.813, 'src': 'embed', 'start': 1710.199, 'weight': 1, 'content': [{'end': 1716.78, 'text': "And if the loss is decreasing, then it'll keep on updating the variable in that particular direction so that loss becomes less.", 'start': 1710.199, 'duration': 6.581}, {'end': 1718.841, 'text': 'So I hope you have understood it, Jason, now.', 'start': 1717.26, 'duration': 1.581}, {'end': 1723.242, 'text': "Yeah So here we'll use gradient descent optimizer.", 'start': 1720.141, 'duration': 3.101}, {'end': 1724.942, 'text': 'Let us understand this with an analogy.', 'start': 1723.262, 'duration': 1.68}, {'end': 1730.552, 'text': 'Suppose you are at the top of the mountain and your task is to reach the lake, which is present near the valley.', 'start': 1725.731, 'duration': 4.821}, {'end': 1733.813, 'text': 'And the catch here is that you are blindfolded.', 'start': 1731.412, 'duration': 2.401}], 'summary': 'Explanation of gradient descent using a mountain analogy.', 'duration': 23.614, 'max_score': 1710.199, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo1710199.jpg'}, {'end': 1782.976, 'src': 'embed', 'start': 1755.212, 'weight': 0, 'content': [{'end': 1757.474, 'text': 'Now you can apply the same concept on our model as well.', 'start': 1755.212, 'duration': 2.262}, {'end': 1765.278, 'text': 'Where the ground tends to descend, similarly the optimizer checks in which direction should the variables be updated so as to decrease the loss.', 'start': 1757.994, 'duration': 7.284}, {'end': 1768.72, 'text': "And we'll keep on updating the variables in that particular direction.", 'start': 1765.698, 'duration': 3.022}, {'end': 1772.822, 'text': "So let's understand the math behind gradient descent optimizer.", 'start': 1769.22, 'duration': 3.602}, {'end': 1775.09, 'text': 'First we calculate the loss.', 
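The loss the narration goes on to describe (sum the squared differences between the model output and the actual output, then divide by 2) can be sketched in plain Python. The linear model W*x + b and the sample values below are illustrative assumptions matching the video's running zero-error example (W = -1, b = 1), not values quoted in this summary.

```python
# Plain-Python sketch of the loss described in the narration.
# The model is linear: output = W * x + b.

def loss(W, b, xs, ys):
    """J = 1/2 * sum((W*x + b - y)^2) over all training pairs."""
    return 0.5 * sum((W * x + b - y) ** 2 for x, y in zip(xs, ys))

# Illustrative training pairs (assumed, not shown in this summary).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [0.0, -1.0, -2.0, -3.0]

print(loss(0.3, -0.3, xs, ys))  # a poor guess gives a large loss
print(loss(-1.0, 1.0, xs, ys))  # W = -1, b = 1 gives zero loss
```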
'start': 1773.649, 'duration': 1.441}, {'end': 1776.091, 'text': 'how we do that?', 'start': 1775.09, 'duration': 1.001}, {'end': 1782.976, 'text': 'by summing all the squared differences between the model output and the actual output and then dividing it by 2.', 'start': 1776.091, 'duration': 6.885}], 'summary': 'Model optimizer updates variables to decrease loss using gradient descent method.', 'duration': 27.764, 'max_score': 1755.212, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo1755212.jpg'}, {'end': 1881.457, 'src': 'embed', 'start': 1853.757, 'weight': 4, 'content': [{'end': 1857.463, 'text': 'What do you do? You find out the difference between the actual output and the model output.', 'start': 1853.757, 'duration': 3.706}, {'end': 1862.33, 'text': 'Then the difference is squared and you sum all of those squared differences and then divide it by two.', 'start': 1858.003, 'duration': 4.327}, {'end': 1863.612, 'text': 'This is how you calculate the error.', 'start': 1862.35, 'duration': 1.262}, {'end': 1868.531, 'text': 'Then what you need to do is you need to update the variables so as to reduce the loss or the error.', 'start': 1864.349, 'duration': 4.182}, {'end': 1870.332, 'text': 'So how do you do that?', 'start': 1869.011, 'duration': 1.321}, {'end': 1874.434, 'text': 'You first find the change in the variable, which is equal to minus of the learning rate,', 'start': 1870.332, 'duration': 4.102}, {'end': 1881.457, 'text': 'multiplied by the rate of change of loss with respect to that variable, del J by del W J.', 'start': 1874.434, 'duration': 7.023}], 'summary': 'Calculate error by summing squared differences and update variables to reduce loss.', 'duration': 27.7, 'max_score': 1853.757, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo1853757.jpg'}, {'end': 2038.675, 'src': 'heatmap', 'start': 1951.327, 'weight': 3, 'content': [{'end':
1963.497, 'text': "We'll write in here optimizer equal to tf.train.GradientDescentOptimizer with a learning rate of .", 'start': 1951.327, 'duration': 12.17}, {'end': 1968.801, 'text': '01 Learning rate is nothing but the steps, steps in which you change your variable.', 'start': 1963.497, 'duration': 5.304}, {'end': 1981.801, 'text': 'Then what we do, we will write in here train is equal to optimizer.minimize loss, all right? No doubts till here, guys.', 'start': 1969.662, 'duration': 12.139}, {'end': 1983.882, 'text': 'If you have any questions, write it down in the chat box.', 'start': 1981.841, 'duration': 2.041}, {'end': 1986.644, 'text': 'Fine, so there are no doubts.', 'start': 1985.523, 'duration': 1.121}, {'end': 1990.546, 'text': 'And after initializing the variable, we need to run the session.', 'start': 1987.504, 'duration': 3.042}, {'end': 1992.147, 'text': 'So for that, I will use a for loop.', 'start': 1990.606, 'duration': 1.541}, {'end': 2007.135, 'text': "So I'll type in here, for i in range, thousand, sess.run train, comma, feed in the values of x and y.", 'start': 1992.667, 'duration': 14.468}, {'end': 2011.386, 'text': "So I'll just copy it from here and I'll paste it there.", 'start': 2008.382, 'duration': 3.004}, {'end': 2017.394, 'text': 'Alright. Then go ahead and evaluate the variables w and b.', 'start': 2012.227, 'duration': 5.167}, {'end': 2026.87, 'text': 'So we have got the new value of W as minus .', 'start': 2022.608, 'duration': 4.262}, {'end': 2028.991, 'text': '9999969 and the new value of B as .', 'start': 2026.87, 'duration': 2.121}, {'end': 2038.675, 'text': '99999082 Now if you can recall, when we were calculating it manually, we got the output for W as minus one and for B we got it as plus one.', 'start': 2028.991, 'duration': 9.684}], 'summary': 'Using tf.train.GradientDescentOptimizer with a learning rate of .01 to minimize loss and update variables. 
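The tf.train.GradientDescentOptimizer loop quoted above (learning rate 0.01, 1000 steps of sess.run on the train op) can be mimicked without TensorFlow to see why W and b land near -1 and +1. The training pairs below are assumed, and the loss here is the plain sum of squared errors; the 1/2 factor mentioned in the narration would only rescale the gradient, not move the minimum.

```python
# Plain-Python mimic of the transcript's TF1 snippet:
#   optimizer = tf.train.GradientDescentOptimizer(0.01)
#   train = optimizer.minimize(loss)
#   for i in range(1000): sess.run(train, {x: ..., y: ...})

def step(W, b, xs, ys, lr):
    # Analytic gradients of J = sum((W*x + b - y)^2)
    dW = sum(2 * (W * x + b - y) * x for x, y in zip(xs, ys))
    db = sum(2 * (W * x + b - y) for x, y in zip(xs, ys))
    # delta(variable) = -learning_rate * dJ/d(variable)
    return W - lr * dW, b - lr * db

# Illustrative training pairs (assumed, not shown in this summary).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [0.0, -1.0, -2.0, -3.0]
W, b = 0.0, 0.0
for _ in range(1000):
    W, b = step(W, b, xs, ys, lr=0.01)

print(W, b)  # converges near W = -1, b = 1
```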
running the session for 1000 steps yields new values for w and b.', 'duration': 35.317, 'max_score': 1951.327, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo1951327.jpg'}], 'start': 1587.503, 'title': 'Optimizing model training with gradient descent', 'summary': 'Explains how updating variables using gradient descent minimizes loss by modifying each variable according to the derivative of the loss, aiming to make the model output close to the actual output, and it provides insights into the gradient descent algorithm with a learning rate of 0.01 for minimizing error in model training.', 'chapters': [{'end': 1776.091, 'start': 1587.503, 'title': 'Optimizing model training with gradient descent', 'summary': 'Explains how updating variables using optimizers, such as gradient descent, can minimize loss by modifying each variable according to the magnitude of the derivative of the loss with respect to that variable, aiming to make the model output as close as possible to the actual output.', 'duration': 188.588, 'highlights': ['Optimizers like gradient descent modify each variable according to the derivative of the loss, aiming to minimize loss and make the model output closer to the actual output.', 'The analogy of descending a mountain while blindfolded is used to explain the concept of gradient descent, where the optimizer updates the variables in the direction that decreases the loss.', 'The process of gradient descent involves updating the variables in the direction where the loss tends to decrease, similar to moving towards a valley while blindfolded, aiming to reach the global loss minimum.']}, {'end': 2049.259, 'start': 1776.091, 'title': 'Gradient descent optimization', 'summary': "explains the gradient descent algorithm for updating variables in a model to minimize error, using a learning rate of 0.01, by summing squared differences between model output and actual output, then applying the algorithm to modify the model's variables until the loss becomes minimum.", 'duration': 273.168, 'highlights': ['The chapter explains the gradient descent algorithm for updating variables in a model to minimize error, using a learning rate of 0.01, which determines the step size for changing each variable.', "Summing squared differences between model output and actual output to calculate the error, then applying the algorithm to modify the model's variables until the loss becomes minimum.", 'Explaining the process of updating the variables in a model by calculating the change in each variable, which involves the learning rate and the rate of change of loss with respect to that variable, and then the new updated value.']}], 'duration': 461.756, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo1587503.jpg', 'highlights': ['Optimizers like gradient descent modify each variable according to the derivative of the loss, aiming to minimize loss and make the model output closer to the actual output.', 'The analogy of descending a mountain while blindfolded is used to explain the concept of gradient descent, where the optimizer updates the variables in the direction that decreases the loss.', 'The process of gradient descent involves updating the variables in the direction where the loss tends to decrease, similar to moving towards a valley while blindfolded, aiming to reach the global loss minimum.', 'The chapter explains the gradient descent algorithm for updating variables in a model to minimize error, using a learning rate of 0.01.', "Summing squared differences between model output and actual output to calculate the error, then applying the algorithm to modify the model's variables until the loss becomes minimum.", 'Explaining the process of updating the variables in a model by calculating the change in the variable and the new updated variable.']}, {'end': 2552.3, 'segs': [{'end': 2123.05, 'src': 'embed', 'start': 2093.087, 'weight': 3, 'content': [{'end': 2094.928, 'text': 'Then we need to encode the dependent variable.', 'start': 2093.087, 'duration': 1.841}, {'end': 2097.789, 'text': 'And here the dependent variable is nothing but your label.', 'start': 2095.388, 'duration': 2.401}, {'end': 2100.571, 'text': 'So we are going to encode those dependent variables.', 'start': 2098.35, 'duration': 2.221}, {'end': 2106.3, 'text': 'After that, we are going to divide the data set into two parts, one for
training, another for testing.', 'start': 2101.357, 'duration': 4.943}, {'end': 2109.522, 'text': 'And by the end of this step, our data set is now ready.', 'start': 2106.76, 'duration': 2.762}, {'end': 2114.945, 'text': 'Next, we are going to use TensorFlow data structures for holding features, labels, etc.', 'start': 2110.663, 'duration': 4.282}, {'end': 2117.927, 'text': "So here we'll be defining weights, biases.", 'start': 2115.546, 'duration': 2.381}, {'end': 2123.05, 'text': "We'll have a couple of placeholders for inputs as well as for the desired output that we already know.", 'start': 2118.608, 'duration': 4.442}], 'summary': 'The data is encoded, divided for training and testing, and prepared for tensorflow usage.', 'duration': 29.963, 'max_score': 2093.087, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo2093087.jpg'}, {'end': 2195.131, 'src': 'embed', 'start': 2164.452, 'weight': 1, 'content': [{'end': 2167.222, 'text': "Now I'll open my PyCharm and execute this practically guys.", 'start': 2164.452, 'duration': 2.77}, {'end': 2171.474, 'text': "So guys, we'll first start with importing the necessary libraries.", 'start': 2168.612, 'duration': 2.862}, {'end': 2178.619, 'text': "First, we'll import matplotlib, which is used for visualization, then TensorFlow, the NumPy, Pandas, and sklearn as well.", 'start': 2171.994, 'duration': 6.625}, {'end': 2182.762, 'text': 'So these are the libraries that we are going to use, so we need to import these libraries.', 'start': 2178.999, 'duration': 3.763}, {'end': 2184.883, 'text': 'Then we are going to read the dataset.', 'start': 2183.442, 'duration': 1.441}, {'end': 2186.364, 'text': 'For that, we are going to use Pandas.', 'start': 2184.984, 'duration': 1.38}, {'end': 2191.108, 'text': 'So we are going to convert this dataset into a data frame, a Pandas data frame,', 'start': 2186.925, 'duration': 4.183}, {'end': 2195.131, 'text': 'and we are going to 
give the path to where our file is stored or our dataset is stored.', 'start': 2191.108, 'duration': 4.023}], 'summary': 'Importing libraries like matplotlib, tensorflow, numpy, pandas, and sklearn for data analysis and visualization using pycharm.', 'duration': 30.679, 'max_score': 2164.452, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo2164452.jpg'}, {'end': 2363.128, 'src': 'embed', 'start': 2336.499, 'weight': 2, 'content': [{'end': 2340.08, 'text': 'Then we are going to define the important parameters and variables to work with tensors.', 'start': 2336.499, 'duration': 3.581}, {'end': 2342.041, 'text': "So first we're gonna define learning rate.", 'start': 2340.6, 'duration': 1.441}, {'end': 2343.781, 'text': 'So we have seen what exactly learning rate is.', 'start': 2342.061, 'duration': 1.72}, {'end': 2345.342, 'text': "Then we're gonna define the epoch.", 'start': 2344.202, 'duration': 1.14}, {'end': 2349.924, 'text': 'Epoch basically means the total number of iterations that will be done in order to minimize the error.', 'start': 2345.402, 'duration': 4.522}, {'end': 2352.925, 'text': "Then we're gonna define a loss function, cost history.", 'start': 2350.544, 'duration': 2.381}, {'end': 2355.065, 'text': 'Then we have N underscore dim.', 'start': 2353.525, 'duration': 1.54}, {'end': 2359.987, 'text': 'So what is N dim? 
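The "encode the dependent variable" step described above (label encoding followed by one-hot encoding) can be sketched in plain Python. The 'R'/'M' labels are assumed from the rock-vs-mine use case; the video itself uses library helpers rather than this hand-rolled version.

```python
# Minimal sketch of label encoding + one-hot encoding, in plain Python.
# Labels 'R' (rock) and 'M' (mine) are assumed from the NMI use case.

def one_hot_encode(labels):
    classes = sorted(set(labels))                  # e.g. ['M', 'R']
    index = {c: i for i, c in enumerate(classes)}  # label encoding
    # One row per sample, with a 1 in the column of its class.
    return [[1 if index[lab] == j else 0 for j in range(len(classes))]
            for lab in labels]

labels = ['R', 'R', 'M', 'R', 'M']
print(one_hot_encode(labels))
# [[0, 1], [0, 1], [1, 0], [0, 1], [1, 0]]
```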
So basically N dim is nothing but the shape of your features which is stored in X.', 'start': 2355.385, 'duration': 4.602}, {'end': 2363.128, 'text': "And that too, it'll only include the columns, the number of columns.", 'start': 2359.987, 'duration': 3.141}], 'summary': 'Defining parameters for tensor work, such as learning rate, epoch, and n dim.', 'duration': 26.629, 'max_score': 2336.499, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo2336499.jpg'}, {'end': 2418.207, 'src': 'embed', 'start': 2386.814, 'weight': 0, 'content': [{'end': 2389.775, 'text': 'Then we need to define the number of hidden layers and number of neurons for each layer.', 'start': 2386.814, 'duration': 2.961}, {'end': 2396.32, 'text': "So basically I've talked about hidden layers, neurons, multi-layer perceptron, perceptron, everything in detail in the previous tutorial.", 'start': 2390.538, 'duration': 5.782}, {'end': 2398.24, 'text': 'So you can go through that if you have any doubts.', 'start': 2396.8, 'duration': 1.44}, {'end': 2402.602, 'text': 'Then we have the number of neurons for hidden layer one, two, three, and four.', 'start': 2399.061, 'duration': 3.541}, {'end': 2404.362, 'text': "So we're gonna take four hidden layers.", 'start': 2403.142, 'duration': 1.22}, {'end': 2407.183, 'text': 'So this is nothing but an example of multi-layer perceptron.', 'start': 2404.522, 'duration': 2.661}, {'end': 2412.485, 'text': 'So X is our placeholder in which we are going to feed in the input values, or you can say the data set.', 'start': 2407.864, 'duration': 4.621}, {'end': 2418.207, 'text': 'It is of float 32, and the shape of this particular tensor is None comma n_dim.', 'start': 2412.485, 'duration': 5.722}], 'summary': 'Discussed defining 4 hidden layers and neurons, using x as a placeholder for input values in multi-layer perceptron.', 'duration': 31.393, 'max_score': 2386.814, 'thumbnail':
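A single forward pass through the multi-layer perceptron being described (input features flowing through hidden layers, each with its own weights, biases, and activation) can be sketched in plain Python. The tiny layer sizes, weight values, and sigmoid activation below are illustrative assumptions, not the video's actual four-hidden-layer configuration.

```python
import math

# Sketch of one MLP forward pass: input -> hidden layers -> output logits.
# Sizes, weights, and the sigmoid activation are illustrative assumptions.

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def dense(inputs, weights, biases, activation=None):
    # weights[j][i] is the weight from input i to neuron j
    out = [sum(w * x for w, x in zip(row, inputs)) + b
           for row, b in zip(weights, biases)]
    return [activation(v) for v in out] if activation else out

# Tiny example: 3 input features, two 2-neuron hidden layers, 2 outputs.
x = [0.5, -1.0, 2.0]
h1 = dense(x, [[0.1, 0.2, 0.3], [0.0, -0.1, 0.2]], [0.1, 0.0], sigmoid)
h2 = dense(h1, [[0.4, -0.4], [0.3, 0.3]], [0.0, 0.1], sigmoid)
y = dense(h2, [[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0])  # logits, no activation
print(y)
```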
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo2386814.jpg'}], 'start': 2049.739, 'title': 'Nmi use case implementation and tensorflow data processing', 'summary': 'Covers the implementation steps for the Naval Mine Identifier (NMI) with tensorflow, aiming for minimum error and model accuracy. It also delves into data processing, model training, and defining parameters for a multi-layer perceptron model in tensorflow.', 'chapters': [{'end': 2163.809, 'start': 2049.739, 'title': 'Implementing nmi use case with tensorflow', 'summary': 'Covers the implementation steps for the Naval Mine Identifier (NMI) including data processing, model building, training, error calculation, and accuracy testing, with the aim of achieving a minimum error and assessing model accuracy.', 'duration': 114.07, 'highlights': ['The use case involves processing a data set, defining features and labels, encoding dependent variables, and dividing the data set into training and testing parts.', 'The implementation utilizes TensorFlow data structures for defining weights, biases, placeholders for inputs and desired outputs, and the model output.', 'Training the model involves using the training data set, calculating the error, and continuously reducing the error to achieve minimum error, followed by testing the model on the test data to determine accuracy.']}, {'end': 2552.3, 'start': 2164.452, 'title': 'Tensorflow data processing and model training', 'summary': 'Covers data processing and model training in tensorflow, including importing libraries, dataset reading, label encoding, one-hot encoding, shuffling, data splitting, defining parameters and variables, creating a multi-layer perceptron model, and defining weights and biases for each layer.', 'duration': 387.848, 'highlights': ['Importing necessary libraries: The process starts with importing the necessary libraries including matplotlib, TensorFlow, NumPy, Pandas, and sklearn for data visualization and processing.', 'Data preprocessing (label encoding and one-hot encoding): The transcript explains the process of label encoding and one-hot encoding for the dataset, providing a clear example with quantifiable data.', 'Shuffling and splitting the dataset: The importance of shuffling the dataset and splitting it into training and testing sets is emphasized, with a specific test size of 20% mentioned.', 'Defining parameters and variables for model training: The chapter delves into defining important parameters such as learning rate, epoch, loss function, cost history, and the number of classes, providing insights into the model setup.', 'Creating a multi-layer perceptron model: The detailed process of creating a multi-layer perceptron model is explained, including the use of placeholders, variables, and activation functions for different hidden layers and the output layer.']}], 'duration': 502.561, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo2049739.jpg', 'highlights': ['The detailed process of creating a multi-layer perceptron model is explained, including the use of placeholders, variables, and activation functions for different hidden layers and the output
layer.', 'The process starts with importing the necessary libraries including matplotlib, TensorFlow, NumPy, Pandas, and sklearn for data visualization and processing.', 'The chapter delves into defining important parameters such as learning rate, epoch, loss function, cost history, and the number of classes, providing insights into the model setup.', 'The use case involves processing a data set, defining features and labels, encoding dependent variables, and dividing the data set into training and testing parts.']}, {'end': 2990.822, 'segs': [{'end': 2609.624, 'src': 'embed', 'start': 2569.97, 'weight': 2, 'content': [{'end': 2573.313, 'text': 'So over here, if you notice, let me just show it to you, yeah.', 'start': 2569.97, 'duration': 3.343}, {'end': 2576.395, 'text': 'So this is how we have defined a cost function.', 'start': 2574.494, 'duration': 1.901}, {'end': 2584.361, 'text': "So we're gonna use a softmax cross entropy, and logits y is nothing but your output and labels y.", 'start': 2577.236, 'duration': 7.125}, {'end': 2586.783, 'text': 'dash is nothing but your actual output.', 'start': 2584.361, 'duration': 2.422}, {'end': 2589.445, 'text': 'Or you can say the output that we already know, and this is the model output.', 'start': 2586.783, 'duration': 2.662}, {'end': 2592.122, 'text': "Then we're gonna perform optimization.", 'start': 2590.54, 'duration': 1.582}, {'end': 2598.009, 'text': "We're gonna use gradient descent optimizer, and the learning rate will be 0.03, which we have defined above,", 'start': 2592.142, 'duration': 5.867}, {'end': 2601.173, 'text': 'and then it will be minimizing the cost function or the loss.', 'start': 2598.009, 'duration': 3.164}, {'end': 2603.196, 'text': "So this is how we'll optimize it.", 'start': 2601.914, 'duration': 1.282}, {'end': 2609.624, 'text': 'Then what we need to do, we need to create a session object that will launch the graph and this will initialize all the variables.', 'start':
2603.817, 'duration': 5.807}], 'summary': 'Using softmax, cross entropy, and gradient descent (learning rate: 0.03) to minimize cost function in optimization process.', 'duration': 39.654, 'max_score': 2569.97, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo2569970.jpg'}, {'end': 2831.114, 'src': 'embed', 'start': 2803.188, 'weight': 0, 'content': [{'end': 2806.309, 'text': 'So guys, this is the accuracy that we have got on our test data.', 'start': 2803.188, 'duration': 3.121}, {'end': 2812.67, 'text': 'So this y-axis represents the accuracy and the x-axis represents the number of epochs.', 'start': 2806.989, 'duration': 5.681}, {'end': 2816.491, 'text': "So we have 1, 000 epochs here, that means it's gonna train it 1, 000 times.", 'start': 2813.15, 'duration': 3.341}, {'end': 2818.251, 'text': 'So let me just close it now.', 'start': 2816.911, 'duration': 1.34}, {'end': 2826.833, 'text': 'And yeah, so we have got the test accuracy as 85% and the average mean squared error is 24.4505.', 'start': 2818.891, 'duration': 7.942}, {'end': 2831.114, 'text': 'And our model is saved in this particular directory, which is my current working directory, guys.', 'start': 2826.833, 'duration': 4.281}], 'summary': 'Test accuracy: 85%, mean squared error: 24.4505, 1,000 epochs trained.', 'duration': 27.926, 'max_score': 2803.188, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo2803188.jpg'}], 'start': 2552.4, 'title': 'Tensorflow model optimization and training', 'summary': 'Covers initializing variables, defining cost functions, implementing gradient descent optimizer, and launching sessions. 
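The cost function described above, softmax cross-entropy between the model output y (logits) and the known one-hot labels y dash, together with the argmax-based accuracy check used on the test data, can be sketched in plain Python. The example logits and labels below are illustrative values, and the video minimizes this cost with a gradient descent optimizer (learning rate 0.03) rather than computing it by hand.

```python
import math

# Plain-Python sketch of softmax cross-entropy and argmax accuracy.

def softmax(logits):
    m = max(logits)                       # subtract max for stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, one_hot):
    probs = softmax(logits)
    return -sum(t * math.log(p) for t, p in zip(one_hot, probs))

def accuracy(all_logits, all_one_hot):
    # Fraction of samples whose argmax prediction matches the label.
    hits = sum(max(range(len(l)), key=l.__getitem__) ==
               max(range(len(t)), key=t.__getitem__)
               for l, t in zip(all_logits, all_one_hot))
    return hits / len(all_logits)

logits = [[2.0, 0.5], [0.1, 1.5]]   # model outputs for two samples (assumed)
labels = [[1, 0], [1, 0]]           # actual one-hot labels (assumed)
print(cross_entropy(logits[0], labels[0]))  # small: confident and correct
print(accuracy(logits, labels))             # 0.5: one of two correct
```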
it also explains the training process using tensorflow, achieving 85% test accuracy and an average mean squared error of 24.4505 after 1,000 epochs.', 'chapters': [{'end': 2609.624, 'start': 2552.4, 'title': 'Tensorflow model optimization', 'summary': 'Guides through initializing variables, defining the cost function using softmax cross entropy, implementing gradient descent optimizer with a learning rate of 0.03, and creating a session object to launch the graph and initialize variables.', 'duration': 57.224, 'highlights': ['The chapter guides through initializing variables, defining the cost function using softmax cross entropy, implementing gradient descent optimizer with a learning rate of 0.03, and creating a session object to launch the graph and initialize variables.', 'The cost function is defined as softmax cross entropy, with y representing the model output (logits) and y_ (y dash) representing the actual labels.', 'The learning rate for the gradient descent optimizer is set to 0.03, and the optimizer minimizes the defined cost function or loss.']}, {'end': 2990.822, 'start': 2610.245, 'title': 'Training model with tensorflow', 'summary': 'Explains the process of training a model using tensorflow, with key points being the use of for loops to calculate cost and accuracy for each epoch, the calculation of mean squared error (mse) and accuracy for the training data, and the achievement of an 85% test accuracy and an average mean squared error of 24.4505 after 1,000 epochs.', 'duration': 380.577, 'highlights': ['Achievement of 85% test accuracy after 1,000 epochs: The model achieved an 85% accuracy on the test data after 1,000 epochs.', 'Average mean squared error of 24.4505: The model achieved an average mean squared error of 24.4505 after 1,000 epochs.', 'Explanation of for loop usage for cost and accuracy calculation: The chapter explains the use of for loops to calculate the cost and accuracy for each epoch during model training.', 'Detailing the process of model restoration: The process of restoring the trained model, providing input, and predicting outcomes is explained.', 'Introduction and explanation of TensorFlow basics and model implementation: The chapter introduces TensorFlow basics, including tensors, and explains the implementation of the model using TensorFlow.']}], 'duration': 438.422, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yX8KuPZCAMo/pics/yX8KuPZCAMo2552400.jpg', 'highlights': ['Achievement of 85% test accuracy after 1,000 epochs', 'Average mean squared error of 24.4505 after 1,000 epochs', 'The chapter guides through initializing variables, defining the cost function using softmax cross entropy, implementing gradient descent optimizer with a learning rate of 0.03, and creating a session object to launch the graph and initialize variables', 'The cost function is defined as softmax cross entropy, with y representing the model output (logits) and y_ (y dash) representing the actual labels', 'The learning rate for the gradient descent optimizer is set to 0.03, and the optimizer minimizes the defined cost function or loss']}], 'highlights': ['Achievement of 85% test accuracy after 1,000 epochs', 'Average mean squared error of 24.4505 after 1,000 epochs', 'The tutorial focuses on implementing a Naval Mine Identifier (NMI) using TensorFlow to differentiate between rocks and mines based on sonar signals.', 'The potential of the NMI model to save lives by preventing damage to ships and submarines is highlighted.', 'Naval mines can cause significant damage, with nearly 700,000 laid in World War II, resulting in more ships being sunk or damaged than any other weapon.', 'The ultimate goal is to achieve the highest possible accuracy, where the deep learning model is trained with increasing accuracy and decreasing error.', 'The sonar data set contains 111 patterns bounced off a metal cylinder and 97 patterns bounced off a rock, with 208 rows and 61 columns.', 'Representation of Data in Tensors: The chapter explains that data in
TensorFlow is represented in the form of tensors, which are n-dimensional arrays or lists, and provides examples of tensors with different dimensions such as zero, two, and three.', 'Introduction to TensorFlow and Computation Graph: The chapter introduces TensorFlow as an open-source software library released by Google in 2015 for designing, developing, and training deep learning models.', 'Building and Executing Computation Graph in TensorFlow: The process of building the computation graph in TensorFlow, where operations are defined without actual computations, and then executing the graph using a session is demonstrated with examples, showcasing the creation of constant nodes, launching the graph, and printing the results.', 'TensorBoard serves as a local web app at port 6006, providing visual representation of the computation graph and details about each node.', 'The chapter explains the process of calculating loss in TensorFlow, detailing the use of placeholders to hold desired outputs and the calculation of the difference between the model output and the actual output through the loss function.', 'The chapter demonstrates the usage of constants, placeholders, and variables in TensorFlow, with examples showing how constants produce a constant result and how placeholders are used to accept external inputs.', 'Optimizers like gradient descent modify each variable according to the derivative of the loss, aiming to minimize loss and make the model output closer to the actual output.', 'The analogy of descending a mountain while blindfolded is used to explain the concept of gradient descent, where the optimizer updates the variables in the direction that decreases the loss.', 'The process of gradient descent involves updating the variables in the direction where the loss tends to decrease, similar to moving towards a valley while blindfolded, aiming to reach the global loss minimum.', 'The chapter explains the gradient descent algorithm for updating variables in a
model to minimize error, using a learning rate of 0.01.', "Summing squared differences between model output and actual output to calculate the error, then applying the algorithm to modify the model's variables until the loss becomes minimum.", 'Explaining the process of updating the variables in a model by calculating the change in the variable and the new updated variable.', 'The detailed process of creating a multi-layer perceptron model is explained, including the use of placeholders, variables, and activation functions for different hidden layers and the output layer.', 'The process starts with importing the necessary libraries including matplotlib, TensorFlow, NumPy, Pandas, and sklearn for data visualization and processing.', 'The chapter delves into defining important parameters such as learning rate, epoch, loss function, cost history, and the number of classes, providing insights into the model setup.', 'The use case involves processing a data set, defining features and labels, encoding dependent variables, and dividing the data set into training and testing parts.']}