Deep-Learning Networks Rival Human Vision   
For most of the past 30 years, computer vision technologies have struggled to help humans with visual tasks, even those as mundane as accurately recognizing faces in photographs. Recently, though, breakthroughs in deep learning, an emerging field of artificial intelligence, have finally enabled computers to interpret many kinds of images as successfully as, or better than, people do. Companies are already selling products that exploit the technology, which is likely to take over or assist in a wide range of tasks that people now perform, from driving trucks to reading scans for diagnosing medical disorders. Recent progress in a deep-learning approach known as the convolutional neural network (CNN) is key to the latest strides. To give a simple example of its prowess, consider images of animals. Whereas humans can easily distinguish between a cat and a dog, CNNs allow machines to categorize specific breeds more successfully than people can. They excel because they are better ...

Source: Odd Onion

http://www.oddonion.com/2017/06/28/deep-learning-networks-rival-human-vision-2/


          TuxMachines: OSS Leftovers   
  • AMD Plays Catch-Up in Deep Learning with New GPUs and Open Source Strategy

    AMD is looking to penetrate the deep learning market with a new line of Radeon GPU cards optimized for processing neural networks, along with a suite of open source software meant to offer an alternative to NVIDIA’s more proprietary CUDA ecosystem.

  • Baidu Research Announces Next Generation Open Source Deep Learning Benchmark Tool

    In September of 2016, Baidu released the initial version of DeepBench, which became the first tool to be opened up to the wider deep learning community to evaluate how different processors perform when they are used to train deep neural networks. Since its initial release, several companies have used and contributed to the DeepBench platform, including Intel, Nvidia, and AMD.

  • GitHub Declares Every Friday Open Source Day And Wants You to Take Part

    GitHub is home to many open-source development projects, a lot of which are featured on XDA. The service wants more people to contribute to open-source projects with a new initiative called Open Source Friday. In a nutshell, GitHub will be encouraging companies to allow their employees to work on open-source projects at the end of each working week.

    Even if all of the products you use on a daily basis are built on closed-source software, much of the technology world runs on open source. Many servers run GNU/Linux-based operating systems such as Red Hat Enterprise Linux. Much of the world’s infrastructure depends on open source software.

  • Open Source Friday

    GitHub is inviting everyone - individuals, teams, departments and companies - to join in Open Source Friday, a structured program for contributing to open source that started inside GitHub and has since expanded.

  • Open Tools Help Streamline Kubernetes and Application Development

    Organizations everywhere are implementing container technology, and many of them are also turning to Kubernetes as a solution for orchestrating containers. Kubernetes is attractive for its extensible architecture and healthy open source community, but some still feel that it is too difficult to use. Now, new tools are emerging that help streamline Kubernetes and make building container-based applications easier. Here, we will consider several open source options worth noting.

  • Survey finds growing interest in Open Source

    Look for increased interest - and growth - in Open Source software and programming options. That's the word from NodeSource, whose recent survey found that most (91%) of enterprise software developers believe new businesses will come from open source projects.

  • Sony Open-Sources Its Deep Learning AI Libraries For Devs

    Sony on Tuesday open-sourced its Neural Network Libraries, a framework meant for developing artificial intelligence (AI) solutions with deep learning capabilities, the Japanese tech giant said in a statement. The company is hoping that its latest move will help grow a development community centered around its software tools and consequently improve the “core libraries” of the framework, thus helping advance this emerging technology. The decision to make its proprietary deep learning libraries available to everyone free of charge mimics those recently made by a number of other tech giants including Google, Amazon, and Facebook, all of whom are currently in the process of trying to incentivize AI developers to use their tools and grow their software ecosystems.

  • RESULTS FROM THE SURVEY ABOUT LIBREOFFICE FEATURES

    Unused features blur the focus of LibreOffice, and maintaining legacy capabilities is difficult and error-prone. The engineering steering committee (ESC) collected some ideas of what features could be flagged as deprecated in the next release – 5.4 – with the plan to remove them later. However, without any good information on what is being used in the wild, the decision is very hard. So we ran a survey last week to get insight into which features are being used.

  • COMPETITION FOR A LIBREOFFICE MASCOT
  • Rehost and Carry On, Redux

    After leaving Sun I was pleased that a group of former employees and partners chose to start a new company. Their idea was to pick up the Sun identity management software Oracle was abandoning and continue to sustain and evolve it. Open source made this possible.

    We had made Sun’s identity management portfolio open source as part of our strategy to open new markets. Sun’s products were technically excellent and applicable to very large-scale problems, but were not differentiated in the market until we added the extra attraction of software freedom. The early signs were very good, with corporations globally seeking the freedoms other IDM vendors denied them. By the time Oracle acquired Sun, there were many new customers approaching full production with our products.

    History showed that Oracle could be expected to silently abandon Sun’s IDM portfolio in favour of its existing products and strong-arm customers to migrate. Forgerock’s founders took the gamble that this would happen and disentangled themselves from any non-competes in good time for the acquisition to close. Sun’s practice was open development as well as open source licensing, so Forgerock maintained a mirror of the source trees ready for the inevitable day when they would disappear.

    Sure enough, Oracle silently stepped back from the products, reassigned or laid off key staff, and talked to customers about how the cost of support was rising while offering discounts on Oracle’s products as mitigation. With most of them in the final deployment stages of strategic investments, you can imagine how popular this news was. Oracle became Forgerock’s dream salesman.

  • Boundless Reinforces its Commitment to Open Source with Diamond OSGeo Sponsorship
  • A C++ developer looks at Go (the programming language), Part 2: Modularity and Object Orientation

read more


          The Promise of Artificial Intelligence #AI   

Building on the popular primer on artificial intelligence [a16z.com/2016/06/10/ai-deep-learning-machines/] — and a companion micro-site [aiplaybook.a16z.com/] to help newcomers get started with AI — this presentation shares more about the promise of artificial intelligence, beyond the hype. It’s a ~45-minute narrated walk-through of what companies are doing with AI today and what’s bubbling up from the ... [Read more...]

The post The Promise of Artificial Intelligence #AI appeared first on The mind of Mbugua Njihia.


           Fewer Accidents: Volkswagen Wants Its Cars to Communicate with Each Other from 2019    

From 2019, Volkswagen wants its vehicles to communicate with each other and with the traffic infrastructure. The goal is to prevent accidents or mitigate their consequences.

Volkswagen: first model line with pWLAN in 2019

In just two years, the German carmaker Volkswagen plans to equip a first model line with pWLAN (IEEE 802.11p) as standard. This is intended to enable communication between cars (car-to-car) and between vehicles and the traffic infrastructure (car-to-X). The goal, according to Volkswagen, is to exchange information about suddenly emerging traffic hazards and thereby prevent accidents or mitigate their consequences.

This is how Volkswagen envisions the benefits of car-to-car communication. (Image: Volkswagen)

The pWLAN technology, developed and validated specifically for automotive use, is designed to exchange traffic-relevant information, warnings, and sensor data within a few milliseconds over a range of around 500 meters. According to Volkswagen, this extends a vehicle's detection range by several hundred meters, effectively letting it look around the corner. Because a dedicated frequency band is reserved for traffic safety, drivers incur no additional communication costs. The data is not collected centrally, and no existing cellular network is required.

Connected Volkswagens: increasing road safety

"We want to increase road safety with the help of vehicle networking, and the most efficient way to do that is to spread a common technology quickly," says Johannes Neft, head of body development for the Volkswagen brand. Volkswagen hopes the system will be adopted by as many manufacturers and partners as possible, enabling the cross-manufacturer exchange of important information. Authorities, transport ministries, and police and rescue services are also involved in preparing for the launch of this service.

Sedric: VW's first concept for a self-driving car

Volkswagen Sedric. (Image: Volkswagen Group)


Just this Tuesday, Volkswagen announced a partnership with Nvidia through which the carmaker intends to expand its activities in artificial intelligence and deep learning. The technologies are to be used not only in mobility but also in corporate IT, helping Volkswagen with its digital transformation. Nvidia also works with Volvo on the development of self-driving cars. The exchange between vehicles and infrastructure such as traffic lights is one of the prerequisites for automated, connected driving.


          Senior Cloud Security Architect - NVIDIA - Santa Clara, CA   
Do you visualize your future at NVIDIA? Machine Learning, Deep-Learning, Artificial Intelligence – particularly in Regression or Forecasting,....
From NVIDIA - Sat, 20 May 2017 16:05:22 GMT - View all Santa Clara, CA jobs
          #3: Deep Learning   
Deep Learning
Ian Goodfellow, Yoshua Bengio, Aaron Courville
(5)

Buy new: CDN$ 106.59 CDN$ 102.64
27 used & new from CDN$ 68.26

(Visit the Bestsellers in Programming list for authoritative information on this product's current rank.)
          The Advanced Guide to Deep Learning and Artificial Intelligence Bundle for $42   
This High-Intensity 14.5 Hour Bundle Will Help You Help Computers Address Some of Humanity's Biggest Problems
Expires November 28, 2021 23:59 PST
Buy now and get 91% off

Deep Learning: Convolutional Neural Networks in Python


KEY FEATURES

In this course, intended to expand upon your knowledge of neural networks and deep learning, you'll harness these concepts for computer vision using convolutional neural networks. Going in-depth on the concept of convolution, you'll discover its wide range of applications, from generating image effects to modeling artificial organs.

  • Access 25 lectures & 3 hours of content 24/7
  • Explore the Street View House Numbers (SVHN) dataset using convolutional neural networks (CNNs)
  • Build convolutional filters that can be applied to audio or imaging
  • Extend deep neural networks w/ just a few functions
  • Test CNNs written in both Theano & TensorFlow
Note: we strongly recommend taking The Deep Learning & Artificial Intelligence Introductory Bundle before this course.
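
As a rough sketch of the convolution operation these lectures build on, here is a naive "valid" 2-D convolution in plain NumPy. This is an illustration only, not the course's Theano/TensorFlow code; the image and filter are made up:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution: slide the flipped kernel over the image."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

# A horizontal difference filter responds where the image has a vertical edge
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[1.0, -1.0]])
edges = conv2d_valid(image, kernel)
```

The same sliding-window idea, applied with many learned filters and stacked in layers, is what gives a CNN its image-effect and feature-detection abilities.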

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels, but you must have some knowledge of calculus, linear algebra, probability, Python, Numpy, and be able to write a feedforward neural network in Theano and TensorFlow.
  • All code for this course is available for download here, in the directory cnn_class

Compatibility

  • Internet required

THE EXPERT

The Lazy Programmer is a data scientist, big data engineer, and full stack software engineer. For his master's thesis he worked on brain-computer interfaces using machine learning. These assist non-verbal and non-mobile persons to communicate with their family and caregivers.

He has worked in online advertising and digital media as both a data scientist and big data engineer, and built various high-throughput web services around said data. He has created new big data pipelines using Hadoop/Pig/MapReduce, and created machine learning models to predict click-through rate, news feed recommender systems using linear regression, Bayesian Bandits, and collaborative filtering and validated the results using A/B testing.

He has taught undergraduate and graduate students in data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics for students attending universities such as Columbia University, NYU, Humber College, and The New School.

Multiple businesses have benefitted from his web programming expertise. He does all the backend (server), frontend (HTML/JS/CSS), and operations/deployment work. Some of the technologies he has used are: Python, Ruby/Rails, PHP, Bootstrap, jQuery (Javascript), Backbone, and Angular. For storage/databases he has used MySQL, Postgres, Redis, MongoDB, and more.

Unsupervised Deep Learning in Python


KEY FEATURES

In this course, you'll dig deep into deep learning, discussing principal components analysis and a popular nonlinear dimensionality reduction technique known as t-distributed stochastic neighbor embedding (t-SNE). From there you'll learn about a special type of unsupervised neural network called the autoencoder, understanding how to link many together to get a better performance out of deep neural networks.

  • Access 30 lectures & 3 hours of content 24/7
  • Discuss restricted Boltzmann machines (RBMs) & how to pretrain supervised deep neural networks
  • Learn about Gibbs sampling
  • Use PCA & t-SNE on features learned by autoencoders & RBMs
  • Understand the most modern deep learning developments
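
The autoencoder idea above can be sketched minimally: push data through a low-dimensional bottleneck and train the network to reconstruct its own input. The toy linear autoencoder below, in NumPy, is an illustration under made-up data, not the course's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in 5-D that actually lie on a 2-D subspace
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 5))

# Linear autoencoder: encode 5-D -> 2-D bottleneck -> decode back to 5-D
W_enc = rng.normal(scale=0.1, size=(5, 2))
W_dec = rng.normal(scale=0.1, size=(2, 5))
lr = 0.02

mse_before = np.mean((X @ W_enc @ W_dec - X) ** 2)
for _ in range(1000):
    H = X @ W_enc                 # codes at the bottleneck
    err = H @ W_dec - X           # reconstruction error
    grad_dec = H.T @ err / len(X)  # gradients of mean squared error
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
mse_after = np.mean((X @ W_enc @ W_dec - X) ** 2)
```

With a linear encoder this recovers essentially the same subspace PCA would; adding nonlinearities and stacking such units is what the course means by linking autoencoders together.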

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: intermediate, but you must have some knowledge of calculus, linear algebra, probability, Python, Numpy, and be able to write a feedforward neural network in Theano and TensorFlow.
  • All code for this course is available for download here, in the directory unsupervised_class2

Compatibility

  • Internet required


Deep Learning: Recurrent Neural Networks in Python


KEY FEATURES

A recurrent neural network is a class of artificial neural network in which connections form a directed cycle, allowing the network to use its internal memory to process arbitrary sequences of inputs. This makes RNNs capable of tasks like handwriting and speech recognition. In this course, you'll explore this extremely expressive facet of deep learning and get up to speed on this revolutionary advance.

  • Access 32 lectures & 4 hours of content 24/7
  • Get introduced to the Simple Recurrent Unit, also known as the Elman unit
  • Extend the XOR problem as a parity problem
  • Explore language modeling
  • Learn Word2Vec to create word vectors or word embeddings
  • Look at the long short-term memory unit (LSTM), & gated recurrent unit (GRU)
  • Apply what you learn to practical problems like learning a language model from Wikipedia data
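
The Simple Recurrent (Elman) unit mentioned above can be sketched in a few lines of NumPy: the hidden state is fed back into itself at every step, which is the "directed cycle" that gives the network memory. The sizes and weights here are arbitrary illustration, not course material:

```python
import numpy as np

rng = np.random.default_rng(1)

D, M = 3, 4                                # input size, hidden size (arbitrary)
Wx = rng.normal(scale=0.1, size=(D, M))    # input-to-hidden weights
Wh = rng.normal(scale=0.1, size=(M, M))    # hidden-to-hidden (the recurrent cycle)
b = np.zeros(M)

def elman_step(x_t, h_prev):
    """One step of a simple recurrent (Elman) unit."""
    return np.tanh(x_t @ Wx + h_prev @ Wh + b)

# Process a sequence of 5 input vectors, carrying the hidden state forward
sequence = rng.normal(size=(5, D))
h = np.zeros(M)
states = []
for x_t in sequence:
    h = elman_step(x_t, h)
    states.append(h)
states = np.array(states)
```

LSTM and GRU units, covered later in the course, replace this single tanh update with gated updates that control what the memory keeps and forgets.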

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels, but you must have some knowledge of calculus, linear algebra, probability, Python, Numpy, and be able to write a feedforward neural network in Theano and TensorFlow.
  • All code for this course is available for download here, in the directory rnn_class

Compatibility

  • Internet required


Natural Language Processing with Deep Learning in Python


KEY FEATURES

In this course you'll explore advanced natural language processing - the field of computer science and AI concerned with interactions between computers and human language. Over the course you'll learn four new NLP architectures, explore classic NLP problems like parts-of-speech tagging and named entity recognition, and use recurrent neural networks to solve them. By course's end, you'll have a firm grasp of natural language processing and its many applications.

  • Access 40 lectures & 4.5 hours of content 24/7
  • Discover Word2Vec & how it maps words to a vector space
  • Explore GloVe's use of matrix factorization & how it contributes to recommendation systems
  • Learn about recursive neural networks which will help solve the problem of negation in sentiment analysis
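
To illustrate the word-to-vector-space mapping that Word2Vec and GloVe learn, here is a toy cosine-similarity comparison. The 3-D vectors are invented for illustration; real embeddings are learned from large corpora and have hundreds of dimensions:

```python
import numpy as np

# Hypothetical "word vectors" -- hand-made for illustration only
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.2, 0.8]),
    "apple": np.array([0.1, 0.1, 0.9]),
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

royal = cosine(vectors["king"], vectors["queen"])   # related words
fruit = cosine(vectors["king"], vectors["apple"])   # unrelated words
```

In a well-trained embedding space, semantically related words end up with higher cosine similarity than unrelated ones, which is what makes these vectors useful for the downstream NLP tasks in the course.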

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: advanced, but you must have some knowledge of calculus, linear algebra, probability, Python, Numpy, and be able to write a feedforward neural network in Theano and TensorFlow.
  • All code for this course is available for download here, in the directory nlp_class2

Compatibility

  • Internet required


          Practical Deep Learning in Theano and TensorFlow for $29   
Build & Understand Neural Networks Using Two of the Most Popular Deep Learning Techniques
Expires November 02, 2021 23:59 PST
Buy now and get 75% off

KEY FEATURES

The applications of Deep Learning are many and constantly growing, just like the neural networks that power it. In this course, you'll delve into advanced concepts of Deep Learning, starting with the basics of TensorFlow and Theano and learning how to build neural networks with these popular tools. Along the way, you'll learn how to visualize exactly what is happening within a model as it learns.

  • Access 23 lectures & 3 hours of programming 24/7
  • Discover batch & stochastic gradient descent, two techniques that allow you to train on a small sample of data at each iteration, greatly speeding up training time
  • Discuss how momentum can carry you through local minima
  • Learn adaptive learning rate techniques like AdaGrad & RMSprop
  • Explore dropout regularization & other modern neural network techniques
  • Understand the variables & expressions of TensorFlow & Theano
  • Set up a GPU-instance on AWS & compare the speed of CPU vs GPU for training a deep neural network
  • Look at the MNIST dataset & compare against known benchmarks
Like what you're learning? Try out The Advanced Guide to Deep Learning and Artificial Intelligence next.
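
The momentum technique in the bullet list can be sketched on a toy quadratic objective. This NumPy comparison with plain gradient descent is an illustration under made-up numbers, not the course's code:

```python
import numpy as np

# Minimize f(w) = 0.5 * w^T A w on an ill-conditioned quadratic to compare
# plain gradient descent with the momentum update.
A = np.diag([1.0, 50.0])          # curvatures differ by 50x
w0 = np.array([1.0, 1.0])
lr, mu, steps = 0.015, 0.9, 100

def grad(w):
    return A @ w

def loss(w):
    return float(0.5 * w @ A @ w)

# Plain gradient descent
w = w0.copy()
for _ in range(steps):
    w -= lr * grad(w)
plain = loss(w)

# Momentum: the velocity v accumulates past gradients, which damps
# oscillation along the steep direction and speeds up the shallow one
w, v = w0.copy(), np.zeros(2)
for _ in range(steps):
    v = mu * v - lr * grad(w)
    w += v
with_momentum = loss(w)
```

On this kind of narrow valley, the accumulated velocity is also what lets the iterate coast through shallow local minima, which is the intuition the lecture builds on.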

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels, but you must have some knowledge of calculus, linear algebra, probability, Python, and Numpy
  • All code for this course is available for download here, in the directory ann_class2

Compatibility

  • Internet required


          The Deep Learning and Artificial Intelligence Introductory Bundle for $39   
Companies Are Relying on Machines & Networks to Learn Faster Than Ever. Time to Catch Up.
Expires October 31, 2021 23:59 PST
Buy now and get 91% off

Deep Learning Prerequisites: Linear Regression in Python


KEY FEATURES

Deep Learning is a set of powerful algorithms that are the force behind self-driving cars, image search, voice recognition, and many more applications we consider decidedly "futuristic." One of the central foundations of deep learning is linear regression: using probability theory to gain deeper insight into the "line of best fit." This is the first step toward building machines that, in effect, act like the neurons of a neural network, learning as they are fed more information. In this course, you'll start with the basics of building a linear regression module in Python and progress to practical machine learning issues that provide the foundations for an exploration of Deep Learning.

  • Access 20 lectures & 2 hours of content 24/7
  • Use a 1-D linear regression to prove Moore's Law
  • Learn how to create a machine learning model that can learn from multiple inputs
  • Apply multi-dimensional linear regression to predict a patient's systolic blood pressure given their age & weight
  • Discuss generalization, overfitting, train-test splits, & other issues that may arise while performing data analysis
Like what you're learning? Try out The Advanced Guide to Deep Learning and Artificial Intelligence next.
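
A 1-D least-squares fit of the kind used to check Moore's Law (an exponentially growing count becomes a straight line after taking logs) can be sketched as follows. The data here is synthetic, with a made-up slope and intercept:

```python
import numpy as np

# Synthetic "Moore's Law"-style data: fit log(count) = a*year + b
rng = np.random.default_rng(2)
years = np.arange(1970, 2020, 2, dtype=float)
true_a, true_b = 0.15, -290.0      # invented parameters for illustration
log_counts = true_a * years + true_b + rng.normal(scale=0.1, size=years.shape)

# Closed-form 1-D least squares: slope from centered covariance/variance
x_mean, y_mean = years.mean(), log_counts.mean()
a = np.sum((years - x_mean) * (log_counts - y_mean)) / np.sum((years - x_mean) ** 2)
b = y_mean - a * x_mean
```

The recovered slope and intercept land close to the generating values, which is the "line of best fit" intuition the course then extends to multi-dimensional regression.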

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels, but you must have some knowledge of calculus, linear algebra, probability, Python, and Numpy
  • All code for this course is available for download here, in the directory linear_regression_class

Compatibility

  • Internet required


Deep Learning Prerequisites: Logistic Regression in Python


KEY FEATURES

Logistic regression is one of the most fundamental techniques used in machine learning, data science, and statistics: it can be used to create a classification or labeling algorithm that closely resembles a biological neuron. Logistic regression units are, by extension, the basic building blocks of a neural network, the central architecture in deep learning. In this course, you'll come to grips with logistic regression through practical, real-world examples to fully appreciate the vast applications of Deep Learning.

  • Access 31 lectures & 3 hours of content 24/7
  • Code your own logistic regression module in Python
  • Complete a course project that predicts user actions on a website given user data
  • Use Deep Learning for facial expression recognition
  • Understand how to make data-driven decisions
Like what you're learning? Try out The Advanced Guide to Deep Learning and Artificial Intelligence next.
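
A minimal sketch of logistic regression trained by gradient descent, on toy data invented for illustration (not the course's project code):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy binary classification: two Gaussian clusters in 2-D
n = 200
X0 = rng.normal(loc=[-1.5, -1.5], size=(n, 2))
X1 = rng.normal(loc=[1.5, 1.5], size=(n, 2))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(n), np.ones(n)])

def sigmoid(z):
    """The neuron-like squashing function at the heart of logistic regression."""
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent on the cross-entropy loss
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(200):
    p = sigmoid(X @ w + b)            # predicted probability of class 1
    w -= lr * X.T @ (p - y) / len(y)  # gradient of cross-entropy w.r.t. w
    b -= lr * np.mean(p - y)

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

A weighted sum passed through a sigmoid is exactly the "biological neuron" analogy above, and stacking many such units is what turns this into a neural network.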

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels, but you must have some knowledge of calculus, linear algebra, probability, Python, and Numpy
  • All code for this course is available for download here, in the directory logistic_regression_class

Compatibility

  • Internet required

THE EXPERT

The Lazy Programmer is a data scientist, big data engineer, and full stack software engineer. For his master's thesis he worked on brain-computer interfaces using machine learning. These assist non-verbal and non-mobile persons to communicate with their family and caregivers.

He has worked in online advertising and digital media as both a data scientist and big data engineer, and built various high-throughput web services around said data. He has created new big data pipelines using Hadoop/Pig/MapReduce, and created machine learning models to predict click-through rate, news feed recommender systems using linear regression, Bayesian Bandits, and collaborative filtering and validated the results using A/B testing.

He has taught undergraduate and graduate students in data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics for students attending universities such as Columbia University, NYU, Humber College, and The New School.

Multiple businesses have benefited from his web programming expertise. He does all the backend (server), frontend (HTML/JS/CSS), and operations/deployment work. Some of the technologies he has used are: Python, Ruby/Rails, PHP, Bootstrap, jQuery (JavaScript), Backbone, and Angular. For storage/databases he has used MySQL, Postgres, Redis, MongoDB, and more.

Data Science: Deep Learning in Python


KEY FEATURES

Artificial neural networks are the architectures that make Apple's Siri recognize your voice, let Tesla's self-driving cars know where to turn, help Google Translate learn new languages, and power many more technological features you have quite possibly taken for granted. The data science that unites them all is Deep Learning. In this course, you'll build your very first neural network, going beyond basic models to build networks that automatically learn features.

  • Access 37 lectures & 4 hours of content 24/7
  • Extend the binary classification model to multiple classes using the softmax function
  • Code the important training method, backpropagation, in NumPy
  • Implement a neural network using Google's TensorFlow library
  • Predict user actions on a website given user data using a neural network
  • Use Deep Learning for facial expression recognition
  • Learn some of the newest developments in neural networks
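The softmax extension and backpropagation bullets above can be sketched in NumPy. The following is a minimal, illustrative one-hidden-layer network; the toy data, layer sizes, and hyperparameters are assumptions for demonstration, not the course's own code:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max(axis=1, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=1, keepdims=True)

# Toy 3-class problem: the label is the index of the largest of 3 features.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = X.argmax(axis=1)
T = np.eye(3)[y]                      # one-hot targets

# One hidden layer, trained with backpropagation on the cross-entropy loss.
W1 = rng.normal(scale=0.1, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 3)); b2 = np.zeros(3)
lr = 0.5
for _ in range(2000):
    Z = np.tanh(X @ W1 + b1)          # hidden-layer activations
    P = softmax(Z @ W2 + b2)          # predicted class probabilities
    dA = (P - T) / len(X)             # gradient at the output pre-activation
    dW2 = Z.T @ dA; db2 = dA.sum(axis=0)
    dZ = (dA @ W2.T) * (1 - Z**2)     # backpropagate through tanh
    dW1 = X.T @ dZ; db1 = dZ.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

accuracy = (P.argmax(axis=1) == y).mean()
```

The same forward/backward structure scales to deeper networks; libraries such as TensorFlow compute the backward pass automatically, which is what the course moves on to.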
Like what you're learning? Try out The Advanced Guide to Deep Learning and Artificial Intelligence next.

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: intermediate, but you must have some knowledge of calculus, linear algebra, probability, Python, and NumPy
  • All code for this course is available for download here, in the directory ann_class

Compatibility

  • Internet required


Data Science: Practical Deep Learning in Theano & TensorFlow


KEY FEATURES

The applications of Deep Learning are many, and constantly growing, just like the neural networks that power them. In this course, you'll delve into advanced concepts of Deep Learning, starting with the basics of TensorFlow and Theano and learning how to build neural networks with these popular tools. You'll learn how to build and understand a neural network, and how to visualize exactly what is happening within a model as it learns.

  • Access 23 lectures & 3 hours of programming 24/7
  • Discover batch & stochastic gradient descent; the stochastic and mini-batch variants train on a small sample of data at each iteration, greatly speeding up training time
  • Discuss how momentum can carry you through local minima
  • Learn adaptive learning rate techniques like AdaGrad & RMSprop
  • Explore dropout regularization & other modern neural network techniques
  • Understand the variables & expressions of TensorFlow & Theano
  • Set up a GPU-instance on AWS & compare the speed of CPU vs GPU for training a deep neural network
  • Look at the MNIST dataset & compare against known benchmarks
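The optimization techniques listed above can be sketched in a few lines of NumPy. Below is an illustrative comparison of mini-batch SGD with momentum against RMSprop on a toy linear-regression objective; the data, learning rates, and function names are assumptions for demonstration, not the course's own code:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy objective: minimize the mean squared error of a linear model.
X = rng.normal(size=(500, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

def grad(w, xb, yb):
    return 2 * xb.T @ (xb @ w - yb) / len(yb)   # MSE gradient on the mini-batch

def sgd_momentum(lr=0.05, mu=0.9, epochs=20, batch=32):
    w = np.zeros(3); v = np.zeros(3)
    for _ in range(epochs):
        idx = rng.permutation(len(X))
        for s in range(0, len(X), batch):
            b = idx[s:s+batch]
            v = mu * v - lr * grad(w, X[b], y[b])   # velocity accumulates past gradients
            w = w + v                                # momentum can carry w past shallow bumps
    return w

def rmsprop(lr=0.02, decay=0.9, eps=1e-8, epochs=20, batch=32):
    w = np.zeros(3); cache = np.zeros(3)
    for _ in range(epochs):
        idx = rng.permutation(len(X))
        for s in range(0, len(X), batch):
            b = idx[s:s+batch]
            g = grad(w, X[b], y[b])
            cache = decay * cache + (1 - decay) * g**2   # running mean of squared gradients
            w = w - lr * g / (np.sqrt(cache) + eps)      # per-parameter adaptive step size
    return w

w_mom = sgd_momentum()
w_rms = rmsprop()
```

Both optimizers recover weights close to `true_w` here; RMSprop's per-parameter scaling is what makes it (and AdaGrad) robust when gradients differ wildly in magnitude across parameters.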
Like what you're learning? Try out The Advanced Guide to Deep Learning and Artificial Intelligence next.

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels, but you must have some knowledge of calculus, linear algebra, probability, Python, and NumPy
  • All code for this course is available for download here, in the directory ann_class2

Compatibility

  • Internet required


          Deep Learning Based Large-Scale Automatic Satellite Crosswalk Classification. (arXiv:1706.09302v1 [cs.CV])   

Authors: Rodrigo F. Berriel, Andre Teixeira Lopes, Alberto F. de Souza, Thiago Oliveira-Santos

High-resolution satellite imagery has been increasingly used in remote sensing classification problems, largely because of the growing availability of this kind of data. Even so, very little effort has been devoted to the zebra-crossing classification problem. In this letter, crowdsourcing systems are exploited to enable the automatic acquisition and annotation of a large-scale satellite imagery database for crosswalk-related tasks. This dataset is then used to train deep-learning-based models to accurately classify satellite images that do or do not contain zebra crossings. A novel dataset with more than 240,000 images from 3 continents, 9 countries, and more than 20 cities was used in the experiments. Experimental results showed that freely available crowdsourcing data can be used to train robust models that perform crosswalk classification on a global scale with high accuracy (97.11%).


          Multimedia Hashing and Networking   
This department discusses multimedia hashing and networking. The authors summarize shallow-learning-based hashing and deep-learning-based hashing. By exploiting successful shallow-learning algorithms, state-of-the-art hashing techniques have been widely used in high-efficiency multimedia storage, indexing, and retrieval, especially in multimedia search applications on smartphone devices. The authors also introduce Multimedia Information Networks (MINets) and present one paradigm of leveraging MINets to incorporate both visual and textual information to reach a sensible event coreference resolution. The goal is to make deep learning practical in realistic multimedia applications.
          Senior Cloud Security Architect - NVIDIA - Santa Clara, CA   
Do you visualize your future at NVIDIA? Machine Learning, Deep-Learning, Artificial Intelligence – particularly in Regression or Forecasting,....
From NVIDIA - Sat, 20 May 2017 16:05:22 GMT - View all Santa Clara, CA jobs
          Revolutionary Material: Microbial Nanowires   

The picture above is of the microbe Geobacter (red) expressing electrically conductive nanowires. Such natural nanowires can be mass produced from inexpensive, renewable feedstocks with low energy costs compared to chemical synthesis with toxic chemicals and high energy requirements. Microbiologist Derek Lovley and his team at the University of Massachusetts Amherst report that they have discovered a new type of natural wire produced by bacteria that could greatly accelerate the researchers’ goal of developing sustainable “green” conducting materials for the electronics industry. Learn more.


          Blossoms of Shujun   

Imagine a layered cake, a parfait, or any layered dessert. A similar type of layering occurs in thin films of block copolymers, only the layers are tens of nanometers thick (a hundred thousand times thinner than a sheet of paper)! If we place a liquid on the surface that attracts one of the layers beneath, then the layers within the film will rearrange themselves in an attempt to allow the attracted layer to reach the liquid. The floral arrangement shown above is actually an electron microscope image that captured such a rearrangement, magnified twenty thousand times. This image from the Samuel Gido Research Group doubles as art. See more of these spectacular images at the Materials Research Science and Engineering Center MRSEC  VISUAL gallery.

Photo Courtesy: VISUAL


          Computer Accurately Describes Breast Cancers on Digital Tissue Slides   
A deep-learning computer network has been designed by a research team from Case Western Reserve University to determine whether invasive forms of breast cancer are present in whole biopsy slides.

          Vincent Granville posted a blog post   

          These three very different structural elements were designed to carry the same load   

Dinotopia artist Jim Gurney says: "Computer modeling tools such as ZBrush and Maya have made it easier to visualize whatever form that a human designer imagines. And 3D printing has made it possible to translate that design into physical form."

The generative process yields dozens or even hundreds of options, and the human can select which one to produce.

This new enterprise is variously called "deep-learning generative design," "intuitive AI design," and "algorithmic design." New plugins for Maya have already made such technology available.

The designs generated by this process look like something out of Art Nouveau.

They look biological, resembling skeletal architecture, with curving shapes. As with biological forms, there are no straight lines and no right angles. There's no consideration of style: they're not made to look beautiful but rather to be efficient. Generative designs are vastly lighter and stronger than human designs.

The forms are often surprisingly complex, apparently more intricate than they need to be. They're not necessarily easy to produce without a 3D printer.


          Deep Learning in Automotive Software   
Deep-learning-based systems are becoming pervasive in automotive software. So, in the automotive software engineering community, the awareness of the need to integrate deep-learning-based development with traditional development approaches is growing, at the technical, methodological, and cultural levels. In particular, data-intensive deep neural network (DNN) training, using ad hoc training data, is pivotal in the development of software for vehicle functions that rely on deep learning. Researchers have devised a development lifecycle for deep-learning-based development and are participating in an initiative, based on Automotive SPICE (Software Process Improvement and Capability Determination), that's promoting the effective adoption of DNN in automotive software.
          Frost & Sullivan Applauds the Unparalleled Accuracy of Deep Instinct's Endpoint and Mobile Security Solution   

LONDON, June 29, 2017 /PRNewswire/ -- Based on its recent analysis of the endpoint and mobile security market for critical national infrastructure, Frost & Sullivan recognizes Deep Instinct with the 2017 Global Frost & Sullivan Award for Technology Innovation. Deep Instinct's...



          Google's multitasking neural net can juggle eight things at once   
Deep-learning systems can struggle to handle more than one task, but a fresh approach by Google Brain could turn neural networks into jacks of all trades
          Microsoft acquires Maluuba, a deep learning startup in Montreal   
Microsoft has announced the acquisition of a Montreal-based startup named Maluuba. The acquisition seems to revolve around Maluuba’s natural language work, though the startup works on more than just that, ultimately focusing on the development of artificial intelligence capable of thinking and speaking like a human. Microsoft says that Maluuba’s vision is “exactly in line with ours.” Microsoft describes this … Continue reading