There’s a Major Problem with AI’s Decision Making   
‘Deep learning’ AI should be able to explain its automated decision-making—but it can’t. And even its creators are lost on where to begin. Read more
          Accelerate MXNet R training (deep learning) by GPUs and multiple machines   
Here we focus on accelerating MXNet training: GPU (device) training, distributed training across multiple machines, and active learning (online learning).
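    (The post itself works in R; as a rough sketch of the same idea in MXNet's Python API, with an illustrative model and synthetic data, moving from CPU to multi-GPU data-parallel training is essentially a one-argument change. The two-GPU context below is an assumption.)

        import mxnet as mx
        import numpy as np

        # A tiny multilayer perceptron, declared symbolically.
        data = mx.sym.Variable("data")
        net = mx.sym.FullyConnected(data, num_hidden=64, name="fc1")
        net = mx.sym.Activation(net, act_type="relu")
        net = mx.sym.FullyConnected(net, num_hidden=10, name="fc2")
        net = mx.sym.SoftmaxOutput(net, name="softmax")

        # Synthetic data, for illustration only.
        X = np.random.rand(1000, 100).astype("float32")
        y = np.random.randint(0, 10, size=(1000,))
        train_iter = mx.io.NDArrayIter(X, y, batch_size=50)

        # context=mx.cpu() trains on the CPU; a list of GPUs enables
        # data-parallel training across devices.
        mod = mx.mod.Module(net, context=[mx.gpu(0), mx.gpu(1)])
        mod.fit(train_iter, num_epoch=2,
                optimizer="sgd", optimizer_params={"learning_rate": 0.1})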
          Scale your deep learning workloads on MXNet R (scoring phase)   
Here I show you, step by step, how to scale deep learning workloads with MXNet and R. This time, we focus on the scoring phase.
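    (Again as a rough Python-API sketch of the scoring phase the post covers in R; the checkpoint name "mymodel", the epoch number and the shapes below are placeholders.)

        import mxnet as mx
        import numpy as np

        # Load a previously saved checkpoint: symbol plus trained parameters.
        sym, arg_params, aux_params = mx.model.load_checkpoint("mymodel", 10)
        mod = mx.mod.Module(sym, context=mx.cpu())
        mod.bind(data_shapes=[("data", (50, 100))], for_training=False)
        mod.set_params(arg_params, aux_params)

        # Score one batch of new data.
        batch = mx.io.NDArrayIter(np.random.rand(50, 100).astype("float32"),
                                  batch_size=50)
        probs = mod.predict(batch)   # one row of class probabilities per sample
        print(probs.shape)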
          RPP #154: Maplesoft Möbius - Interview with Jim Cooper   
  • Interview with Jim Cooper, CEO of Maplesoft, about Möbius, their new comprehensive online courseware environment that focuses on science, technology, engineering, and mathematics. We discuss:
    • Maplesoft history
    • Maplesoft course/module marketplace
    • Möbius platform and toolkit
    • LMS integration
    • Adaptive and customized learning
    • Analytics to improve learning
    • AI / Machine Learning / Deep Learning
    • Building an AI tutor
    • Pricing models
    • Podsafe music selection
    Duration: 36:37

              Artificial intelligence/Machine learning   
    • Is your AI being handed to you by Google? Try Apache open source – Amazon's AWS did

      Surprisingly, the MXNet Machine Learning project was this month accepted by the Apache Software Foundation as an open-source project.

      What's surprising about the announcement isn't so much that the ASF is accepting this face in the crowd to its ranks – it's hard to turn around in the software world these days without tripping over ML tools – but rather that MXNet developers, most of whom are from Amazon, believe ASF is relevant.

    • Current Trends in Tools for Large-Scale Machine Learning

      During the past decade, enterprises have begun using machine learning (ML) to collect and analyze large amounts of data to obtain a competitive advantage. Now some are looking to go even deeper – using a subset of machine learning techniques called deep learning (DL), they are seeking to delve into the more esoteric properties hidden in the data. The goal is to create predictive applications for such areas as fraud detection, demand forecasting, click prediction, and other data-intensive analyses.

    • Your IDE won't change, but YOU will: HELLO! Machine learning

      Machine learning has become a buzzword. A branch of Artificial Intelligence, it adds marketing sparkle to everything from intrusion detection tools to business analytics. What is it, exactly, and how can you code it?

    • Artificial intelligence: Understanding how machines learn

Understanding the inner workings of artificial intelligence is an antidote to common worries about machines that learn. And this knowledge can facilitate both responsible and carefree engagement.

    • Your future boss? An employee-interrogating bot – it's an open-source gift from Dropbox

      Dropbox has released the code for the chatbot it uses to question employees about interactions with corporate systems, in the hope that it can help other organizations automate security processes and improve employee awareness of security concerns.

      "One of the hardest, most time-consuming parts of security monitoring is manually reaching out to employees to confirm their actions," said Alex Bertsch, formerly a Dropbox intern and now a teaching assistant at Brown University, in a blog post. "Despite already spending a significant amount of time on reach-outs, there were still alerts that we didn't have time to follow up on."


              Deep-Learning Networks Rival Human Vision   
For most of the past 30 years, computer vision technologies have struggled to help humans with visual tasks, even those as mundane as accurately recognizing faces in photographs. Recently, though, breakthroughs in deep learning, an emerging field of artificial intelligence, have finally enabled computers to interpret many kinds of images as successfully as, or better than, people do. Companies are already selling products that exploit the technology, which is likely to take over or assist in a wide range of tasks that people now perform, from driving trucks to reading scans for diagnosing medical disorders. Recent progress in a deep-learning approach known as a convolutional neural network (CNN) is key to the latest strides. To give a simple example of its prowess, consider images of animals. Whereas humans can easily distinguish between a cat and a dog, CNNs allow machines to categorize specific breeds more successfully than people can. A CNN excels because it is better ...

    Source: Odd Onion

    http://www.oddonion.com/2017/06/28/deep-learning-networks-rival-human-vision-2/
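    (For readers curious what such a network looks like in code, here is a minimal, illustrative CNN classifier in Python using MXNet's Gluon API; the layer sizes and the 120-class "dog breeds" output are assumptions for the sake of the example, not details from the article.)

        import mxnet as mx
        from mxnet.gluon import nn

        net = nn.Sequential()
        net.add(nn.Conv2D(32, kernel_size=3, activation="relu"),  # learn local edge/texture filters
                nn.MaxPool2D(pool_size=2),                        # downsample, keep strongest responses
                nn.Conv2D(64, kernel_size=3, activation="relu"),
                nn.MaxPool2D(pool_size=2),
                nn.Flatten(),
                nn.Dense(128, activation="relu"),
                nn.Dense(120))            # e.g. one output class per dog breed
        net.initialize()

        # One forward pass on a dummy batch of eight 64x64 RGB images.
        x = mx.nd.random.uniform(shape=(8, 3, 64, 64))
        print(net(x).shape)               # (8, 120)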


              Deep Learning: New Pedagogies   
with Joanne Quinn and Max Drummy – November 9, December 14 and January 18. The ‘new pedagogies’ are not just instructional strategies. They are powerful models of teaching and learning, enabled and accelerated by increasingly pervasive digital tools and resources, taking hold within learning environments that measure and support deep learning at all levels of […]
              Shift to an ICT career - 1 year study programme with SIGNAL   
Shift is a postgraduate programme of study. For more information contact info@signal.ac.nz

This immersive year-long studio-based programme is for those wishing to transition into an IT role. Shift is designed for those who already have a degree - that could be in anything: arts, humanities, sciences or engineering.

    There are scholarships of $2,500 available for Shift. Find out more at http://signal.ac.nz/programmes/shift/

Shift has two streams: a Software Development stream for those who wish to focus on programming, and a Development Allied Roles stream for those who wish to focus on other areas, be it testing, user experience or technical writing, to name a few examples.

Shift introduces the breadth of the IT industry and its career opportunities, frames the professional standards of IT and industry, and introduces Agile and design thinking.

    The immersive studio comes next. Working in small teams on real-world inspired problems, students will be hands-on learning by doing. Seminars and workshops with academics, domain and industry experts will provide deep learning in technical skills areas but also many of the essential skills related to communication, teamwork and innovation.

In the second half of Shift, students undertake an industry project, placement or internship, and can take two courses from any of the five SIGNAL partner institutions.

    SIGNAL is powered by University of Canterbury, University of Otago, Ara Institute of Canterbury, Otago Polytechnic, and Lincoln University.

              Uber self-driving Volvos use Nvidia Tegra   
    Uber self-driving Volvos use Nvidia Tegra


    Incredibly important win

Back in January, Nvidia revealed that it had two Parker SoCs on a board that it calls Drive PX 2, and that the Volvo XC90 will be the first car to use it. A few days back, Uber revealed that customers in downtown Pittsburgh will be able to use Volvo XC90 cars for their Uber rides.

The important thing to mention is that Nvidia technology is inside, as the Volvo XC90 uses DRIVE PX 2 as the heart of the self-driving vehicle. Tesla, which was seen as a leader in the self-driving arena, was using Mobileye and plans to end that relationship as soon as it can, following a fatal crash. Most cars today with a highway autopilot feature, including expensive BMWs, use Mobileye.

Nvidia Drive PX 2 is much more powerful, as it has two Parker SoCs and two 256-core Pascal-based GPUs, and it can deliver 24 trillion deep learning operations per second to run the most complex inference algorithms. If that is not reassuring enough, the Drive PX 2 system has three teraflops of performance for deep learning. Fudzilla has spent a lot of time investigating the technologies crucial to self-driving, including object detection, Lidar, radar and ultrasonic sensors, and all in all, self-driving cars are on their way to being safer than humans.

Uber is a smart business, and it wants to get rid of the most expensive part of its transportation business model: the human driver. This won't happen overnight, as the Pittsburgh trials require every Volvo to have a person in the driver's seat who will sit there "just in case". The car will drive itself, but the person in the driver's seat will be able to take control at any given moment.

Nvidia and Volvo expect self-driving cars to hit the streets by 2020, while Mobileye was targeting 2021. Both of these dates are closer than most of you think, and the change will make everything safer. Most daily commuters spend an awful lot of time on their phones, not paying attention, while a self-driving car will watch the street all the time, refreshing information in milliseconds and reacting much faster than any human.

Volvo said that, combined with Uber, the two companies have spent some $300 million on a Volvo-based self-driving Uber car. Volvo will make the car and Uber will buy it.

    Håkan Samuelsson, president and chief executive of Volvo Cars, said:

    “Volvo is a world leader in the development of active safety and autonomous drive technology and possesses an unrivaled safety credibility. We are very proud to be the partner of choice for Uber, one of the world’s leading technology companies. This alliance places Volvo at the heart of the current technological revolution in the automotive industry.”

It looks like Uber, Volvo and Nvidia will all profit, as this is essentially Nvidia's biggest win to date, and self-driving powered by the Nvidia Drive PX 2 will happen much sooner than anyone expected. The first ride will mark the day when Nvidia finally stops being just a GPU graphics-card company. The change of Tegra strategy we mentioned back in 2014 is finally paying off.


              Nvidia Parker has Denver 2 with Pascal   
    Nvidia Parker has Denver 2 with Pascal


Created with automotive applications in mind

Danny Shapiro, Nvidia's Senior Director of Automotive, has revealed a few details about "Parker", Nvidia's newest SoC for autonomous vehicles.

Nvidia also detailed its new Parker chip at the Hot Chips conference, but essentially all you need to know is that Parker is a 16nm FinFET SoC with an ARMv8 CPU complex (two Denver 2 cores plus four Cortex-A57 cores in a coherent HMP arrangement) and a 256-CUDA-core Pascal GeForce GPU.

Just a quick look at the CPU configuration gives you a clear signal that this SoC is unlikely to end up in any tablet. Two Denver 2 cores plus four Cortex-A57s seems like something that needs to be plugged into a power source all the time, such as an autonomous car, a next-generation Shield console or the rumored Nintendo NX console.

The Nvidia Pascal 256-core GPU supports DirectX 12, OpenGL 4.5, Nvidia Cuda 8.0, OpenGL ES 3.1 plus AEP, and Vulkan. This is the same number of cores as Nvidia had with Erista (Tegra X1), but of course Pascal cores are much more efficient than Maxwell ones, and Nvidia should be able to clock them higher. It was rather clear that the Tegra X1 would not make it into any tablet-shaped device due to its high power consumption, and the Google Pixel C design win was the best Nvidia could achieve.

[Figure: Parker block diagram]

The Denver 2 and Cortex-A57 clusters have 2MB + 2MB of L2 cache in a coherent HMP architecture, and the SoC supports 128-bit LPDDR4 with ECC.

The GPU can decode and encode both H.265 and VP9 at up to 4K at 60 FPS. It also supports 12-megapixel camera sensors.

Two Parker SoCs power the Drive PX 2 platform, as Nvidia wants to power deep learning applications, even in a motor car.

More than 80 carmakers, tier-one suppliers and university research centers around the world are now using the DRIVE PX 2 system to develop autonomous vehicles. This doesn't mean Nvidia will get as many design wins, but it looks like a big opportunity.

[Figure: Parker specifications]

The Volvo XC90 is the first vehicle supporting DRIVE PX 2, and since Parker delivers up to 1.5 teraflops of performance for deep learning-based self-driving AI cockpit systems, Volvo could end up with a rather safe self-driving car. Since DRIVE PX 2 has two Parker SoCs, you end up with 3 teraflops of performance for the deep learning, self-driving AI cockpit system.

The combination of two Parker SoCs, each with six CPU cores (two Denver 2 plus four Cortex-A57) and 256 Cuda cores, means that the Drive PX 2 might deliver 24 trillion deep learning operations per second to run the most complex deep learning-based inference algorithms.

Parker supports a dual-CAN (controller area network) interface, the standard connector in the car industry, and Gigabit Ethernet to transport audio and video streams.



     


              Comment on A Robust Adaptive Stochastic Gradient Method for Deep Learning by anak usia 20 bulan susah makan   
    Pretty! This has been an extremely wonderful article. Thanks for supplying this information.
              Comment on China’s National Engineering Laboratory of Deep Learning Technology was Established at Baidu Campus: We are the National Team of Deep Learning by build online store   
    Thanks for the great post, I am delighted with the achievements in the field of artificial intelligence.
              Deep Learning for Executives: What Exactly is it Again?   
              Lucid Planet Radio with Dr. Kelly: Encore: The Singularity is HERE: Interface the Future, Explore Paradox, and Recode Mythology with Mitch Schultz    
Guest: Today, visionary thinker, futurist and filmmaker Mitch Schultz joins Dr. Kelly to explore humanity as we approach the technological singularity. What is the singularity, and what does it mean for humanity? Explore a transdisciplinary approach at the intersection of the arts, cognitive psychology, deep learning, and philosophy, guided by Consciousness, Evolution, and Story. Beginning with what conscious state of being (terrestrial and universal) is perceiving. Followed by the one consta ...
              Lucid Planet Radio with Dr. Kelly: The Singularity is HERE: Interface the Future, Explore Paradox, and Recode Mythology with Mitch Schultz    
Episode: Today, visionary thinker, futurist and filmmaker Mitch Schultz joins Dr. Kelly to explore humanity as we approach the technological singularity. What is the singularity, and what does it mean for humanity? Explore a transdisciplinary approach at the intersection of the arts, cognitive psychology, deep learning, and philosophy, guided by Consciousness, Evolution, and Story. Beginning with what conscious state of being (terrestrial and universal) is perceiving. Followed by the one consta ...
              Artificial Intelligence, or the fulfilment of utopias   

    "L'Intelligence Artificielle (IA) est le domaine de l'informatique qui étudie comment faire faire à l'ordinateur des tâches pour lesquelles l'homme est aujourd'hui encore le meilleur" (1).

    Après l'euphorie presque utopique des années 60-70 et les espoirs déçus des années 1980 qui ont vu reculer l'approche symbolique, l'IA a su renaitre de ses cendres à cette même période au travers d'une approche connexionniste qui voit l'avènement de systèmes multi-agents, de mémoires auto-associatives et de réseaux de neurones artificiels (RNA) performants.

    Une soixantaine d'années de recherches et d'avancées majeures font ainsi de l'IA un puissant vecteur de transformation du monde qui bouleverse aujourd’hui l'ensemble des activités humaines, l'entreprise et les modèles économiques. Oscar Wilde considérait le progrès comme l'accomplissement des utopies. C'est finalement cette approche qui convient le mieux aux évolutions actuelles de l'intelligence artificielle qui augure les plus grandes innovations.

    Machine learning

    Neural networks, which appeared in the early 1950s, are the founding elements of machine learning. Thanks to them, a program is now able to "learn" and to improve its answers through experience. It is this capacity for learning (supervised or unsupervised), transferred to a machine, that is revolutionizing digital practice and driving AI's success. Advances in AI affect every human activity, from industry to services, from healthcare to education, from agriculture to transport, and from security to defense.

    No field of expertise can claim today to have specific features that make it incompatible with AI's functional capabilities. Riding the growth in computing power (Moore's law), AI is the main engine of the digital revolution, the foremost challenge facing companies.

    It is the object of an innovation race among the major players in the field. Whether private or state actors, they have fully grasped the "strategic" nature of its development and are trying to impose their standards by making platforms of open-source algorithmic building blocks available. More generally, AI makes it possible to extract real value from the big data produced by sensors, by connected objects, and by everything generated on the internet and social networks.

    Machine learning thus proves highly effective at a great many tasks: signal processing, process control, robotics, classification, data pre-processing, pattern recognition, image analysis and speech synthesis, cybersecurity, medical diagnosis and monitoring, stock-market forecasting, credit and mortgage applications, recruitment and automated CV screening...

    European know-how and a hyperactive French Tech scene

    In AI, the American GAFAM giants (Google, Apple, Facebook, Amazon, Microsoft) hold a dominant position. That leadership should not, however, obscure Europe's strong potential and France's excellence, both regularly recognized internationally.

    By choosing Zurich as the home of its research group (GRE) dedicated to Machine Learning, and by entrusting its leadership to the Frenchman Emmanuel Mogenet, Google is betting squarely on European excellence. Its London subsidiary Google Deep Mind, a world flagship of AI, has racked up one innovation success after another, notably AlphaGo's victories over world champion Lee Sedol.

    Facebook has set up its three "Facebook Artificial Intelligence Research (FAIR)" laboratories in Paris, led by the Frenchman Yann Le Cun, regarded as one of the world's leading specialists in Deep Learning. These "strategic" locations trace a European axis of AI that testifies to GAFAM's interest in European know-how.

    Europe, and France in particular, are showing real dynamism in the creation of startups centered on Artificial Intelligence. Many engineering students and doctoral candidates work during their studies on a project involving AI, then turn that project into a startup supported by their engineering school's incubator. This proven approach supports the company effectively and stabilizes it through its first months of existence.

    Among the French startups that have bet on Artificial Intelligence are Alkemics, for intelligently connecting brands and retailers to better serve the omnichannel consumer experience; Blue Frogs Robotics, for companion robots (Buddy); Cardiologs Technologies, for managing cardiac conditions; Elum Energy, for intelligent management of photovoltaic energy; Scortex and Craft.Ai, for applying AI to connected objects; Julie Desk, for a personal assistant; and Smart Me Up, for real-time facial recognition.

    Several of the startups on this list won innovation prizes in 2015 and 2016. Backed by academic incubators (ParisTech Entrepreneurs, X-UP, the École polytechnique's accelerator...), they display a dynamism that should inspire the various players in the digital economy as well as political decision-makers. France's large industrial groups must also play their part, accepting risk and acquiring these startups when they come up for sale, so that concentrations of technological excellence no longer slip away.

    France will not miss the Artificial Intelligence train. It has no choice but to support this ecosystem by creating an environment favorable to digital innovation. For that it can draw on a universally recognized pool of skills and expertise that should drive a successful transformation of its companies and, through them, of the French economy!

    (1) Elaine Rich and Kevin Knight – Artificial Intelligence – McGraw-Hill

    Eric Cohen
    Founder & CEO of Keyrus
    Artificial intelligence, Machine learning

              Job Opening: Christian Education, Union Presbyterian Seminary   

UNION PRESBYTERIAN SEMINARY – Position Description: Christian Education, Charlotte Campus. Mission Statement: Union Presbyterian Seminary equips Christian leaders for ministry in the world – a sacred vocation that requires deep learning, commitment to service, and an ability to read culture … Continue reading

    The post Job Opening: Christian Education, Union Presbyterian Seminary appeared first on Association of Practical Theology.


              RE•WORK Announces Their First Deep Learning Summit in Asia   
    NewswireToday (newswire) - 2015/12/17 London, United Kingdom - RE•WORK are pleased to announce their first event in Asia, following successful editions in San Francisco, Boston and London
              TuxMachines: OSS Leftovers   
    • AMD Plays Catch-Up in Deep Learning with New GPUs and Open Source Strategy

      AMD is looking to penetrate the deep learning market with a new line of Radeon GPU cards optimized for processing neural networks, along with a suite of open source software meant to offer an alternative to NVIDIA’s more proprietary CUDA ecosystem.

    • Baidu Research Announces Next Generation Open Source Deep Learning Benchmark Tool

      In September of 2016, Baidu released the initial version of DeepBench, which became the first tool to be opened up to the wider deep learning community to evaluate how different processors perform when they are used to train deep neural networks. Since its initial release, several companies have used and contributed to the DeepBench platform, including Intel, Nvidia, and AMD.

    • GitHub Declares Every Friday Open Source Day And Wants You to Take Part

      GitHub is home to many open-source development projects, a lot of which are featured on XDA. The service wants more people to contribute to open-source projects with a new initiative called Open Source Friday. In a nutshell, GitHub will be encouraging companies to allow their employees to work on open-source projects at the end of each working week.

      Even if the products you use on a daily basis are closed source, much of the technology world runs on open source software. Many servers run GNU/Linux-based operating systems such as Red Hat Enterprise Linux, and much of the world's infrastructure depends on open source software.

    • Open Source Friday

      GitHub is inviting everyone - individuals, teams, departments and companies - to join in Open Source Friday, a structured program for contributing to open source that started inside GitHub and has since expanded.

    • Open Tools Help Streamline Kubernetes and Application Development

      Organizations everywhere are implementing container technology, and many of them are also turning to Kubernetes as a solution for orchestrating containers. Kubernetes is attractive for its extensible architecture and healthy open source community, but some still feel that it is too difficult to use. Now, new tools are emerging that help streamline Kubernetes and make building container-based applications easier. Here, we will consider several open source options worth noting.

    • Survey finds growing interest in Open Source

      Look for increased interest - and growth - in Open Source software and programming options. That's the word from NodeSource, whose recent survey found that most (91%) of enterprise software developers believe new businesses will come from open source projects.

    • Sony Open-Sources Its Deep Learning AI Libraries For Devs

      Sony on Tuesday open-sourced its Neural Network Libraries, a framework meant for developing artificial intelligence (AI) solutions with deep learning capabilities, the Japanese tech giant said in a statement. The company is hoping that its latest move will help grow a development community centered around its software tools and consequently improve the "core libraries" of the framework, thus helping advance this emerging technology. The decision to make its proprietary deep learning libraries available to everyone free of charge mimics those recently made by a number of other tech giants including Google, Amazon, and Facebook, all of whom are currently in the process of trying to incentivize AI developers to use their tools and grow their software ecosystems.
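      (For the curious, a model definition in Sony's Neural Network Libraries looks roughly like this minimal Python sketch; the layer sizes are illustrative, not from the announcement.)

          import nnabla as nn
          import nnabla.functions as F
          import nnabla.parametric_functions as PF

          # A two-layer classifier for flattened 28x28 images.
          x = nn.Variable((1, 28 * 28))
          h = F.relu(PF.affine(x, 128, name="fc1"))  # hidden layer; parameters auto-registered
          y = PF.affine(h, 10, name="fc2")           # 10 output classes
          y.forward()                                # run one forward pass on the (zero-filled) input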

    • RESULTS FROM THE SURVEY ABOUT LIBREOFFICE FEATURES

      Unused features blur the focus of LibreOffice, and maintaining legacy capabilities is difficult and error-prone. The engineering steering committee (ESC) collected some ideas of what features could be flagged as deprecated in the next release – 5.4 – with a plan to remove them later. However, without any good information on what is being used in the wild, the decision is very hard. So we ran a survey over the last week to get insights into which features are being used.

    • COMPETITION FOR A LIBREOFFICE MASCOT
    • Rehost and Carry On, Redux

      After leaving Sun I was pleased that a group of former employees and partners chose to start a new company. Their idea was to pick up the Sun identity management software Oracle was abandoning and continue to sustain and evolve it. Open source made this possible.

      We had made Sun’s identity management portfolio open source as part of our strategy to open new markets. Sun’s products were technically excellent and applicable to very large-scale problems, but were not differentiated in the market until we added the extra attraction of software freedom. The early signs were very good, with corporations globally seeking the freedoms other IDM vendors denied them. By the time Oracle acquired Sun, there were many new customers approaching full production with our products.

      History showed that Oracle could be expected to silently abandon Sun's IDM portfolio in favour of its existing products and strong-arm customers into migrating. ForgeRock's founders took the gamble that this would happen and disentangled themselves from any non-competes in good time for the acquisition to close. Sun's practice was open development as well as open source licensing, so ForgeRock maintained a mirror of the source trees ready for the inevitable day when they would disappear.

      Sure enough, Oracle silently stepped back from the products, reassigned or laid off key staff and talked to customers about how the cost of support was rising, while offering discounts on Oracle's products as mitigation. With most of them in the final deployment stages of strategic investments, you can imagine how popular this news was. Oracle became ForgeRock's dream salesman.

    • Boundless Reinforces its Commitment to Open Source with Diamond OSGeo Sponsorship
    • A C++ developer looks at Go (the programming language), Part 2: Modularity and Object Orientation

    read more


              Deep Learning Scientist/Senior Scientist - EXL - Jersey City, NJ   
    EXL Analytics offers an exciting, fast-paced and innovative environment, which brings together a group of sharp and entrepreneurial professionals who are eager...
    From EXL - Tue, 18 Apr 2017 00:09:55 GMT - View all Jersey City, NJ jobs
              VW, Nvidia to cooperate on artificial intelligence   
    Volkswagen will cooperate with U.S. chipmaker Nvidia on deep learning software that could be used to manage future mobility projects or make it easier for humans to work with robots.
              The wonderful and terrifying implications of computers that can learn | Jeremy Howard   
    What happens when we teach a computer how to learn? Technologist Jeremy Howard shares some surprising new developments in the fast-moving field of deep learning, a technique that can give computers the ability to learn Chinese, or to recognize objects in photos, or to help think through a medical diagnosis. (One deep learning tool, after watching hours of YouTube, taught itself the concept of "cats.") Get caught up on a field that will change the way the computers around you behave ... sooner than you probably think.
              Arya.AI Gets $750K In Pre-Series A For Its Developer Focused Deep Learning Solutions   
    Arya.AI, an Artificial intelligence startup providing deep learning algorithms for developers to deploy on Intelligent AI systems, has grabbed a pre-series A round worth $750K from YourNest and VentureNursery. The AI startup, which perhaps is a first-of-its-kind in the country, will be using this freshly infused capital to accelerate product development and to “grow the AI […]
          Google Translate: the newest version   

    Google Translate: the newest version

    Some people no doubt have first-hand experience with Google Translate on the web; even those who have never used it themselves have probably seen or heard others talk about it, especially the comical results it can produce when translating into Thai with a computer and an online dictionary. Cross-language translation tools have been in development for a long time and are something of a dream for the IT world, and for other fields besides, because they would break through the language barriers that stand between people around the world and open the door to much more.

    A portable translation scanner
    Earlier there was a product called Dixau from the Korean company Unichal: a small portable scanner that captures an image of a word and passes it to an internet-connected computer running the bundled software. As soon as you click the button on the portable scanner, the program receives the image from the scanner, reads the text aloud, looks it up in Wikipedia, Google or an online dictionary of your choice, and returns the result within moments. The translations, however, were still a long way from natural human language.

    Google develops its newest translation engine
    Still, the limitations of Google Translate remain a problem to this day, because the translated output is unnatural and some sentences are not phrased the way people actually speak. Google's researchers have not stood still, however: they have now developed a new translation system that comes close to human parity, the latest version of the Google Neural Machine Translation system (GNMT). Google researcher Quoc Le says this major upgrade will deepen the relationship between humans and computers, and that the new system's capabilities will let software learn to do harder things, such as reading Wikipedia in order to learn about the human world and answer complex questions about it.

    From Google Translate to GNMT
    Google has now announced the first deployment of the GNMT system, ten years after the launch of Google Translate, and the rollout has already begun. For now it supports only Chinese-to-English translation. Google hopes eventually to adopt the new system fully and replace the old phrase-based machine translation approach, which cuts a sentence into pieces and translates each part separately. Beyond translating web pages, and even helping to break down the world's language barriers so that people can understand one another more easily, the new system also reduces translation errors by 55-85%.

    The AlphaGo program
    The GNMT translation system is built on a technology called Deep Learning, which exploits artificial neural networks loosely modeled on the human brain. Deep learning is not new; over recent years it has advanced rapidly in front of the world's eyes, applied to tasks such as image classification and speech recognition. And if you remember AlphaGo, the Go-playing program from Google DeepMind that defeated the world champion against all expectations, it too was built on deep learning. Google began investigating the use of deep learning for translation back in 2014, and judging from the results GNMT can now achieve, the time has come to put it into real-world use.

    Deep Learning technology and GNMT translation
    The new translation system that incorporates Deep Learning differs considerably from earlier approaches, because it is not fed fixed recipes for translating each sentence; instead it reads and learns by itself how to map words from one language into the target language reliably. Take a Chinese sentence, for example: the system first encodes the characters as vectors, and once GNMT has finished reading the text it begins decoding it word by word into English. That does not mean it translates every word and simply strings the results together: it gives each word a different weight depending on how relevant it is, and it can consider whether the surrounding context is related.
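    (The "weighting by relevance" described above is the attention mechanism used in neural translation systems such as GNMT. A minimal numpy sketch of the idea, with made-up dimensions and random data standing in for real encoder and decoder states:)

        import numpy as np

        def softmax(v):
            e = np.exp(v - v.max())
            return e / e.sum()

        encoder_states = np.random.rand(6, 128)  # 6 source tokens, encoded as vectors
        decoder_state = np.random.rand(128)      # current target-side state

        scores = encoder_states @ decoder_state  # how relevant is each source token?
        weights = softmax(scores)                # normalized attention weights
        context = weights @ encoder_states       # weighted summary used to emit the next word
        print(weights.round(2), context.shape)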

    GNMT: the closest machine translation yet to human language
    Google has also released research showing off the new system's abilities. Experts fluent in both languages were asked to compare sentences translated by humans with ones translated by Google's system, and in some cases they could find no difference between the two. The scores are striking: human translations averaged 5.55 out of a possible 6, while GNMT came almost level at 5.43. Chinese-to-English is, however, just one of the more than 100 languages Google Translate supports, and the company says it will try to extend coverage as widely as it can. It will be worth watching how the world fares in an era when technology is growing up alongside humanity, and in some cases has already moved ahead of it.

    Credit: Techmoblog and DailyGizmo news


              5 new leadership books to read this summer   
    From key party political leadership campaigns in the UK (the fallout of last month’s fear-based Brexit campaigning) to the European football championship in France, the US Presidency race and the Russian Olympic state-sponsored doping scandal, rarely has such a vast quantity of ‘leadership’ related content been doing the rounds. As many prepare to take their summer vacation, I thought it timely to focus attention beyond sound bites and reactionary articles, towards new books offering deep learning and hope.
               Fewer accidents: Volkswagen wants its cars to communicate with each other from 2019    

    From 2019, Volkswagen wants its vehicles to communicate with each other and with the traffic infrastructure. The aim is to prevent accidents or to mitigate their consequences.

    Volkswagen: first model line with pWLAN in 2019

    In just two years, the German carmaker Volkswagen wants to equip a first model line with pWLAN (IEEE 802.11p) as standard. This is intended to enable communication between cars (car-to-car) and between vehicles and the traffic infrastructure (car-to-X). The aim, Volkswagen says, is to exchange information about suddenly emerging traffic hazards and thereby prevent accidents or reduce their severity.

    This is how Volkswagen pictures the benefits of car-to-car communication. (Image: Volkswagen)

    The pWLAN technology, developed and validated specifically for automotive use, is meant to exchange traffic-relevant information, warnings and sensor data within a few milliseconds over a range of roughly 500 meters. According to Volkswagen, this extends the vehicle's detection range by several hundred meters, effectively letting it look around corners. Because a dedicated frequency band is reserved for traffic safety, drivers will incur no additional communication costs. The data is not collected centrally, and no existing mobile network is required.

    Networked Volkswagens: increasing road safety

    "We want to increase road safety with the help of vehicle networking, and that is done most efficiently through the rapid spread of a common technology," says Johannes Neft, head of body development for the Volkswagen brand. Volkswagen hopes the system will be adopted by as many manufacturers and partners as possible, enabling the exchange of important information across brands. Authorities, transport ministries, and police and rescue services are also involved in preparing the launch of this service.

    Sedric: VW's first concept for a self-driving car

    Volkswagen Sedric. (Image: Volkswagen Group)


    Just on Tuesday, Volkswagen had announced a partnership with Nvidia through which the carmaker wants to expand its activities in artificial intelligence and deep learning. The technologies are to be used not only for mobility but also in corporate IT, helping Volkswagen with its digital transformation. Nvidia also works with Volvo on the development of self-driving cars. The exchange of data between vehicles and infrastructure such as traffic lights is one of the prerequisites for automated, connected driving.


              Datameer Adds Google AI Software to BI Platform Based on Hadoop   
    A Datameer SmartAI platform embeds a deep learning engine within a data preparation and analytics..
              Egnyte Applies Algorithms to Advance Document Governance   
    Egnyte Protect applies machine and deep learning algorithms in real time to classify content residing..
              How deep learning and AI techniques accelerate domain-driven design   
    Is the codebase aligned with the enterprise model? Deep learning and other AI technologies are helping to align domain-driven design with the organization's business objectives.
              Baidu's DeepBench can now measure inference performance   

    A lot of companies are pouring a lot of money into accelerating and improving deep learning tasks, but how do you tell which hardware offers the best value for this purpose? Naturally, with benchmarks. China's Baidu—a company that can be fairly described as a Sino-Asian Google—operates a research division called Baidu Research. Last year, the division launched the open-source DeepBench tool to wide acclaim. Now the company has released a major update for the tool that includes benchmarks for deep learning inference as well as training.
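    (DeepBench itself is a compiled C++/CUDA suite, but the kernels it times are low-level operations such as dense matrix multiplication, or GEMM. As a toy Python illustration of what benchmarking one such kernel means, with arbitrary sizes:)

        import time
        import numpy as np

        M = N = K = 2048
        A = np.random.rand(M, K).astype(np.float32)
        B = np.random.rand(K, N).astype(np.float32)

        A @ B                                   # warm-up run
        runs = 10
        t0 = time.perf_counter()
        for _ in range(runs):
            A @ B
        elapsed = (time.perf_counter() - t0) / runs

        flops = 2.0 * M * N * K                 # one multiply and one add per term
        print(f"{flops / elapsed / 1e9:.1f} GFLOP/s")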

    ...

    Read more...


              Comment on For Jesuits Satan no longer exists by kathleen   
    Just one more thought at this late hour (late in Europe, that is) which can best describe the tragic descent into Modernism of the once great Order of the Jesuits.... "Corruptio optimi pessima est", which can be translated as, "The corruption of what is best is the worst ..." For surely, they were once upon a time some of "the best" defenders of the Faith. It is common knowledge among Catholics how many heroic, holy martyrs and saints this once faithful Order of Catholic preachers gave to the Church. Almost single-handedly it held back the spreading evil forces of the Reformation with its powerful Counter-Reformation. Countless millions have had the Jesuits to thank for coming to a fuller knowledge of the Truth. The Jesuits were tireless missionaries at home and abroad, even reaching the furthest corners of the earth; their deep learning prepared them for discussions with the learned and elite of other beliefs, leading many to embrace the Catholic Church and Her saving grace. Their unequalled obedience to the Church and the Papacy has left us with a staggering legacy of faithfulness.... So what went wrong? The Jesuits were some of the most scholarly and intelligent of men, so how did the Devil manage to successfully infiltrate such a stronghold as the Jesuits, as we witness in current times? Fr Malachi Martin gives a detailed background to the reasons behind the Jesuits' gradual slide into the errors of Modernism in his fascinating book that I have often mentioned before, "The Jesuits: The Society of Jesus and the Betrayal of the Roman Catholic Church" (http://www.goodreads.com/book/show/664440.The_Jesuits), but one can still ask oneself how men of prayer and learning could not be aware of the trap the Devil was preparing for them!
              Futures: Deep learning and health - the hurdles machine learning must leap   
    Startups and Silicon Valley giants are pushing into medicine with artificial intelligence and deep learning.
              (Associate) Data Scientist for Deep Learning Center of Excellence - SAP - Sankt Leon-Rot   
    Build machine learning models to solve real problems, working with real data. Software design and development....
    Found at SAP - Fri, 23 Jun 2017 08:50:58 GMT - View all Sankt Leon-Rot jobs
              Solutions Architect - Autonomous Driving - NVIDIA - Santa Clara, CA   
    Be an internal champion for Deep Learning and HPC among the Nvidia technical community. You will assist field business development in guiding the customer...
    From NVIDIA - Fri, 23 Jun 2017 07:33:44 GMT - View all Santa Clara, CA jobs
              (USA-WA-Seattle) Applied Scientist   
    Seeking Applied Researchers to build the future of the Alexa Shopping experience at Amazon. At Alexa Shopping, we strive to enable shopping in everyday life. We allow customers to instantly order whatever they need by simply interacting with their smart devices such as Echo, Fire TV, and beyond. Our services let you shop anywhere, easily, without interrupting what you're doing – going from "I want" to "it's on the way" in a matter of seconds. We are seeking the industry's best applied scientists to help us create new ways to shop. Join us, and help invent the future of everyday life.

    The products you would envision and craft require ambitious thinking and a tireless focus on inventing solutions to customer problems. You must be passionate about creating algorithms and models that can scale to hundreds of millions of customers, and insatiably curious about building new technology and unlocking its potential. The Alexa Shopping team is seeking an Applied Scientist who will partner with technology and business leaders to build new state-of-the-art algorithms, models and services that surprise and delight our voice customers. As part of the new Alexa Shopping team, you will use ML techniques such as deep learning to create and put into production models that deliver personalized shopping recommendations, answer customer questions and enable human-like dialogs with our devices.

    The ideal candidate will have a PhD in Mathematics, Statistics, Machine Learning, Economics, or a related quantitative field, and 5+ years of relevant work experience, including:

    • A proven track record of achievements in natural language processing, search and personalization.
    • Expertise across a broad set of ML approaches and techniques, ranging from artificial neural networks to Bayesian non-parametric methods.
    • Experience in structured prediction and dimensionality reduction.
    • Strong fundamentals in problem solving, algorithm design and complexity analysis.
    • Proficiency in at least one scripting language (e.g. Python) and one large-scale data processing platform (e.g. Hadoop, Hive, Spark).
    • Experience with cloud technologies (e.g. S3, DynamoDB, Elasticsearch) and with data warehousing.
    • A strong personal interest in learning, researching, and creating new technologies with high commercial impact.
    • A track record of peer-reviewed academic publications.
    • Strong verbal/written communication skills, including the ability to collaborate effectively with both research and technical teams and earn the trust of senior stakeholders.

    AMZR Req ID: 551723 External Company URL: www.amazon.com
          torutk's diary: [Java] Added a GUI to the perceptron sample program from "Deep Learning Javaプログラミング" #javareading   
    In December, the Java reading circle BOF*1 began reading "Deep Learning Javaプログラミング" (Deep Learning Java Programming), and I took the perceptron (single-layer neural network) sample code from section 2.5.1 and added a GUI to it using JavaFX. Deep Learning Javaプログラミング: the theory and implementation of deep learning (Impress Top Gear). The book's sample code can be downloaded from the publisher's (Impress) page for the book: http://book.impress.co.jp/books/11151011 ...
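    (The book's sample code is Java; as an illustrative sketch, the same single-layer perceptron learning rule fits in a few lines of Python. Toy data, not the book's code.)

        import numpy as np

        def train_perceptron(X, y, lr=0.1, epochs=20):
            """y holds +1/-1 labels; returns learned weights and bias."""
            w = np.zeros(X.shape[1])
            b = 0.0
            for _ in range(epochs):
                for xi, yi in zip(X, y):
                    if yi * (xi @ w + b) <= 0:  # misclassified: nudge the boundary
                        w += lr * yi * xi
                        b += lr * yi
            return w, b

        # Two linearly separable clusters as toy data.
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
        y = np.array([1] * 50 + [-1] * 50)
        w, b = train_perceptron(X, y)
        print(np.mean(np.sign(X @ w + b) == y))  # training accuracy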
              Avigilon Joins NVIDIA in Silicon Valley to Showcase Advanced AI Video Analytics at GPU Technology Conference   
    PRZOOM - Newswire (press release) - 2017/05/09, Vancouver, British Columbia Canada - Avigilon to provide thought leadership on latest deep learning technologies for video surveillance - GPUTechConf.com / Avigilon.com
              Front End Software Engineer   
    CA-Sunnyvale: Our client's company is building a highly scalable advanced analytics product utilizing state-of-the-art research in Computer Vision, Artificial Intelligence and Deep Learning for the defence intelligence marketplace. They are looking for Front End Software Engineers with good experience with the React framework. Responsibilities: Build very intuitive and attractive web applications with a focus on the
              #3: Deep Learning   
    Deep Learning
    Ian Goodfellow, Yoshua Bengio, Aaron Courville
    (5)

    Buy new: CDN$ 106.59 CDN$ 102.64
    27 used & new from CDN$ 68.26

    (Visit the Bestsellers in Programming list for authoritative information on this product's current rank.)
              Partners HealthCare and GE Healthcare Launch 10-year Collaboration on Artificial Intelligence   
    Partners HealthCare and GE Healthcare Launch 10-year Collaboration on Artificial Intelligence

    May 17, 2017 — Partners HealthCare and GE Healthcare announced a 10-year collaboration to rapidly develop, validate and strategically integrate deep learning technology across the entire continuum of care. The collaboration will be executed through the newly formed Massachusetts General Hospital and Brigham and Women’s Hospital Center for Clinical Data Science and will feature co-located, multidisciplinary teams with broad access to data, computational infrastructure and clinical expertise.

    The initial focus of the relationship will be on the development of applications aimed to improve clinician productivity and patient outcomes in diagnostic imaging. Over time, the groups will create new business models for applying artificial intelligence (AI) to healthcare and develop products for additional medical specialties like molecular pathology, genomics and population health.

    “This is an important moment for medicine,” said David Torchiana, M.D., CEO of Partners HealthCare. “Clinicians are inundated with data, and the patient experience suffers from inefficiencies in the healthcare industry. This partnership has the resources and vision to accelerate the development and adoption of deep learning technology and empower clinicians with the tools needed to store, analyze and leverage the flood of information to more rapidly and effectively deliver care.”

    The vision for the collaboration is to implement AI into every aspect of a patient journey – from admittance through discharge. Once the deep learning applications are developed and deployed, clinicians and patients will benefit from a variety of tools that span disease areas, diagnostic modalities and treatment strategies and have the potential to do everything from decrease unnecessary biopsies to streamline clinical workflows to increase the amount of time clinicians spend with patients versus performing administrative tasks. Additionally, the teams will co-develop an open platform on which Partners HealthCare, GE Healthcare and third-party developers can rapidly prototype, validate and share the applications with hospitals and clinics around the world.

    With the initial diagnostic imaging focus, early applications will address cases like:

    • Determining the prognostic impact of stroke;
    • Identifying fractures in the emergency room;
    • Tracking how tumors grow or shrink after the administration of novel therapies; and
    • Indicating the likelihood of cancer on ultrasound.

    The applications are being developed based on three criteria:

    1. Patient impact;
    2. Technical capability; and
    3. Market appetite.

    This is to ensure that the solutions being developed are not solely dependent on the data that’s available but specifically target the top clinician pain points and the most critically ill patients. The goal is to bring the most promising solutions to market faster, so they can start making an impact for hospitals, health systems and patients globally sooner.

    Spinal injury patients represent the types of cases where deep learning applications can help clinicians deliver faster, more efficient care, as the patients need to be treated immediately or run the risk of significant and permanent damage. For a single patient, a lumbar spine magnetic resonance imaging (MRI) exam may generate up to 300 images. In addition, a doctor may need to review prior scans and notes in a patient’s electronic medical record before making a diagnosis. A deep learning application could be leveraged to quickly analyze the data and determine the most critical images for the radiologist to read, shortening the time to treatment for trauma patients, and enabling the clinician to deliver more personalized and comprehensive care for all patients – critically injured or not.

    “We’re evolving the healthcare system to be able to take advantage of the benefits of deep learning, bringing together hospitals, data sets and clinical and technical minds unlike ever before,” said Keith Dreyer, DO, Ph.D., chief data science officer, Departments of Radiology at MGH and BWH. “The scope reflects the reality that advancements in clinical data science require substantial commitments of capital, expertise, personnel and cooperation between the system and industry.”

    Watch a VIDEO interview with MGH Center for Clinical Data Science director Mark Michalski on the development of artificial intelligence to aid radiology.

    Read the article "How Artificial Intelligence Will Change Medical Imaging."

    For more information: www.gehealthcare.com, www.partners.org


              How Artificial Intelligence Will Change Medical Imaging   
    AI, deep learning, artificial intelligence, medical imaging, cardiology, echo AI, clinical decision support, echocardiography

    An example of artificial intelligence from the start-up company Viz. The image shows how the AI software automatically reviews an echocardiogram, completes an automated left ventricular ejection fraction quantification and then presents the data side by side with the original cardiology report. The goal of the software is to augment clinicians and cardiologists by helping them speed workflow, act as a second set of eyes and aid clinical decision support.

    An example of how Agfa is integrating IBM Watson into its radiology workflow. Watson reviewed the X-ray images and the imaging order, determined the patient had lung cancer and a cardiac history, and pulled in the relevant prior exams, sections of the patient history, and cardiology and oncology department information. It also pulled in recent lab values and the drugs the patient is currently taking. This allows for a more complete view of the patient's condition and may aid in diagnosis or in determining the next step in care.

    Artificial intelligence (AI) has captured the imagination and attention of doctors over the past couple years as several companies and large research hospitals work to perfect these systems for clinical use. The first concrete examples of how AI (also called deep learning, machine learning or artificial neural networks) will help clinicians are now being commercialized. These systems may offer a paradigm shift in how clinicians work in an effort to significantly boost workflow efficiency, while at the same time improving care and patient throughput. 

    Today, one of the biggest problems facing physicians and clinicians in general is the overload of too much patient information to sift through. This rapid accumulation of electronic data is thanks to the advent of electronic medical records (EMRs) and the capture of all sorts of data about a patient that was not previously recorded, or at least not easily data mined. This includes imaging data, exam and procedure reports, lab values, pathology reports, waveforms, data automatically downloaded from implantable electrophysiology devices, data transferred from the imaging and diagnostics systems themselves, as well as the information entered in the EMR, admission, discharge and transfer (ADT), hospital information system (HIS) and billing software. In the next couple years there will be a further data explosion with the use of bidirectional patient portals, where patients can upload their own data and images to their EMRs. This will include images shot with their phones of things like wound site healing to reduce the need for in-person follow-up office visits. It also will include medication compliance tracking, blood pressure and weight logs, blood sugar, anticoagulant INR and other home monitoring test results, and activity tracking from apps, wearables and the evolving Internet of things (IoT) to aid in keeping patients healthy.

    Physicians liken all this data to drinking from a firehose because it is overwhelming. Many say it is very difficult or impossible to go through the large volumes of data to pick out what is clinically relevant or actionable. It is easy for things to fall through the cracks or for things to be lost to patient follow-up. This issue is further compounded when you add factors like increasing patient volumes, lower reimbursements, bundled payments and the conversion from fee-for-service to a fee-for-value reimbursement system. 

    This is where artificial intelligence will play a key role in the next couple of years. AI will not be diagnosing patients and replacing doctors — it will be augmenting their ability to find the key, relevant data they need to care for a patient, and presenting it in a concise, easily digestible format. When a radiologist calls up a chest computed tomography (CT) scan to read, the AI will review the image and identify potential findings immediately — from the image and also by combing through the patient history related to the particular anatomy scanned. If the exam order is for chest pain, the AI system will call up:

    • All the relevant data and prior exams specific to prior cardiac history;
    • Pharmacy information regarding drugs specific to COPD, heart failure, coronary disease and anticoagulants;
    • Prior imaging exams from any modality of the chest that may aid in diagnosis;
    • Prior reports for that imaging;
    • Prior thoracic or cardiac procedures;
    • Recent lab results; and
    • Any pathology reports that relate to specimens collected from the thorax.

    Patient history from prior reports or the EMR that may be relevant to potential causes of chest pain will also be collected by the AI and displayed in brief with links to the full information (such as a history of aortic aneurysm, high blood pressure, coronary blockages, smoking, prior pulmonary embolism, cancer, implantable devices or deep vein thrombosis). This information would otherwise take too long to collect, or the physician might not know it exists and so would not have spent time looking for it.   

    Watch the VIDEO “Examples of Artificial Intelligence in Medical Imaging Diagnostics.” This shows an example of how AI can assess aortic dissection CT images.
     

    Watch the VIDEO “Development of Artificial Intelligence to Aid Radiology,” an interview with Mark Michalski, M.D., director of the Center for Clinical Data Science at Massachusetts General Hospital, explaining the basis of artificial intelligence in radiology.

    At the 2017 Healthcare Information and Management Systems Society (HIMSS) annual conference in February, several vendors showed some of the first concrete examples of how this type of AI works. IBM/Merge, Philips, Agfa and Siemens have already started integrating AI into their medical imaging software systems. GE showed predictive analytics software that uses elements of AI to model the impact on imaging departments when someone calls in sick or patient volumes increase. Vital showed similar work-in-progress predictive analytics software for imaging equipment utilization. Others, including several analytics companies and startups, showed software that uses AI to quickly sift through massive amounts of big data or offer immediate clinical decision support for appropriate use criteria, the best test or imaging to make a diagnosis, or even differential diagnoses.  

    Philips uses AI as a component of its new Illumeo software with adaptive intelligence, which automatically pulls in related prior exams for radiology. The user can click on an area of the anatomy in a specific MPI view, and AI will find and open prior imaging studies to show the same anatomy, slice and orientation. For oncology imaging, with a couple clicks on the tumor in the image, the AI will perform an automated quantification and then perform the same measures on the priors, presenting a side-by-side comparison of the tumor assessment. This can significantly reduce the time involved with tumor tracking assessment and speed workflow.  

    Read the blog about AI at HIMSS 2017 "Two Technologies That Offer a Paradigm Shift in Medicine at HIMSS 2017."

     

    AI is Elementary to Watson

    IBM Watson has been cited for the past few years as being in the forefront of medical AI, but has yet to commercialize the technology. Some of the first versions of work-in-progress software were shown at HIMSS by partner vendors Agfa and Siemens. Agfa showed an impressive example of how the technology works. A digital radiography (DR) chest X-ray exam was called up and Watson reviewed the image and determined the patient had small-cell lung cancer and evidence of both lung and heart surgery. Watson then searched the picture archiving and communication system (PACS), EMR and departmental reporting systems to bring in:

    • Prior chest imaging studies;
    • Cardiology report information;
    • Medications the patient is currently taking;
    • Patient history relevant to the patient's COPD and history of smoking that might relate to the current exam;
    • Recent lab reports;
    • Oncology patient encounters including chemotherapy; and
    • Radiation therapy treatments.

    When the radiologist opens the study, all this information is presented in a concise format and greatly enhances the picture of this patient’s health. Agfa said the goal is to improve the radiologist’s understanding of the patient to improve the diagnosis, therapies and resulting patient outcomes without adding more burden on the clinician. 

    IBM purchased Merge Healthcare in 2015 for $1 billion, partly to get an established foothold in the medical IT market. However, the purchase also gave Watson millions of radiology studies and a vast amount of existing medical record data to help train the AI in evaluating patient data and get better at reading imaging exams. IBM Watson is now licensing its software through third-party agreements with other health IT vendors. The contracts stipulate that each vendor needs to add additional value to Watson with their own programming, not just become a reseller. Probably the most important stipulation of these new contracts is that vendors are also required to share access to all the patient data and imaging studies they have access to. This allows Watson to continue to hone its clinical intelligence with millions of new patient records.  
     

    The Basics of Machine Learning

    Access to vast quantities of patient data and images is needed to feed the AI software algorithms the educational materials they learn from. Sorting through massive amounts of big data is a major component of how AI learns what is important to clinicians, discovers which data elements relate to various disease states, and gains clinical understanding. It is a similar process to medical students learning the ropes, but it uses far more educational input than a human could comprehend. The first step in machine learning software is for it to ingest medical textbooks and care guidelines and then review examples of clinical cases. Unlike human students, the number of cases AI uses to learn runs into the millions. 

    For cases where the AI did not accurately determine the disease state, or found incorrect or irrelevant data, software programmers go back and refine the AI algorithm iteration after iteration until the software gets it right in the majority of cases. In medicine there are so many variables that it is difficult for either people or machines to always arrive at the correct diagnosis. However, percentage-wise, experts now say AI software reading medical imaging studies can often match, or in some cases outperform, human radiologists. This is especially true for rare diseases or presentations, where a radiologist might only see a handful of such cases during their entire career. AI has the advantage of reviewing hundreds or even thousands of these rare studies from archives to become proficient at reading them and identifying a proper diagnosis. Also, unlike the human mind, the material always remains fresh in the computer's mind. 

    AI algorithms read medical images the way radiologists do, by identifying patterns. AI systems are trained using vast numbers of exams to determine what normal anatomy looks like on scans from CT, magnetic resonance imaging (MRI), ultrasound or nuclear imaging. Then abnormal cases are used to train the eye of the AI system to identify anomalies, similar to computer-aided detection (CAD) software. However, unlike CAD, which just highlights areas a radiologist may want to take a closer look at, AI software has a more analytical cognitive ability, based on much more clinical data and reading experience than previous generations of CAD software. For this reason, experts who are helping develop AI for medicine often refer to this cognitive ability as “CAD that works.”

       

    AI All Around Us and the Next Step in Radiology

    Deep learning computers are already driving cars, monitoring financial data for theft, translating languages and recognizing people’s moods through facial recognition, said Keith Dreyer, DO, Ph.D., vice chairman of radiology computing and information sciences at Massachusetts General Hospital, Boston. He was among the key speakers at the opening session of the 2016 Radiological Society of North America (RSNA) meeting in November, where he discussed AI’s entry into medical imaging. He is also in charge of his institution’s development of its own AI system to assist physicians at Mass General. 

    “The data science revolution started about five years ago with the advent of IBM Watson and Google Brain,” Dreyer explained. He said the 2012 introduction of deep learning algorithms really pushed AI forward and by 2014 the scales began to tip in terms of machines reading radiology studies correctly, reaching around 95 percent accuracy.

    Dreyer said AI software for imaging is not new, as most people already use it on Facebook to automatically tag friends the platform identifies using facial recognition algorithms. He said training AI is a similar concept: you can start by showing a computer photos of cats and dogs, and after enough images it can be trained to tell the difference. 

    AI requires big data, massive computing power, powerful algorithms, broad investments and then a lot of translation and integration from a programming standpoint before it can be commercialized, Dreyer said. 

    From a radiology standpoint, he said there are two types of AI. The first type, which is already starting to see U.S. Food and Drug Administration approval, is quantification AI, which only requires 510(k) clearance. AI developed for clinical interpretation will require FDA pre-market approval (PMA), which involves clinical trials.

    Before machines start conducting primary or peer review reads, Dreyer said it is much more likely AI will be used to read old exams retrospectively to help hospitals find new patients for conditions the patient may not realize they have. He said about 9 million Americans qualify for low-dose CT scans to screen them for lung cancer. He said AI can be trained to search through all the prior chest CT exams on record in the health system to help identify patients that may have lung cancer. This type of retrospective screening may apply to other disease states as well, especially if the AI can pull in genomic testing results to narrow the review to patients who are predisposed to some diseases. 

    He said overall, AI offers a major opportunity to enhance and augment radiology reading, not to replace radiologists. 

    “We are focused on talking into a microphone and we are ignoring all this other data that is out there in the patient record,” Dreyer said. “We need to look at the imaging as just another source of data for the patient.” He said AI can help automate quantification and quickly pull out related patient data from the EMR that will aid diagnosis or the understanding of a patient’s condition.  

    Watch a VIDEO interview with Eliot L. Siegel, M.D., Dwyer Lecturer and Closing Keynote Speaker, vice chair of radiology at the University of Maryland and chief of radiology for the VA Maryland Healthcare System, in which he talks about the current state of the industry in computer-aided detection and diagnosis at SIIM 2016. 

    Read the blog “How Intelligent Machines Could Make a Difference in Radiology.”


              Deep Learning in Medical Imaging to Create $300 Million Market by 2021   

    February 15, 2017 — Deep learning, a form of artificial intelligence, will increasingly be used in the interpretation of medical images to address many long-standing industry challenges. This will lead to a $300 million market by 2021, according to a new report by Signify Research, an independent supplier of market intelligence and consultancy to the global healthcare information technology industry.

    In most countries, there are not enough radiologists to meet the ever-increasing demand for medical imaging. Consequently, many radiologists are working at full capacity. The situation will likely get worse, as imaging volumes are increasing at a faster rate than new radiologists entering the field. Even when radiology departments are well-resourced, radiologists are under increasing pressure due to declining reimbursement rates and the transition from volume-based to value-based care delivery. Moreover, the manual interpretation of medical images by radiologists is subjective, often based on a combination of experience and intuition, which can lead to clinical errors.

    A new breed of image analysis software that uses advanced machine learning methods, e.g. deep learning, is tackling these problems by taking on many of the repetitive and time-consuming tasks performed by radiologists. There is a growing array of “intelligent” image analysis products that automate various stages of the imaging diagnosis workflow. In cancer screening, computer-aided detection can alert radiologists to suspicious lesions. In the follow-up diagnosis, quantitative imaging tools provide automated measurements of anatomical features. At the top-end of the scale of diagnostic support, computer-aided diagnosis provides probability-driven, differential diagnosis options for physicians to consider as they formulate their diagnostic and treatment decisions.

    “Radiology is evolving from a largely descriptive field to a more quantitative discipline. Intelligent software tools that combine quantitative imaging and clinical workflow features will not only enhance radiologist productivity, but also improve diagnostic accuracy,” said Simon Harris, principal analyst at Signify Research and author of the report.

    However, it is early days for deep learning in medical imaging. There are only a handful of commercial products and it is uncertain how well deep learning will cope with variations in patient demographics, imaging protocols, image artifacts, etc. Many radiologists were left underwhelmed by early-generation computer-aided detection, which used traditional machine learning and relied heavily on feature engineering. They remain skeptical of machine learning’s abilities, despite the leap in performance of today’s deep learning solutions, which automatically learn about image features from radiologist-annotated images and a “ground truth.” Furthermore, the “black box” nature of deep learning and the lack of traceability as to how results are obtained could lead to legal implications. While none of these problems are insurmountable, healthcare providers are likely to take a “wait and see” approach before investing in deep learning-based solutions.

    “Deep learning is a truly transformative technology and the longer-term impact on the radiology market should not be underestimated. It’s more a question of when, not if, machine learning will be routinely used in imaging diagnosis”, Harris concluded.

    “Machine Learning in Medical Imaging – 2017 Edition” provides a data-centric and global outlook on the current and projected uptake of machine learning in medical imaging. The report blends primary data collected from in-depth interviews with healthcare professionals and technology vendors, to provide a balanced and objective view of the market.

    For more information: www.signifyresearch.net


              Researchers Develop "Value-Aware" SSD OPtimized For Image Recognition Systems   
    Japanese researchers at Chuo University developed an SSD (solid-state drive) that combines new data-aware techniques with a deep neural network's error tolerance, making it well suited for deep learning-based image recognition applications.
          Break Into the Kaggle Top 1% in Minutes   


    Author: Wu Xiaohui

    You may know the feeling: when first getting into machine learning, we usually start with well-known public datasets like MNIST and CIFAR-10, quickly reproducing other people's results, yet it all feels too simple, too unreal. These datasets are too "perfect" (clean inputs, balanced classes, a test set drawn from essentially the same distribution, and plenty of ready-made reference models). To become a real data scientist, running models on such datasets is far from enough, and in practice you will almost never encounter data like this (real-world data tends to have incomplete inputs, severely imbalanced classes, test sets whose distribution differs or even drifts over time, and almost no papers to refer to), which often leaves newcomers to the job flustered and at a loss.

    Kaggle offers a middle ground between "perfect" and real: the problems are generally well-defined yet come with their share of difficulties, and usually no fully mature solution exists. Interacting with other competitors on the forums is a constant source of inspiration, and even accomplished practitioners and veterans often learn a great deal from competing; the thrill of battling teams from all over the world is hard to give up. More importantly, Kaggle is a competition platform widely recognized by the industry: a good placing in a high-quality Kaggle competition is excellent proof of your ability and a bright line on your resume. And if you win a gold medal and break into the prize pool, so much the better for both fame and fortune.

    Kaggle is for people who:

    • are complete beginners but full of curiosity about data science;
    • want to hone their data mining and machine learning skills and become true data scientists;
    • want to win prize money and come out on top.

    0 Introduction

    Kaggle was founded in 2010 and has since been acquired by Google. It is the world's top data science competition platform and enjoys a strong reputation in the field. The author took part in the Quora Question Pairs competition hosted by Quora and finished in the top 1% (out of 3,307 teams). It was my first Kaggle campaign, so I am writing this article to organize the approach systematically and to share what our team learned along the way.

    Quora Question Pairs is a natural language processing (NLP) competition; the task can be summarized as "predict the probability that two questions are semantically similar." A sample looks like this:


    [Figure: sample question pairs from the dataset]

    Perhaps because NLP competitions are rare on Kaggle, this seemingly simple contest attracted a large field of teams. Since it is an NLP problem, the discussion below leans toward NLP. The article is divided into the following three parts:

    • The general playbook of a Kaggle competition. (Part 1: The Competition)
    • What our team and other strong teams learned from competing. (Part 2: Experience)
    • The practical tools you need to master for a Kaggle competition. (Part 3: Tools)

    1 The Competition

    For convenience, let's first define a few terms:

    • Feature: a feature variable (independent variable), an observable attribute of a sample; usually the model's input.
    • Label: the target variable to be predicted; usually the model's output.
    • Train Data: labeled data, provided by the organizer.
    • Test Data: data whose labels are unknown, used to score the competition; provided by the organizer.
    • Train Set: a split of the Train Data used to train the model (commonly within cross-validation).
    • Valid Set: a split of the Train Data used to validate the model (commonly within cross-validation).

    1.1 Analyzing the Problem

    Once you have the problem statement, the first step is to frame it: translate the problem into the corresponding machine learning task. The most common task types on Kaggle are:

    • Regression
    • Classification (binary, multi-class, multi-label)

    Multi-class classification predicts one class out of many, while multi-label classification predicts several classes at once.

    The Quora competition, for instance, is binary classification, since we only need to judge whether two questions are semantically similar.

    1.2 Data Exploration

    Data mining is, of course, about digging what we want out of the data, and that requires analyzing it by hand; only then can we discover the problems and characteristics hidden in it. While inspecting the data, keep the following questions in mind:

    • What cleaning and processing does the data reasonably need?
    • Given the data types, what features can be mined?
    • Which features in the data will help predict the label?

    1.2.1 Statistical Analysis

    For numerical variables we can compute statistics such as min, max, mean, median and std. pandas makes this easy; the results look like this:


    [Figure: pandas summary statistics]

    From such a table you can check whether the Label is balanced; if it is not, you will need to oversample the minority class or downsample the majority class. We can also compute the correlation coefficients between numerical variables; pandas gives you the correlation matrix with one call:


    [Figure: correlation coefficient matrix]

    Scanning the correlation matrix lets you spot highly correlated features and redundancy among features. For text variables you can compute term frequency (TF), TF-IDF, text length and so on; more detail can be found via the link in the original post. A minimal pandas sketch of the calls above follows.
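    By way of illustration, here is a minimal pandas sketch of the statistics and correlation matrix just described; the file and column names are hypothetical:

        import pandas as pd

        df = pd.read_csv("train.csv")  # hypothetical training file

        # Summary statistics (min, max, mean, median via "50%", std) for numeric columns
        print(df.describe())

        # Is the Label balanced? Check the class proportions
        print(df["is_duplicate"].value_counts(normalize=True))  # hypothetical label column

        # Correlation matrix of the numerical variables (Pearson by default)
        print(df.select_dtypes("number").corr())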

    1.2.2 Visualization

    Humans are visual creatures and absorb graphics more easily than tables, so it helps to render statistics as charts for inspection and discovery. For example, a histogram of question frequencies:


    [Figure: histogram of question frequencies]

    Or a plot of the correlation matrix:


    [Figure: correlation matrix plot]

    The go-to visualization tools are matplotlib and seaborn. You can also skip this step, since visualization is not the crux of solving the problem. A possible sketch of the two charts above follows.
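    Continuing the pandas sketch above, one way to draw those two charts with matplotlib and seaborn (column names again hypothetical):

        import matplotlib.pyplot as plt
        import seaborn as sns

        # Histogram of how many times each question text occurs
        df["question1"].value_counts().hist(bins=50)
        plt.xlabel("occurrences of a question")
        plt.ylabel("count")
        plt.show()

        # Correlation matrix drawn as a heatmap
        sns.heatmap(df.select_dtypes("number").corr(), cmap="coolwarm", center=0)
        plt.show()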

    1.3 Data Preprocessing

    Freshly acquired data tends to be noisy, incomplete and messy, so it must be cleaned and processed before the later work can proceed. Different variable types call for different cleaning and processing methods (a small sketch follows the list below):

    • For numerical variables, handle outliers, missing values and anomalies.
    • For categorical variables, convert to one-hot encoding.
    • Text is the hardest type: it contains junk characters, typos, formulas, inconsistent units and date formats, and so on. It also needs punctuation handling, tokenization and stop-word removal, and for English possibly lemmatization and stemming.
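    As an example, a small sketch of the categorical and text steps in the list above, using pandas and NLTK; the column names are hypothetical, and NLTK's stopword list must be downloaded once beforehand:

        import re
        import pandas as pd
        from nltk.corpus import stopwords          # requires nltk.download("stopwords")
        from nltk.stem import SnowballStemmer

        # Categorical variable -> one-hot encoding
        df = pd.get_dummies(df, columns=["category_col"])  # hypothetical column

        stops = set(stopwords.words("english"))
        stemmer = SnowballStemmer("english")

        def clean_text(text):
            text = re.sub(r"[^a-z0-9\s]", " ", str(text).lower())  # strip junk characters
            tokens = [w for w in text.split() if w not in stops]   # drop stop words
            return " ".join(stemmer.stem(w) for w in tokens)       # stem what remains

        df["question1"] = df["question1"].apply(clean_text)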

    1.4 Feature Engineering

    As the saying goes, features are king: they are the single most decisive factor in model performance. We need to explore the data and apply human prior knowledge to distill features from it.

    1.4.1 Feature Extraction

    Extract as many features as you can: anything you believe helps solve the problem can become a feature. Feature extraction is an endless iteration and by far the most brain-burning stage; it will torment you for the entire competition, but it is also the key to winning and well worth the time.

    So how do you discover features? Staring at the dataset certainly isn't enough. If you are new, spend some time on the Forum first, see how others do feature extraction, and think hard. Feature extraction leans heavily on experience, but there are still patterns to follow (a sketch of some common text features appears after the list):

    • For numerical variables, discover new features through linear and polynomial combinations.
    • For text, there is a standard repertoire: text length, embeddings, TF-IDF, LDA, LSI and so on; you can even extract text features with deep learning (hidden layers).
    • For a deeper understanding of the data, think about how the dataset was constructed; this can reveal "magic features" that may boost performance dramatically. In this Quora competition, some participants published a few magic features.
    • Error analysis can also surface new features (see Section 1.5.2).
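    For illustration, here is a sketch of two routine text features from the list, length difference and TF-IDF cosine similarity, built with scikit-learn; these are examples of the genre, not the exact features used in the competition:

        import pandas as pd
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # Length features
        df["len_q1"] = df["question1"].str.len()
        df["len_diff"] = (df["question1"].str.len() - df["question2"].str.len()).abs()

        # TF-IDF vectors, fitted on the union of all question text
        tfidf = TfidfVectorizer(ngram_range=(1, 2), min_df=3)
        tfidf.fit(pd.concat([df["question1"], df["question2"]]))
        q1 = tfidf.transform(df["question1"])
        q2 = tfidf.transform(df["question2"])

        # Cosine similarity between the two questions of each pair
        df["tfidf_cosine"] = [cosine_similarity(a, b)[0, 0] for a, b in zip(q1, q2)]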

    1.4.2 Feature Selection

    During extraction we grab as many features as possible, but too many features bring redundancy, noise and a greater risk of overfitting, so we need to screen them. Feature selection speeds up training and can even improve accuracy.

    There are many selection methods. The simplest is the correlation coefficient, which measures the linear relationship between two variables and lies in the interval [-1.0, 1.0]. The closer the value is to 0, the less linearly correlated the two variables are; note that a value of 0 does not mean the variables are unrelated, only that they are not linearly related.

    Let's work through an example of reading a correlation matrix:


    [Figure: example correlation matrix]

    The correlation matrix is symmetric, so you only need to look at its lower-left or upper-right triangle. Two things to check:

    • The correlation between a Feature and the Label can be read as that Feature's importance; the closer to 1 or -1, the better.
    • Correlation between Features should be low; two highly correlated Features are likely redundant.

    Beyond that, you can train models to screen features: linear models with L1 or L2 penalties, Random Forest, GBDT and the like all report feature importances. In this competition we tried all of the above methods, used the average importance across methods as the final reference metric, and dropped low-scoring features, roughly as sketched below.
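    A sketch of that model-based screening, averaging the importances of two tree ensembles; X, y and feature_names are assumed to exist, and the cut-off is illustrative:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

        rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        gb = GradientBoostingClassifier(random_state=0).fit(X, y)

        # Average the two importance vectors as the final reference metric
        importance = (rf.feature_importances_ + gb.feature_importances_) / 2
        ranked = sorted(zip(feature_names, importance), key=lambda t: -t[1])

        # Keep everything above an (illustrative) threshold
        keep = [name for name, score in ranked if score > 1e-3]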

    1.5 Modeling

    At last we arrive at machine learning itself. This is the chapter where the alchemy begins.

    1.5.1 Models

    There are many machine learning models, and it pays to try them all: you not only measure what works, you also learn each model's quirks. Almost every model comes in both regression and classification flavors. The usual candidates are:

    • KNN
    • SVM
    • Linear Model (with penalty terms)
    • ExtraTree
    • RandomForest
    • Gradient Boost Tree
    • Neural Network

    Fortunately, ready-made implementations exist for all of them (scikit-learn, XGBoost, LightGBM, etc.), so there is no need to reinvent the wheel. You should still understand how each model works so that tuning feels effortless. And of course you can use deep learning toolkits such as PyTorch, TensorFlow or Keras to build your own custom models and play your own tricks. A sketch of looping several models through the same evaluation follows.
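    As a sketch of "try them all," the same cross-validated score can be looped over several scikit-learn models; the parameters are placeholders and X, y are assumed to exist:

        from sklearn.model_selection import cross_val_score
        from sklearn.linear_model import LogisticRegression
        from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
        from sklearn.neighbors import KNeighborsClassifier

        models = {
            "linear model (L2 penalty)": LogisticRegression(C=1.0, max_iter=1000),
            "random forest": RandomForestClassifier(n_estimators=300),
            "extra trees": ExtraTreesClassifier(n_estimators=300),
            "knn": KNeighborsClassifier(n_neighbors=15),
        }
        for name, model in models.items():
            scores = cross_val_score(model, X, y, cv=5, scoring="neg_log_loss")
            print(f"{name}: {scores.mean():.4f} (+/- {scores.std():.4f})")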

    1.5.2 Error Analysis

    No model is perfect; every one makes mistakes. To understand what a model is getting wrong, inspect the samples it misclassifies and look for what they have in common; then you can train a better model. This resembles the Boosting idea discussed later under Ensembles, except that here a human inspects the errors while Boosting leaves it to the machine. Iterating error analysis -> new features -> new model -> error analysis steadily improves results, and it also cultivates your feel for data.

    For example, during error analysis in this competition we noticed that some question pairs look very similar on the surface but mention different locations at the end, so they are in fact semantically different; our model nevertheless labeled them as similar. Such as this sample:

    Question1: Which is the best digital marketing institution in banglore?
    Question2: Which is the best digital marketing institute in Pune?

    To let the model handle such samples, we removed the longest common substring of the two questions and trained a new deep learning model on the remainder, in effect telling the model not to call such pairs similar. Adding this feature bought us a modest improvement.
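    A rough sketch of that trick with Python's difflib, removing the longest common substring and keeping the differing remainders; this illustrates the idea rather than reproducing the team's code:

        from difflib import SequenceMatcher

        def strip_longest_common_substring(q1, q2):
            m = SequenceMatcher(None, q1, q2).find_longest_match(0, len(q1), 0, len(q2))
            if m.size == 0:
                return q1, q2
            # Cut the shared span out of both questions
            return q1[:m.a] + q1[m.a + m.size:], q2[:m.b] + q2[m.b + m.size:]

        q1 = "Which is the best digital marketing institution in banglore?"
        q2 = "Which is the best digital marketing institute in Pune?"
        print(strip_longest_common_substring(q1, q2))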

    1.5.3 Hyperparameter Tuning

    Before training, we must preset parameters that determine the model structure (e.g. tree depth) and the optimization process (e.g. learning rate). These are called hyper-parameters, and different settings yield different results. Tuning is often called "alchemy" or a "dark art," but experience suggests a workable routine (sketched after the list):

    • From experience, pick the hyper-parameters that affect performance the most.
    • Set each one's search space from experience, e.g. [0.001, 0.1] for the learning rate.
    • Choose a search algorithm, such as Random Search, Grid Search, or a heuristic method.
    • Validate the model's generalization ability (see the next subsection).
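    One way to sketch that routine is scikit-learn's RandomizedSearchCV; the space below is illustrative, with the learning-rate range from the list:

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import RandomizedSearchCV

        param_space = {
            "learning_rate": np.logspace(-3, -1, 20),  # search space [0.001, 0.1]
            "max_depth": [3, 5, 7, 9],                 # controls model structure
            "n_estimators": [100, 300, 500],
        }
        search = RandomizedSearchCV(GradientBoostingClassifier(), param_space,
                                    n_iter=30, cv=5, scoring="neg_log_loss")
        search.fit(X, y)
        print(search.best_params_, search.best_score_)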

    1.5.4 Model Validation

    Since the Test Data labels are unknown, we must build our own held-out data to validate generalization: split the Train Data into a Train Set for training and a Valid Set for validation.

    Simple split

    Split the Train Data once in some fixed way, e.g. randomly take 70% as the Train Set and the remaining 30% as the Valid Set, and always use these two fixed pieces to train and validate. The drawback is obvious: the model never sees all of the training data, so the validation estimate is biased. This approach is usually reserved for when training data is plentiful and the model trains slowly.

    Cross-validation

    Cross-validation splits the whole training set randomly into K parts and trains K models; each model uses K-1 parts as its Train Set and holds out the remaining part as its Valid Set, hence the name K-fold. K can be whatever you like, but values between 3 and 10 are typical. The mean and std of the K models' scores judge the model (the mean reflects skill, the std how easily it overfits), and K-fold validation results are usually more reliable.

    If the Labels are imbalanced, use Stratified K-fold so that each Train Set and Valid Set keeps roughly the same Label proportions. A sketch follows.
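    A minimal sketch of K-fold validation with the stratified variant; X and y are assumed numpy arrays and model any scikit-learn classifier:

        import numpy as np
        from sklearn.metrics import log_loss
        from sklearn.model_selection import StratifiedKFold

        scores = []
        skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
        for train_idx, valid_idx in skf.split(X, y):
            model.fit(X[train_idx], y[train_idx])
            pred = model.predict_proba(X[valid_idx])[:, 1]
            scores.append(log_loss(y[valid_idx], pred))

        # mean reflects skill; std hints at how easily the model overfits
        print(np.mean(scores), np.std(scores))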

    1.6 Ensembling

    There is a saying: "Features first, Ensemble last." Features set the ceiling on model performance, and ensembling takes you closer to that ceiling. Good ensembles are about being "good and different": the component models should have learned different aspects of the problem. An intuitive example: on a math exam, A is better at algebra and B is better at geometry, so working together they usually score higher than either alone.

    The common ensemble methods are Bagging, Boosting, Stacking and Blending.

    1.6.1 Bagging

    Bagging simply takes a (possibly weighted) average of, or a vote over, the predictions of several models (base learners). Its advantage is that the base learners can be trained in parallel; Random Forest is built on this idea. A down-to-earth example, with a sketch after it:


    [Figure: bagging illustration]

    The teacher sets two addition problems; a weighted combination of student A's and student B's answers is more accurate than either student's answer alone.
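    In code, the simplest bagging is just an average of predictions; a sketch assuming two fitted models and a test matrix:

        import numpy as np

        pred_a = model_a.predict_proba(X_test)[:, 1]
        pred_b = model_b.predict_proba(X_test)[:, 1]

        # Weighted average of the base learners' predictions
        bagged = np.average([pred_a, pred_b], axis=0, weights=[0.5, 0.5])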

    Bagging usually has no explicit optimization objective, but a method called Bagging Ensemble Selection uses a greedy algorithm to bag models toward a target metric. We used it in this competition.

    1.6.2 Boosting

    Boosting is about learning from one's mistakes: each new base learner is trained to compensate for the errors of the previous one. Famous algorithms include AdaBoost and Gradient Boost; Gradient Boost Trees embody this idea.

    The loop mentioned in Section 1.5.2 (error analysis -> feature extraction -> model training -> error analysis) is very similar in spirit to Boosting.

    1.6.3 Stacking

    Stacking trains a new model (a meta-learner) to learn how to combine the base learners; the idea originates from the paper Stacked Generalization. If Bagging is a linear combination of base classifiers, then Stacking is a nonlinear combination. Stacking is very flexible: learners can be piled up layer by layer into a net-like structure.

    For a more concrete illustration, back to those two addition problems:


    [Figure: stacking illustration]

    Here A and B can be seen as base learners, while C, D and E are meta-learners.

    • Stage 1: A and B each write down their answers.
    • Stage 2: C and D peek at A's and B's answers. C thinks A and B are equally smart; D thinks A is a bit smarter than B. Each combines A's and B's answers and gives an answer of their own.
    • Stage 3: E peeks at C's and D's answers, decides D is smarter than C, and then gives its own answer as the final one.

    When implementing Stacking, the point to watch is label leakage (Label Leak). Training a meta-learner requires the previous layer's predictions on the Train Data as features; if we train on the Train Data and then predict on that same Train Data, labels leak. To avoid this, apply K-fold to every learner and stitch the K models' predictions on their Valid Sets together as the input to the next layer, as below:


    [Figure: out-of-fold stacking scheme]

    As the figure shows, we also need predictions for the Test Data. There are two options: average the K models' predictions on the Test Data, or retrain a new model on all of the Train Data to predict the Test Data. In practice it is best to save every learner's predictions on both the Train Data and the Test Data, which makes later training and prediction convenient.

    One more caveat for Stacking: fix the K-fold to limit overfitting to the Valid Set, that is, share one global K-fold, and if you compete as a team, share the same K-fold among teammates. For the detailed reasoning behind fixing the K-fold, see the link in the original post; a compact sketch of the scheme follows.
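    A compact sketch of the out-of-fold scheme above: every learner shares one fixed K-fold, its validation-fold predictions are stitched into the next layer's training features, and its K test predictions are averaged; X, y and X_test are assumed numpy arrays:

        import numpy as np
        from sklearn.model_selection import KFold

        kf = KFold(n_splits=5, shuffle=True, random_state=42)  # one global, fixed K-fold

        def get_oof(model, X, y, X_test):
            oof_train = np.zeros(len(X))                  # next layer's train feature
            oof_test = np.zeros((kf.get_n_splits(), len(X_test)))
            for i, (tr, va) in enumerate(kf.split(X)):
                model.fit(X[tr], y[tr])
                oof_train[va] = model.predict_proba(X[va])[:, 1]  # out-of-fold predictions
                oof_test[i] = model.predict_proba(X_test)[:, 1]
            return oof_train, oof_test.mean(axis=0)       # average the K test predictions

        # Usage: oof_a, test_a = get_oof(model_a, X, y, X_test)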

    1.6.4 Blending

    Blending is very similar to Stacking; for the differences, see the link in the original post.

    1.7 Post-processing

    Sometimes, having confirmed there is no overfitting and with good scores on the validation set, you submit and the test score still disappoints. A likely cause is that the training and test distributions differ. In that case, to raise your LeaderBoard score, you need to adjust the distribution of your test predictions.

    In this competition, for example, positives made up 0.37 of the training data, so the predictions also averaged around 0.37. Someone on Kernels worked out through probing that the positive rate in the test data was about 0.165, so we adjusted our predictions accordingly and improved our score. Details are in the link from the original post; one common correction is sketched below.
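    One common form of that adjustment rescales the predicted odds by the ratio of the two class priors; 0.37 and 0.165 are the figures quoted above, and this is a standard recipe rather than necessarily the exact one the team used:

        def adjust_prior(p, train_prior=0.37, test_prior=0.165):
            # Rescale a predicted probability from the training prior to the test prior
            a = test_prior / train_prior                # positive-class odds factor
            b = (1 - test_prior) / (1 - train_prior)    # negative-class odds factor
            return a * p / (a * p + b * (1 - p))

        print(adjust_prior(0.37))  # a prediction at the old prior maps to ~0.165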

    2 Experience

    2.1 Our Solution (33rd place)

    Deep learning has strong fitting capacity and quickly yields a decent baseline, which gives you an initial feel for the overall difficulty of the problem. But although deep learning spares you tedious manual feature engineering, it has its own ceiling, so extracting traditional hand-crafted features remains essential. We tried methods shared on the Forum as well as features of our own devising. In summary, our hand-crafted features fall into four groups:

    • Text Mining Features: e.g. sentence length; text similarity between the two questions, such as N-gram edit distance and Jaccard distance; nouns, verbs and question words shared by the two questions.
    • Embedding Features: sum pre-trained word vectors into sentence vectors, then measure the distance between the two sentence vectors, e.g. cosine similarity or Euclidean distance.
    • Vector Space Features: represent the sentences with TF-IDF matrices and compute similarity.
    • Magic Features: features some competitors on the Forum discovered by reasoning about how the dataset was constructed; they tend to correlate strongly with the Label and can boost accuracy substantially.

    Our system as a whole used a Stacking framework, as shown below:


    [Figure: our stacking pipeline]
    • Stage 1: feed the two questions plus the Magic Features into deep learning models and use their outputs as features for the next layer (the deep learning models act as feature extractors here). We trained dozens of deep learning models in total.
    • Stage 2: concatenate the deep learning features with several hundred hand-crafted traditional features as input. At this layer we trained all sorts of models, hundreds or even thousands of them.
    • Stage 3: run Ensemble Selection on the previous layer's outputs.

    Some limitations of deep learning that we ran into during the competition:

    Through error analysis of the deep learning results, and drawing on ideas from the forum, we found that the features deep learning fails to learn fall roughly into two classes:

    • Patterns that occur too rarely in the Train Data for deep learning to pick up the corresponding features; these must be extracted by hand.
    • Because deep learning assumes samples are independent and identically distributed (i.i.d.), it generally learns only per-sample features, not global properties of the data, such as TF-IDF-style features that require corpus-wide term statistics; these too must be extracted by hand.

    Traditional machine learning models and deep learning models also differ in how they represent the data. Even if a traditional model does not outperform deep learning, it may learn different patterns, and ensembling the two captures the strengths of both, yielding a further gain. Using traditional models alongside deep learning is therefore well worth it.

    2.2 The First-Place Solution

    Not long after the competition ended, the first-place team published their solution. Their features fall into three groups:

    • Embedding Features
    • Text Mining Features
    • Structural Features (the Magic Features they mined themselves)

    They likewise used a Stacking framework, with a fixed K-fold:

    • Stage 1: about 300 models, including Deep Learning, XGBoost, LightGBM, ExtraTree, Random Forest and KNN.
    • Stage 2: 150 models trained on the hand-crafted features, the first layer's predictions, and the deep models' hidden layers.
    • Stage 3: two linear models, one with an L1 penalty and one with L2.
    • Stage 4: a weighted average of the third layer's results.

    Comparing notes, we had skipped LDA and LSI features, our N-grams were coarser (they went up to 8-grams), and their Magic Feature mining went deeper. Their deep learning architecture was also better designed: they fed the selected hand-crafted features into the deep models as well, which I believe was key to their result. Feeding hand-crafted features explicitly into a deep model effectively tells it, "you don't need to learn these, go learn something else," so the model can capture more semantic information. The gap between us and them was real.

    3. Tools

    To do good work, one must first sharpen one's tools.

    Besides XGBoost, which everyone knows, the common Kaggle tool worth highlighting here is LightGBM from Microsoft, which we used in this competition. LightGBM is used much like XGBoost; the practical difference is that XGBoost's key structural parameter is tree depth, while LightGBM tunes the number of leaves. Compared with XGBoost, it trains faster and single-model accuracy is slightly better. A small sketch follows.
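    A small sketch of LightGBM's scikit-learn interface with the leaf-count parameter mentioned above; the values are illustrative:

        import lightgbm as lgb

        model = lgb.LGBMClassifier(
            num_leaves=63,       # LightGBM tunes the number of leaves...
            learning_rate=0.05,  # ...where XGBoost would tune tree depth
            n_estimators=500,
        )
        model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)])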

    Tuning is important work too. The main tool is Hyperopt, a general framework that uses search algorithms to optimize an objective; it currently implements Random Search and Tree of Parzen Estimators (TPE).
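    A minimal Hyperopt sketch with the TPE algorithm over the learning-rate range mentioned earlier; the objective is a hypothetical helper returning a validation loss:

        import numpy as np
        from hyperopt import Trials, fmin, hp, tpe

        space = {
            "learning_rate": hp.loguniform("learning_rate", np.log(0.001), np.log(0.1)),
            "num_leaves": hp.choice("num_leaves", [31, 63, 127]),
        }

        def objective(params):
            # Hypothetical helper: train with `params`, return the validation loss
            return train_and_validate(params)

        best = fmin(objective, space, algo=tpe.suggest, max_evals=50, trials=Trials())
        print(best)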

    For Stacking, a Kaggle GrandMaster named Μαριο Μιχαηλιδη (Marios Michailidis) used Java to build StackNet, a toolkit integrating a wide range of machine learning algorithms; it is said your results will improve once you use it, and it is worth a try.

    The commonly used tools are summarized below:

    • Numpy | the essential scientific computing package; implemented in C underneath, so computation is fast.
    • Pandas | high-performance, easy-to-use data structures and data analysis tools.
    • NLTK | the natural language toolkit, bundling many NLP algorithms and resources.
    • Stanford CoreNLP | Stanford's NLP toolkit, callable through NLTK.
    • Gensim | a topic-modeling toolkit that can train word vectors and load pre-trained ones.
    • scikit-learn | the Python machine learning package, covering most machine learning algorithms.
    • XGBoost/LightGBM | two implementation frameworks for gradient boosting.
    • PyTorch/TensorFlow/Keras | the commonly used deep learning frameworks.
    • StackNet | a Stacking toolkit you can use directly once your features are ready.
    • Hyperopt | a general optimization framework, usable for tuning.

    4. Summary and Advice

    • Before entering a competition, gauge whether your hardware can actually carry you through it. If a competition involves tens of thousands of images and you have 2 GB of GPU memory, it is clearly not the competition for you. Once you have chosen one, warm up first: get familiar with the data, run some rough, simple models, see where you stand on the leaderboard, and then iterate from there.
    • Many strong Kagglers share Kernels, often with incisive analyses of the data and baseline models; these are excellent starting material for beginners. Studying other people's methods during a competition helps cultivate your own feel for data, and some Kernels even surface data leaks that can dramatically improve your ranking.
    • Kaggle has run many competitions, and some resemble one another. This Quora competition, for instance, has much in common with the earlier Home Depot Product Search Relevance, whose top finishers had already published their ideas and even code, all of which can be borrowed.
    • Take Ensembling seriously. Our team's final solution implemented the ideas of the paper "Ensemble Selection from Libraries of Models," so some competitions may require reading papers; in deep learning competitions especially, the latest papers and models carry decisive weight.
    • Automating your competition pipeline is key to efficiency, but beginners rarely build a good automated system straight away. My advice is not to rush it: once you have been through the whole competition flow, a framework forms naturally in your head, and building the automation becomes much easier.
    • Finally, one of the most important factors is time. We put roughly three months into this competition, trying every approach we could think of; in the last month we spent essentially every waking hour on it. Placing well demands that investment. It also pays to study the blogs that cover Kaggle competitions (the official blog, for example), if only to avoid detours; the references listed at the end of the original article are all worth careful reading.
    • Last of all, prepare yourself mentally: this is a war of attrition. The competition will put you under pressure; your ranking can collapse overnight, bringing dejection, anxiety, even insomnia. But trust that it will also bring unexpected delights, and if you give it your best you will find it was all worth it.

    End.

    Source: 36大数据 (36dsj.com). Please credit when republishing: 36大数据, Break Into the Kaggle Top 1% in Minutes


              Comment on Object Recognition with Convolutional Neural Networks in the Keras Deep Learning Library by Bruce Wind   
    Hi Jason, thanks for sharing. I tested the code you provided, but my machine does not support CUDA, so it runs very slowly (half an hour per epoch). Since you have such a powerful computer, could you please show the results after hundreds or thousands of epochs? Thanks.
              Comment on How to Setup a Python Environment for Machine Learning and Deep Learning with Anaconda by Jason Brownlee   
    Very nice Candida.
              Comment on How to Setup a Python Environment for Machine Learning and Deep Learning with Anaconda by Candida   
    scipy: 0.19.0 numpy: 1.12.1 matplotlib: 2.0.2 pandas: 0.20.1 statsmodels: 0.8.0 sklearn: 0.18.1
                 
    The Financial Times mentioned my research using deep learning (these days fashionably called "artificial intelligence") to estimate license plate auction prices. My deep learning paper on predicting license plate auction prices made it to the Financial Times! This is actually the first paper in a series: follow-up papers will use the prediction model to look into various investment anomalies. Somewhere down the road there will also be a field experiment where we will provide auction participants […]
              Write a research paper from scratch till publication by zwebmaster   
    Given a research paper on machine learning, specifically deep learning (you can also select a research paper yourself based on your own research): you have to do academic research, suggest an honest, genuine improvement to this research paper, and then write a new paper... (Budget: $250 - $750 USD, Jobs: Artificial Intelligence, Machine Learning, Research)
              Frighteningly accurate ‘mind reading’ AI reads brain scans to guess what you’re thinking   

    Carnegie Mellon University researchers have developed a deep learning neural network that's able to read complex thoughts based on brain scans -- even interpreting complete sentences.

    The post Frighteningly accurate ‘mind reading’ AI reads brain scans to guess what you’re thinking appeared first on Digital Trends.


              Datameer Adds Google AI Software to BI Platform Based on Hadoop   


    Rather than being dependent on data scientists to take advantage of deep learning algorithms, Morrell says TensorFlow has been embedded into the …

    Link to Full Article
              I would like to hire a Python Developer by somah2015   
    I need help with deep learning in natural language processing (Budget: $30 - $250 AUD, Jobs: Java, Perl, Python, Software Development)
              Deeply Artificial Trees   

    areben.com: This artwork represents what it would be like for an AI to watch Bob Ross on LSD (once someone invents digital drugs). It shows some of the unreasonable effectiveness and strange inner workings of deep learning systems. The unique characteristics of the human voice are learned and generated, as well as the hallucinations of a system trying to find images that are not there.

    Cast: artBoffin


              NYC FLOW   

    It is a slow-motion, Scanner Darkly-like trippy experience made possible by using an open source modification of neural-style deep learning code. I hope you'll like it.

    The post NYC FLOW appeared first on Motion Graphics.


          [Feature] What is Deep Learning?   
    Business development powered by AI is booming, and deep learning in particular is drawing attention. Why? How will our work change? Where can it be used? This feature explains everything from the basics to ideas for applying the technology. >> Read more
              The Coding Powerhouse eBook Bundle for $29   
    Here's a 9-Book Digital Library to Be Your Reference For Everything From Web Development to Software Engineering
    Expires July 13, 2018 23:59 PST
    Buy now and get 91% off

    Learning Angular 2


    KEY FEATURES

    Angular 2 was conceived as a complete rewrite in order to fulfill the expectations of modern developers who demand blazing fast performance and responsiveness from their web applications. This book will help you learn the basics of how to design and build Angular 2 components, providing full coverage of the TypeScript syntax required to follow the examples included.

    • Access 352 pages of content 24/7
    • Set up your working environment to have all the tools you need to start building Angular 2 components w/ minimum effort
    • Get up to speed w/ TypeScript, a powerful typed superset of JavaScript that compiles to plain JavaScript
    • Take full control of how your data is rendered & updated upon data changes
    • Build powerful web applications based on structured component hierarchies that emit & listen to events & data changes throughout the elements tree
    • Explore how to consume external APIs & data services & allow data editing by harnessing the power of web forms made with Angular 2
    • Deliver seamless web navigation experiences w/ application routing & state handling common features w/ ease
    • Discover how to bulletproof your applications by introducing smart unit testing techniques & debugging tools

      PRODUCT SPECS

      Details & Requirements

      • Length of time users can access this course: lifetime
      • Access options: web streaming, mobile streaming
      • Certification of completion not included
      • Redemption deadline: redeem your code within 30 days of purchase
      • Experience level required: all levels

      Compatibility

      • Internet required

      THE EXPERT

      Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Java Deep Learning Essentials


    KEY FEATURES

    AI and Deep Learning are transforming the way we understand software, making computers more intelligent than we could even imagine just a decade ago. Starting with an introduction to basic machine learning algorithms, this course takes you further into this vital world of stunning predictive insights and remarkable machine intelligence.

    • Access 254 pages of content 24/7
    • Get a practical deep dive into machine learning & deep learning algorithms
    • Implement machine learning algorithms related to deep learning
    • Explore neural networks using some of the most popular Deep Learning frameworks
    • Dive into Deep Belief Nets & Stacked Denoising Autoencoders algorithms
    • Discover more deep learning algorithms w/ Dropout & Convolutional Neural Networks
    • Gain an insight into the deep learning library DL4J & its practical uses
    • Get to know device strategies to use deep learning algorithms & libraries in the real world
    • Explore deep learning further w/ Theano & Caffe

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Mastering Python


    KEY FEATURES

    Python is a dynamic programming language known for its high readability, hence it is often the first language learned by new programmers. Being multi-paradigm, it can be used to achieve the same thing in different ways and it is compatible across different platforms. This book is an authoritative guide that will help you learn new advanced methods in a clear and contextualized way.

    • Access 486 pages of content 24/7
    • Create a virtualenv & start a new project
    • Understand how & when to use the functional programming paradigm
    • Get familiar w/ the different ways the decorators can be written in
    • Understand the power of generators & coroutines without digressing into lambda calculus
    • Generate HTML documentation out of documents & code using Sphinx
    • Learn how to track & optimize application performance, both memory & cpu
    • Use the multiprocessing library, not just locally but also across multiple machines

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Mastering React


    KEY FEATURES

    React stands out in the web framework crowd through its approach to composition which yields blazingly fast rendering capabilities. This book will help you understand what makes React special. It starts with the fundamentals and uses a pragmatic approach, focusing on clear development goals. You'll learn how to combine many web technologies surrounding React into a complete set for constructing a modern web application.

    • Access 254 pages of content 24/7
    • Understand the React component lifecycle & core concepts such as props & states
    • Craft forms & implement form validation patterns using React
    • Explore the anatomy of a modern single-page web application
    • Develop an approach for choosing & combining web technologies without being paralyzed by the options available
    • Create a complete single-page application
    • Start coding w/ a plan using an application design process
    • Add to your arsenal of prototyping techniques & tools
    • Make your React application feel great using animations

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Mastering JavaScript


    KEY FEATURES

    JavaScript is the browser language that supports object-oriented, imperative, and functional programming styles, focusing on website behavior. JavaScript provides web developers with the knowledge to program more intelligently and idiomatically—and this course will help you explore the best practices for building an original, functional, and useful cross-platform library. At course's end, you'll be equipped with all the knowledge, tips, and hacks you need to stand out in the advanced world of web development.

    • Access 250 pages of content 24/7
    • Get a run through of the basic JavaScript language constructs
    • Familiarize yourself w/ the Functions & Closures of JavaScript
    • Explore Regular Expressions in JavaScript
    • Code using the powerful object-oriented feature in JavaScript
    • Test & debug your code using JavaScript strategies
    • Master DOM manipulation, cross-browser strategies, & ES6
    • Understand the basic concurrency constructs in JavaScript & best performance strategies
    • Learn to build scalable server applications in JavaScript using Node.js

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Mastering Git


    KEY FEATURES

    A powerful source control management program, Git will allow you to track changes and revert to any previous versions of your code, helping you implement an efficient, effective workflow. With this course, you'll master everything from setting up your Git environment, to writing clean code using the Reset and Revert features, to ultimately understanding the entire Git workflow from start to finish.

    • Access 418 pages of content 24/7
    • Explore project history, find revisions using different criteria, & filter & format how history looks
    • Manage your working directory & staging area for commits & interactively create new revisions & amend them
    • Set up repositories & branches for collaboration
    • Submit your own contributions & integrate contributions from other developers via merging or rebasing
    • Customize Git behavior system-wide, on a per-user, per-repository, & per-file basis
    • Take up the administration & set up of Git repositories, configure access, find & recover from repository errors, & perform repository maintenance
    • Choose a workflow & configure & set up support for the chosen workflow

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt Publishing’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Xamarin Cross-Platform Development Cookbook


    KEY FEATURES

    The Xamarin Forms platform lets you create native mobile applications for iOS, Android, and Windows Phone all at the same time. With Xamarin you can share large amounts of code, such as the UI, business logic, data models, SQLite data access, HTTP data access, and file storage across all three platforms, which is a huge saving of time. This book provides recipes on how to create an architecture that will be maintainable and extendable.

    • Access 416 pages of content 24/7
    • Create & customize your cross-platform UI
    • Understand & explore cross-platform patterns & practices
    • Use the out-of-the-box services to support third-party libraries
    • Find out how to get feedback while your application is used by your users
    • Bind collections to ListView & customize its appearance w/ custom cells
    • Create shared data access using a local SQLite database & a REST service
    • Test & monitor your applications

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt Publishing’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Swift 3 Functional Programming


    KEY FEATURES

    Whether you're new to functional programming and Swift or experienced, this book will strengthen the skills you need to design and develop high-quality, scalable, and efficient applications. Based on the Swift 3 Developer preview version, it focuses on simplifying functional programming (FP) paradigms to solve many day-to-day development problems.

    • Access 296 pages of content 24/7
    • Learn first-class, higher-order, & pure functions
    • Explore closures & capturing values
    • Understand value & reference types
    • Discuss enumerations, algebraic data types, patterns, & pattern matching
    • Combine FP paradigms w/ OOP, FRP, & POP in your day-to-day development activities
    • Develop a back end application w/ Swift

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt Publishing’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Scala High Performance Programming


    KEY FEATURES

    Scala is a statically and strongly typed language that blends functional and object-oriented paradigms. It has grown in popularity as an appealing and pragmatic choice to write production-ready software in the functional paradigm, enabling you to solve problems with less code and lower maintenance costs than alternatives. This book arms you with the knowledge you need to create performant Scala applications, starting with the basics.

    • Access 274 pages of content 24/7
    • Analyze the performance of JVM applications by developing JMH benchmarks & profiling with Flight Recorder
    • Discover use cases & performance tradeoffs of Scala language features, & eager & lazy collections
    • Explore event sourcing to improve performance while working w/ stream processing pipelines
    • Dive into asynchronous programming to extract performance on multicore systems using Scala Future & Scalaz Task
    • Design distributed systems w/ conflict-free replicated data types (CRDTs) to take advantage of eventual consistency without synchronization
    • Understand the impact of queues on system performance & apply the free monad to build systems robust to high levels of throughput

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt Publishing’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

              Machine Learning with Python Course and E-Book Bundle for $49   
    4 E-Books & 5 Courses to Help You Perform Machine Learning Analytics & Command High-Paying Jobs
    Expires January 22, 2022 23:59 PST
    Buy now and get 92% off

    Deep Learning with TensorFlow


    KEY FEATURES

    Deep learning is the intersection of statistics, artificial intelligence, and data to build accurate models, and is one of the most important new frontiers in technology. TensorFlow is one of the newest and most comprehensive libraries for implementing deep learning. Over this course you'll explore some of the possibilities of deep learning, and how to use TensorFlow to process data more effectively than ever.

    • Access 22 lectures & 2 hours of content 24/7
    • Discover the efficiency & simplicity of TensorFlow
    • Process & change how you look at data
    • Sift for hidden layers of abstraction using raw data
    • Train your machine to craft new features to make sense of deeper layers of data
    • Explore logistic regression, convolutional neural networks, recurrent neural networks, high level interfaces, & more

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Dan Van Boxel is a Data Scientist and Machine Learning Engineer with over 10 years of experience. He is most well-known for "Dan Does Data," a YouTube livestream demonstrating the power and pitfalls of neural networks. He has developed and applied novel statistical models of machine learning to topics such as accounting for truck traffic on highways, travel time outlier detection, and other areas. Dan has also published research and presented findings at the Transportation Research Board and other academic journals.

    Beginning Python


    KEY FEATURES

    Python is the general-purpose, multi-paradigm programming language that many professionals consider one of the best beginner languages due to its relative simplicity and applicability to many coding arenas. This course assumes no prior experience and helps you dive into Python fundamentals to come to grips with this popular language and start your coding odyssey off right.

    • Access 43 lectures & 4.5 hours of content 24/7
    • Learn variables, numbers, strings, & more essential components of Python
    • Make decisions on your programs w/ conditional statements
    • See how functions play a major role in providing a high degree of code recycling
    • Create modules in Python
    • Perform image manipulations w/ Python

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    William Fiset is a Mathematics and Computer Science Honors student at Mount Allison University with an interest in competitive programming. William has been a Python developer for 4+ years, starting his early Python experience with game development. He owns a popular YouTube channel that teaches Python to beginners and the basics of game development.

    Deep Learning with Python


    KEY FEATURES

    You've seen deep learning everywhere, but you may not have realized it. This discipline is one of the leading solutions for image recognition, speech recognition, object recognition, and language translation - basically the tools you see Google roll out every day. Over this course, you'll use Python to expand your deep learning knowledge to cover backpropagation and its ability to train neural networks.

    • Access 19 lectures & 2 hours of content 24/7
    • Train neural networks in deep learning & to understand automatic differentiation
    • Cover convolutional & recurrent neural networks
    • Build up the theory that covers supervised learning
    • Integrate search & image recognition, & object processing
    • Examine the performance of the sentimental analysis model

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Eder Santana is a PhD candidate in Electrical and Computer Engineering. His thesis topic is on Deep and Recurrent neural networks. After working for 3 years with Kernel Machines (SVMs, Information Theoretic Learning, and so on), Eder moved to the field of deep learning 2.5 years ago, when he started learning Theano, Caffe, and other machine learning frameworks. Now, Eder contributes to Keras: Deep Learning Library for Python. Besides deep learning, he also likes data visualization and teaching machine learning, either on online forums or as a teacher assistant.

    Data Mining with Python


    KEY FEATURES

    Every business wants to gain insights from data to make more informed decisions. Data mining provides a way of finding these insights, and Python is one of the most popular languages with which to perform it. In this course, you will discover the key concepts of data mining and learn how to apply different techniques to gain insight to real-world data. By course's end, you'll have a valuable skill that companies are clamoring to hire for.

    • Access 21 lectures & 2 hours of content 24/7
    • Discover data mining techniques & the Python libraries used for data mining
    • Tackle notorious data mining problems to get a concrete understanding of these techniques
    • Understand the process of cleaning data & the steps involved in filtering out noise
    • Build an intelligent application that makes predictions from data
    • Learn about classification & regression techniques like logistic regression, k-NN classifier, & more
    • Predict house prices & the number of TV show viewers

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Saimadhu Polamuri is a data science educator and the founder of Data Aspirant, a data science portal for beginners. He has 3 years of experience in data mining and 5 years of experience in Python. He is also interested in big data technologies such as Hadoop, Pig, and Spark. He has a good command of the R programming language and MATLAB, and a rudimentary understanding of the C++ computer vision library (OpenCV) and big data technologies.

    Data Visualization: Representing Information on the Modern Web E-Book


    KEY FEATURES

    You see graphs all over the internet, the workplace, and your life - but do you ever stop to consider how all that data has been visualized? There are many tools and programs that data scientists use to visualize massive, disorganized sets of data. This e-book contains content from "Data Visualization: A Successful Design Process" by Andy Kirk, "Social Data Visualization with HTML5 and JavaScript" by Simon Timms, and "Learning d3.js Data Visualization, Second Edition" by Andrew Rininsland and Swizec Teller, all professionally curated to give you an easy-to-follow track to master data visualization in your own work.

    • Harness the power of D3 by building interactive & real-time data-driven web visualizations
    • Find out how to use JavaScript to create compelling visualizations of social data
    • Identify the purpose of your visualization & your project’s parameters to determine overriding design considerations across your project’s execution
    • Apply critical thinking to visualization design & get intimate with your dataset to identify its potential visual characteristics
    • Explore the various features of HTML5 to design creative visualizations
    • Discover what data is available on Stack Overflow, Facebook, Twitter, & Google+
    • Gain a solid understanding of the common D3 development idioms

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Python: Master the Art of Design Patterns E-Book


    KEY FEATURES

    Get a complete introduction to the many uses of Python in this curated e-book drawing content from "Python 3 Object-Oriented Programming, Second Edition" by Dusty Phillips, "Learning Python Design Patterns, Second Edition" by Chetan Giridhar, and "Mastering Python Design Patterns" by Sakis Kasampalis. Once you've got your feet wet, you'll focus in on the most common and useful design patterns from a Python perspective. By course's end, you'll have a thorough understanding of design patterns in Python, allowing you to develop better coding practices and create more robust system architectures.

    • Discover what design patterns are & how to apply them to writing Python
    • Implement objects in Python by creating classes & defining methods
    • Separate related objects into a taxonomy of classes & describe the properties & behaviors of those objects via the class interface
    • Understand when to use object-oriented features & when not to use them
    • Explore the design principles that form the basis of software design, such as loose coupling, the Hollywood principle & the Open/Closed principle
    • Use Structural Design Patterns to find out how objects & classes interact to build larger applications
    • Improve the productivity & code base of your application using Python design patterns
    • Secure an interface using the Proxy pattern (sketched below)
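    To illustrate that last bullet, here is a minimal sketch of the Proxy pattern in Python (an illustrative toy with hypothetical class names, not an excerpt from the e-book):

        class Vault:
            """The real object we want to guard."""
            def read_secret(self):
                return "s3cr3t"

        class VaultProxy:
            """Stands in for Vault and checks access before delegating."""
            def __init__(self, vault, authorized):
                self._vault = vault
                self._authorized = authorized

            def read_secret(self):
                if not self._authorized:
                    raise PermissionError("access denied")
                return self._vault.read_secret()

        print(VaultProxy(Vault(), authorized=True).read_secret())  # s3cr3t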

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Python: Deeper Insights into Machine Learning E-Book


    KEY FEATURES

    Machine learning and predictive analytics are becoming key strategies for unlocking growth in a challenging contemporary marketplace. Consequently, professionals who can run machine learning systems are in high demand and are commanding high salaries. This e-book will help you get a grip on advanced Python techniques to design machine learning systems.

    • Learn to write clean & elegant Python code that will optimize the strength of your algorithms
    • Uncover hidden patterns & structures in data w/ clustering (see the sketch below)
    • Improve accuracy & consistency of results using powerful feature engineering techniques
    • Gain practical & theoretical understanding of cutting-edge deep learning algorithms
    • Solve unique tasks by building models
    • Come to grips w/ the machine learning design process
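    For a flavor of the clustering topic above, a minimal k-means sketch (assuming scikit-learn, with toy data; purely illustrative):

        import numpy as np
        from sklearn.cluster import KMeans

        # Two obvious groups of points; no labels are given
        X = np.array([[1.0, 1.0], [1.2, 0.9], [8.0, 8.0], [8.1, 7.9]])
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
        print(km.labels_)  # e.g. [1 1 0 0]: the hidden structure is recovered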

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Python: Real-World Data Science E-Book


    KEY FEATURES

    Data science is one of the most in-demand fields today, and this e-book will guide you to becoming an efficient data science practitioner in Python. Once you've nailed down Python fundamentals, you'll learn how to perform data analysis with Python in an example-driven way. From there, you'll learn how to scale your knowledge to processing machine learning algorithms.

    • Implement objects in Python by creating classes & defining methods
    • Get acquainted w/ NumPy to use it w/ arrays & array-oriented computing in data analysis
    • Create effective visualizations for presenting your data using Matplotlib
    • Process & analyze data using the time series capabilities of pandas (sketched below)
    • Interact w/ different kinds of database systems, such as file, disk format, Mongo, & Redis
    • Apply data mining concepts to real-world problems
    • Compute on big data, including real-time data from the Internet
    • Explore how to use different machine learning models to ask different questions of your data
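    For example, the pandas time series tooling mentioned above looks roughly like this (a minimal sketch with made-up numbers, assuming pandas is installed):

        import pandas as pd

        # Hypothetical daily TV show viewers (in thousands)
        viewers = pd.Series([120, 98, 134, 160, 152, 170, 180],
                            index=pd.date_range("2017-01-01", periods=7))
        print(viewers.resample("2D").mean())      # downsample to 2-day averages
        print(viewers.rolling(window=3).mean())   # 3-day moving average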

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Mastering Python


    KEY FEATURES

    Python is one of the most popular programming languages today, enabling developers to write efficient, reusable code. Here, you'll add Python to your repertoire, learning to set up your development environment, master use of its syntax, and much more. You'll soon understand why engineers at startups like Dropbox rely on Python: it makes the process of creating and iterating upon apps a piece of cake.

    • Master Python w/ 3 hours of content
    • Build Python packages to efficiently create reusable code
    • Create tools & utility programs & write code to automate software
    • Distribute computation tasks across multiple processors (see the sketch below)
    • Handle high I/O loads w/ asynchronous I/O for smoother performance
    • Utilize Python's metaprogramming & programmable syntax features
    • Implement unit testing to write better code, faster
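    As a hint of the multi-processor material, a minimal sketch using Python's standard multiprocessing module (a toy example, not course code):

        from multiprocessing import Pool

        def square(n):
            return n * n

        if __name__ == "__main__":
            with Pool(processes=4) as pool:            # 4 worker processes
                print(pool.map(square, range(10)))     # work is split across them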

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done –whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

              Machine Learning & AI for Business Bundle for $39   
    Discover Artificial Intelligence, Machine Learning & the R Programming Language in This 4-Course Bundle
    Expires January 08, 2022 23:59 PST
    Buy now and get 96% off

    Artificial Intelligence & Machine Learning Training


    KEY FEATURES

    Artificial intelligence is the simulation of human intelligence through machines using computer systems. No, it's not just a thing of the movies: artificial intelligence systems are used today in medicine, robotics, remote sensors, and even ATMs. This booming field of technology is one of the most exciting frontiers in science, and this course will give you a solid introduction.

    • Access 91 lectures & 17 hours of content 24/7
    • Identify potential areas of applications of AI
    • Learn basic ideas & techniques in the design of intelligent computer systems
    • Discover statistical & decision-theoretic modeling paradigms
    • Understand how to build agents that exhibit reasoning & learning
    • Apply regression, classification, clustering, retrieval, recommender systems, & deep learning

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime access
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    An initiative by IIT and IIM graduates, eduCBA is a leading global provider of skill-based education addressing the needs of 500,000+ members across 40+ countries. Our unique step-by-step online learning model, along with 1700+ courses prepared by top-notch professionals from the industry, helps participants achieve their goals successfully. All our training programs are job-oriented, skill-based programs demanded by the industry. At eduCBA, it is a matter of pride for us to make job-oriented, hands-on courses available to anyone, any time, and anywhere, so we ensure that you can enroll 24 hours a day, seven days a week, 365 days a year. Learn at a time, place, and pace of your choice, and plan your study to suit your convenience and schedule. For more details on this course and instructor, click here.

    Introduction to Machine Learning


    KEY FEATURES

    Machine learning is the science of getting computers to act without being explicitly programmed, by harvesting data and using algorithms to determine outputs. You see this science in action all the time in spam filtering, search engines, and online ad space, and its uses are only expanding into more powerful applications like self-driving cars and speech recognition. In this crash course, you'll get an introduction to the mechanisms of algorithms and how they are used to drive machine learning.

    • Access 10 lectures & 2 hours of content 24/7
    • Learn machine learning concepts like K-nearest neighbor learning, non-symbolic machine learning, & more
    • Explore the science behind neural networks
    • Discover data mining & statistical pattern recognition
    • Gain practice implementing the most effective machine learning techniques

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime access
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    An initiative by IIT and IIM graduates, eduCBA is a leading global provider of skill-based education addressing the needs of 500,000+ members across 40+ countries. Our unique step-by-step online learning model, along with 1700+ courses prepared by top-notch professionals from the industry, helps participants achieve their goals successfully. All our training programs are job-oriented, skill-based programs demanded by the industry. At eduCBA, it is a matter of pride for us to make job-oriented, hands-on courses available to anyone, any time, and anywhere, so we ensure that you can enroll 24 hours a day, seven days a week, 365 days a year. Learn at a time, place, and pace of your choice, and plan your study to suit your convenience and schedule. For more details on this course and instructor, click here.

    Data Science and Machine Learning with R (Part #1): Understanding R


    KEY FEATURES

    The R programming language has become the most widely used language for computational statistics, visualization, and data science - all essential tools in artificial intelligence and machine learning. Companies like Google, Facebook, and LinkedIn use R to perform business data analytics and develop algorithms that help operations move fluidly. In this introductory course, you'll learn the basics of R and get a better idea of how it can be applied.

    • Access 33 lectures & 6 hours of content 24/7
    • Install R studio & learn the basics of R functions
    • Understand data types in R, the recycling rule, special numerical values, & more
    • Explore parallel summary functions, logical conjunctions, & pasting strings together
    • Discover the evolution of business analytics

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime access
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    An initiative by IIT and IIM graduates, eduCBA is a leading global provider of skill-based education addressing the needs of 500,000+ members across 40+ countries. Our unique step-by-step online learning model, along with 1700+ courses prepared by top-notch professionals from the industry, helps participants achieve their goals successfully. All our training programs are job-oriented, skill-based programs demanded by the industry. At eduCBA, it is a matter of pride for us to make job-oriented, hands-on courses available to anyone, any time, and anywhere, so we ensure that you can enroll 24 hours a day, seven days a week, 365 days a year. Learn at a time, place, and pace of your choice, and plan your study to suit your convenience and schedule.

    Data Science and Machine Learning with R (Part #2): Statistics with R


    KEY FEATURES

    Further your understanding of R with this immersive course on one of the most important tools for business analytics. You'll discuss data manipulation and statistics basics before diving into practical, functional use of R. By course's end, you'll have a strong understanding of R that you can leverage on your resume for high-paying analytics jobs.

    • Access 30 lectures & 6 hours of content 24/7
    • Understand variables, quantiles, data creation, & more
    • Calculate variance, covariance, & build scatter plots
    • Explore probability & distribution
    • Use practice problems to reinforce your learning

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime access
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: intermediate

    Compatibility

    • Internet required

    THE EXPERT

    An initiative by IIT and IIM graduates, eduCBA is a leading global provider of skill-based education addressing the needs of 500,000+ members across 40+ countries. Our unique step-by-step online learning model, along with 1700+ courses prepared by top-notch professionals from the industry, helps participants achieve their goals successfully. All our training programs are job-oriented, skill-based programs demanded by the industry. At eduCBA, it is a matter of pride for us to make job-oriented, hands-on courses available to anyone, any time, and anywhere, so we ensure that you can enroll 24 hours a day, seven days a week, 365 days a year. Learn at a time, place, and pace of your choice, and plan your study to suit your convenience and schedule. For more details on this course and instructor, click here.

              The Advanced Guide to Deep Learning and Artificial Intelligence Bundle for $42   
    This High-Intensity 14.5 Hour Bundle Will Help You Help Computers Address Some of Humanity's Biggest Problems
    Expires November 28, 2021 23:59 PST
    Buy now and get 91% off

    Deep Learning: Convolutional Neural Networks in Python


    KEY FEATURES

    In this course, intended to expand upon your knowledge of neural networks and deep learning, you'll harness these concepts for computer vision using convolutional neural networks. Going in-depth on the concept of convolution, you'll discover its wide range of applications, from generating image effects to modeling artificial organs.

    • Access 25 lectures & 3 hours of content 24/7
    • Explore the Street View House Numbers (SVHN) dataset using convolutional neural networks (CNNs)
    • Build convolutional filters that can be applied to audio or imaging
    • Extend deep neural networks w/ just a few functions
    • Test CNNs written in both Theano & TensorFlow
    Note: we strongly recommend taking The Deep Learning & Artificial Intelligence Introductory Bundle before this course.
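    For a sense of the convolution operation at the heart of this course, here is a minimal NumPy sketch (strictly illustrative, with a hypothetical conv2d helper; like CNN layers, it computes the unflipped, cross-correlation form):

        import numpy as np

        def conv2d(image, kernel):
            """Slide the kernel over the image (no flip, 'valid' borders)."""
            kh, kw = kernel.shape
            h, w = image.shape
            out = np.zeros((h - kh + 1, w - kw + 1))
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
            return out

        image = np.arange(25, dtype=float).reshape(5, 5)
        edge = np.array([[1.0, 0.0, -1.0]] * 3)   # crude vertical-edge filter
        print(conv2d(image, edge))                # 3x3 feature map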

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels, but you must have some knowledge of calculus, linear algebra, probability, Python, Numpy, and be able to write a feedforward neural network in Theano and TensorFlow.
    • All code for this course is available for download here, in the directory cnn_class

    Compatibility

    • Internet required

    THE EXPERT

    The Lazy Programmer is a data scientist, big data engineer, and full stack software engineer. For his master's thesis he worked on brain-computer interfaces using machine learning. These assist non-verbal and non-mobile persons to communicate with their family and caregivers.

    He has worked in online advertising and digital media as both a data scientist and big data engineer, and built various high-throughput web services around said data. He has created new big data pipelines using Hadoop/Pig/MapReduce, built machine learning models to predict click-through rate, developed news feed recommender systems using linear regression, Bayesian Bandits, and collaborative filtering, and validated the results using A/B testing.

    He has taught undergraduate and graduate students in data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics for students attending universities such as Columbia University, NYU, Humber College, and The New School.

    Multiple businesses have benefitted from his web programming expertise. He does all the backend (server), frontend (HTML/JS/CSS), and operations/deployment work. Some of the technologies he has used are: Python, Ruby/Rails, PHP, Bootstrap, jQuery (Javascript), Backbone, and Angular. For storage/databases he has used MySQL, Postgres, Redis, MongoDB, and more.

    Unsupervised Deep Learning in Python


    KEY FEATURES

    In this course, you'll dig deep into deep learning, discussing principal component analysis (PCA) and a popular nonlinear dimensionality reduction technique known as t-distributed stochastic neighbor embedding (t-SNE). From there you'll learn about a special type of unsupervised neural network called the autoencoder, understanding how to link many together to get better performance out of deep neural networks.

    • Access 30 lectures & 3 hours of content 24/7
    • Discuss restricted Boltzmann machines (RBMs) & how to pretrain supervised deep neural networks
    • Learn about Gibbs sampling
    • Use PCA & t-SNE on features learned by autoencoders & RBMs
    • Understand the most modern deep learning developments
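    To preview the dimensionality-reduction theme, a minimal PCA sketch in plain NumPy (illustrative only, with random toy data; the course itself works in Theano and TensorFlow):

        import numpy as np

        rng = np.random.RandomState(0)
        X = rng.randn(100, 5)                      # toy 5-dimensional data
        Xc = X - X.mean(axis=0)                    # center each feature
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        X2 = Xc @ Vt[:2].T                         # keep top 2 principal components
        print(X2.shape)                            # (100, 2)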

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: intermediate, but you must have some knowledge of calculus, linear algebra, probability, Python, Numpy, and be able to write a feedforward neural network in Theano and TensorFlow.
    • All code for this course is available for download here, in the directory unsupervised_class2

    Compatibility

    • Internet required

    THE EXPERT

    The Lazy Programmer is a data scientist, big data engineer, and full stack software engineer. For his master's thesis he worked on brain-computer interfaces using machine learning. These assist non-verbal and non-mobile persons to communicate with their family and caregivers.

    He has worked in online advertising and digital media as both a data scientist and big data engineer, and built various high-throughput web services around said data. He has created new big data pipelines using Hadoop/Pig/MapReduce, built machine learning models to predict click-through rate, developed news feed recommender systems using linear regression, Bayesian Bandits, and collaborative filtering, and validated the results using A/B testing.

    He has taught undergraduate and graduate students in data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics for students attending universities such as Columbia University, NYU, Humber College, and The New School.

    Multiple businesses have benefitted from his web programming expertise. He does all the backend (server), frontend (HTML/JS/CSS), and operations/deployment work. Some of the technologies he has used are: Python, Ruby/Rails, PHP, Bootstrap, jQuery (Javascript), Backbone, and Angular. For storage/databases he has used MySQL, Postgres, Redis, MongoDB, and more.

    Deep Learning: Recurrent Neural Networks in Python


    KEY FEATURES

    Recurrent neural networks are a class of artificial neural network in which connections form a directed cycle, letting them use internal memory to process arbitrary sequences of inputs. This makes them capable of tasks like handwriting and speech recognition. In this course, you'll explore this extremely expressive facet of deep learning and get up to speed on this revolutionary advance.

    • Access 32 lectures & 4 hours of content 24/7
    • Get introduced to the Simple Recurrent Unit, also known as the Elman unit
    • Extend the XOR problem as a parity problem
    • Explore language modeling
    • Learn Word2Vec to create word vectors or word embeddings
    • Look at the long short-term memory unit (LSTM), & gated recurrent unit (GRU)
    • Apply what you learn to practical problems like learning a language model from Wikipedia data
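    The Elman unit mentioned above boils down to a single recurrence; a minimal NumPy sketch (toy sizes, random weights, illustrative only):

        import numpy as np

        def elman_step(x, h, Wx, Wh, b):
            # h_t = tanh(Wx x_t + Wh h_{t-1} + b): the output feeds back in
            return np.tanh(Wx @ x + Wh @ h + b)

        rng = np.random.RandomState(0)
        D, H = 3, 4                                # input and hidden sizes
        Wx, Wh, b = rng.randn(H, D), rng.randn(H, H), np.zeros(H)
        h = np.zeros(H)
        for x in rng.randn(5, D):                  # a length-5 input sequence
            h = elman_step(x, h, Wx, Wh, b)
        print(h)                                   # final state summarizes the sequence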

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels, but you must have some knowledge of calculus, linear algebra, probability, Python, Numpy, and be able to write a feedforward neural network in Theano and TensorFlow.
    • All code for this course is available for download here, in the directory rnn_class

    Compatibility

    • Internet required

    THE EXPERT

    The Lazy Programmer is a data scientist, big data engineer, and full stack software engineer. For his master's thesis he worked on brain-computer interfaces using machine learning. These assist non-verbal and non-mobile persons to communicate with their family and caregivers.

    He has worked in online advertising and digital media as both a data scientist and big data engineer, and built various high-throughput web services around said data. He has created new big data pipelines using Hadoop/Pig/MapReduce, built machine learning models to predict click-through rate, developed news feed recommender systems using linear regression, Bayesian Bandits, and collaborative filtering, and validated the results using A/B testing.

    He has taught undergraduate and graduate students in data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics for students attending universities such as Columbia University, NYU, Humber College, and The New School.

    Multiple businesses have benefitted from his web programming expertise. He does all the backend (server), frontend (HTML/JS/CSS), and operations/deployment work. Some of the technologies he has used are: Python, Ruby/Rails, PHP, Bootstrap, jQuery (Javascript), Backbone, and Angular. For storage/databases he has used MySQL, Postgres, Redis, MongoDB, and more.

    Natural Language Processing with Deep Learning in Python


    KEY FEATURES

    In this course you'll explore advanced natural language processing - the field of computer science and AI that concerns interactions between computers and human language. Over the course you'll learn four new NLP architectures and explore classic NLP problems like parts-of-speech tagging and named entity recognition, using recurrent neural networks to solve them. By course's end, you'll have a firm grasp on natural language processing and its many applications.

    • Access 40 lectures & 4.5 hours of content 24/7
    • Discover Word2Vec & how it maps words to a vector space (see the sketch below)
    • Explore GloVe's use of matrix factorization & how it contributes to recommendation systems
    • Learn about recursive neural networks which will help solve the problem of negation in sentiment analysis
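    The "vector space" idea is easy to see with cosine similarity; a minimal sketch with tiny, made-up embeddings (real Word2Vec vectors have hundreds of dimensions):

        import numpy as np

        vec = {"king":  np.array([0.9, 0.8, 0.1, 0.0]),
               "queen": np.array([0.9, 0.7, 0.2, 0.1]),
               "apple": np.array([0.0, 0.1, 0.9, 0.8])}

        def cosine(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

        print(cosine(vec["king"], vec["queen"]))   # high: related words
        print(cosine(vec["king"], vec["apple"]))   # low: unrelated words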

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: advanced, but you must have some knowledge of calculus, linear algebra, probability, Python, Numpy, and be able to write a feedforward neural network in Theano and TensorFlow.
    • All code for this course is available for download here, in the directory nlp_class2

    Compatibility

    • Internet required

    THE EXPERT

    The Lazy Programmer is a data scientist, big data engineer, and full stack software engineer. For his master's thesis he worked on brain-computer interfaces using machine learning. These assist non-verbal and non-mobile persons to communicate with their family and caregivers.

    He has worked in online advertising and digital media as both a data scientist and big data engineer, and built various high-throughput web services around said data. He has created new big data pipelines using Hadoop/Pig/MapReduce, built machine learning models to predict click-through rate, developed news feed recommender systems using linear regression, Bayesian Bandits, and collaborative filtering, and validated the results using A/B testing.

    He has taught undergraduate and graduate students in data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics for students attending universities such as Columbia University, NYU, Humber College, and The New School.

    Multiple businesses have benefitted from his web programming expertise. He does all the backend (server), frontend (HTML/JS/CSS), and operations/deployment work. Some of the technologies he has used are: Python, Ruby/Rails, PHP, Bootstrap, jQuery (Javascript), Backbone, and Angular. For storage/databases he has used MySQL, Postgres, Redis, MongoDB, and more.

              Practical Deep Learning in Theano and TensorFlow for $29   
    Build & Understand Neural Networks Using Two of the Most Popular Deep Learning Techniques
    Expires November 02, 2021 23:59 PST
    Buy now and get 75% off

    KEY FEATURES

    The applications of Deep Learning are many and constantly growing, just like the neural networks that support it. In this course, you'll delve into advanced concepts of Deep Learning, starting with the basics of TensorFlow and Theano. Using these popular tools, you'll learn how to build and understand a neural network, and how to visualize exactly what is happening within a model as it learns.

    • Access 23 lectures & 3 hours of programming 24/7
    • Discover batch & stochastic gradient descent, two techniques that allow you to train on a small sample of data at each iteration, greatly speeding up training time (see the sketch below)
    • Discuss how momentum can carry you through local minima
    • Learn adaptive learning rate techniques like AdaGrad & RMSprop
    • Explore dropout regularization & other modern neural network techniques
    • Understand the variables & expressions of TensorFlow & Theano
    • Set up a GPU-instance on AWS & compare the speed of CPU vs GPU for training a deep neural network
    • Look at the MNIST dataset & compare against known benchmarks
    Like what you're learning? Try out The Advanced Guide to Deep Learning and Artificial Intelligence next.
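    The batch/stochastic gradient descent idea from the list above fits in a few lines; a minimal NumPy sketch for linear regression (toy data, illustrative only):

        import numpy as np

        rng = np.random.RandomState(0)
        X = rng.randn(200, 3)
        true_w = np.array([2.0, -1.0, 0.5])
        y = X @ true_w + 0.1 * rng.randn(200)

        w, lr = np.zeros(3), 0.1
        for step in range(300):
            batch = rng.choice(200, size=16, replace=False)   # a small random batch
            grad = 2 * X[batch].T @ (X[batch] @ w - y[batch]) / 16
            w -= lr * grad                                    # one cheap update per batch
        print(w)                                              # close to true_w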

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels, but you must have some knowledge of calculus, linear algebra, probability, Python, and Numpy
    • All code for this course is available for download here, in the directory ann_class2

    Compatibility

    • Internet required

    THE EXPERT

    The Lazy Programmer is a data scientist, big data engineer, and full stack software engineer. For his master's thesis he worked on brain-computer interfaces using machine learning. These assist non-verbal and non-mobile persons to communicate with their family and caregivers.

    He has worked in online advertising and digital media as both a data scientist and big data engineer, and built various high-throughput web services around said data. He has created new big data pipelines using Hadoop/Pig/MapReduce, built machine learning models to predict click-through rate, developed news feed recommender systems using linear regression, Bayesian Bandits, and collaborative filtering, and validated the results using A/B testing.

    He has taught undergraduate and graduate students in data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics for students attending universities such as Columbia University, NYU, Humber College, and The New School.

    Multiple businesses have benefitted from his web programming expertise. He does all the backend (server), frontend (HTML/JS/CSS), and operations/deployment work. Some of the technologies he has used are: Python, Ruby/Rails, PHP, Bootstrap, jQuery (Javascript), Backbone, and Angular. For storage/databases he has used MySQL, Postgres, Redis, MongoDB, and more.

              The Deep Learning and Artificial Intelligence Introductory Bundle for $39   
    Companies Are Relying on Machines & Networks to Learn Faster Than Ever. Time to Catch Up.
    Expires October 31, 2021 23:59 PST
    Buy now and get 91% off

    Deep Learning Prerequisites: Linear Regression in Python


    KEY FEATURES

    Deep Learning is a set of powerful algorithms that are the force behind self-driving cars, image searching, voice recognition, and many, many more applications we consider decidedly "futuristic." One of the central foundations of deep learning is linear regression, which uses probability theory to gain deeper insight into the "line of best fit." This is the first step to building machines that, in effect, act like neurons in a neural network as they learn while they're fed more information. In this course, you'll start with the basics of building a linear regression module in Python, and progress into practical machine learning issues that will provide the foundations for an exploration of Deep Learning.

    • Access 20 lectures & 2 hours of content 24/7
    • Use a 1-D linear regression to prove Moore's Law
    • Learn how to create a machine learning model that can learn from multiple inputs
    • Apply multi-dimensional linear regression to predict a patient's systolic blood pressure given their age & weight
    • Discuss generalization, overfitting, train-test splits, & other issues that may arise while performing data analysis
    Like what you're learning? Try out The Advanced Guide to Deep Learning and Artificial Intelligence next.
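    The Moore's Law exercise comes down to fitting a line to a log-transformed series; a minimal sketch with made-up transistor counts (not the course's dataset):

        import numpy as np

        years  = np.array([1972.0, 1980.0, 1990.0, 2000.0, 2010.0])
        counts = np.array([3.5e3, 3.0e4, 1.2e6, 4.2e7, 1.2e9])  # hypothetical

        # A straight line in log2 space: the slope is doublings per year
        slope, intercept = np.polyfit(years, np.log2(counts), 1)
        print("years per doubling:", 1 / slope)   # ~2, as Moore's Law predicts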

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels, but you must have some knowledge of calculus, linear algebra, probability, Python, and Numpy
    • All code for this course is available for download here, in the directory linear_regression_class

    Compatibility

    • Internet required

    THE EXPERT

    The Lazy Programmer is a data scientist, big data engineer, and full stack software engineer. For his master's thesis he worked on brain-computer interfaces using machine learning. These assist non-verbal and non-mobile persons to communicate with their family and caregivers.

    He has worked in online advertising and digital media as both a data scientist and big data engineer, and built various high-throughput web services around said data. He has created new big data pipelines using Hadoop/Pig/MapReduce, built machine learning models to predict click-through rate, developed news feed recommender systems using linear regression, Bayesian Bandits, and collaborative filtering, and validated the results using A/B testing.

    He has taught undergraduate and graduate students in data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics for students attending universities such as Columbia University, NYU, Humber College, and The New School.

    Multiple businesses have benefitted from his web programming expertise. He does all the backend (server), frontend (HTML/JS/CSS), and operations/deployment work. Some of the technologies he has used are: Python, Ruby/Rails, PHP, Bootstrap, jQuery (Javascript), Backbone, and Angular. For storage/databases he has used MySQL, Postgres, Redis, MongoDB, and more.

    Deep Learning Prerequisites: Logistic Regression in Python


    KEY FEATURES

    Logistic regression is one of the most fundamental techniques used in machine learning, data science, and statistics, as it can be used to create a classification or labeling algorithm that closely resembles a biological neuron. Logistic regression units, by extension, are the basic bricks of neural networks, the central architecture in deep learning. In this course, you'll get to grips with logistic regression, using practical, real-world examples to fully appreciate the vast applications of Deep Learning.

    • Access 31 lectures & 3 hours of content 24/7
    • Code your own logistic regression module in Python
    • Complete a course project that predicts user actions on a website given user data
    • Use Deep Learning for facial expression recognition
    • Understand how to make data-driven decisions
    Like what you're learning? Try out The Advanced Guide to Deep Learning and Artificial Intelligence next.
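    At its core the technique is a sigmoid plus gradient descent on the log-loss; a minimal NumPy sketch (toy, linearly separable data, illustrative only):

        import numpy as np

        def sigmoid(z):
            return 1 / (1 + np.exp(-z))

        rng = np.random.RandomState(0)
        X = rng.randn(100, 2)
        y = (X[:, 0] + X[:, 1] > 0).astype(float)     # toy binary labels

        w = np.zeros(2)
        for _ in range(500):
            p = sigmoid(X @ w)                        # predicted probabilities
            w -= 0.1 * X.T @ (p - y) / len(y)         # gradient of the log-loss
        print(((sigmoid(X @ w) > 0.5) == y).mean())   # training accuracy near 1.0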

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels, but you must have some knowledge of calculus, linear algebra, probability, Python, and Numpy
    • All code for this course is available for download here, in the directory logistic_regression_class

    Compatibility

    • Internet required

    THE EXPERT

    The Lazy Programmer is a data scientist, big data engineer, and full stack software engineer. For his master's thesis he worked on brain-computer interfaces using machine learning. These assist non-verbal and non-mobile persons to communicate with their family and caregivers.

    He has worked in online advertising and digital media as both a data scientist and big data engineer, and built various high-throughput web services around said data. He has created new big data pipelines using Hadoop/Pig/MapReduce, built machine learning models to predict click-through rate, developed news feed recommender systems using linear regression, Bayesian Bandits, and collaborative filtering, and validated the results using A/B testing.

    He has taught undergraduate and graduate students in data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics for students attending universities such as Columbia University, NYU, Humber College, and The New School.

    Multiple businesses have benefitted from his web programming expertise. He does all the backend (server), frontend (HTML/JS/CSS), and operations/deployment work. Some of the technologies he has used are: Python, Ruby/Rails, PHP, Bootstrap, jQuery (Javascript), Backbone, and Angular. For storage/databases he has used MySQL, Postgres, Redis, MongoDB, and more.

    Data Science: Deep Learning in Python


    KEY FEATURES

    Artificial neural networks are the architecture that makes Apple's Siri recognize your voice, Tesla's self-driving cars know where to turn, and Google Translate learn new languages, powering so many technological features you have quite possibly taken for granted. The data science that unites all of them is Deep Learning. In this course, you'll build your very first neural network, going beyond basic models to build networks that automatically learn features.

    • Access 37 lectures & 4 hours of content 24/7
    • Extend the binary classification model to multiple classes using the softmax function (sketched below)
    • Code the important training method, backpropagation, in Numpy
    • Implement a neural network using Google's TensorFlow library
    • Predict user actions on a website given user data using a neural network
    • Use Deep Learning for facial expression recognition
    • Learn some of the newest development in neural networks
    Like what you're learning? Try out The Advanced Guide to Deep Learning and Artificial Intelligence next.
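    The softmax function from the list above is only a few lines; a minimal, numerically stable NumPy sketch:

        import numpy as np

        def softmax(z):
            e = np.exp(z - z.max())     # subtract the max for numerical stability
            return e / e.sum()

        scores = np.array([2.0, 1.0, 0.1])   # raw network outputs for 3 classes
        print(softmax(scores))               # class probabilities summing to 1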

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: intermediate, but you must have some knowledge of calculus, linear algebra, probability, Python, and Numpy
    • All code for this course is available for download here, in the directory ann_class

    Compatibility

    • Internet required

    THE EXPERT

    The Lazy Programmer is a data scientist, big data engineer, and full stack software engineer. For his master's thesis he worked on brain-computer interfaces using machine learning. These assist non-verbal and non-mobile persons to communicate with their family and caregivers.

    He has worked in online advertising and digital media as both a data scientist and big data engineer, and built various high-throughput web services around said data. He has created new big data pipelines using Hadoop/Pig/MapReduce, built machine learning models to predict click-through rate, developed news feed recommender systems using linear regression, Bayesian Bandits, and collaborative filtering, and validated the results using A/B testing.

    He has taught undergraduate and graduate students in data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics for students attending universities such as Columbia University, NYU, Humber College, and The New School.

    Multiple businesses have benefitted from his web programming expertise. He does all the backend (server), frontend (HTML/JS/CSS), and operations/deployment work. Some of the technologies he has used are: Python, Ruby/Rails, PHP, Bootstrap, jQuery (Javascript), Backbone, and Angular. For storage/databases he has used MySQL, Postgres, Redis, MongoDB, and more.

    Data Science: Practical Deep Learning in Theano & TensorFlow


    KEY FEATURES

    The applications of Deep Learning are many and constantly growing, just like the neural networks that support it. In this course, you'll delve into advanced concepts of Deep Learning, starting with the basics of TensorFlow and Theano. Using these popular tools, you'll learn how to build and understand a neural network, and how to visualize exactly what is happening within a model as it learns.

    • Access 23 lectures & 3 hours of programming 24/7
    • Discover batch & stochastic gradient descent, two techniques that allow you to train on a small sample of data at each iteration, greatly speeding up training time
    • Discuss how momentum can carry you through local minima
    • Learn adaptive learning rate techniques like AdaGrad & RMSprop
    • Explore dropout regularization & other modern neural network techniques
    • Understand the variables & expressions of TensorFlow & Theano
    • Set up a GPU-instance on AWS & compare the speed of CPU vs GPU for training a deep neural network
    • Look at the MNIST dataset & compare against known benchmarks
    Like what you're learning? Try out The Advanced Guide to Deep Learning and Artificial Intelligence next.

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels, but you must have some knowledge of calculus, linear algebra, probability, Python, and Numpy
    • All code for this course is available for download here, in the directory ann_class2

    Compatibility

    • Internet required

    THE EXPERT

    The Lazy Programmer is a data scientist, big data engineer, and full stack software engineer. For his master's thesis he worked on brain-computer interfaces using machine learning. These assist non-verbal and non-mobile persons to communicate with their family and caregivers.

    He has worked in online advertising and digital media as both a data scientist and big data engineer, and built various high-throughput web services around said data. He has created new big data pipelines using Hadoop/Pig/MapReduce, built machine learning models to predict click-through rate, developed news feed recommender systems using linear regression, Bayesian Bandits, and collaborative filtering, and validated the results using A/B testing.

    He has taught undergraduate and graduate students in data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics for students attending universities such as Columbia University, NYU, Humber College, and The New School.

    Multiple businesses have benefitted from his web programming expertise. He does all the backend (server), frontend (HTML/JS/CSS), and operations/deployment work. Some of the technologies he has used are: Python, Ruby/Rails, PHP, Bootstrap, jQuery (Javascript), Backbone, and Angular. For storage/databases he has used MySQL, Postgres, Redis, MongoDB, and more.

              The Complete Machine Learning Bundle for $39   
    Master AI & Achieve the Impossible with 10 Courses & 63.5 Hours of Training in Machine Learning
    Expires January 24, 2018 23:59 PST
    Buy now and get 95% off

    Quant Trading Using Machine Learning


    KEY FEATURES

    Financial markets are fickle beasts that can be extremely difficult to navigate for the average investor. This course will introduce you to machine learning, a field of study that gives computers the ability to learn without being explicitly programmed, while teaching you how to apply these techniques to quantitative trading. Using Python libraries, you'll discover how to build sophisticated financial models that will better inform your investing decisions. Ideally, this one will pay for itself and then some!

    • Access 64 lectures & 11 hours of content 24/7
    • Get a crash course in quantitative trading from stocks & indices to momentum investing & backtesting
    • Discover machine learning principles like decision trees, ensemble learning, random forests & more
    • Set up a historical price database in MySQL using Python
    • Learn Python libraries like Pandas, Scikit-Learn, XGBoost & Hyperopt
    • Access source code any time as a continuing resource

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime access
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels, but working knowledge of Python would be helpful

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals--Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi, and Navdeep Singh--who have honed their tech expertise at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students. For more details on the course and instructor, click here.

    Learn By Example: Statistics and Data Science in R


    KEY FEATURES

    R is a programming language and software environment for statistical computing and graphics that is widely used among statisticians and data miners for data analysis. In this course, you'll get a thorough run-through of how R works and how it's applied to data science. Before you know it, you'll be crunching numbers like a pro, and be better qualified for many lucrative careers.

    • Access 82 lectures & 9 hours of content 24/7
    • Cover basic statistical principles like mean, median, range, etc.
    • Learn theoretical aspects of statistical concepts
    • Discover datatypes & data structures in R, vectors, arrays, matrices & more
    • Understand Linear Regression
    • Visualize data in R using a variety of charts & graphs
    • Delve into descriptive & inferential statistics

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime access
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals--Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi, and Navdeep Singh--who have honed their tech expertise at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students.

    Learn By Example: Hadoop & MapReduce for Big Data Problems


    KEY FEATURES

    Big Data sounds pretty daunting, doesn't it? Well, this course aims to make it a lot simpler for you. Using Hadoop and MapReduce, you'll learn how to process and manage enormous amounts of data efficiently. Any company that collects mass amounts of data, from startups to the Fortune 500, needs people fluent in Hadoop and MapReduce, making this course a must for anybody interested in data science.

    • Access 71 lectures & 13 hours of content 24/7
    • Set up your own Hadoop cluster using virtual machines (VMs) & the Cloud
    • Understand HDFS, MapReduce & YARN & their interaction
    • Use MapReduce to recommend friends in a social network, build search engines & generate bigrams
    • Chain multiple MapReduce jobs together
    • Write your own customized partitioner
    • Learn to globally sort a large amount of data by sampling input files
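    In the Hadoop Streaming style, a MapReduce job can be two small Python scripts; a minimal word-count sketch (hypothetical file names, to be run via the streaming jar, which sorts the mapper's output by key before the reducer sees it):

        # mapper.py -- emit one (word, 1) pair per word on stdin
        import sys
        for line in sys.stdin:
            for word in line.split():
                print(word + "\t1")

        # reducer.py -- sum the counts for each word (input arrives sorted by key)
        import sys
        from itertools import groupby
        pairs = (line.rstrip("\n").split("\t") for line in sys.stdin)
        for word, group in groupby(pairs, key=lambda kv: kv[0]):
            print(word + "\t" + str(sum(int(count) for _, count in group)))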

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime access
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals--Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi, and Navdeep Singh--who have honed their tech expertise at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students.

    Byte Size Chunks: Java Object-Oriented Programming & Design


    KEY FEATURES

    Java seems an appropriate name for a language so dense that you may need a cuppa joe after 10 minutes of self-study. Luckily, you can learn all you need to know in this short course. You'll scale the behemoth that is object-oriented programming, mastering classes, objects, and more to conquer a language that powers everything from online games to chat platforms.

    • Learn Java inside & out w/ 35 lectures & 7 hours of content
    • Master object-oriented (OO) programming w/ classes, objects & more
    • Understand the mechanics of OO: access modifiers, dynamic dispatch, etc.
    • Dive into the underlying principles of OO: encapsulation, abstraction & polymorphism
    • Comprehend how information is organized w/ packages & jars

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime access
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels, but basic knowledge of Java is suggested

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals--Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi, and Navdeep Singh--who have honed their tech expertise at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students. For more details on the course and instructor, click here.

    An Introduction to Machine Learning & NLP in Python


    KEY FEATURES

    Are you familiar with self-driving cars? Speech recognition technology? These things would not be possible without the help of Machine Learning--the study of pattern recognition and prediction within the field of computer science. This course is taught by Stanford-educated Silicon Valley experts who have decades of direct experience under their belts. They will teach you, in the simplest way possible (and with major visual techniques), how to put Machine Learning and Python into action. With these skills under your belt, your programming will reach a whole new level of power.

    • Get introduced to Machine Learning w/ 14.5 hours of instruction
    • Learn from a team w/ decades of practical experience in quant trading, analytics & e-commerce
    • Understand complex subjects w/ the help of animations
    • Use hundreds of lines of commented source code to implement natural language processing & machine learning for text summarization & text classification in Python (see the sketch below)
    • Study natural language processing & sentiment analysis w/ Python
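    A taste of the text classification topic, as a minimal scikit-learn sketch (made-up documents; the course builds far more complete pipelines):

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB

        docs   = ["great match today", "stocks fell sharply",
                  "the team won again", "markets rally on earnings"]
        labels = ["sports", "finance", "sports", "finance"]

        vec = CountVectorizer()
        X = vec.fit_transform(docs)                 # bag-of-words counts
        clf = MultinomialNB().fit(X, labels)
        print(clf.predict(vec.transform(["the team lost the match"])))  # likely 'sports'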

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels, but some knowledge of Python is suggested

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals--Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi, and Navdeep Singh--who have honed their tech expertise at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students. For more details on the course and instructor, click here.

    Byte-Sized-Chunks: Twitter Sentiment Analysis (in Python)


    KEY FEATURES

    Sentiment Analysis, or Opinion Mining, is a field of Natural Language Processing (NLP) that aims to extract subjective information such as positive/negative polarity, likes and dislikes, and emotional reactions. It's an essential component of Machine Learning, as it provides valuable training data to a machine. Over this course, you'll see real examples of why Sentiment Analysis is important and learn how to approach specific problems using it.

    • Access 19 lectures & 4 hours of content 24/7
    • Learn Rule-Based & Machine Learning-Based approaches to solving Sentiment Analysis problems
    • Understand Sentiment Lexicons & Regular Expressions
    • Design & implement a Sentiment Analysis measurement system in Python
    • Grasp the underlying Sentiment Analysis theory & its relation to binary classification
    • Identify use-cases for Sentiment Analysis
    • Perform a real Twitter Sentiment Analysis
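    The rule-based approach mentioned above can be sketched in a few lines with a tiny hand-made lexicon (purely illustrative; real sentiment lexicons hold thousands of scored words):

        LEXICON = {"love": 1, "great": 1, "happy": 1,
                   "hate": -1, "awful": -1, "sad": -1}

        def sentiment(tweet):
            score = sum(LEXICON.get(word, 0) for word in tweet.lower().split())
            if score > 0:
                return "positive"
            return "negative" if score < 0 else "neutral"

        print(sentiment("I love this great phone"))    # positive
        print(sentiment("awful service made me sad"))  # negative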

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels, but some experience with Python is suggested

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals--Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi, and Navdeep Singh--who have honed their tech expertise at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students. For more details on the course and instructor, click here.

    Byte-Sized-Chunks: Decision Trees and Random Forests


    KEY FEATURES

    Decision trees and random forests are two intuitive and extremely effective Machine Learning techniques that allow you to better predict outcomes from a selected input. Both methods are commonly used in business, and knowing how to implement them can put you ahead of your peers. In this course, you'll learn these techniques by exploring a famous (but morbid) Machine Learning problem: predicting the survival of a passenger on the Titanic.

    • Access 19 lectures & 4.5 hours of content 24/7
    • Design & implement a decision tree to predict survival probabilities aboard the Titanic
    • Understand the risks of overfitting & how random forests help overcome them
    • Identify the use-cases for decision trees & random forests
    • Use provided source code to build decision trees & random forests (a minimal sketch follows this list)
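
    For a feel of the technique, here is a minimal, hypothetical random-forest sketch using scikit-learn; the handful of Titanic-style rows (class, sex, age) and survival labels below are invented for illustration and are not the real dataset.

        from sklearn.ensemble import RandomForestClassifier

        # invented Titanic-style rows: [passenger class, sex (0=male, 1=female), age]
        X = [[3, 0, 22], [1, 1, 38], [3, 1, 26], [1, 1, 35],
             [3, 0, 35], [2, 0, 54], [3, 0, 2],  [2, 1, 27]]
        y = [0, 1, 1, 1, 0, 0, 0, 1]   # 1 = survived, 0 = did not

        # many decision trees on bootstrapped samples reduce the overfitting
        # risk of a single deep tree
        forest = RandomForestClassifier(n_estimators=100, random_state=0)
        forest.fit(X, y)

        print(forest.predict([[1, 1, 30]]))        # first-class woman, age 30
        print(forest.predict_proba([[3, 0, 40]]))  # survival probabilities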

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals--Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh--who honed their tech expertise at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students.

    An Introduction To Deep Learning & Computer Vision


    KEY FEATURES

    Deep Learning is an exciting branch of Machine Learning that provides solutions for processing the high-dimensional data produced by Computer Vision. This introductory course brings you into the complex, abstract world of Computer Vision and artificial neural networks. By the end, you'll have a solid foundation in a core principle of Machine Learning.

    • Access 9 lectures & 2 hours of content 24/7
    • Design & implement a simple computer vision use-case: digit recognition
    • Train a neural network to classify handwritten digits in Python (a minimal sketch follows this list)
    • Build a neural network & specify the training process
    • Grasp the central theory underlying Deep Learning & Computer Vision
    • Understand use-cases for Computer Vision & Deep Learning
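
    As a minimal illustration of the digit-recognition use-case, here is a hypothetical sketch that trains a small feed-forward neural network on scikit-learn's built-in 8x8 digits dataset; the layer size and iteration count are arbitrary choices for this sketch, not the course's settings.

        from sklearn.datasets import load_digits
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        # 8x8 grayscale digit images shipped with scikit-learn (1,797 samples)
        X, y = load_digits(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.25, random_state=0)

        # one hidden layer of 64 units; sizes are arbitrary for this sketch
        net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
        net.fit(X_train, y_train)

        print("test accuracy:", net.score(X_test, y_test))  # typically ~0.95 or higher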

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels, but some knowledge of Python is suggested

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals--Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh--who honed their tech expertise at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students.

    Byte-Sized-Chunks: Recommendation Systems


    KEY FEATURES

    Assuming you're an internet user (which seems likely), you use or encounter recommendation systems all the time. Whenever you see an ad or product that seems eerily in tune with whatever you were just thinking about, it's because of a recommendation system. In this course, you'll learn how to build a variety of these systems using Python, and be well on your way to a high-paying career.

    • Access 20 lectures & 4.5 hours of content 24/7
    • Build Recommendation Engines that use content based filtering to find products that are most relevant to users
    • Discover Collaborative Filtering, the most popular approach to recommendations
    • Identify similar users using neighborhood models like Euclidean Distance, Pearson Correlation & Cosine Similarity
    • Use Matrix Factorization, a latent factor method, to uncover hidden user & item preferences
    • Learn recommendation systems by building a movie-recommending app in Python (a minimal neighborhood-model sketch follows this list)
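
    To make the neighborhood idea concrete, here is a minimal, hypothetical user-based collaborative-filtering sketch using cosine similarity over a tiny invented ratings matrix (rows = users, columns = movies, 0 = unrated); real systems would use far larger, sparser data.

        import numpy as np

        # invented ratings matrix: rows = users, columns = movies, 0 = not rated
        R = np.array([[5, 4, 0, 1],
                      [4, 5, 1, 0],
                      [1, 0, 5, 4],
                      [0, 1, 4, 5]], dtype=float)

        def cosine(u, v):
            return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

        target = 0                                  # recommend for user 0
        sims = np.array([cosine(R[target], R[u]) for u in range(len(R))])
        sims[target] = 0                            # ignore self-similarity

        # predict scores for items as a similarity-weighted average of neighbors
        pred = sims @ R / sims.sum()
        unrated = R[target] == 0
        best = int(np.argmax(np.where(unrated, pred, -np.inf)))
        print("recommend movie", best)              # -> movie 2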

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels, but some knowledge of Python is suggested

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals--Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh--who honed their tech expertise at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students.

    From 0 to 1: Learn Python Programming


    KEY FEATURES

    Python's one of the easiest yet most powerful programming languages you can learn, and it's proven its utility at top companies like Dropbox and Pinterest. In this quick and dirty course, you'll learn to write clean, efficient Python code, expediting your workflow by automating manual work, implementing machine learning techniques, and much more.

    • Dive into Python w/ 10.5 hours of content
    • Acquire the database knowledge you need to effectively manipulate data
    • Eliminate manual work by creating auto-generating spreadsheets w/ xlsxwriter
    • Master machine learning techniques w/ scikit-learn (sk-learn)
    • Utilize tools for text processing, including nltk
    • Learn how to scrape websites like the NYTimes & Washington Post using Beautiful Soup (a minimal scraping sketch follows this list)
    • Complete drills to consolidate your newly acquired knowledge
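
    As a minimal illustration of the scraping workflow, here is a hypothetical Beautiful Soup sketch that parses an inline HTML snippet rather than hitting a live site; the markup below is invented, and real news pages would need their own selectors (and respect for robots.txt).

        from bs4 import BeautifulSoup

        # invented HTML standing in for a fetched news page
        html = """
        <html><body>
          <h2 class="headline"><a href="/a1">Markets rally on earnings</a></h2>
          <h2 class="headline"><a href="/a2">New deep learning benchmark released</a></h2>
        </body></html>
        """

        soup = BeautifulSoup(html, "html.parser")
        for h2 in soup.find_all("h2", class_="headline"):
            link = h2.find("a")
            print(link.get_text(strip=True), "->", link["href"])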

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime access
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals--Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh--who honed their tech expertise at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students.

              Machine Learning / Deep Learning Internship(s)   
    TERADEEP INC. develops Deep Learning HW & SW acceleration solutions for datacenter applications. As a hands-on intern you will reproduce, benchmark and enhance…



              HCII Students Bedri and Zhang Win Qualcomm Fellowship   
    Mon, 06/26/2017

    Human-Computer Interaction Institute Ph.D. students Abdelkareem Bedri and Yang Zhang are the recipients of this year's Qualcomm Fellowship. Theirs is one of just eight student teams selected for the 2017 fellowship.

    In their proposal, "Towards General Purpose Sensing With Synthetic Sensors," Bedri and Zhang introduce a super sensor that can cover an entire environment. Rather than requiring a multitude of individual sensors, one super sensor can collect raw sensor data without a dedicated, programmed sensor for each action the user wants to log.

    Bedri and Zhang were nominated by advisors Thad Starner and Chris Harrison, an HCII assistant professor who received the Qualcomm Fellowship in 2012.

    The Qualcomm Fellowship is unique in its setup, as it requires a pair of students to submit a proposal together. This partner structure is meant to reflect Qualcomm's core values, and in the case of Bedri and Zhang, it was an opportunity to bring together their backgrounds in sensing and activity recognition research.

    "This proposed project resonates with both of our research interests very well," said Zhang.

    The proposal was based on some of Zhang's work presented at CHI 2017 and featured in publications like Wired and the MIT Tech Review. Moving forward, Bedri's strong background in electronic engineering will add a more refined sensor tag, a more advanced deep learning approach and a long-term development plan.

    Harrison describes his students' work as tackling a "thorny problem": how to interpret what happens in our everyday environments without draping every room in wires. If they succeed, Harrison believes it will offer an important beachhead into contexts like the home.

    "They are truly exceptional students. This fellowship will afford them the intellectual freedom to operate right at the cutting edge," said Harrison.

    "The Qualcomm Fellowship helped to fund my graduate education, and now I get to return that investment though mentorship of my students."

    The Qualcomm Fellowship awards each winning team $100,000 and a Qualcomm engineer mentor.

    "It is a huge motivator to see our proposal well received by industry, specially big companies like Qualcomm," said Zhang. "We will make good use of the funding as well as the collaboration with Qualcomm’s engineers to fully explore and further develop the idea we proposed. We will work closely together to push this project to the next level."


              #AI News: Baidu Research (NASDAQ: $BIDU) Announces Next Generation Open Source Deep Learning Benchmark Tool   
    SAN FRANCISCO - June 28, 2017 (Investorideas.com Newswire) Baidu Research, a division of Baidu Inc. (NASDAQ:BIDU), today unveiled the next generation of DeepBench, the open source deep learning benchmark that now includes measurement for inference.
              NVIDIA DGX-1, the world's first deep learning supercomputer   
    SAN JOSE, Calif.—GPU Technology Conference—April 4-7, 2016—At GTC 2016, NVIDIA presents the NVIDIA® DGX-1™, the world's first deep learning supercomputer, built to meet the boundless computing demands of artificial intelligence.
              NVIDIA Tesla P100 delivers 12x the performance for deep learning   
    SAN JOSE, Calif.—GPU Technology Conference—April 4-7, 2016—NVIDIA announces the NVIDIA® Tesla® P100 GPU, the most advanced hyperscale data center accelerator ever built.
              Deep Learning Inference-Kernel and Performance Software Engineer   
    CA-Santa Clara, We are now looking for a Deep Learning Inference-Kernel and Performance Software Engineer: We are rapidly growing our research and development for Inference and are seeking excellent Software Engineers and Senior Software Engineers to join our team. We specialize in developing GPU-accelerated deep learning software. Researchers around the world are using NVIDIA GPUs to power a revolution in deep learning…
              Sr Deep Learning Software Engineer   
    CO-Boulder, We are now looking for a Senior Deep Learning Software Engineer: Come join NVIDIA's Autonomous Vehicles team to develop state-of-the-art Deep Learning / AI algorithms for our advanced autonomous driving platform. What you'll be doing: You will be responsible for developing Deep Learning network architectures for image and video processing tasks. The range of applications you'll work on includes: au…
              Compute Performance Developer Technology Engineer   
    CA-Santa Clara, We are now looking for a Compute Performance Developer Technology Engineer: NVIDIA is hiring passionate, world-class computer scientists to work in its Compute Developer Technology (Devtech) team. What you will be doing: In this role, you will research and develop techniques to GPU-accelerate leading applications in high performance computing fields within machine and deep learning, scientific computing…
              Sr CUDA Algorithms DevTech Engineer   
    CA-Santa Clara, We are now looking for a Sr CUDA Algorithms DevTech Engineer: NVIDIA is now hiring software engineers for its GPU-accelerated Numerical, Analytics, and Deep Learning algorithms team. Academic and commercial groups around the world are using GPUs to revolutionize deep learning and data analytics, and to power data centers. Join the team which is building software which will be used by the entire world…
              Sr Deep Learning Software Development Engineer   
    CA-Santa Clara, We are now looking for a Deep Learning Software Development Engineer: NVIDIA is hiring software engineers for its GPU-accelerated deep learning platform team. Academic and commercial groups around the world are using GPUs to power a revolution in deep learning, enabling breakthroughs in problems from image classification to speech recognition to natural language processing. Join the team which is…
              Microsoft announces that version 2.0 of Cognitive Toolkit, its deep learning toolkit, has reached final release   

    Microsoft has released version 2 of the Cognitive Toolkit (CNTK), a toolkit for deep learning and artificial intelligence. Microsoft explains that Cognitive Toolkit provides enterprise-ready, production-quality AI by letting users create, train and evaluate their own neural networks, which can then...
              (Associate) Data Scientist for Deep Learning Center of Excellence - SAP - Sankt Leon-Rot   
    Build Machine Learning models to solve real problems working with real data. Software design and development....
    Found at SAP - Fri, 23 Jun 2017 08:50:58 GMT
               Gartner's Cool Vendors in Retail Merchandising and Marketing    
    Customer-centric merchandising and marketing require deep learning algorithms that can predict who, what, where and when consumers will browse and transact. This research will introduce CIOs to five innovative vendors...
              Ricoh announces Pentax KP with new Shake Reduction system and 24MP sensor   

    Ricoh has announced the Pentax KP, a more compact, modernized version of the K-3 II (though not a replacement, Ricoh later confirmed), featuring a new 'high sensitivity' 24MP sensor and an improved in-body image stabilization system.

    The new CMOS sensor brings with it a top ISO of 819,200 and an electronic shutter that tops out at 1/24000 sec (the mechanical shutter goes to 1/6000 sec). The KP uses the new 5-axis 'Shake Reduction II' IBIS system, first seen on the K-1 full-framer, which offers up to 5 stops of stabilization according to Ricoh. As with other Pentax models, the KP supports Pixel Shift Resolution as well as AA Filter Simulation. The KP uses the same SAFOX 11 autofocus system as the K-3 II, meaning that it has 27 points, 25 of which are cross-type.

    The KP's body is relatively compact, sealed against dust and moisture, and functional down to 14°F (-10°C). It has a pentaprism viewfinder with 'nearly' 100% coverage and a 0.63x equivalent magnification, as well as a tilting 3" LCD. A nice extra is the ability to change the camera's grip, with three sizes to choose from.

    Typical of Pentax DSLRs, the KP is heavily customizable and features both Sensitivity and Shutter & Aperture Priority modes, a star tracking feature and built-in wireless flash control. The KP has added new Motion and Depth-of-Field options to the already large selection of bracketing modes it's inherited from its predecessors. It can capture Full HD at 60i, 50i, 30p, 25p and 24p. The KP can shoot continuously at up to 7 fps. It also has built-in Wi-Fi. Something the KP doesn't have is an HDMI port, instead using something called SlimPort, which can send HD video over a microUSB port. If you want HDMI, you're going to have to drop $25 on a dongle.

    The KP's battery life is rated at 390 shots/charge – which is on the low end for a DSLR – though an optional battery grip can hold an additional D-LI109 battery or the significantly more powerful D-LI90.

    The KP will be available in your choice of silver or black in late February for $1099/£1099 body-only.

    Ricoh Unveils Ultra-Compact PENTAX KP, a Weatherproof DSLR That Provides Outdoor Photographers with New Standard for Quality, Customization and Ease of Use

    Heir to K-3 legacy, Slim-Body Camera Incorporates New Features and Controls Optimized to Deliver Outstanding Images, Even in the Most Challenging Conditions

    First PENTAX APS-C Camera to offer Shake Reduction II

    WEST CALDWELL, NJ, January 25, 2017—Ricoh Imaging Americas Corporation today announced the PENTAX KP, an ultra-compact and highly portable DSLR with features and controls that facilitate capturing outstanding images, even in the most demanding conditions. The PENTAX KP packs many of the advanced capabilities of the award-winning PENTAX K-3 series into a modern, slim-body design that lends itself to applications from casual snapshots to serious outdoor photography while mountain climbing or trekking. The PENTAX KP also adds a new generation of innovations including a new, highly sensitive APS-C CMOS sensor and is the first PENTAX APS-C camera to incorporate Shake Reduction II (SR II), which features a five-axis mechanism to compensate for camera shake up to 5 steps.

    The new 24-megapixel CMOS sensor enables shooting in extremely low-light conditions, with sensitivity to ISO 819200, making the camera ideal for night photography. The PENTAX KP features an electronic shutter option in live-view to enable high-speed shooting up to 1/24,000-second, which greatly broadens shooting capabilities when using large aperture lenses to achieve a shallow depth of field on a bright sunny day.

    The PENTAX KP’s compact body is the result of a complete internal re-design to produce an advanced DSLR camera with an extremely slim profile for optimal comfort and handling. The KP’s rugged exterior is dustproof and weather-sealed to enable use in the most challenging outdoor conditions. The camera will perform in temperatures as low as 14 degrees F (-10 degrees C).

    The PENTAX KP incorporates additional advanced technologies and ease-of-use features that have long been the hallmark of PENTAX cameras and enable them to be used comfortably and reliably in a wide range of conditions. These include:

    • 5-Axis Shake Reduction System: The PENTAX KP is the first PENTAX APS-C DSLR to offer the new generation SR II system, which uses a five-axis mechanism to compensate for camera shake caused by horizontal and vertical shift (often generated in macro photography), roll (difficult to handle by lens-installed shake reduction mechanisms), as well as pitch and yaw. The SR II unit is controlled with great precision as soon as the camera’s power is turned on, providing a wide compensation range—as much as five shutter steps—to further expand the limits of handheld shooting. With the addition of an optional accessory GPS module (O-GPS1 GPS unit), the PENTAX KP simplifies astro-photography, making it possible to record stars as points of light rather than star trails during extremely long exposures.
    • Pixel Shift Resolution: This acclaimed PENTAX technology enables producing color-accurate still-life subjects with the highest resolving power. The technology uses the KP’s in-body Shake Reduction System to move the image sensor in single-pixel increments, to capture four separate images that are subsequently combined into a single, high-definition image.
    • A vertical-tilt LCD monitor that facilitates high- and low-angle shooting.
    • A grip replacement system that lets photographers choose their preference of grip based on shooting style or lens choice. In addition to the standard grip that comes with the PENTAX KP, accessory grips include medium (M) and large (L) grips (these will come packaged with KP bodies sold in North America), as well as the optional D-BG7 Battery Grip.
    • Control panels, button settings and dial controls that can all be customized, based on a user’s preference.

    “We designed the PENTAX KP to appeal to the world’s most discerning outdoor photographers, who will appreciate its rich and powerful feature set and rugged, compact design, whether they are shooting a landscape on a trek in Patagonia or capturing an eclipse,” said Kaz Eguchi, president, Ricoh Imaging Americas. “From Pixel Shift Resolution to our new generation of Shake Reduction, PENTAX proudly continues to lead the way in photographer-friendly innovation.”

    | Pricing and Availability |

     The PENTAX KP camera will be available on February 25 for a suggested list price of $1,099.95 at www.us.ricoh-imaging.com as well as at Ricoh Imaging-authorized retail outlets throughout North America.

    Main Features 

    1. Super-high-resolution images assured by approximately 24.32 effective megapixels and super-high-sensitivity photography at a top sensitivity of ISO 819200

    The PENTAX KP features a new-generation APS-C-sized CMOS image sensor with approximately 24.32 effective megapixels to produce super-high-resolution images. By coupling this sensor with an AA-filter-free optical design, it optimizes the image sensor’s imaging power to deliver well-defined images with true-to-life reproduction of gradation and texture. Thanks to the combination of the PRIME IV imaging engine and a state-of-the-art accelerator unit, it assures dependable, high-speed operation and highly effective noise reduction to optimize both image resolution and super-high-sensitivity performance. As a result, it allows the photographer to shoot handheld snapshots of night scenes at the super-high sensitivity of ISO 819200.

    2. Compact, portable body perfect for snapshots, with a weather-resistant structure for harsh outdoor shooting

    After a thorough review of the camera’s internal structure, PENTAX designed a completely new body that was far more compact and slim than existing models to optimize the PENTAX KP’s performance, operability and portability. When combined with a compact, lightweight PENTAX-DA-series lens, it can be carried comfortably and effortlessly for a wide range of applications, from casual snapshots to serious outdoor photography while mountain climbing or trekking. Its front, back and bottom exterior panels are all made of durable, lightweight magnesium alloy. With 67 sealing parts applied across the body, it provides a dustproof, weather-resistant structure, with outstanding cold-proof performance at temperatures down to -10°C. Thanks to these features, the PENTAX KP performs superbly and dependably even in such demanding settings as in the rain or at locations prone to dust and freezing temperatures. 

    3. A range of customization features, including an exchangeable grip

    The PENTAX KP provides a grip replacement system for easy, quick change of a grip to accommodate the photographer’s shooting style or a mounted lens. In addition to the standard Grip S, it offers a choice of two replacement grips (Grip M and Grip L). It also provides a variety of customization functions to simplify and enhance camera operation, including Smart Function for speedy selection and easy setting of desired camera functions using the Fx (Function) and setting dials; and control panel customization to change the panel’s layout to suit the photographer’s preference. 

    4. PENTAX-original SR II five-axis shake-reduction system featuring the Pixel Shift Resolution System

    (1) In-body SR mechanism

    Thanks to the built-in SR II shake-reduction mechanism, the PENTAX KP effectively minimizes camera shake and delivers sharp, blur-free images, even in camera-shake-prone conditions such as when using a telephoto lens, shooting low-light scenes without flash illumination, or photographing sunset scenes. In addition to more common camera shake caused by pitch and yaw, this five-axis mechanism also compensates for camera shake caused by horizontal and vertical shift (often generated in macro photography) and camera shake caused by roll. It assures a compensation effect of approximately five shutter steps (CIPA standard compliant, with the smc PENTAX-DA 18-135mm F3.5-5.6 ED AL [IF] DC WR at f=135mm) — a level equivalent to that of PENTAX’s flagship model — to expand the limits of handheld photography. When taking a panning shot, this mechanism efficiently controls the SR unit to compensate for all affecting factors without requiring any switching action.

    (2) Pixel Shift Resolution System

    The PENTAX KP features the Pixel Shift Resolution System,* the latest super-resolution technology, which captures four images of the same scene by shifting the image sensor by a single pixel for each image, then synthesizes them into a single composite image. Compared to the conventional Bayer system, in which each pixel has only a single color-data unit, this innovative system obtains all color data in each pixel to deliver super-high-resolution images with far more truthful colors and much finer details than those produced by conventional APS-C-sized image sensors. To make this system more useful with a wider range of scenes and subjects, the PENTAX KP also provides ON/OFF switching of the motion correction function,** which automatically detects a moving object during continuous shooting and minimizes negative effects during the synthesizing process.
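
    To make the mechanism concrete, here is a minimal, hypothetical NumPy sketch of the pixel-shift idea (not Ricoh's actual processing pipeline): four Bayer-mosaic exposures of a static scene, each offset by one pixel, are merged so every pixel position ends up with directly measured R, G and B samples instead of interpolated ones.

        import numpy as np

        rng = np.random.default_rng(0)
        scene = rng.random((4, 6, 3))        # tiny "true" RGB scene (H, W, RGB)

        bayer = np.array([[0, 1],            # RGGB color-filter layout:
                          [1, 2]])           # 0=R, 1=G, 2=B

        shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]   # four single-pixel positions

        acc = np.zeros_like(scene)
        cnt = np.zeros_like(scene)
        H, W, _ = scene.shape
        for dy, dx in shifts:                # shift the filter grid over the scene
            for y in range(H):
                for x in range(W):
                    c = bayer[(y + dy) % 2, (x + dx) % 2]  # filter color seen here
                    acc[y, x, c] += scene[y, x, c]         # sensor records one channel
                    cnt[y, x, c] += 1

        merged = acc / cnt                   # R once, G twice, B once per pixel
        assert np.allclose(merged, scene)    # with a static scene, the merge is exact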

    (3) PENTAX-original AA filter simulator

    By applying microscopic vibrations to the image sensor unit at the sub-pixel level during image exposure, the PENTAX KP’s AA (anti-aliasing) filter simulator*** provides the same level of moiré reduction as an optical AA filter. Unlike an optical AA filter, which always creates the identical result, this innovative simulator lets the user switch the AA filter effect on and off and adjust the level of the effect, making it possible to set the ideal effect for a particular scene or subject based on the prevailing photographic conditions.

    * When using this system, the user is advised to stabilize the camera firmly on a tripod. When a moving subject is captured in the camera’s image field, its image may not be reproduced clearly, either in part or as a whole.

    ** The movement may not be sufficiently corrected when the object is moving in a certain direction and/or pattern. This function does not guarantee that the movement is properly corrected with all subjects.

    *** This function works more effectively with a shutter speed of 1/1000 second or slower. This function may not be compatible with some shooting modes, including the Pixel Shift Resolution System.

    5. Electronically controlled shutter unit for super-high-speed shooting at 1/24000 second

    The PENTAX KP’s shutter unit combines a reliable mechanical shutter mechanism (with a top speed of 1/6000 second) with an electronically controlled shutter mechanism.* The electronic shutter mode provides a super-high shutter speed of 1/24000 second with reduced noise and vibration at shutter release, making it ideal for low-noise, low-vibration shooting in Live-view and mirror-up applications. The camera also provides a high-speed continuous shooting function with a top speed of seven images per second.

    * In the electronic shutter mode, the camera’s SR II mechanism and AA filter simulator are inoperable. During high-speed continuous shooting, the subject may suffer some deformation.

    6. Optical viewfinder with nearly 100% field of view

    Within its compact body, the PENTAX KP incorporates a glass prism finder featuring the same optics and coatings as those used in higher-class models. With a nearly 100-percent field of view and magnification of approximately 0.95 times, it provides a wide, bright image field for easy focusing and framing.

    7. High-speed, 27-point autofocus system with the SAFOX 11 module

    The PENTAX KP features the high-speed SAFOX 11 phase-matching AF sensor module to deliver dependable, responsive autofocus operation. Of its 27 focus sensors, 25 are cross-type sensors positioned in the middle to assure pinpoint focus on the subject at a minimum brightness level as low as -3 EV. A completely new, much-improved algorithm assures better autofocusing accuracy and speed than models equipped with the conventional SAFOX 11 module. The camera also provides useful customization features to assist in autofocus operation, such as a choice of operation modes—focus-priority, release-priority or advance-speed-priority—and the Selected-area Expansion function to automatically refocus on a subject when it moves away from the initial point.

    8. Full HD movie recording with a range of functional settings

    The PENTAX KP captures flawless, high-resolution Full HD movie clips (1920 x 1080 pixels; 60i/30p frame rate) in the H-264 recording format. It also provides an external microphone terminal for manual setting of the audio recording level and monitoring of the sound pressure level for microphone input. In addition to various visual effect modes available during movie recording,* it features a range of movie recording functions, including a 4K Interval Movie mode that connects a series of 4K-resolution still images (3840 x 2160 pixels) at a fixed interval to create a single movie file, and the Star Stream mode to record the traces of stars in the Interval Movie mode. 

    * When a special visual effect is applied, the frame rate may differ depending on the selected effect mode.

    9. Vertical-tilt-type LCD monitor

    The PENTAX KP’s 3.0-inch LCD monitor has approximately 921,000 dots, and provides a vertical tilt function to facilitate high- and low-angle shooting. In addition to its wide-view design, it features an air-gapless construction, in which the air space between LCD layers is eliminated to effectively reduce the reflection and dispersion of light for improved visibility during outdoor shooting. It also comes equipped with such convenient features as: the Outdoor View Setting mode, which instantly sets the optimum monitor brightness level for a given lighting condition; and a red-lighted monitor display function, which facilitates monitor viewing when the photographer’s eyes have become accustomed to a dark location during nighttime photography.

    10. PENTAX Real-time Scene Analysis System

    Supported by the combination of the approximately 86,000-pixel RGB light-metering sensor and the high-performance PRIME IV imaging engine, the PENTAX Real-time Scene Analysis System accurately and efficiently analyzes such factors as the brightness distribution in the image field and the subject’s primary color and motion. By applying deep learning, a breakthrough artificial intelligence technology, to its image detection algorithm,* this system assesses each individual scene more accurately while selecting the most appropriate exposure level and finishing touch for a given scene.

    * Deep learning technology is available when the exposure mode is set to Scene Analyze Auto, or when the Custom Image mode is set to Auto Select. 

    11. Other features

    • Switching lever to activate various settings during still-image and Live-view shooting and movie recording
    • New Motion Bracketing and Depth-of-field Bracketing functions to capture three images of the same scene by automatically shifting aperture and/or shutter-speed settings in user-selected steps
    • Wireless LAN connection to support operation with smartphones and tablet computers
    • DR II (Dust Removal II) mechanism to shake dust off the image sensor surface using ultrasonic vibrations
    • Clarity control and Skin Tone correction functions, two of the latest image processing technologies developed by RICOH Central Laboratory
    • Compatibility with the optional O-GPS1 GPS Unit for the recording of shooting position data and simplified astronomical photography
    • A selection of imaging tools, such as Custom Images, Digital Filters
    • Compatibility with the optional PENTAX IMAGE Transmitter 2 tethering software

    Optional Accessories 

    Grip M (O-GP1671) and Grip L (O-GP1672)

    Designed for exclusive use with the PENTAX KP camera body, these grips can be easily replaced with the standard Grip S (O-GP167) to accommodate the photographer’s shooting style or a mounted lens, or improve the camera’s operability and holding comfort. (Note: In North America, these accessory grips will come with the PENTAX KP.)

    D-BG7 Battery Grip

    Designed for exclusive use with the PENTAX KP, this battery grip features a dustproof, weather-resistant structure, and provides an extra set of control buttons (shutter release, AF/AE lock, exposure compensation/Fx3, and green), and a pair of electronic dials to facilitate vertical-position shooting. It comes with the Grip L for improved handling when a telephoto or large-aperture lens is mounted on the camera. In addition to the exclusive D-LI109 Lithium-ion Battery, it can also be powered by the large-capacity D-LI90 Lithium-ion Battery (a dedicated battery tray included), which is used to power the PENTAX K-1 and K-3II digital SLR cameras.

    Pentax KP specifications

    Price
    • MSRP: $1099 (body only)

    Body type
    • Body type: Mid-size SLR
    • Body material: Magnesium alloy

    Sensor
    • Max resolution: 6016 x 4000
    • Other resolutions: 4608 x 3072, 3072 x 2048, 1920 x 1280
    • Image ratio (w:h): 3:2
    • Effective pixels: 24 megapixels
    • Sensor photo detectors: 25 megapixels
    • Sensor size: APS-C (23.5 x 15.6 mm)
    • Sensor type: CMOS
    • Processor: PRIME IV
    • Color spaces: sRGB, Adobe RGB
    • Color filter array: Primary color filter

    Image
    • ISO: Auto, 100-819200
    • White balance presets: 9
    • Custom white balance: Yes (3 slots)
    • Image stabilization: Sensor-shift (5-axis, up to 5 stops)
    • Uncompressed format: RAW
    • JPEG quality levels: Best, better, good
    • File format: JPEG (Exif v2.3), Raw (Pentax PEF or DNG)

    Optics & Focus
    • Autofocus: Contrast detect (sensor), multi-area, center, selective single-point, tracking, single, continuous, touch, face detection, live view
    • Autofocus assist lamp: Yes
    • Manual focus: Yes
    • Number of focus points: 27
    • Lens mount: Pentax KAF2
    • Focal length multiplier: 1.5x

    Screen / viewfinder
    • Articulated LCD: Tilting
    • Screen size: 3"
    • Screen dots: 921,000
    • Touch screen: No
    • Screen type: TFT LCD
    • Live view: Yes
    • Viewfinder type: Optical (pentaprism)
    • Viewfinder coverage: 100%
    • Viewfinder magnification: 0.95x

    Photography features
    • Minimum shutter speed: 30 sec
    • Maximum shutter speed: 1/6000 sec (mechanical)
    • Maximum shutter speed (electronic): 1/24000 sec
    • Exposure modes: Program, sensitivity priority, shutter priority, aperture priority, shutter & aperture priority, manual, bulb
    • Built-in flash: Yes
    • Flash range: 6.00 m (at ISO 100)
    • External flash: Yes (via hot shoe)
    • Flash modes: Auto, auto w/redeye reduction, flash on w/redeye reduction, slow sync, trailing curtain sync, manual, wireless
    • Flash X sync speed: 1/180 sec
    • Drive modes: Single, continuous, self-timer, AE bracketing, DoF bracketing, motion bracketing, mirror-up, multi-exposure, interval shooting, interval composite, interval movie record, star stream
    • Continuous drive: 7.0 fps
    • Self-timer: Yes (2 or 12 secs)
    • Metering modes: Multi, center-weighted, spot
    • Exposure compensation: ±5 EV (in 1/3 or 1/2 EV steps)
    • AE bracketing: ±5 EV (2, 3, or 5 frames)
    • WB bracketing: Yes

    Videography features
    • Format: MPEG-4, H.264
    • Microphone: Stereo
    • Speaker: Mono

    Storage
    • Storage types: SD/SDHC/SDXC (UHS-I supported)

    Connectivity
    • USB: USB 2.0 (480 Mbit/sec)
    • HDMI: No (requires SlimPort adapter)
    • Microphone port: Yes
    • Headphone port: No
    • Wireless: Built-in (802.11b/g/n)
    • Remote control: Yes (via remote cable or smartphone)

    Physical
    • Environmentally sealed: Yes
    • Battery: Battery pack
    • Battery description: D-LI109 lithium-ion battery & charger
    • Battery life (CIPA): 390 shots
    • Weight (inc. batteries): 703 g (1.55 lb / 24.80 oz)
    • Dimensions: 132 x 101 x 76 mm (5.2 x 3.98 x 2.99")

    Other features
    • Orientation sensor: Yes
    • Timelapse recording: Yes
    • GPS: Optional (O-GPS1)

              Digital sucks: Life in the beta world   

    Can we talk? Digital sucks ... greatly! There is this nagging feeling that a lot of digital stuff is somehow going wrong. All too often, digital products fail to live up to their promises.

    Take Twitter, for example. For a while it looked promising. But then came Trump. We enjoyed our Uber rides, only to witness a full-blown meltdown. Or consider the Internet of Things. We hoped for a future full of connected devices; what we got instead are huge botnets that knock out large parts of the internet, and light bulbs vulnerable to virus attacks.

    We once listened faithfully to the promise of artificial intelligence. Now we worry that we will lose our jobs to machines and that AI will eventually take over the world and replace humans as the highest form of life on Earth. On top of that, we firmly believed in the internet as a powerful platform for a flourishing culture. Instead, we have come to fear that the internet will suck up the creative content of the entire world until nothing is left (David Byrne, 2013).

    Digital sucks! Or does it? One thing is certain: we live in a beta world. Nothing digital ever seems to be finished, and we are always waiting for the next update. Every update fixes old bugs - and introduces new ones at the same time. We live permanently on the bleeding edge, taking high risks with the technology and tools we use.

    Are we dealing with a white elephant? We cannot simply throw away our digital possessions, yet all too often their costs outweigh their benefits.

    The same question can be asked about digital transformation, especially looking back at the sky-high investments most companies have made over recent years. Plenty of time and money has flowed into shiny new digital projects, but has the probability of survival for these companies risen accordingly? Or do they still face the same great risks of disruption? "Software is eating the world," as Marc Andreessen once put it.

    How could it happen that the land of digital utopia, populated by unicorns, might now turn into a dystopian nightmare? Every revolution has its costs. Those costs may be too high from the perspective of many people, but do those people get a say? Ironically, it was the relentless focus on the user of digital technology, the consumer, the human being, that made so many digital things so overwhelmingly successful. The same user to whom technology brought so much value now feels that the price to be paid in the end might be too high.

    We deserve a better digital world

    Digital shouldn't suck; neither for customers nor for employees and other stakeholders.

    First of all, we must admit that the tech utopia born in the seventies was at least partly over-optimistic, anchored not so much in reality as in a kind of wishful thinking. Technology as such is not the solution to every kind of problem; it is a tool that can be used in many different ways. The Californian Ideology was just that - an ideology. As such it was certainly powerful, but sooner or later every ideology collides with a differently shaped reality.

    That is exactly what is happening to Uber and, to a lesser extent, to Lyft. Both ridesharing services displayed a certain kind of arrogance when they pulled out of the Austin market after the city decided to require background checks for drivers. This was a classic example of free-market ideology clashing with regulation.

    In hindsight, though, that was only a dark foreshadowing of the PR disaster Uber faced in early 2017, when CEO Travis Kalanick was accused of bad behavior by an Uber driver while allegations of systematic sexism and harassment surfaced at the same time. Uber had to learn the hard way that a toxic corporate culture and a blatant neglect of corporate responsibility can, and certainly will, come back to haunt a company, ultimately hurting its performance. Digital shouldn't suck; neither for customers nor for employees and other stakeholders.

    Digital value creation is based on the user's service experience

    Second: as value creation migrates from other sectors of the economy into the digital sphere, this process is inevitably accompanied by the devaluation of traditional assets, skills and jobs. This shift is nothing special; it has happened before - first with the industrial revolution and later with the rise of the service sector. Solutions are needed to ease the transition, not to resist it, because resistance is futile. People will always flock to the regions, industries and professions where value creation is higher than elsewhere. These days, that place is the digital sphere. Third, we must bear in mind that Twitter, like the internet, does not by itself lead to freedom of thought and expression. While it can empower users to raise their voices, it can also amplify the voices of those who already command a huge mindshare. Donald Trump has proven that Twitter can reach huge masses of followers highly effectively, bypassing traditional media. The same goes for other platforms such as Facebook. While the digital arena is indeed different, it is not entirely unlike the established media sphere. The attention economy favors those who understand how to attract the most attention. So it is up to us to decide whether we want to keep listening first to whoever barks loudest - or whether we want to find new algorithms that give people better choices.

    The same goes for pop culture. We have witnessed the rise of influencers and YouTube stars while the incumbents' business models began to crumble and, in some cases, collapsed. It might look as if the internet were sucking up all the creative content in the world, as David Byrne put it. A more realistic view, though, would recognize that digital value creation simply differs from the business models of the past. Those were based on the scarcity of physical goods that could be packaged, priced and sold to a mass market. Digital value creation, by contrast, is based on the user's service experience. It still has a physical hardware component, but the software matters far more.

    Streaming services like Netflix and Spotify do not distribute DVDs or CDs; they sell monthly subscriptions for access to huge libraries of digital content. They learn user preferences and adapt their services to personal taste. Services like these can be more valuable to the user than traditional media bundles. At the same time, they tie the user to time-consuming habits such as binge-watching. Both Netflix and Spotify are now moving deeper into content creation, cutting out traditional middlemen such as film studios and music labels. This structural change need not be bad for artists, at least if they learn to play by the new rules.

    The fourth product cycle is fueled by AI

    It seems the prophets of the New Economy in the nineties were largely right when they touted the New Rules. What was wrong, however, was the widespread expectation about the speed of change. How quickly users would adopt new behaviors was overestimated; how far the changes would reach was underestimated. The same applies to the next tech cycle, fueled by artificial intelligence (machine learning, deep learning) and voice-driven interfaces such as Alexa and Siri.

    AI, machine learning and deep learning have now reached a stage where machines essentially program themselves. That makes it very hard for humans to understand what these machines are actually doing. Expect plenty of heated debate about these questions in the near future. The strangely misguided argument about the ethics of self-driving cars deciding whether to kill their passenger or an innocent pedestrian is only a dark foreshadowing.

    Voice interfaces rely heavily on AI algorithms and massive amounts of data as their backend and backbone. Better algorithms and data make for better interface quality, which in turn generates more and better data that can be used to develop more advanced algorithms and generate even more data. A self-reinforcing effect. The race for the next dominant platform has begun, and the winners will be those who best master this virtuous circle of data and interface quality.

    We will see more and more algorithms and applications like the avatars of Soul Machines that can interact with people on an emotional level - that may even read our feelings better than humans do and respond to them in ways that were unimaginable until recently. How do we feel about that? After the earlier cycles (PC, web and mobile), the next transition may be even faster. While the PC took about 20 years to reach the mass market, the web needed only 15, and it looks as if the current mobile cycle will be complete after ten. Around 2025 we will see whether the fourth cycle completes in just five years and perhaps reaches even more people than the smartphone.

    Will we soon see the killer application of IoT?

    While machine learning and voice interfaces are already showing promising use cases, the Internet of Things does not seem to have any yet. IoT looks a lot like mobile before the iPhone. Today, IoT devices often just add complexity to otherwise simple use cases such as room lighting or heating. The smart home we have been promised for quite some time does not look particularly smart yet.

    There is an enormous need to redesign the user experience of buildings, offices and apartments. That experience has been fundamentally unchanged for decades, and current IoT devices merely digitize familiar interfaces without rethinking them and changing them from the ground up. This work has to be done, and it will be done, with huge rewards for those who manage to become the dominant digital platforms for real estate.

    In the long run, large parts of the gigantic real-estate market can and presumably will be turned into a digital service business living on platforms like Airbnb. Users will rent fully equipped houses, apartments and even office space for a limited time or long term. Every aspect of the building will fit neatly into a single monthly bill, with all services metered and billed digitally and automatically. This is the killer application of IoT, but it may take some time to develop it fully.

    How to build products that serve people

    Digital product development is hard, it can fail, and there are no shortcuts. Like any innovation, it is risky. In the end, product innovation is the discovery of new customer value. Transformational products have a radical value proposition - and they deliver immediately instead of promising things they cannot deliver. A positive experience for the user is the first step toward lasting behavioral change. To change user expectations, user behavior and, not least, value creation, creating added value is the secret recipe of successful digital transformation.




              Stephen Brobst of Teradata: "Big Data is starting to show signs of exhaustion"   
    Teradata, one of the big companies providing technology solutions for working with large volumes of information, is leaving Big Data behind to move into Deep Learning. Stephen Brobst, its CTO, is in charge of spreading the good news.
              Deep Learning/Computer Vision   

              Super-Resolution via Deep Learning. (arXiv:1706.09077v1 [cs.CV])   

    Authors: Khizar Hayat

    The recent phenomenal interest in convolutional neural networks (CNNs) must have made it inevitable for the super-resolution (SR) community to explore its potential. The response has been immense and in the last three years, since the advent of the pioneering work, there appeared too many works not to warrant a comprehensive survey. This paper surveys the SR literature in the context of deep learning. We focus on the three important aspects of multimedia - namely image, video and multi-dimensions, especially depth maps. In each case, first relevant benchmarks are introduced in the form of datasets and state of the art SR methods, excluding deep learning. Next is a detailed analysis of the individual works, each including a short description of the method and a critique of the results with special reference to the benchmarking done. This is followed by minimum overall benchmarking in the form of comparison on some common dataset, while relying on the results reported in various works.
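    For readers new to the area, a minimal sketch of the kind of CNN-based super-resolution model the survey covers may help. This is an illustration in the spirit of the early SRCNN line of work, not code from the paper, and the layer sizes are arbitrary:

```python
import torch
import torch.nn as nn

class TinySRCNN(nn.Module):
    """Minimal three-layer CNN for single-image super-resolution in the
    spirit of SRCNN: the low-resolution input is first upscaled (e.g.,
    bicubically) and the network learns to restore high-frequency detail."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),            # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.body(x)

# Usage: feed a bicubically upscaled low-resolution image tensor.
model = TinySRCNN()
lr_upscaled = torch.rand(1, 3, 64, 64)  # stand-in for a real image
sr = model(lr_upscaled)                 # same spatial size, sharpened detail
```

    The video and depth-map methods the survey discusses extend this basic pattern with temporal or cross-modal inputs.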


              Classification of Medical Images and Illustrations in the Biomedical Literature Using Synergic Deep Learning. (arXiv:1706.09092v1 [cs.CV])   

    Authors: Jianpeng Zhang, Yong Xia, Qi Wu, Yutong Xie

    The classification of medical images and illustrations in the literature aims to label a medical image according to the modality by which it was produced, or to label an illustration according to its production attributes. It is an essential and challenging research hotspot in the area of automated literature review, retrieval and mining. The significant intra-class variation and inter-class similarity caused by the diverse imaging modalities and various illustration types bring a great deal of difficulty to the problem. In this paper, we propose a synergic deep learning (SDL) model to address this issue. Specifically, a dual deep convolutional neural network with a synergic signal system is designed to mutually learn image representations. The synergic signal is used to verify whether an input image pair belongs to the same category and to give corrective feedback if a synergic error exists. Our SDL model can be trained 'end to end'. In the test phase, the class label of an input can be predicted by averaging the likelihood probabilities obtained from the two convolutional neural network components. Experimental results on the ImageCLEF2016 Subfigure Classification Challenge suggest that our proposed SDL model achieves state-of-the-art performance in this medical image classification problem, and its accuracy is higher than that of the first-place solution on the Challenge leaderboard so far.
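    To make the dual-network idea concrete, here is a toy sketch of the synergic setup as described in the abstract: two CNN branches that classify independently, plus a small verification head, with test-time averaging. This is not the authors' implementation, and all layer sizes are invented.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SynergicPair(nn.Module):
    """Illustrative sketch of the synergic idea: two CNN branches classify
    independently, while a 'synergic' head checks whether an input pair
    shares a category (its error supplies corrective feedback in training)."""
    def __init__(self, num_classes, feat_dim=128):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(16 * 4 * 4, feat_dim),
            )
        self.net_a, self.net_b = branch(), branch()
        self.cls_a = nn.Linear(feat_dim, num_classes)
        self.cls_b = nn.Linear(feat_dim, num_classes)
        # Synergic head: same-category verification on concatenated features.
        self.synergic = nn.Linear(2 * feat_dim, 2)

    def forward(self, xa, xb):
        fa, fb = self.net_a(xa), self.net_b(xb)
        same = self.synergic(torch.cat([fa, fb], dim=1))
        return self.cls_a(fa), self.cls_b(fb), same

    @torch.no_grad()
    def predict(self, x):
        # Test phase: average the two branches' class likelihoods.
        pa = F.softmax(self.cls_a(self.net_a(x)), dim=1)
        pb = F.softmax(self.cls_b(self.net_b(x)), dim=1)
        return (pa + pb) / 2
```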


              Deep Learning Based Large-Scale Automatic Satellite Crosswalk Classification. (arXiv:1706.09302v1 [cs.CV])   

    Authors: Rodrigo F. Berriel, Andre Teixeira Lopes, Alberto F. de Souza, Thiago Oliveira-Santos

    High-resolution satellite imagery has been increasingly used in remote sensing classification problems, one of the main factors being the availability of this kind of data. Even so, very little effort has been placed on the zebra crossing classification problem. In this letter, crowdsourcing systems are exploited in order to enable the automatic acquisition and annotation of a large-scale satellite imagery database for crosswalk-related tasks. This dataset is then used to train deep-learning-based models to accurately classify satellite images that do or do not contain zebra crossings. A novel dataset with more than 240,000 images from 3 continents, 9 countries and more than 20 cities was used in the experiments. Experimental results showed that freely available crowdsourcing data can be used to accurately (97.11%) train robust models to perform crosswalk classification on a global scale.


              Towards Metamerism via Foveated Style Transfer. (arXiv:1705.10041v2 [cs.CV] UPDATED)   

    Authors: Arturo Deza, Aditya Jonnalagadda, Miguel Eckstein

    Given the recent successes of deep learning applied to style transfer and texture synthesis, we propose a new theoretical framework to construct visual metamers: a family of perceptually identical, yet physically different images. We review work both in neuroscience related to metameric stimuli and in computer vision research on style transfer. We propose our NeuroFovea metamer model, based on a mixture of peripheral representations and style-transfer forward-pass algorithms for any image, building on the recent Adaptive Instance Normalization work of Huang & Belongie. Our model is parametrized by a VGG-Net, rather than a set of joint statistics of complex wavelet coefficients, which allows us to encode images in a high-dimensional space and interpolate between the content and texture information. We empirically show that human observers discriminate our metamers at a similar rate as the metamers of Freeman & Simoncelli (FS). In addition, our NeuroFovea metamer model gives us the benefit of near real-time generation, which presents a ×1000 speed-up compared to previous work. Critically, psychophysical studies show that both the FS and NeuroFovea metamers are discriminable from the original images, highlighting an important limitation of current metamer generation methods.
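    The Adaptive Instance Normalization building block the abstract leans on is compact enough to sketch in PyTorch. The convex blend at the end is a simple stand-in for the content/texture interpolation mentioned above, not the paper's exact operator:

```python
import torch

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization (Huang & Belongie): re-normalize
    content features to carry the channel-wise mean/std of the style
    features. Both tensors: (batch, channels, height, width)."""
    b, c = content.shape[:2]
    c_flat = content.reshape(b, c, -1)
    s_flat = style.reshape(b, c, -1)
    c_mean = c_flat.mean(dim=2).reshape(b, c, 1, 1)
    c_std = c_flat.std(dim=2).reshape(b, c, 1, 1) + eps
    s_mean = s_flat.mean(dim=2).reshape(b, c, 1, 1)
    s_std = s_flat.std(dim=2).reshape(b, c, 1, 1)
    return s_std * (content - c_mean) / c_std + s_mean

# Illustrative content/texture interpolation: 0 = pure content, 1 = stylized.
content = torch.rand(1, 64, 32, 32)
style = torch.rand(1, 64, 32, 32)
alpha = 0.5
blended = alpha * adain(content, style) + (1 - alpha) * content
```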


              A Deep Learning Scheme for Motor Imagery Classification based on Restricted Boltzmann Machines   
    Motor imagery classification is an important topic in brain–computer interface (BCI) research that enables the recognition of a subject's intention to, e.g., implement prosthesis control. The brain dynamics of motor imagery are usually measured by electroencephalography (EEG) as nonstationary time series of low signal-to-noise ratio. Although a variety of methods have been previously developed to learn EEG signal features, the deep learning idea has rarely been explored to generate new representations of EEG features and achieve further performance improvement for motor imagery classification. In this study, a novel deep learning scheme based on the restricted Boltzmann machine (RBM) is proposed. Specifically, frequency-domain representations of EEG signals obtained via fast Fourier transform (FFT) and wavelet package decomposition (WPD) are used to train three RBMs. These RBMs are then stacked up with an extra output layer to form a four-layer neural network, which is named the frequential deep belief network (FDBN). The output layer employs softmax regression to accomplish the classification task. Also, the conjugate gradient method and backpropagation are used to fine-tune the FDBN. Extensive and systematic experiments have been performed on public benchmark datasets, and the results show that the performance improvement of FDBN over other selected state-of-the-art methods is statistically significant. Also, several findings that may be of significant interest to the BCI community are presented in this article.
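    A condensed sketch of the greedy layer-wise part of this scheme is shown below, using scikit-learn's BernoulliRBM. The feature extraction and layer sizes are illustrative stand-ins, and the final conjugate-gradient/backpropagation fine-tuning of the whole stack is not reproduced here:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def frequency_features(eeg_trials):
    """Magnitude spectrum of each EEG trial (trials x samples): a simple
    stand-in for the paper's FFT/WPD representations."""
    spec = np.abs(np.fft.rfft(eeg_trials, axis=1))
    return spec / (spec.max() + 1e-12)  # RBMs expect inputs in [0, 1]

fdbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20)),
    ("rbm2", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20)),
    ("rbm3", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20)),
    ("softmax", LogisticRegression(max_iter=1000)),  # output layer
])

X = frequency_features(np.random.randn(100, 512))  # 100 fake trials
y = np.random.randint(0, 2, size=100)              # e.g., left vs right hand
fdbn.fit(X, y)  # greedy layer-wise pretraining, then the softmax classifier
```

    Note that a scikit-learn pipeline only performs the greedy stage; the joint fine-tuning of all layers described in the paper would require a framework with end-to-end backpropagation.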
              Christmas Tree Analyzes Your Tweets   

    It’s Christmas time. You have a string of 50 individually addressable RGB LEDs, what would you do? Well, [Barney] decided to try something different. He’s made a Christmas tree that reflects Twitter’s current sentiments about the holiday.

    Wait, what? We admit, it's a kind of weird concept, but the software behind it is pretty cool. As it turns out, Stanford University's Natural Language Processing Group released the source code for their sentiment analyzer. Unlike a normal sentiment analyzer, which assigns points for positive words and negative points for negative words, this one actually uses a deep learning model that builds up a representation of whole sentences based on their structure.
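    For a flavor of how such a build might connect sentiment scores to the LED string, here is a hypothetical sketch using the Stanford NLP Group's stanza library, whose sentiment processor is a descendant of this line of work (the thresholds and color mapping are invented):

```python
import stanza

stanza.download("en")  # one-time model download
nlp = stanza.Pipeline(lang="en", processors="tokenize,sentiment")

def tweet_color(tweet: str):
    """Average sentence-level sentiment (0 = negative, 1 = neutral,
    2 = positive) mapped to an RGB color for the LED string."""
    doc = nlp(tweet)
    score = sum(s.sentiment for s in doc.sentences) / len(doc.sentences)
    if score < 0.75:
        return (255, 0, 0)      # red for grumpy holiday tweets
    if score > 1.25:
        return (0, 255, 0)      # green for cheerful ones
    return (255, 255, 255)      # white for neutral

print(tweet_color("Christmas lights make everything better!"))
```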


              Multimedia Hashing and Networking   
    This department discusses multimedia hashing and networking. The authors summarize shallow-learning-based hashing and deep-learning-based hashing. By exploiting successful shallow-learning algorithms, state-of-the-art hashing techniques have been widely used in high-efficiency multimedia storage, indexing, and retrieval, especially in multimedia search applications on smartphone devices. The authors also introduce Multimedia Information Networks (MINets) and present one paradigm of leveraging MINets to incorporate both visual and textual information to reach a sensible event coreference resolution. The goal is to make deep learning practical in realistic multimedia applications.
              Solutions Architect - Autonomous Driving - NVIDIA - Santa Clara, CA   
    Be an internal champion for Deep Learning and HPC among the Nvidia technical community. You will assist field business development in guiding the customer...
    From NVIDIA - Fri, 23 Jun 2017 07:33:44 GMT - View all Santa Clara, CA jobs
              Comment on The Biggest Shift in Supercomputing Since GPU Acceleration by Rob   
    Agreed, it's not "Deep Learning", it's "Monkeys with Typewriters". The "unsupervised learning" is checked by backpropagation and other methods; it's not 'truncated brute force' (peg fits in hole, thus correct). Wikipedia's writeup on neural networks compares the current state to the intelligence of a worm's brain, yet it can demonstrate a better ability to play chess than a worm. We don't understand the human brain, yet we purport to model it with a GPU, originally created to perform simple functions on non-connected pixels.

    A good (and quick) analogy is difficult to provide without a lot of thought, more than I care to devote to this snake oil, but here's a shot at it. If I show you a photograph of a cat and ask you "What is this?", what answer would you give:
    1. It's a cat.
    2. It's a piece of paper.
    3. It's a Photoshopped image of a dog, made to look like a cat.
    4. A piece of paper coated with a light-sensitive chemical formula, used for making photographic prints. Printed upon the paper is a Photoshopped image of a dog, made to look like a cat.
    When you see the first answer there's no need to read the rest. When you see the second answer you 'know' it's somehow better, until you get to the third answer. Then comes the fourth answer; when will it end? Do we need to calculate the exact locations of the atoms to give the correct answer?

    Neural networking is as correct as E = mc^2 (we assume that under all conditions a mass of anything has equal energy; that would not have been my guess). Neural networking was 'abandoned' (by most people) decades ago. To be fair, one of the reasons was the slowness of computers; another was a lack of interest. Now there's a resurgence due to the speed of computers and the keen interest in 'something for nothing', a magic panacea. Instead of "doing the math" (solving something truly provable) we throw that out for a small chance of being 100% correct and a much better chance of being wrong. A recent example was with a self-driving car: Joshua Brown drove a Tesla in Florida at high speed while watching a movie. A white 18-wheeler crossed his path and the computer 'thought' it was clear sailing – a mistake the driver could not afford.
              Comment on U.S. Military Sees Future in Neuromorphic Computing by Bill Wild   
    "From custom ASICs at Google, new uses for quantum machines, FPGAs finding new routes into wider application sets, advanced GPUs primed for deep learning, hybrid combinations of all of the above, it is clear there is serious exploration of non-CMOS devices." Surely many of these ASICs and GPUs are still CMOS devices?
              HRExaminer v8.25   
    Colin Kingsbury discusses how the influence of goals is overlooked when shaping company culture. Read, You Can’t Build a Good Culture Without Clear Goals.


    Maren Hogan dismantles some of the myths and legends that surround using artificial intelligence for recruiting and hiring. Read, Maren Hogan on Artificial Intelligence (AI).


    In HRIntelligencer 1.04, John Sumser shares the Big Picture on algorithmic fallibility, HR’s view on Big Data, Lessons from the front lines of machine learning, 10 Common Natural Language Processing (NLP) Terms, The Strange Loop in Deep Learning, and the quote of the week.


    On episode 126 of HR Tech Weekly: HRTech Investments from Textio, Entelo, and Talla, CareerBuilder sells most of its stock, ADP and FinancialForce, Microsoft and Accenture BlockChain ID’s, and Zenefits to pay $3.4 million in unpaid overtime.


    John Sumser talks with Yvette Cameron. Yvette has over 30 years of experience in the HCM industry and is currently the SVP of Strategy & Corporate Development at SAP SuccessFactors. In previous HCM roles she worked at Gartner, SAP, Oracle, Saba, PeopleSoft, and JD Edwards. Listen to Yvette Cameron on HRExaminer Radio - Executive Conversations.
              Deep Learning Scientist/Senior Scientist - EXL - Jersey City, NJ   
    EXL serves the insurance, healthcare, banking and financial services, utilities, travel, transportation and logistics industries....
    From EXL - Tue, 18 Apr 2017 00:09:55 GMT - View all Jersey City, NJ jobs
              Master Thesis in Deep Learning - Bosch - Renningen   
    Do you want beneficial technologies to be shaped by your ideas? Whether in the areas of mobility solutions, consumer goods, industrial technology or energy
    Found at Bosch - Tue, 09 May 2017 19:40:38 GMT - View all Renningen jobs
              The Rise of Artificial Intelligence in Events [Webinar]   

    Join us for this free webinar to learn how to get more from artificial intelligence for your event. Chatbots, Deep Learning, Concierge, Big Data. What all these words hide is an incredible opportunity for event professionals willing to embrace innovation. Artificial Intelligence offers revolutionary opportunities to grow your event. Whether it is customer support, event management, marketing, […]

    The post The Rise of Artificial Intelligence in Events [Webinar] by Julius Solaris appeared first on http://www.EventManagerBlog.com


              Deep Learning Engineer   

              Research & Technology Manager Computer Vision and Deep Learning - Siemens - Princeton, NJ   
    Maintain relationships to the business units within Siemens; Participate and lead projects with Siemens businesses, including research &amp; development as well as...
    From Siemens - Mon, 05 Jun 2017 20:52:09 GMT - View all Princeton, NJ jobs
              Revolutionary Material: Microbial Nanowires   

    The picture above is of the microbe Geobacter (red) expressing electrically conductive nanowires. Such natural nanowires can be mass produced from inexpensive, renewable feedstocks with low energy costs compared to chemical synthesis with toxic chemicals and high energy requirements. Microbiologist Derek Lovley and his team at the University of Massachusetts Amherst report that they have discovered a new type of natural wire produced by bacteria that could greatly accelerate the researchers’ goal of developing sustainable “green” conducting materials for the electronics industry. Learn more.


              Blossoms of Shujun   

    Imagine a layered cake, a parfait, or any layered dessert. A similar type of layering occurs in thin films of block copolymers, only the layers are tens of nanometers thick (a hundred thousand times thinner than a sheet of paper)! If we place a liquid on the surface that attracts one of the layers beneath, then the layers within the film will rearrange themselves in an attempt to allow the attracted layer to reach the liquid. The floral arrangement shown above is actually an electron microscope image that captured such a rearrangement, magnified twenty thousand times. This image from the Samuel Gido Research Group doubles as art. See more of these spectacular images at the Materials Research Science and Engineering Center MRSEC  VISUAL gallery.

    Photo Courtesy: VISUAL


              Artificial Intelligence (AI) Technical Sales Specialist - Intel - Santa Clara, CA   
    Inside this Business Group. At least 1+ years' experience in machine learning / deep learning. Conducting deep dive technical training of Intel AI products &amp;...
    From Intel - Wed, 08 Mar 2017 11:17:48 GMT - View all Santa Clara, CA jobs
              NextGen Frontier for Artificial Intelligence Market Poised to Achieve Significant Growth in the Years to Come   

    The global market for artificial intelligence is estimated to post an impressive 36.1% CAGR between 2016 and 2024, rising to a valuation of US$3,061.35 bn by the end of 2024 from US$126.14 bn in 2015.

    Albany, NY -- (SBWIRE) -- 06/28/2017 -- Globally, there is a wave of artificial intelligence across various industries, especially consumer electronics and healthcare. The wave is likely to continue in the years to come with the expanding base of applications of the technology. The global market for artificial intelligence is expected to witness phenomenal growth over the coming years as organizations worldwide have started capitalizing on the benefits of such disruptive technologies for effective positioning of their offerings and customer reach. In addition, increasing IT spending by enterprises across the globe is driving further advancements in their services and products.

    According to a study by Transparency Market Research (TMR), the global market for artificial intelligence is estimated to post an impressive 36.1% CAGR between 2016 and 2024, rising to a valuation of US$3,061.35 bn by the end of 2024 from US$126.14 bn in 2015. The upward growth of the market is, however, hampered by the low upfront investments. The majority of companies operating in the market are facing difficulties in accumulating funds for early stage research and development of prototypes and their underlying technologies. The dearth of personnel with adequate technical knowledge is also restricting the market from realizing its full potential.

    Expert Systems to Lead Revenue Generation through 2024

    On the basis of type, the report segments the global artificial intelligence market into digital assistance system, expert system, embedded system, automated robotic system, and artificial neural network. The expert system segment was at the forefront of growth in 2015, representing 44% of the overall market revenue and is poised to maintain its dominance until 2024. The growth of the segment can be attributed to the rising implementation of artificial intelligence across various sectors such as process control, monitoring, diagnosis, design, planning, and scheduling.

    Digital assistance is estimated to be the most promising segment in terms of revenue during the review period. The proliferation of portable computing devices such as tablets and smartphones is the primary factor propelling the growth of the segment. Based on application, deep learning held the lion's share of 21.6% in the global market in terms of value in 2015, closely trailed by smart robots. The demand for artificial intelligence in image recognition is likely to rise at a noteworthy rate over the forecast horizon.

    Domicile of a Raft of Leading Players to Fuel North America's Dominance

    North America was the major revenue contributor in 2015, accounting for approximately 38.0% of the overall market. The domicile of a large number of the leading technology firms enables early introduction and high acceptance of artificial intelligence in the region. Moreover, high government funding is playing a pivotal role in the technological development of artificial intelligence in the region. The widening scope of applications of the technology in various verticals, including media and advertising, retail, BFSI, consumer electronics, and automotive, is also contributing to the market's growth in North America. Owing to these factors, the region is expected to retain its leadership through 2024.

    Get More Information: http://www.transparencymarketresearch.com/sample/sample.php?flag=S&rep_id=4674

    On the other hand, the Middle East and Africa is anticipated to exhibit a remarkable CAGR of 38.2% during the forecast period, higher than any other region. Rapid technological innovations, including robotic automation, and the increasing implementation of concepts such as smart cities are boosting the adoption of artificial intelligence in the region. Ongoing infrastructure projects, such as the development of new airports, are rendering the market in MEA highly opportunistic.

    Some of the prominent participants in the global artificial intelligence market are Nuance Communications, MicroStrategy Inc., QlikTech International AB, Google Inc., IBM Corporation, Microsoft Corporation, Brighterion Inc., Next IT Corporation, IntelliResponse Systems Inc., and eGain Corporation.

    For more information on this press release visit: http://www.sbwire.com/press-releases/nextgen-frontier-for-artificial-intelligence-market-poised-to-achieve-significant-growth-in-the-years-to-come-826257.htm

    Media Relations Contact

    Rohit Bhisey
    Head
    Transparency Market Research
    Telephone: 518-618-1030
    Email: Click to Email Rohit Bhisey
    Web: http://www.transparencymarketresearch.com/artificial-intelligence-market.html


              Deep Learning Scientist/Senior Scientist - EXL - Jersey City, NJ   
    EXL Analytics offers an exciting, fast-paced and innovative environment, which brings together a group of sharp and entrepreneurial professionals who are eager...
    From EXL - Tue, 18 Apr 2017 00:09:55 GMT - View all Jersey City, NJ jobs
              (USA-DE-Dover) Principal Digital Product Manager   
    **About Us:** GE is the world's Digital Industrial Company, transforming industry with software-defined machines and solutions that are connected, responsive and predictive. Through our people, leadership development, services, technology and scale, GE delivers better outcomes for global customers by speaking the language of industry. GE offers a great work environment, professional development, challenging careers, and competitive compensation. GE is an Equal Opportunity Employer at http://www.ge.com/sites/default/files/15-000845%20EEO%20combined.pdf . Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law.
    **Role Summary:** We are looking for a Product Manager to work with key Artificial Intelligence (AI) and Deep Learning (DL) clinical partnership accounts to drive the partnership outcome and all aspects of governance and engagement execution. This individual will work with customer clinical data scientists and interface with internal GE software and data science teams to deliver high-quality software in a fast-paced, challenging, and creative environment.
    **Essential Responsibilities:**
    + Build the product plan to support SOW(s) and be responsible for customer and internal development to deliver against the product plan
    + Drive day-to-day technical activities at the customer site, partnering with customer technical resources
    + Publish GE standards to the customer, ensuring work done at the customer site meets requirements for integration into GE software for deployment and scale
    + Own the technical and user product specification for the feature area(s), including defining the stories and acceptance criteria for the epics of the feature area(s)
    + Work closely with software engineering and customer engagement teams to resolve technical and logistic issues
    + Work with customer engagement, product management and technical leads to translate solution concepts into specific technical deliverables and integrations
    + Define task priorities and drive release and sprint planning based on priorities
    + Develop and manage the product backlog to ensure deliverables meet schedules and milestones
    + Lead and drive process change and improvement
    + Foresee risks and create risk management solutions as appropriate
    + Correctly represent the urgency of issues and communicate and escalate issues appropriately
    + Proactively capture, track and drive all issues to closure
    + Regularly communicate project status to stakeholders and customers
    **Qualifications/Requirements:** Basic Qualifications:
    + BS in Computer Science, Electrical Engineering, Computer Engineering or equivalent; MS is desirable
    + Minimum of 10 years' experience in project/program/development of software products
    Eligibility Requirements:
    + Strong technical background and experience in driving software releases with cross-functional teams
    + Experience in developing and releasing software using Agile methodology
    + Prior experience in healthcare in technical product management and/or engineering roles
    + Legal authorization to work in the U.S. is required
    + Any offer of employment is conditioned upon the successful completion of a background investigation and drug screen
    **Desired Characteristics:**
    + MBA or Master's degree in Product Management, Marketing, Business Administration or related field
    + Degree in Medical Informatics
    + Degree and practice in a clinical discipline
    + 8+ years' experience in product management, product development or related field
    + Deep product management/marketing expertise, including: market trends/analysis, NPI process, product roadmap development, requirements, product life-cycle management
    + Healthcare product/industry/technical acumen
    + Leadership skills to lead teams and shape/lead growth vision and marketing strategy
    + Innovation – develop new ideas through collaboration and execute on creative ideas
    + Team oriented – ability to motivate and work well with diverse, cross-functional teams
    + Proven ability to work globally
    + Proven ability to influence and negotiate internally and with customers
    **Locations:** United States; Massachusetts, New York; Boston
    GE offers a great work environment, professional development, challenging careers, and competitive compensation. GE is an Equal Opportunity Employer at http://www1.eeoc.gov/employers/upload/eeoc_self_print_poster.pdf . Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law. GE will only employ those who are legally authorized to work in the United States for this opening. Any offer of employment is conditional upon the successful completion of a background investigation and drug screen.
              Vincent Granville posted a blog post   

              Andrei Macsin added a discussion to the group Analytic, Data Science and Big Data Jobs   

    Career Alert, June 23

    Job Spotlight: Software Developer - One Acre Fund; Genomic Systems Engineer - Kaiser Permanente; Data Scientist - Consumer Insights - Fossil Group; Senior Director for Institutional Analytics - Rollins College.
    Featured Jobs: Kenya Product Innovations Analyst - One Acre Fund; Senior Data Scientist - Spreemo Health; Engineer, Data Science, Audience Studio - NBCUniversal Media; Deep Learning Content Creator - NVIDIA; Analytics and Insights Specialist - Ramsey Solutions, A Dave Ramsey Company; Director, Marketing Analytics & Strategy - The Ad Council; Data Science Manager, Analytics - Facebook; Research Scientist - Spotify; Data Scientist - Analytics - Booking.com; Data Scientist, Risk Analytics - John Deere; Data Scientist - Reynolds Consumer Products; Program Manager, Data Analysis & Reporting - MasterCard; Decision Science Analyst II - USAA; Data Scientist - Tapjoy; Research Scientist, Sr - Yahoo; Healthcare Data Scientist - Philips; Senior Data Scientist - Warner Bros. Entertainment; Data Scientist - ShareThis; Production Cytometry Lead, Verily Life Sciences - Google; Data Science Manager, Analytics - Tumblr; Data Scientist, Business Strategy - Starbucks. Check out the most recent jobs on AnalyticTalent.com.
    Featured Blog: Six Great Articles About Quantum Computing and HPC. This resource is part of a series on specific topics related to data science: regression, clustering, neural networks, deep learning, Hadoop, decision trees, ensembles, correlation, outliers, regression, Python, R, Tensorflow, SVM, data reduction, feature selection, experimental design, time series, cross-validation, model fitting, dataviz, AI and many more. Read full article.
    Upcoming DSC Webinars and Resources: A Language for Visual Analytics - DSC Webinar, July 25; Self-Service Machine Learning - DSC Webinar, July 18; Maximize value of your IoT Data - DSC Webinar, June 29; SPSS Statistics to Predict Customer Behavior - DSC Webinar, June 27; The Data Incubator Presents: Data Science Foundations; Databricks & The Data Incubator: Apache Spark Programming for DS; Artificial Intelligence Blockchain Bootcamp with Job placement NYC; Guarantee yourself a data science career - Springboard; Data Science Boot Camp: Pharma and Healthcare - RxDataScience; Python Data Science Training - Accelebrate; Online Executive PGP in Data Science, Business Analytics & Big Data. See More

              (USA-CA-Mountain View) Big Data Engineer - Distributed Systems, Product   
    Big Data Engineer - Distributed Systems, Product - Skills Required: Big Data, Distributed Systems, HPC, Machine Learning/Deep Learning Algorithms, Java/C/C++/SQL/Scala, HDFS/Spark/Cassandra/TensorFlow
    If you are a Big Data Engineer with experience, please read on! With an office in Mountain View, we are the creators of a hyper-acceleration technology for popular big data processing engines using both hardware and software accelerators! Currently, we're looking for a Big Data Engineer with distributed systems experience to join our tight-knit team in Mountain View! The ideal candidate will have experience working at one or more of the following companies: IBM, Spark Technology Center, MapR, Hortonworks, and/or Cloudera.
    **Top Reasons to Work with Us**
    - Competitive compensation package (salary + equity)
    - Potential work sponsorship (if needed)
    - Work with cutting edge tech!
    **What You Will Be Doing**
    - Develop and maintain the high performance libraries in our technology stack
    - Work with our product engineering team on our hyper-acceleration (CPUs + GPUs + FPGAs) platform for Spark, Machine Learning and AI workloads
    - Architect, design, develop and release the advanced infrastructure of our core big data processing engines
    - Build, fine-tune and optimize the performance of our technology stack
    **What You Need for this Position**
    - Experience with big data engineering
    - Experience with distributed systems
    - Experience with HPC programming (plus)
    - Experience working at one or more of the following companies: IBM, Spark Technology Center, MapR, Hortonworks, and/or Cloudera
    - Experience with Java, C/C++, SQL, and/or Scala
    - Familiarity with C++11 is a plus
    - Familiarity with machine learning/deep learning algorithms
    - Experience with big data frameworks, such as HDFS, Spark, Cassandra, or TensorFlow
    - Experience with query optimization and product performance improvement
    - Experience with MySQL preferred
    - Experience with Linux kernel preferred
    - BS, MS, or PhD in Computer Science and/or Computer Engineering
    **What's In It for You**
    - Up to $170K salary + equity (DOE)
    - Equity
    - Health benefits
    So, if you are a Big Data Engineer with experience, please apply today! Applicants must be authorized to work in the U.S.
    **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law.
    **Your Right to Work** - In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire.
    *Big Data Engineer - Distributed Systems, Product* *CA-Mountain View* *EO1-1382518*
              Baidu Research Announces Next Generation Open Source Deep Learning Benchmark Tool   
    Baidu Research, a division of Baidu Inc. (NASDAQ: BIDU), today unveiled the next generation of DeepBench, the open source benchmark tool, which now includes the measurement of deep learning inference across different hardware platforms in addition to training.
              AI Data Scientist - NLP/Deep Learning!   
    CA-Sunnyvale, Artificial Intelligence touches many facets of our daily lives. It is changing the way our world communicates as well as changing the way we shop. Many retailers are already adding AI into their e-commerce processes, from virtual assistants chat bots to customized shopping experiences. The possibilities for combining AI and e-commerce are almost endless. As we advance AI to higher levels, there is
              Drive.ai Raises $50 Million   
    Drive.ai, another start-up in the burgeoning tech sector of autonomous self-driving cars, raised an additional $50 million in funding for R&D, following $12 million financed last year. The company, with its group of about 70 computer scientists and engineers, makes kits for retrofitting cars currently on the streets, allowing them to be self-driven amongst the rest of us. Andrew Ng, one of the top AI researchers, also joins the board of the company; he has previously stated that we shouldn't worry about our robot overlords, just our wallets. Take it as you will: Ng is married to the co-founder and president of the company. Check out a rainy night test drive of their tech. Drive.ai is a Silicon Valley startup founded by former lab mates out of Stanford University's Artificial Intelligence Lab. We are creating AI software for autonomous vehicles using deep learning, which we believe is the key to the future of transportation. We founded Drive.ai because we believe that this technology has the potential to save lives and transform industries, and we think this is the right team to do it.
              Data Scientist - Business & Decision - Brussels   
    Are you passionate about data? Do you like developing algorithms for recommendation engines, building time series models or employing deep learning methods on information? Are you familiar with concepts like regression analysis, supervised and unsupervised learning, or TensorFlow? If the answer is "all of the above", you are the right person to apply to this job! Function: As a Data Scientist @ B&D your job will be to propose and develop advanced analytical models for a series...
              Lead Software Engineer - Tenstorrent - Toronto, ON   
    Our vision is that the next quantum leap in efficacy of deep learning will be unlocked by joint optimization of both algorithms and the hardware on which they...
    From Tenstorrent - Mon, 24 Apr 2017 19:28:13 GMT - View all Toronto, ON jobs
              Processor Architect/Designer - Tenstorrent - Toronto, ON   
    Our vision is that the next quantum leap in efficacy of deep learning will be unlocked by joint optimization of both algorithms and the hardware on which they...
    From Tenstorrent - Mon, 24 Apr 2017 19:27:36 GMT - View all Toronto, ON jobs
              Deep Learning Expert - Tenstorrent - Toronto, ON   
    Our vision is that the next quantum leap in efficacy of deep learning will be unlocked by joint optimization of both algorithms and the hardware on which they...
    From Tenstorrent - Fri, 31 Mar 2017 18:03:58 GMT - View all Toronto, ON jobs
              International Journal of Computer Vision and Image Processing (IJCVIP) Volume 7, Issue 2   

    The International Journal of Computer Vision and Image Processing (IJCVIP) provides the latest industry findings useful to academicians, researchers, and practitioners regarding the latest developments in the areas of science and technology of machines, imaging, and their related applications, systems, and tools. This journal contains unique articles of original, innovative research in computer science, education, security, government, engineering disciplines, software industry, vehicle industry, medical industry, and other fields. This issue contains the following articles:
    A Deep Learning Approach for Hepatocellular Carcinoma Grading
    Face Match for Family Reunification: Real-World Face Image Retrieval
    Foreign Circular Element Detection in Chest X-Rays for Effective Automated Pulmonary Abnormality Screening
    Performance Analysis of Anisotropic Diffusion Based Colour Texture Descriptors in Industrial Applications
    Feature Selection of Interval Valued Data Through Interval K-Means Clustering
    Word-Level Multi-Script Indic Document Image Dataset and Baseline Results on Script Identification


              Deep Learning in Automotive Software   
    Deep-learning-based systems are becoming pervasive in automotive software. So, in the automotive software engineering community, the awareness of the need to integrate deep-learning-based development with traditional development approaches is growing, at the technical, methodological, and cultural levels. In particular, data-intensive deep neural network (DNN) training, using ad hoc training data, is pivotal in the development of software for vehicle functions that rely on deep learning. Researchers have devised a development lifecycle for deep-learning-based development and are participating in an initiative, based on Automotive SPICE (Software Process Improvement and Capability Determination), that's promoting the effective adoption of DNN in automotive software.
              Scientists made an AI that can read minds   
    Whether it's using AI to help organize a Lego collection or relying on an algorithm to protect our cities, deep learning neural networks seemingly become more impressive and complex each day. Now, however, some scientists are pushing the capabilities...
              DeepID-Net: Object Detection with Deformable Part Based Convolutional Neural Networks   
    In this paper, we propose deformable deep convolutional neural networks for generic object detection. This new deep learning object detection framework has innovations in multiple aspects. In the proposed new deep architecture, a new deformation constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraint and penalty. A new pre-training strategy is proposed to learn feature representations more suitable for the object detection task and with good generalization capability. By changing the net structures, training strategies, adding and removing some key components in the detection pipeline, a set of models with large diversity are obtained, which significantly improves the effectiveness of model averaging. The proposed approach improves the mean averaged precision obtained by RCNN [1], which was the state of the art, from 31 to 50.3 percent on the ILSVRC2014 detection test set. It also outperforms the winner of ILSVRC2014, GoogLeNet, by 6.1 percent. Detailed component-wise analysis is also provided through extensive experimental evaluation, which provides a global view for people to understand the deep learning object detection pipeline.
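    The def-pooling idea boils down to "take the maximum part score, penalized by how far the part has drifted from its expected location". Below is a deliberately simplified single-part version; the paper learns its penalty maps, whereas the quadratic penalty here is an assumption for illustration:

```python
import torch

def def_pool(score_map, anchor, weight=0.1):
    """Toy deformation-constrained pooling: max part score over all
    positions, penalized by squared distance from the part's anchor.
    score_map: (H, W) tensor of part responses; anchor: (row, col)."""
    h, w = score_map.shape
    rows = torch.arange(h).view(-1, 1).float()
    cols = torch.arange(w).view(1, -1).float()
    penalty = weight * ((rows - anchor[0]) ** 2 + (cols - anchor[1]) ** 2)
    return (score_map - penalty).max()

scores = torch.rand(16, 16)             # part scores from a conv layer
print(def_pool(scores, anchor=(8, 8)))  # high only when a strong response
                                        # appears near the expected location
```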
              Face Verification via Class Sparsity Based Supervised Encoding   
    Autoencoders are deep learning architectures that learn feature representations by minimizing the reconstruction error. Using an autoencoder as the baseline, this paper presents a novel formulation for a class sparsity based supervised encoder, termed CSSE. We postulate that features from the same class will have a common sparsity pattern/support in the latent space. Therefore, in the formulation of the autoencoder, a supervision penalty is introduced as a joint-sparsity promoting $l_{2,1}$-norm. The formulation of CSSE is derived for a single hidden layer and it is applied to multiple hidden layers using a greedy layer-by-layer learning approach. The proposed CSSE approach is applied to learning face representations, and verification experiments are performed on the LFW and PaSC face databases. The experiments show that the proposed approach yields improved results compared to autoencoders and comparable results with state-of-the-art face recognition algorithms.
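    The supervision penalty is easy to state in code. Below is a sketch of the per-class $l_{2,1}$ term (not the authors' implementation; the sizes are arbitrary), which would be added to the autoencoder's reconstruction loss:

```python
import torch

def l21_norm(z):
    """Joint-sparsity promoting l_{2,1} norm: the l2 norm over a class's
    codes is taken per latent dimension, then summed, so latent dimensions
    are encouraged to switch off together for the whole class."""
    return z.norm(p=2, dim=0).sum()

def csse_penalty(latent, labels):
    """Sum of per-class l_{2,1} norms over the latent codes."""
    return sum(l21_norm(latent[labels == c]) for c in labels.unique())

latent = torch.randn(32, 64, requires_grad=True)  # 32 samples, 64-dim codes
labels = torch.randint(0, 4, (32,))
loss = csse_penalty(latent, labels)  # add to the reconstruction loss
loss.backward()
```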
              Rank Pooling for Action Recognition   
    We propose a function-based temporal pooling method that captures the latent structure of the video sequence data - e.g., how frame-level features evolve over time in a video. We show how the parameters of a function that has been fit to the video data can serve as a robust new video representation. As a specific example, we learn a pooling function via ranking machines. By learning to rank the frame-level features of a video in chronological order, we obtain a new representation that captures the video-wide temporal dynamics of a video, suitable for action recognition. Other than ranking functions, we explore different parametric models that could also explain the temporal changes in videos. The proposed functional pooling methods, and rank pooling in particular, are easy to interpret and implement, fast to compute and effective in recognizing a wide variety of actions. We evaluate our method on various benchmarks for generic action, fine-grained action and gesture recognition. Results show that rank pooling brings an absolute improvement of 7-10% over the average pooling baseline. At the same time, rank pooling is compatible with and complementary to several appearance and local motion based methods and features, such as improved trajectories and deep learning features.
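    Rank pooling is commonly realized by fitting a linear ranker, such as a linear support vector regressor, to time-smoothed frame features and keeping its weight vector as the video descriptor. A sketch under that assumption (not the paper's exact formulation):

```python
import numpy as np
from sklearn.svm import LinearSVR

def rank_pool(frame_features):
    """Fit a linear function whose scores increase with time over the
    running-averaged frame features; its parameter vector becomes the
    fixed-length video representation."""
    # Running average smooths per-frame noise before ranking.
    smoothed = np.cumsum(frame_features, axis=0)
    smoothed /= np.arange(1, len(frame_features) + 1)[:, None]
    times = np.arange(len(smoothed), dtype=float)  # chronological targets
    svr = LinearSVR(C=1.0, max_iter=10000).fit(smoothed, times)
    return svr.coef_  # one vector per video, usable by any classifier

video = np.random.rand(120, 50)  # 120 frames, 50-dim features per frame
print(rank_pool(video).shape)    # (50,)
```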
              Neural Networks (Deep Learning) and Intuition   
    Recently, neural network technologies have made a furious leap forward, one that society has yet to notice.
              Oxalide Academy: Morning Tech dedicated to big data   

    Chaptered overview video on big data: the key concepts; the key stages of Big Data projects and the technologies to use (storage, ingestion, ...); the challenges of Big Data architectures (lambda architecture, ...); artificial intelligence (machine learning, deep learning, ...); and a big data use case on [...]

    The post Oxalide Academy: Morning Tech dedicated to big data appeared first on Oxalide - A Claranet Group Company.


              Frost & Sullivan Applauds the Unparalleled Accuracy of Deep Instinct's Endpoint and Mobile Security Solution   

    LONDON, June 29, 2017 /PRNewswire/ -- Based on its recent analysis of the endpoint and mobile security market for critical national infrastructure, Frost & Sullivan recognizes Deep Instinct with the 2017 Global Frost & Sullivan Award for Technology Innovation. Deep Instinct's...



              NVIDIA Takes AI From Research to Production in New Work with Volvo, VW, ZF, Autoliv, HELLA     

    Furthering its growth in the European automotive market, NVIDIA today unveiled a series of collaborations with five of the continent’s key players to move AI technology into production. Speaking at the Automobil Elektronik Congress, an annual gathering outside Stuttgart focused on Europe’s auto industry, NVIDIA founder and CEO Jensen Huang described AI-based deep learning technology […]

    The post NVIDIA Takes AI From Research to Production in New Work with Volvo, VW, ZF, Autoliv, HELLA   appeared first on The Official NVIDIA Blog.


              How the Human Brain Project Maps the Brain Faster with Deep Learning   

    The Human Brain Project has ambitions to advance brain research, cognitive neuroscience and other brain-inspired sciences like few other projects before it. Created in 2013 by the European Commission, the project’s aims include gathering, organizing and disseminating data describing the brain and its diseases, and simulating the brain itself.¹ To do so, they’re going to […]

    The post How the Human Brain Project Maps the Brain Faster with Deep Learning appeared first on The Official NVIDIA Blog.


              Information Technology - Mobile App Developer   
    CONG TY TNHH GIAI PHAP PHAT TRIEN DOANH SO THONG - Ho Chi Minh City - Native is a plus. Description: Program and build the company's systems for mobile applications and Big Data + deep learning systems. Benefits - ... with 80% base salary - Training in skills and service knowledge before starting work. Opportunities for career advancement, to become a leader or team head...
              (Associate) Data Scientist for Deep Learning Center of Excellence - SAP - Sankt Leon-Rot   
    Build machine learning models to solve real problems working with real data. Software design and development....
    Found at SAP - Fri, 23 Jun 2017 08:50:58 GMT - View all Sankt Leon-Rot jobs
              The Twitter Shareholders Meeting   


    Me and Jack Dorsey, Twitter CEO


    The Twitter Annual Meeting was held yesterday at the Twitter headquarters in San Francisco on Market Street. Jack Dorsey, the CEO, spoke about the goals for the company.
    Areas covered were:
    Improving Timeline
    Notifications
    Safety, with regard to transparency, better tools, and deep learning & machine learning
    Dorsey talked about three major improvements:
    1. Twitter Lite – for Safari or Chrome on mobile devices
      30% faster load times
      Uses less than 1 meg of data
      Looks identical to the app
      Can turn on Data Saver – all images, videos, gifs blurred
      Reduce data usage by 70%
    2. Explore tab – brings everything together
      Trends, Moments, Live Events
    3. Mute – Safety control
      Notifications – muted words can be paused and hidden from view
    Anthony Noto, Twitter’s CFO, then spoke. He talked about Live Streaming Video and expansion into:
    Sports
    News
    Entertainment
    ESports
    He said that in Q1, there were 800 hours of live video content, over 450 events, with more than 200 premium content partners.
    All the shareholder resolutions passed except the one that proposed that Twitter become a user owned company. The proposal said:
    “A community-owned Twitter could result in new and reliable revenue streams, since we, as users, could buy in as co-owners, with a stake in the platform’s success. Without the short-term pressure of the stock markets, we can realize Twitter’s potential value, which the current business model has struggled to do for many years. We could set more transparent accountable rules for handling abuse. We could re-open the platform’s data to spur innovation. Overall, we’d all be invested in Twitter’s success and sustainability. Such a conversion could also ensure a fairer return for the company’s existing investors than other options.”
    The questions and answers were related to looking into a Twitter Prime type service where some users could pay for a premium service, and utilizing artificial intelligence through machine learning and deep learning for timelines and notifications.

              Brainy Voices: Innovative Voice Creation Based on Deep Learning by Acapela Group Research Lab   
    MONS, Belgium, June 29, 2017 /PRNewswire/ -- Neural Networks have revolutionized artificial vision and automatic speech recognition. This machine learning revolution is holding its promises as it enters the Text to Speech arena. Acapela Group is actively working on Deep Neural...
              Lucid Planet Radio with Dr. Kelly: Encore: The Singularity is HERE: Interface the Future, Explore Paradox, and Recode Mythology with Mitch Schultz    
    Guest: Today, visionary thinker, futurist and filmmaker Mitch Schultz joins Dr. Kelly to explore humanity as we approach the technological singularity. What is the singularity, and what does it mean for humanity? Explore a transdisciplinary approach at the intersection of the arts, cognitive psychology, deep learning, and philosophy. Guided by Consciousness, Evolution, and Story. Beginning with what conscious state of being (terrestrial and universal) is perceiving. Followed the one consta ...
              Lucid Planet Radio with Dr. Kelly: The Singularity is HERE: Interface the Future, Explore Paradox, and Recode Mythology with Mitch Schultz    
    Episode: Today, visionary thinker, futurist and filmmaker Mitch Schultz joins Dr. Kelly to explore humanity as we approach the technological singularity. What is the singularity, and what does it mean for humanity? Explore a transdisciplinary approach at the intersection of the arts, cognitive psychology, deep learning, and philosophy. Guided by Consciousness, Evolution, and Story. Beginning with what conscious state of being (terrestrial and universal) is perceiving. Followed the one consta ...
              Computer Vision Research Engineer   
    College Park: If you are a Computer Vision Researcher with experience, please read on!
    Top Reasons to Work with Us: More info coming soon!
    What You Need for this Position: at least 3 years of experience and knowledge of:
    - Computer Vision
    - Machine Learning
    - Deep Learning
    - OpenCV
    - CUDA
    - OpenGL
    So, if you are a Computer Vision Researcher with experience, please apply today!
              Computer Vision Engineer - C++, Python, OpenCV, CUDA   
    Seattle: If you are a Computer Vision Engineer with experience, please read on!
    Top Reasons to Work with Us:
    1. We are a well-funded startup pushing the limits of Computer Vision and Deep Learning, and we are led by a team of industry experts with a proven history of delivering incredibly high-tech products from the research stage to commercialization!
    2. This is an opportunity to join a scientific team in
              Systems Software Engineer   
    CA-Santa Clara, We are now looking for a Systems Software Engineer: Nvidia’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the
              Manager of Global Engineering Compute Services   
    CA-Santa Clara, We are now looking for a Manager of Global Engineering Compute Services: Nvidia has continuously reinvented itself over two decades. Our invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing. NVIDIA is a “learning machine” that
              Sr Software Engineer (eCommerce)   
    CA-Santa Clara, We are now looking for a Sr. Software Engineer (eCommerce): Nvidia has continuously reinvented itself over two decades. Our invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing. NVIDIA is a “learning machine” that constantly evolve
              Deep Learning & Computer Vision Software Engineer   
    CA-Santa Clara, We are now looking for a Deep Learning & Computer Vision Software Engineer: Academic and commercial groups around the world are using GPUs to apply deep learning techniques that drive computer vision breakthroughs. Nvidia is seeking computer vision engineers familiar with applying deep learning to drive innovative computer vision usages on our Mobile devices. This position provides the chance to d
              Senior Deep Learning Architect   
    CA-Santa Clara, We are now looking for a Senior Deep Learning Architect: NVIDIA is seeking elite architects to design hardware accelerator and processor architectures that enable state of the art machine learning and data analytics algorithms and applications on our next-generation mobile, embedded and datacenter platforms. This position offers the opportunity to have a real impact in a dynamic, technology-focuse
              UI Developer for Cloud Computing   
    CA-Santa Clara, We are now looking for a UI Developer: Nvidia has continuously reinvented itself over two decades. Our invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing. NVIDIA is a “learning machine” that constantly evolves by adapting to
              Solutions Architect - Autonomous Driving - NVIDIA - Santa Clara, CA   
    Be an internal champion for Deep Learning and HPC among the Nvidia technical community. You will assist field business development in guiding the customer...
    From NVIDIA - Fri, 23 Jun 2017 07:33:44 GMT - View all Santa Clara, CA jobs
              50 Years of the Turing Award   
    The ACM knows how to throw a party, a two-day celebration of the 50th anniversary of the Turing Award. Every recipient got a deck of Turing Award playing cards and the ACM unveiled a new bust of Turing perfect for selfies.

    The conference featured a number of panels on challenges across computer science, from privacy to quantum. Deep learning formed a common thread: not only did it have its own panel, but the Moore's Law panel talked about specialized hardware for learning, and deep learning raised concerns on the privacy and ethics panels. Even the quantum computing panel used deep learning as an example of a technology that succeeded once the computing power was there.

    The deep learning panel focused on what deep learning can't do, particularly semantics, abstraction and learning from small or medium amounts of data. Deep networks are a tool in the toolbox, but we need more. My favorite line came from Stuart Russell, who worried about "Grad Student Descent": research focused on tuning parameters to optimize learning in different regimes, as opposed to developing truly new approaches. For the theory folks, there were questions such as how powerful deep neural nets are (circuit complexity) and whether we can simply find the best program for some data (P vs NP).

    The "Moore's Law is Really Dead" panel joked about the Monty Python parrot (it's resting). For the future, post-CPU software will need to know about hardware, we'll have more specialized and programmable architectures and we'll have to rely on better algorithms for improvement (theory again). Butler Lampson said "The whole reason the web works is because it doesn't have to." I don't remember how that fit into the discussion but I do like the quote.

    The quantum panel acknowledged that we don't quite have the algorithms yet, but we will soon have enough qubits to experiment and find ways that quantum can help.

    You can watch the panels yourself, but the real fun comes from spending time with the leaders of the field, and not just theory but across computer science.
              Frighteningly accurate ‘mind reading’ AI reads brain scans to guess what you’re thinking   
    Researchers have developed a deep learning neural network that's able to read complex thoughts based on brain scans.
              Comment on How to Check-Point Deep Learning Models in Keras by Xiufeng Yang   
    Hi, great post! I want to save the training model not the validation model, how to set the parameters in checkpoint()?
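    For context, ModelCheckpoint is Keras's checkpointing callback, and its monitor argument decides which quantity triggers a save. A minimal, hedged sketch of one way to checkpoint on training loss rather than validation loss (the file name and settings here are illustrative):

        # sketch: save weights whenever the *training* loss improves;
        # monitor='val_loss' would track the validation set instead
        from keras.callbacks import ModelCheckpoint

        checkpoint = ModelCheckpoint('weights.best.h5',
                                     monitor='loss',      # training loss
                                     save_best_only=True,
                                     mode='min',
                                     verbose=1)

        # hypothetical model/data; pass the callback to fit() as usual:
        # model.fit(X, y, epochs=50, batch_size=32, callbacks=[checkpoint])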
              Comment on Object Recognition with Convolutional Neural Networks in the Keras Deep Learning Library by Bruce Wind   
    Hi, Jason, thanks for sharing. I tested the code you provided, but my machine does not support CUDA, so it runs very slowly (half an hour per epoch). Since you have such a powerful computer, could you please show the results after hundreds or thousands of epochs? Thanks.
              Comment on How to Setup a Python Environment for Machine Learning and Deep Learning with Anaconda by Jason Brownlee   
    Very nice Candida.
              Comment on How to Setup a Python Environment for Machine Learning and Deep Learning with Anaconda by Candida   
    scipy: 0.19.0 numpy: 1.12.1 matplotlib: 2.0.2 pandas: 0.20.1 statsmodels: 0.8.0 sklearn: 0.18.1
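    These figures read like the output of an environment-check step; a minimal sketch of a script that would print them (the tutorial's actual script may differ):

        # print the versions of the scientific Python stack
        import scipy, numpy, matplotlib, pandas, statsmodels, sklearn

        for name, module in [('scipy', scipy), ('numpy', numpy),
                             ('matplotlib', matplotlib), ('pandas', pandas),
                             ('statsmodels', statsmodels), ('sklearn', sklearn)]:
            print('%s: %s' % (name, module.__version__))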
              Go   
    ...
    Since Go lacks a simple evaluation function based mainly on counting material, attempts to apply techniques and algorithms similar to those used in chess were less successful. The breakthrough in computer Go was accomplished by Monte-Carlo tree search and deep learning.
    19×19 Go board
    Progress
    Monte-Carlo Go
    After early trials to apply Monte Carlo methods to a Go-playing program by Bernd Brügmann in 1993, developments since the mid-2000s by Bruno Bouzy, and by Rémi Coulom, who coined the term Monte-Carlo Tree Search, in conjunction with UCT (Upper Confidence bounds applied to Trees) introduced by Levente Kocsis and Csaba Szepesvári, led to a breakthrough in computer Go.
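    At the heart of UCT is the UCB1 rule for choosing which child node to descend into; a minimal sketch (Node is a hypothetical class with wins, visits and children, and every child is assumed to have been visited at least once, since unvisited children are expanded first):

        import math

        def uct_select(node, c=1.4):
            """Pick the child balancing exploitation (win rate) and exploration."""
            log_n = math.log(node.visits)
            return max(node.children,
                       key=lambda ch: ch.wins / ch.visits
                                      + c * math.sqrt(log_n / ch.visits))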
    CNNs
    As mentioned by Ilya Sutskever and Vinod Nair in 2008, convolutional neural networks are well suited for problems with a natural translation invariance, such as object recognition. Go has some translation invariance, because if all the pieces on a hypothetical Go board are shifted to the left, then the best move also shifts (with the exception of pieces on the boundary of the board). Many applications of neural networks to Go have used convolutional neural networks, such as those of Nicol N. Schraudolph et al., Erik van der Werf et al., and Markus Enzenberger, among others.
    In 2014, two teams independently investigated whether deep convolutional neural networks could be used to directly represent and learn a move evaluation function for the game of Go. Christopher Clark and Amos Storkey trained an 8-layer convolutional neural network by supervised learning from a database of human professional games, which, without any search, defeated the traditional search program GNU Go in 86% of the games. In their paper Move Evaluation in Go Using Deep Convolutional Neural Networks, Chris J. Maddison, Aja Huang, Ilya Sutskever, and David Silver report that they trained a large 12-layer convolutional neural network in a similar way to beat GNU Go in 97% of the games, matching the performance of a state-of-the-art Monte-Carlo tree search that simulates a million positions per move.
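    For illustration only, a much shallower Keras sketch in the spirit of such move-evaluation networks (the feature planes and layer sizes are assumptions, not the papers' architectures):

        # input: 19x19 board with 8 assumed feature planes;
        # output: a probability for each of the 361 board points
        from keras.models import Sequential
        from keras.layers import Conv2D, Flatten, Dense

        model = Sequential()
        model.add(Conv2D(64, (5, 5), padding='same', activation='relu',
                         input_shape=(19, 19, 8)))
        for _ in range(3):  # far shallower than the 8- and 12-layer nets above
            model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
        model.add(Flatten())
        model.add(Dense(19 * 19, activation='softmax'))
        model.compile(optimizer='adam', loss='categorical_crossentropy')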
    AlphaGo
    In 2015, a team affiliated with Google DeepMind around David Silver, Aja Huang, Chris J. Maddison, and Demis Hassabis, supported by Google researchers John Nham and Ilya Sutskever, built a Go-playing program dubbed AlphaGo, combining Monte-Carlo tree search with two 12-layer networks: a “policy network” to select the next move and a “value network” to predict the winner of the game. The neural networks were trained on 30 million moves from games played by human experts, until they could predict the human move 57 percent of the time. AlphaGo achieved a huge winning rate against other Go programs and defeated European Go champion Fan Hui in October 2015 with a 5-0 score. From March 9 to 15, 2016, AlphaGo won a $1M five-game challenge match in Seoul versus Lee Sedol, 4-1.
    During The Future of Go Summit, May 23 to 27, 2017 in Wuzhen, China, AlphaGo won a three-game match versus the current world No. 1 ranked player, Ke Jie. After the Summit, AlphaGo retired from competitive play, while DeepMind continues AI research in other areas.
    Quotes
    Quote by Gian-Carlo Pascutto in 2010 :
    There is no significant difference between an alpha-beta search with heavy LMR and a static evaluator (current state of the art in chess) and an UCT searcher with a small exploration constant that does playouts (state of the art in go).
    The shape of the tree they search is very similar. The main breakthrough in Go the last few years was how to backup an uncertain Monte Carlo score. This was solved. For chess this same problem was solved around the time quiescent search was developed.
    Both are producing strong programs and we've proven for both the methods that they scale in strength as hardware speed goes up.
    So I would say that we've successfully adopted the simple, brute force methods for chess to Go and they already work without increases in computer speed. The increases will make them progressively stronger though, and with further software tweaks they will eventually surpass humans.

    Computer Olympiads
    1st Computer Olympiad, London 1989
    ...
    18th Computer Olympiad, Leiden 2015
    19th Computer Olympiad, Leiden 2016
    20th Computer Olympiad, Leiden 2017

    See also
    Search

          My Structured News Experiment Receives Funding from the Google DNI Innovation Fund   

    Beaming with joy, I can announce that my project proposal was indeed selected for the second round of the Digital News Initiative (DNI). For anyone interested, I am sharing the first part of my application in advance.

    Project title: Structured News: Atomizing the news into a browsable knowledge base for structured journalism

    Brief overview:

    Structured news explores a new space in news presentation and consumption.

    The idea is that all news events are broken down into their component pieces and organized into a coherent repository. As tiny, contextualized bits of information, these news "atoms" and "particles" become findable, investigable and recombinable as individual units.

    This in turn makes personal media feasible, where a story can be customized for each reader depending on her device, time budget and information needs, effectively answering the unbundling of news.

    For the news ecosystem as a whole, structured data could become a new building block enabling a Lego-like flexibility in the newsroom.

    Project description:

    This proposal takes much inspiration from the Living Stories project, led by Google, the New York Times and the Washington Post, and builds upon their approach to structured journalism.

    A living story can be thought of as a continuously updated news resource that is capable of reacting to multifaceted story developments under varying information preferences. This is made possible by treating journalistic content as structured data and structured data as journalistic content.

    By "atomizing the news" we will be transforming a news archive into a fine-grained web of journalistic assets to be repurposed in different and new contexts. Technically, a number of algorithms will split our text corpus into small, semantic chunks, be they a name, location, date, numeric fact, citation or some such concept. These "atomic news particles" will then be identified, refined and put into an optimal storage format, involving tasks such as information extraction, named entity recognition and resolution.
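    As one possible implementation of the extraction step, a hedged sketch assuming the spaCy library and its German model (the model name and entity labels are assumptions, not part of the proposal):

        import spacy

        nlp = spacy.load('de_core_news_sm')  # assumed German model name

        def atomize(article_text):
            """Split an article into candidate news 'atoms': named entities."""
            doc = nlp(article_text)
            return [(ent.text, ent.label_) for ent in doc.ents]

        # atomize("Angela Merkel traf Barack Obama in Berlin.")
        # -> [('Angela Merkel', 'PER'), ('Barack Obama', 'PER'), ('Berlin', 'LOC')]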

    For the seminal Living Stories experiment, all stories had to be labeled by hand. This prototype project, in contrast, will try the automation route. Ideally, the two approaches would be blended into a hybrid form with more editorial input.

    The key deliverable will be the structured journalism repository accumulated over time, with all information organized around the people, places, organizations, products, etc. named within news stories: facts about these entities, relationships between them, and their roles with respect to news events and story developments.

    To make this knowledge base easily browsable, I'd like to propose a faceted search user interface. Faceted search allows users to explore a multi-dimensional information space by combining search queries with multiple filters to drill down along various dimensions, and is therefore an optimal navigation vehicle for our purposes.
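    To make the mechanics concrete, a toy in-memory illustration of faceted filtering (a real deployment would presumably sit on a search engine with facet/aggregation support):

        from collections import Counter

        stories = [  # stand-in news atoms with three facet dimensions
            {'topic': 'elections', 'person': 'Merkel', 'year': 2017},
            {'topic': 'elections', 'person': 'Schulz', 'year': 2017},
            {'topic': 'economy',   'person': 'Merkel', 'year': 2016},
        ]

        def facet_search(items, **filters):
            """Return matching items plus facet counts for further drill-down."""
            hits = [s for s in items
                    if all(s.get(k) == v for k, v in filters.items())]
            facets = {key: Counter(h[key] for h in hits)
                      for key in ('topic', 'person', 'year')}
            return hits, facets

        hits, facets = facet_search(stories, person='Merkel')
        # facets['topic'] -> Counter({'elections': 1, 'economy': 1})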

    Specific outcome:

    On the publishers' side, the proposed infrastructure would help build up newsroom memory, maximize the shelf life of content and provide the ultimate building blocks for novel news offerings and experiments. It must be emphasized that any news business created out of structured data is virtually safe from content theft because its user experience cannot be replicated without also copying the entire underlying database.

    On the consumers' side, through structured journalism today's push model of news effectively turns into more of a pull, on-demand model. Up-to-date information is increasingly sought out exactly when it is needed and in just the right detail, not necessarily when it's freshly published nor in a one-size-fits-all news package. Essentially this implies transferring control over content from publishers to consumers. Product innovation on the users' behalf would be completely decoupled from innovation and experimentation in the newsroom.

    Broader impact:

    For news consumers I could see two major implications in user experience, honoring the readers' time and tending to their curiosity:

    Today, readers who have been following the news are confronted with lots of redundant context duplicated across articles, whereas new readers are given too little background. In the programming community we have a famous acronym: DRY! It stands for "don't repeat yourself" and is stated as "every piece of knowledge must have a single, unambiguous, authoritative representation within a system." DRY represents one of the core principles that makes our code readable, reusable and maintainable. Applied to journalism, I have high hopes it might reap the same benefits.

    The second implication I would call "just-in-time information". It means that information is pulled, not pushed, so that the reader can decide for herself how to consume the content. Choosing just the highlights or just the updates? Or following a specific event or topic? Or slicing and dicing through the whole news archive? It all requires more structure. Atomized news organizes the information around structure.

    As for a broader impact on the news ecosystem I could see more ideas of integrated software development environments be applied to the news editing process:

    For instance, for several decades source code was treated as a mere blob of characters. Only in the last 15 years have our source code editors started parsing program code while we type, understanding the meaning in a string of tokens and giving direct feedback. Journalism shares the same major raw material, namely text, so the same potential lies in journalism tools. Just imagine what would happen if we stopped squeezing articles into just a few database columns and instead saved as much information about a story as we like. Would increased modularity in reporting bring the same qualities to journalism that developers value so much in code, like reuse, refactoring, versioning and possibly even open source? I would hope that the approach of structured news will inspire more explorations in these directions.

    What makes your project innovative?

    This project will supply a prototype infrastructure for structured journalism.

    Because I am not a content provider myself, this project would be transformative for me if I could become a technology provider in this field. My goal is to classify approximately three million web pages, archived since 2007 by my own web crawler, into an ever richer network of structured stories. This repository could then serve as a playground to evaluate the ideas described above.

    Advanced natural language understanding will be most crucial to the problem. This project would help me familiarize myself further with state-of-the-art deep learning models like word vector and paragraph vector representations as well as long short-term memory (LSTM) neural networks.

    The technology built for this project will mainly include a streaming data processing pipeline for several natural language processing and machine learning tasks, including information extraction, recognition and resolution.
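    A minimal sketch of what such a streaming pipeline could look like, written as chained Python generators; the stage functions are hypothetical placeholders, not deliverables of this proposal:

        def read_articles(paths):
            for path in paths:
                with open(path, encoding='utf-8') as f:
                    yield f.read()

        def extract_entities(articles):
            for text in articles:
                yield {'text': text, 'entities': []}  # NER would fill this in

        def resolve_entities(records):
            for rec in records:
                rec['resolved'] = rec['entities']     # entity resolution step
                yield rec

        # lazily streams one article at a time through all stages
        pipeline = resolve_entities(
            extract_entities(read_articles(['a.txt', 'b.txt'])))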

    Key deliverables will be the structured journalism repository and faceted news browser mentioned before and in the project description.

    It's essential that this structured news browser be intuitive and useful to readers who have not mastered advanced search options. Different levels of detail cater to readers with different levels of interest. So ideally, the final product should remind users of a very flexible Wikipedia article. Imagine a single page with a story summary and a stream of highlighted updates. All content is organized and filterable by named people, places, events and so on. Every piece of content is weighted by importance and remembers what you have already read and which topics you are really interested in.

    Although international teams are experimenting on this very frontier, the German language poses some unique problems in text mining and therefore warrants a parallel effort.

    How will your Project support and stimulate innovation in digital news journalism? Why does it have an impact?

    Because of its Lego-like flexibility structured data is the ultimate building block, enabling quick experimentation and novel news products.

    It establishes a new kind of market place ripe for intensive collaboration and teamwork. Jeff Jarvis' rule "Do your best, link to the rest" could put a supply chain in motion in which more reuse, syndication and communication takes place both within a single news organization and across the industry. Just imagine a shared news repository taking inspiration from the open source culture in development communities like GitHub.

    A shared knowledge base of fact-checked microcontent initially would result in topic pages and info boxes being more efficient, therefore maximizing the investments in today's news archives. Similarly, structure acts as an enabling technology on countless other fronts, be it personalization, diversification, summarization on readers' behalf or process journalism, data journalism, robot journalism for the profession.


              Living Stories (continued)   

    Having applied to the Digital News Initiative Innovation Fund with no success, I'm posting my project proposal here in hope for a wider audience. If you are interested in atomized news and structured journalism and like to exchange ideas and implementation patterns, please send me an email.

    Project title: Living Stories (continued)

    Brief overview:

    With this proposal, I'd like to follow up on the Living Stories project, led by Google, the New York Times and the Washington Post, and build upon its approach to structured journalism.

    A living story can be thought of as a continuously updated news resource that is capable of reacting to multifaceted story developments under varying information preferences. It's like a Wikipedia where each and every word knows exactly whether it is a name, place, date, numeric fact, citation or some such concept. This "atomization of news" breaks a corpus of articles down into a fine-grained web of journalistic assets to be repurposed in different and new contexts. This in turn makes personal media feasible, where a story can be customized for each reader depending on her device, time budget and information needs, effectively answering the unbundling of news.

    Combining the latest natural language processing and machine learning algorithms, I'd love to build the technical infrastructure to automate these tasks. My proof of concept would turn nine years' worth of crawled web data into a rich network of living stories. If successful, microservice APIs will be offered for paid and public use.

    Detailed description:

    Living stories explore a new space in news presentation and consumption.

    To refresh our memory of what a living story actually was, I'll quickly summarize: it's a single-page web app, with a story summary and a stream of updates, where all content is organized and filterable by named people, places, events, and so on. Different levels of detail cater to readers with different levels of interest, so every piece of content is weighted by importance and remembers what you have already read.

    I'd like to highlight just two outcomes: (i) the DRY principle ("don't repeat yourself") says to honor the readers' time, and (ii) just-in-time information says to tend to the readers' curiosity.

    Today, readers who have been following the news are confronted with lots of redundant context duplicated across articles, whereas new readers are given too little background. In the programming community, we have a famous acronym: DRY! It stands for "don't repeat yourself" and is stated as "every piece of knowledge must have a single, unambiguous, authoritative representation within a system." DRY represents one of the core principles that makes code readable, reusable and maintainable. Applied to journalism, it might reap the same benefits.

    The second idea is called just-in-time information. It means that information is pulled, not pushed, so that the reader can decide for herself how to consume the content. Choosing just the highlights or just the updates, or following a specific event or topic, or slicing and dicing through the whole news archive, requires structure. Living stories organize the information around structure.

    What makes your project innovative?

    In many ways, this project is merely applying principles of modern software development, plus ideas out of lean production by Toyota, to the value stream of news organizations.

    While both disciplines work with text as their major raw material, we don't share the same powerful tools and processes yet. For example, why do news articles get squeezed into just a few database fields (e.g., headline, text, author, timestamp) when we could imagine so many more attributes for each story? What will happen if we stop handling articles as mere blobs of characters, but parse them like source code? Would increased modularity in reporting bring the same qualities to journalism that developers value so much in code, like reuse, refactoring, versioning, and possibly even open source?

    For the seminal living stories experiment held in 2010, all data seems to have been crafted by hand, a librarian's job. This project however will apply computer science to the task. Ideally, these approaches would be blended to a hybrid form with more editorial input.

    The technology built for this project will include a streaming data processing pipeline for information extraction, recognition and resolution. Advanced natural language understanding will be most crucial to the problem, which is why I'd love to gain more experience with state-of-the-art deep learning models like recurrent, recursive, convolutional, and especially long short-term memory (LSTM) neural networks, as well as word vector and paragraph vector representations.
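    As a small illustration of the word-vector part, a hedged sketch using the gensim library (corpus and hyperparameters are purely illustrative; the 'size' parameter is named 'vector_size' in newer gensim releases):

        from gensim.models import Word2Vec

        sentences = [['merkel', 'meets', 'obama'],
                     ['bundestag', 'debates', 'budget']]
        model = Word2Vec(sentences, size=100, window=5, min_count=1, workers=2)

        vector = model.wv['merkel']                # 100-dimensional word vector
        similar = model.wv.most_similar('merkel')  # nearest words in vector space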

    My goal is to classify approximately three million web pages, archived since 2007 by Rivva's web crawler, into living stories. Deliverables will include a RESTful hypermedia API with a URL for everything and its relations, browsable by humans as well as machine-readable. Also, the APIs of internally used microservices will be released, so that developers can build their own applications.
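    A toy sketch of the hypermedia idea, assuming Flask; the routes and fields are illustrative assumptions, not the project's actual API:

        from flask import Flask, jsonify, url_for

        app = Flask(__name__)

        @app.route('/stories/<int:story_id>')
        def story(story_id):
            # every story has a URL and links out to its related resources
            return jsonify({
                'id': story_id,
                'summary': '...',
                '_links': {
                    'self': url_for('story', story_id=story_id),
                    'people': url_for('story_people', story_id=story_id),
                }
            })

        @app.route('/stories/<int:story_id>/people')
        def story_people(story_id):
            return jsonify({'people': []})  # placeholder for named people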

    On the publishers' side, the proposed technology stack would help build up newsroom memory, maximize the shelf life of content, and provide the ultimate building blocks for novel news offerings and experiments. It must be emphasized that any news business created out of structured data is virtually safe from content theft, because its user experience cannot be replicated without also copying the entire underlying database.

    On the consumers' side, through structured journalism, today's push model of news effectively turns into more of a pull, on-demand model. Up-to-date information is increasingly sought out exactly when it is needed and in just the right detail, not necessarily when it's freshly published nor in a one-size-fits-all news package. Essentially, this implies transferring control over content from publishers to consumers. Product innovation on the users' behalf would be completely decoupled from innovation and experimentation in the newsroom.

    Competition:

    Adrian Holovaty's work on chicagocrime.org is the first example I remember that combined data, code and journalism in an innovative way.

    Truly pathbreaking was the Living Stories effort by Google Labs, the New York Times and the Washington Post. It's unclear to me why this cutting-edge approach was discontinued so soon, and why it has not since been taken up by someone else.

    Circa News was regarded as a front-runner in "atomized news", but shut down this year due to lack of funding. Circa was breaking out of the traditional article format and branching out into an update stream with facts, statistics, quotes and images representing the atomic level of each story.

    PolitiFact is another good demonstration of structured news, which won the Pulitzer Prize in 2009 for fact-checking day-to-day claims made in US politics.

    On the extreme end of the spectrum is Structured Stories. This approach is so highly structured, and thus demands so much manual labour, that I personally can't see how it would scale to the work pace inside newsrooms.

    Recently, the BBC, the New York Times, the Boston Globe, the Washington Post, and possibly even more news labs, all have announced both experimental prototypes as well as new projects on the way, with the BBC being the most prolific (Ontology, Linked Data) and the New York Times being the most innovative (Editor, Particles).


              Brainy Voices: Innovative Voice Creation Based on Deep Learning by Acapela Group Research Lab   
    ...the text and acoustic databases we have for all of our voices. This means Acapela DNN knows a lot about human speech in general but doesn't yet know anything about a specific person's voice and will need to hear ...

              Microsoft acquires Maluuba, a deep learning startup in Montreal   
    Microsoft has announced the acquisition of a Montreal-based startup named Maluuba. The acquisition seems to revolve around Maluuba’s natural language work, though the startup works on more than just that, ultimately focusing on the development of artificial intelligence capable of thinking and speaking like a human. Microsoft says that Maluuba’s vision is “exactly in line with ours.”

              Intelligent Movie-Making Apps - Flo is a Free Video Editing App That Uses Machine Learning and AI (TrendHunter.com)   
    (TrendHunter.com) This free video editing app and intelligent movie maker uses deep learning and an artificial intelligence-integrated camera. From NexGear Technology, Flo is a smartphone application for making...