Helpdesk Analyst - Alameda (US) - Metaswitch Networks
Los Altos, CA - Megan Hogan
May 16, 2017 at 2:12 PM
The role

We have a highly expert IT Services team based in both the US and UK, who are responsible for designing, building and maintaining the company's IT infrastructure as part of the wider Company DevOps team, which provides tools and
          Entry-Level DevOps Engineer - Bullhorn        
Boston, MA - Development/IT
Boston, Massachusetts - Full time


Job Description

Bullhorn is the leading provider of CRM software to relationship-driven businesses. Our engineers use modern technologies in Agile development to innovate and create the future of our product suite.
          Viewing the Evolution of DevOps Through the History of Shipping Containers        

Only technologies that innovate the way people and things connect, and that dramatically lower the cost of those connections, can truly improve productivity. DevOps is exactly such a technology: a systematic innovation applied to the R&D system. Its novelty lies in optimizing delivery and feedback across each subsystem of the overall development system, thereby effectively improving end-to-end efficiency.

The post "Viewing the Evolution of DevOps Through the History of Shipping Containers" was first published on 伯乐在线 (JobBole).


          DevOps Engineer (M/F) - Lutessa - Marseille        
We are looking for DevOps engineers for our clients in the PACA region. At the crossroads of a developer's and an operations engineer's skill sets, come take up the Cloud/DevOps challenge and grow toward the jobs of tomorrow. Mission: DevOps Engineer. Location: Sophia Antipolis, Marseille. Experience: junior profile, beginner to advanced. Availability: ASAP. Technical skills: Tools/Frameworks/Environment: Docker, Ansible, Puppet, OpenStack, OpenShift, Jenkins,...
          DevOps Integrator (M/F) - Orange Cloud for Business - Boulogne-Billancourt        
Mission: You will join the Integration/DevOps team and work closely with the other teams (OpenStack, OpenContrail, BSS, Ops...). You will be responsible for a factory and its associated infrastructure. Activities: You will maintain the infrastructure (Jenkins, Docker, Kubernetes, Vault, Consul, Chef) and be responsible for its quality of service. As such, you will handle version upgrades and support the other...
          DevOps Continuous Integration Engineer (JENKINS/ANSIBLE) - DAVIDSON CONSULTING - Boulogne-Billancourt        
Within the industrialization and architecture team, your missions will be: maintaining and setting up Ansible playbooks ("automate the deliveries"); installing and maintaining the continuous integration and continuous deployment tools; installing and configuring EC2/AMI (AWS); installing and configuring containers (Docker); Linux system administration; shell script development; managing the...
          DevOps Engineer (M/F) - Oxiane - Central Paris and inner suburbs        
We are looking for a DevOps and Continuous Integration Engineer. Join our Factory team. Profile, skills and experience: a Master's-level degree (Bac+4/5) in Computer Science; a few years of development experience with a solid Java / Java EE background; and of course "Ops" knowledge: administration of an IT estate (Linux, Windows), OS scripting languages (*sh, PowerShell), networking knowledge, DevOps tooling (Infrastructure...
          DevOps Systems Engineer - Telecom (M/F) - Alten - Île-de-France        
Project description: We are hiring a DevOps engineer on a permanent contract for a project based in Île-de-France. You will work within a dynamic team on innovative, high-value-added projects. Beyond your technical skills, you will be a true ambassador for DevOps culture among the project's stakeholders. The DevOps engineer brings expertise to the development and production teams to improve how applications are operated. Your...
          Work from Home Staff Software Engineer        
A company transforming industry with software-defined machines has a current position open for a Work from Home Staff Software Engineer. The individual must be able to fulfill the following responsibilities:

  • Engineer service automation
  • Identify and define non-functional requirements
  • Collaborate with cross-functional stakeholders

Qualifications for this position include:

  • Bachelor's Degree in a STEM major, OR completion of a pair-programming, boot-camp-style, or accelerated training curriculum focused on contemporary software development, OR a High School Diploma / GED with 4 years of IT experience
  • 8 years of professional experience in IT, OR a Master's degree with 6 years of experience in IT, OR a PhD with 3 years of experience in IT
  • Legal authorization to work in the U.S.
  • Experience engineering cloud/online services at massive scale, including large-scale SQL/NoSQL services
  • DevOps practitioner with hands-on experience delivering continuous integration and continuous delivery pipelines
          DevOps Specialist        
Sunlife of Canada - Waterloo, ON - You believe life is about aiming high and making an impact. At Sun Life Financial, we work together, share common values and help each other grow and achieve goals. With roots tracing back to 1865 in Canada, Sun Life Financial has grown to become an established and trusted name i...
          Why DevOps is critical to your organization’s success        

DevOps is becoming a critical tool for organizations. In this article, Sirisha Paladhi explains why you need DevOps and how your company risks becoming irrelevant without it.

The post Why DevOps is critical to your organization’s success appeared first on JAXenter.


          DevOps: “Collaboration comes with a cost”        

Instead of a very simplistic approach, we need to look for collaboration and interaction patterns that work in different contexts. Collaboration comes with a cost but it produces very good results. However, sometimes it’s better to deliberately introduce a kind of boundary between teams. JAXenter editor Hartmut Schlosser talked to Matthew Skelton at DevOpsCon 2016 about the need for collaboration and what goes into good team structures.

The post DevOps: “Collaboration comes with a cost” appeared first on JAXenter.


          It’s an honor to be nominated: JAX Innovation Awards 2017 nominations close tomorrow        

Here at JAXenter, we love celebrating innovation in Java and DevOps! Help us fill out 2017’s award slate by nominating a company, organization, technology, or person that has brought significant innovation to the Java ecosystem! Submissions close tomorrow, August 4th!

The post It’s an honor to be nominated: JAX Innovation Awards 2017 nominations close tomorrow appeared first on JAXenter.


          Hunter the Cheerleader        

Johnie, Christopher's sister, is a Rigby High School cheerleader. Hunter thinks it is the coolest thing in the world. This last week they did a clinic for the little girls and Hunter participated. She loved it! They learned a dance in two days and then performed it at the football game. I think Hunter just liked being in front of the crowd!

This is Hunter doing the dance. She really had a hard time remembering it and kept looking at Johnie for help!



          Announcing DrupalCamp Mumbai 2017        
Start: 
2017-04-01 08:30 - 2017-04-02 18:00 Asia/Kolkata
Organizers: 
rachit_gupta
manasiv
jmandar91
priyanka1989
Ashish.Dalvi
Event type: 
Drupalcamp or Regional Summit

April 1st and 2nd, 2017

Victor Menezes Convention Centre, IIT Bombay

We are excited to announce that the 6th annual DrupalCamp Mumbai will be held on Saturday, April 1st and Sunday, April 2nd, 2017 at VMCC, IIT Bombay.

REGISTER NOW

#DCM2017 Features

  • Keynote by a featured open source & community evangelist

  • Expert sessions on multiple tracks covering the latest developments in Drupal 8, for developers, site builders, themers, devops admins, project managers, growth hackers and business owners

  • Drupal in a Day Training for Drupal beginners and students

  • Interactive Drupal 8 workshops on topics like: transitioning to D8, migrating to D8, Symfony, D8 site building, D8 admin and content management, D8 contribs, headless D8 and more.

  • CXO roundtable - where business leaders in the Drupal community can share knowledge and resolve issues with their peers.

  • Project Showcases featuring some exciting Drupal projects

  • Birds of a Feather sessions: these barcamp-style, informal discussion groups are a great place to meet other members of the community with similar interests.

  • Networking sessions and after party

  • Day 2 Codesprints and Hackathon featuring Drupal 8 development and Drupal 7->8 module porting.

  • Community Meetup to discuss Drupal Mumbai initiatives, challenges, ideas and suggestions with community members, and to plan 2017

DrupalCamp Mumbai 2015 was a resounding success, with 650+ participants over 2 days and 40+ businesses represented. Drupal trainings were a big draw, as were the expert sessions. Mike Lamb keynoted the event. We had two unique panel discussions: on open source in governance, and on understanding open source communities in India.

DrupalCamp Mumbai 2017 promises to be even bigger. Drupal 8 is maturing, and our focus will be everything around Drupal 8. Besides expert sessions, we are planning interactive workshops around Drupal 8 for developers, designers, CXOs and project managers.

Who should attend? DCM is the biggest and most fun gathering of Drupalers in the country.

  • For businesses, this will be a perfect platform to scout talent and raise brand awareness. Networking opportunities will allow you to prospect for alliances and understand the rapidly growing Drupal landscape.

  • Developers, designers, site builders, project managers and hackers can network with the top companies, their peers and experts in Drupal today. You can sharpen your skills at workshops, solve problems and become an active contributor to Drupal.org at the codesprints.

  • For startups, consultants, and enthusiasts, there is no better networking and prospecting platform than DCM2017, whether for new projects or for technical advice from experts.

  • WordPress, Joomla, and PHP developers, students, and anyone into open source and Drupal will find DCM2017 an eye-opening experience. If you know someone who is interested, bring them along or ask them to attend!

  • For everyone, it is a great way to join one of the most exciting, fastest-growing and most dynamic open source communities and become a contributing member.

  • Let's not forget all the fun and free giveaways to be had, and new friends to be made!

So what are you waiting for?

REGISTER NOW

We are always looking for passionate volunteers. If you, or anyone you know, would like to be part of Drupal Mumbai, please sign up here - http://2017.drupalmumbai.org/contact/volunteer_registration

Engage with us and keep updated on DCM2017 and Drupal Mumbai events:

Drupal.org: https://groups.drupal.org/mumbai

Mailing List: http://eepurl.com/8ElBf

Meetup: https://meetup.com/Drupal-Mumbai-Meetup-Group

Twitter: @DrupalMumbai

Facebook: FB.com/DrupalMumbai

Google+: http://gplus.to/DrupalMumbai

For any more details, please write to info@drupalmumbai.org

 


          Spring 2016 tech reading        

Hello, Spring is here and so is another set of links that I've bookmarked for your reading pleasure.

Java

Cassandra and other distributed systems

Network

Systems

Functional programming

Some DevOps

Misc tech

Until next time!
Ashwin.

          Wired: DevOps isn't a job, but it is still important        

"Traditionally, companies have at least two main technical teams. There are the programmers, who code the software that the company sells, or that its employees use internally. And then there are the information technology operations staff, who handle everything from installing network gear to maintaining the servers that run those programmers’ code. The two teams only communicate when it’s time for the operations team to install a new version of the programmers’ software, or when things go wrong. That’s the way it was at Munder Capital Management when J. Wolfgang Goerlich joined the Midwestern financial services company in 2005."

Read the rest at: http://www.wired.com/2015/05/devops-isnt-job-still-important/


          Full Stack Java/J2EE Developer (M/F) - Apside - Nantes        
Passionate and curious, are you always at the cutting edge of technology? Do you take a particular interest in meetups and JUGs? Do you want to take on challenges in a digital, multi-technology environment (Java/J2EE, Android, iOS)? We have challenges to match: as part of a six-person Agile team guided by the DevOps movement, you will be in charge of the design, development and optimization of web and/or business applications. ...
          Devops Engineer        
OH-Mayfield Village, Position Details: Contract to Hire Strong candidates should have a good blend of infrastructure technology and an application development background. • Knowledge of build and release engineering principles and methodologies including source control, branch management, build and elevate scripting, and build infrastructure configuration. • Experience with multiple scripting technologies - MS Build,
          Dirk Deimeke: DevOps ...        
          Christian M. Grube: DevOps ...        
          Stuff The Internet Says On Scalability For August 4th, 2017        

Hey, it's HighScalability time:


Hands down the best ever 25,000-year-old selfie, from the Pech Merle cave in southern France. (The Ice Age)

If you like this sort of Stuff then please support me on Patreon.

 

  • 35%: US traffic is now IPv6; 10^161: decision points in no-limit Texas hold’em; 4.5 billion: Facebook translations per day; 90%: savings by moving to Lambda; 330TB: IBM's tiny tape cartridge, enough to store 330 million books; $108.9 billion: game revenues in 2017; 85%: of all research papers are on Sci-Hub; 1270x: iPhone 5 vs Apollo guidance computer; 16 zettabytes: 2017 growth in the digital universe

  • Quotable Quotes:
    • Andrew Roberts: [On Napoleon] No aspect of his command was too small to escape notice.
    • Jason Calacanis: The world has trillions of dollars sitting in bonds, cash, stocks, and real estate, which is all really “dead money.” It sits there and grows slowly and safely, taking no risk and not changing the world at all. Wouldn’t it be more interesting if we put that money to work on crazy experiments like the next Tesla, Google, Uber, Cafe X, or SpaceX?
    • @icecrime: The plural of “it’s not a bug, it’s a feature” is “it’s not a bug tracker, it’s a backlog”.
    • Jeff Darcy: When greater redundancy drives greater dependency, it’s time to take a good hard look at whether the net result is still a good one.
    • uhnuhnuhn: "They ran their business into the ground, but they did it with such great tech!"
    • Anglés-Alcázar: It’s very interesting to think of our galaxy not as some isolated entity, but to think of the galaxy as being surrounded by gas which may come from many different sources. We are connected to other galaxies via these galactic winds.
    • @ojiidotch: Main app now running Python 3.6 (was 2.7 until yesterday). CPU usage 40% down, avg latency 30% down, p95 60% down.
    • Nemanja Mijailovic: It’s really difficult to catch all bugs without fuzzing, no matter how hard you try to test your software. 
    • SandwichTeeth: a lot of companies have security teams solely to meet audit requirements. If you find yourself on a team like that, you'll be spending a lot of time just gathering evidence for audits, remediating findings and writing policy. I really loved security intellectually, but in practice, the blue-team side of things wasn't my cup of tea.
    • jph: security is needed to gradually escalate a user's own identity verification -- think of things like two-factor auth and multi-factor auth, that can phase in (or ramp up) when a user's actions enter a gray area of risk. Some examples: when a user signs in from a new location, or a user does an especially large money transfer, or a user resumes an account that's been dormant for years, etc.
    • @hichaelmart: So while Google is doubling down on gRPC it seems that Amazon is going all in with CBOR. DDB DAX uses some sort of CBOR-over-sockets AFAICT
    • Wysopal: I’d like to see someone fixing this broken market [insecure software and hardware market]. Profiting off of that fix seems like the best approach for a capitalism-based economy.
    • Matthias Käppler: Microservices are often intermediate nodes in a graph of services, acting as façades where an incoming request translates to N outgoing requests upstream, the responses to which are then combined into a single response back downstream to the client.
    • Jack Fennimore: EA Play 2017 was watchable the same way Olive Garden is edible.
    • erikb: [On SoundCloud] TL;DR Top Management started too late to think about making actual money. They also hired an asshole for their US offices. When they got an opportunity to be bought by Twitter they asked for way too much money. And the CEO is basically on a constant holidays trip since 2014, while not failing to rub it in everybody's face via Instagram photos.
    • Jennifer Mendez: If you don’t have the games people want to play, you can wave goodbye to return on investment on a powerful console. Does hardware matter? Of course it does! But it doesn’t matter if you don’t have anything to play on it.
    • Alex Miller: The utility of a blockchain breaks down in a private or consortium setting and should, in my opinion, be replaced by a more performant engine like Apache Kafka.
    • Krish: most of the multi-cloud usecases I am seeing are about using different cloud for different workloads. It could change and I would expect them to embrace the eventual consistency model initially
    • Ian Cutress: Then there is the Ryzen 3 1300X. Compared to the Core i3-7300/7320 and the Core i5-7400, it clearly wins on performance per dollar all around. Compared to the Core i3-7100 though, it offers almost 5% more performance for around $10-15 more, which is just under 10% of the cost.
    • throw2016: Just from an year ago the cpu market has changed completely. The sheer amount of choice at all levels is staggering. For the mid level user the 1600 especially is a formidable offering, and the 1700 with 8 cores just ups the ante.
    • danmaz74: the main reason Rails is declining in relevance isn't microservices or the productivity (!) of Java, but the fact that more and more development effort for web applications is moving into JS front-end coding.
    • Rohit Karlupia: we can deal with [S3] eventual consistency in file listing operations by repeating the listing operation, detecting ghost and conceived files and modifying our work queues to take our new knowledge about the listing status into account.
    • tboyd47: It's the end of an era. From 2005 to 2007, the "Web 2.0" craze, the release of Ruby on Rails, and the rise of Agile methods all happened at once. These ideas all fed into and supported each other, resulting in a cohesive movement with a lot of momentum. The long-term fact turned out to be that this movement didn't benefit large corporations that have always been and usually still are the main source of employment for software developers. So we have returned to our pre-Rails, pre-agile world of high specialization and high bureaucratic control, even if Rails and "Agile" still exist with some popularity.
    • @reneritchie: Only beginning to see the advantages of Apple making everything from atom to bit. Everything will be computational.
    • Vasiliy Zukanov: switching to Kotlin will NOT have any appreciable positive gains on the cost, the effort or the schedule of software projects
    • visarga: Over the years I have seen astronomy become more like biology - diverse both in the kinds of objects it describes and their behavior.
    • Jaana B. Dogan: I think the industry needs a breakdown between product and infra engineering and start talking how we staff infra teams and support product development teams with SRE. The “DevOps” conversation is often not complete without this breakdown and assuming everyone is self serving their infra and ops all the times.
    • David Rosenthal: Does anybody believe we'll be using Bitcoin or Ethereum 80 years from now?
    • Richard Jones: There is a physical lower limit on how much energy it takes to carry out a computation – the Landauer limit. The plot above shows that our current technology for computing consumes energy at a rate which is many orders of magnitude greater than this theoretical limit (and for that matter, it is much more energy intensive than biological computing). There is huge room for improvement – the only question is whether we can deploy R&D resources to pursue this goal on the scale that’s gone into computing as we know it today.
  • Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...


          Sponsored Post: Apple, Domino Data Lab, Etleap, Aerospike, Clubhouse, Stream, Scalyr, VividCortex, MemSQL, InMemory.Net, Zohocorp        

Who's Hiring? 

  • Apple is looking for a passionate VoIP engineer with a strong technical background to transform our Voice platform to SIP. It will be an amazing journey with highly skilled, fast-paced, and exciting team members. Lead and implement the engineering of Voice technologies in Apple’s Contact Center environment. The Contact Center Voice team provides the real-time communication platform for customers’ interaction with Apple’s support and retail organizations. You will lead the global Voice, SIP, and network cross-functional engineers to develop a world-class customer experience. More details are available here.

  • Advertise your job here! 

Fun and Informative Events

  • Advertise your event here!

Cool Products and Services

  • Enterprise-Grade Database Architecture. The speed and enormous scale of today’s real-time, mission critical applications has exposed gaps in legacy database technologies. Read Building Enterprise-Grade Database Architecture for Mission-Critical, Real-Time Applications to learn: Challenges of supporting digital business applications or Systems of Engagement; Shortcomings of conventional databases; The emergence of enterprise-grade NoSQL databases; Use cases in financial services, AdTech, e-Commerce, online gaming & betting, payments & fraud, and telco; How Aerospike’s NoSQL database solution provides predictable performance, high availability and low total cost of ownership (TCO)

  • What engineering and IT leaders need to know about data science. As data science becomes more mature within an organization, you may be pulled into leading, enabling, and collaborating with data science teams. While there are similarities between data science and software engineering, well intentioned engineering leaders may make assumptions about data science that lead to avoidable conflict and unproductive workflows. Read the full guide to data science for Engineering and IT leaders.

  • Etleap provides a SaaS ETL tool that makes it easy to create and operate a Redshift data warehouse at a small fraction of the typical time and cost. It combines the ability to do deep transformations on large data sets with self-service usability, and no coding is required. Sign up for a 30-day free trial.

  • InMemory.Net provides a .NET-native in-memory database for analysing large amounts of data. It runs natively on .NET, and provides native .NET, COM & ODBC APIs for integration. It also has an easy-to-use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • www.site24x7.com : Monitor End User Experience from a global monitoring network. 

  • Working on a software product? Clubhouse is a project management tool that helps software teams plan, build, and deploy their products with ease. Try it free today or learn why thousands of teams use Clubhouse as a Trello alternative or JIRA alternative.

  • Build, scale and personalize your news feeds and activity streams with getstream.io. Try the API now in this 5 minute interactive tutorial. Stream is free up to 3 million feed updates so it's easy to get started. Client libraries are available for Node, Ruby, Python, PHP, Go, Java and .NET. Stream is currently also hiring DevOps and Python/Go developers in Amsterdam. More than 400 companies rely on Stream for their production feed infrastructure; this includes apps with 30 million users. With your help we'd like to add a few zeros to that number. Check out the job opening on AngelList.

  • Scalyr is a lightning-fast log management and operational data platform. It's a tool (actually, multiple tools) that your entire team will love. Get visibility into your production issues without juggling multiple tabs and different services -- all of your logs, server metrics and alerts are in your browser and at your fingertips. Loved and used by teams at Codecademy, ReturnPath, Grab, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • VividCortex is a SaaS database monitoring product that provides the best way for organizations to improve their database performance, efficiency, and uptime. Currently supporting MySQL, PostgreSQL, Redis, MongoDB, and Amazon Aurora database types, it's a secure, cloud-hosted platform that eliminates businesses' most critical visibility gap. VividCortex uses patented algorithms to analyze and surface relevant insights, so users can proactively fix future performance problems before they impact customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • Advertise your product or service here!

If you are interested in a sponsored post for an event, job, or product, please contact us for more information.


          Sponsored Post: Apple, Domino Data Lab, Etleap, Aerospike, Loupe, Clubhouse, Stream, Scalyr, VividCortex, MemSQL, InMemory.Net, Zohocorp        

Who's Hiring? 

  • Apple is looking for a passionate VoIP engineer with a strong technical background to transform our Voice platform to SIP. It will be an amazing journey with highly skilled, fast-paced, and exciting team members. Lead and implement the engineering of Voice technologies in Apple’s Contact Center environment. The Contact Center Voice team provides the real-time communication platform for customers’ interaction with Apple’s support and retail organizations. You will lead the global Voice, SIP, and network cross-functional engineers to develop a world-class customer experience. More details are available here.

  • Advertise your job here! 

Fun and Informative Events

  • Advertise your event here!

Cool Products and Services

  • Enterprise-Grade Database Architecture. The speed and enormous scale of today’s real-time, mission critical applications has exposed gaps in legacy database technologies. Read Building Enterprise-Grade Database Architecture for Mission-Critical, Real-Time Applications to learn: Challenges of supporting digital business applications or Systems of Engagement; Shortcomings of conventional databases; The emergence of enterprise-grade NoSQL databases; Use cases in financial services, AdTech, e-Commerce, online gaming & betting, payments & fraud, and telco; How Aerospike’s NoSQL database solution provides predictable performance, high availability and low total cost of ownership (TCO)

  • What engineering and IT leaders need to know about data science. As data science becomes more mature within an organization, you may be pulled into leading, enabling, and collaborating with data science teams. While there are similarities between data science and software engineering, well intentioned engineering leaders may make assumptions about data science that lead to avoidable conflict and unproductive workflows. Read the full guide to data science for Engineering and IT leaders.

  • A note for .NET developers: You know the pain of troubleshooting errors with limited time, limited information, and limited tools. Log management, exception tracking, and monitoring solutions can help, but many of them treat the .NET platform as an afterthought. You should learn about Loupe...Loupe is a .NET logging and monitoring solution made for the .NET platform from day one. It helps you find and fix problems fast by tracking performance metrics, capturing errors in your .NET software, identifying which errors are causing the greatest impact, and pinpointing root causes. Learn more and try it free today.

  • Etleap provides a SaaS ETL tool that makes it easy to create and operate a Redshift data warehouse at a small fraction of the typical time and cost. It combines the ability to do deep transformations on large data sets with self-service usability, and no coding is required. Sign up for a 30-day free trial.

  • InMemory.Net provides a .NET-native in-memory database for analysing large amounts of data. It runs natively on .NET, and provides native .NET, COM & ODBC APIs for integration. It also has an easy-to-use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • www.site24x7.com : Monitor End User Experience from a global monitoring network. 

  • Working on a software product? Clubhouse is a project management tool that helps software teams plan, build, and deploy their products with ease. Try it free today or learn why thousands of teams use Clubhouse as a Trello alternative or JIRA alternative.

  • Build, scale and personalize your news feeds and activity streams with getstream.io. Try the API now in this 5 minute interactive tutorial. Stream is free up to 3 million feed updates so it's easy to get started. Client libraries are available for Node, Ruby, Python, PHP, Go, Java and .NET. Stream is currently also hiring DevOps and Python/Go developers in Amsterdam. More than 400 companies rely on Stream for their production feed infrastructure; this includes apps with 30 million users. With your help we'd like to add a few zeros to that number. Check out the job opening on AngelList.

  • Scalyr is a lightning-fast log management and operational data platform. It's a tool (actually, multiple tools) that your entire team will love. Get visibility into your production issues without juggling multiple tabs and different services -- all of your logs, server metrics and alerts are in your browser and at your fingertips. Loved and used by teams at Codecademy, ReturnPath, Grab, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • VividCortex is a SaaS database monitoring product that provides the best way for organizations to improve their database performance, efficiency, and uptime. Currently supporting MySQL, PostgreSQL, Redis, MongoDB, and Amazon Aurora database types, it's a secure, cloud-hosted platform that eliminates businesses' most critical visibility gap. VividCortex uses patented algorithms to analyze and surface relevant insights, so users can proactively fix future performance problems before they impact customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • Advertise your product or service here!

If you are interested in a sponsored post for an event, job, or product, please contact us for more information.


The Solution to Your Operational Diagnostics Woes

Scalyr gives you instant visibility of your production systems, helping you turn chaotic logs and system metrics into actionable data at interactive speeds. Don't be limited by the slow and narrow capabilities of traditional log monitoring tools. View and analyze all your logs and system metrics from multiple sources in one place. Get enterprise-grade functionality with sane pricing and insane performance. Learn more today


VividCortex Gives You Database Superpowers 

Database monitoring is hard, but VividCortex makes it easy. Modern apps run complex queries at large scales across distributed, diverse types of databases (e.g. document, relational, key-value). The “data tier” that these become is tremendously difficult to observe and measure as a whole. The result? Nobody knows exactly what’s happening across all those servers.

VividCortex lets you handle this complexity like a superhero. With VividCortex, you're able to inspect all databases and their workloads together from a bird's eye view, spotting problems and zooming down to individual queries and 1-second-resolution metrics in just two mouse clicks. With VividCortex, you gain a superset of system-monitoring tools that use only global metrics (such as status counters), offering deep, multi-dimensional slice-and-dice visibility into queries, I/O, and CPU, plus other key measurements of your system's work. VividCortex is smart, opinionated, and understands databases deeply, too: it knows about queries, EXPLAIN plans, SQL bugs, typical query performance, and more. It empowers you to find and solve problems that you can't even see with other types of monitoring systems, because they don’t measure what matters to the database.

Best of all, VividCortex isn’t just a DBA tool. Unlike traditional monitoring tools, VividCortex eliminates the walls between development and production -- anybody can benefit from VividCortex and be a superhero. With powerful tools like time-over-time comparison, developers gain immediate production visibility into databases with no need for access to the production servers. Our customers vouch for it: Problems are solved faster and they're solved before they ever reach production. As a bonus, DBAs no longer become the bottleneck for things like code reviews and troubleshooting in the database. 

Make your entire team into database superheroes: deploy VividCortex today. It only takes a moment, and you’re guaranteed to immediately learn something you didn’t already know about your databases and their workload. 

If you are interested in a sponsored post for an event, job, or product, please contact us for more information.


          Relay Replay: July 24 - 28        
Studio Update: Austin

  • Refactoring of the shopping code has reached the point where the Austin Design Team is plugging items back into the shops; this includes a first pass at getting kiosks up and running for commodity trading. The focus is on getting shops related to the PU/landing zones functional first, with a pass over the Area 18 shops to follow. Armour can now be bought piece by piece.

  • The remaining ships have been implemented into the price fixer tools. This tool is also used to gauge whether the ships they're building are over- or underpowered for their intended purpose.

  • Progress has been made on getting Miles Eckhart ready to go: a lot of work on the feather blending system, and he is now working with a small subset of his animations. Additional code support allows reputation to dictate his conversation paths with the player, and specific missions can now be assigned with mission brief tags.

  • Josh Coons finished images/videos of the Cutlass Black and moved on to creating base materials and whitebox meshes for the Cutlass variants. He will complete the first pass on exterior looks for the variants before moving on to the Constellation Phoenix, allowing Design to flesh out some key gameplay systems for the Cutlasses. Chris Smith is working on bugs for the Hornet/Constellation Andromeda and will move on to the promo video for the Constellation Aquila next.

  • The Backend Services Team is preparing for the deployment of diffusion; game servers now have full access to the diffusion API and will start using it with the shopping service in 3.0. Work has started on converting the persistence cache and general instance manager into smaller stateless, fully diffusionized services.

  • The PU Animation Team started research and development on how to implement their wild line (dialogue spoken by an NPC) system. Feather blending lets them blend performance capture of the wild lines with a large number of usable animations. They are going through existing animations and filling gaps in the original performance capture with new transition animations.

  • The Austin Ship Animation Team continues to refine the cockpit/turret experience, including an R&D phase of implementing button presses using the Item 2.0 system. This helped finalise dashboard/cockpit metrics across different ships that use the same cockpit type. UK engineers refactored some backend systems, allowing full blending of the base G-force pose blend spaces.

  • The DevOps Team is working on increasing the capacity of the build/deployment pipelines in preparation for 3.0, and making additional changes and bug fixes to support the new delta patcher. The IT department upgraded the Austin network.

  • Audio team member Jason Cobb continued work on derelict crash site sound design for different moon environments, performed a variety of particle audio implementation experiments for revamped ship debris noises, playtested and mixed refinements for ship emergency state audio, and captured sound effects for various props and materials.

  • Austin QA has been testing the new Cutlass Black, plus new missions, wrecks and NPCs in Stanton. Ship testing continued as more ships are converted to the Item 2.0 system, and large-scale playtests of Arena Commander, Star Marine and Stanton are ongoing weekly. QA is also testing more mobiGlas applications (Starmap, the personal manager, mission manager and mission board), character gravity/free fall, and cargo mechanics. Engine/editor testers are exercising new tech like the capsule-based actor entity, the entity component update scheduler, and director/actor animation and control. The new stamina/oxygen breathing systems are getting some balance changes. Four new people are joining the Player Relations Team.

  • Turbulent is updating ship specifications in the database: ground vehicles can now be added to the ship matrix, as can anything else that comes up; there is a new sizing scheme for ships, a revamped ship detail page, and a reworked ship matrix database that is easier to update. Spectrum v0.3.6 is with the Evocati; new features include a new text editor, new tools like hyperlinks, hyperlink formatting, preview posts and drafts, a mini profile showing post count and karma, and a new jump-to-tracked-reply, with some UI elements being refreshed. The Engineering Team is working on getting most of the digital distribution channels ready for the delta patching system.

Ship Shape: Tumbril Cyclone

  • CIG's entry into land vehicles introduces a new manufacturer whose lore has been around a while, associated with rugged military vehicles. The Cyclone is a fun, fast, four-wheeled and steered land vehicle partial to jumping off ramps, with marketing-driven design aesthetics, though clearly of military origin. If the Ursa Rover is the tortoise and the Cyclone is the hare, then the Nox is, as with Goldilocks, just right and in the middle.

  • Besides being fast and super rugged, it's built with real-world-style modularity. The variants include versions for cargo, infantry support, recon and exploration, as well as racing and even an anti-air version. The recon version can not only map terrain but drop beacons. The racing version basically has nitrous and will be similar to a Baja racer. The anti-air model will have two size-two missile racks with a countermeasure package that includes not only chaff and flares but smokescreens and a size-one EMP.

  • It's always a struggle to balance function with design given the feedback within the pipeline. The first of any type of entity or asset is always the hardest because of the unknowns, but the most challenging part tends to be making the deadlines . . . but they always do.

Behind the Scenes: Ship-in-Ship Persistence

Gameplay Programmer Chad McKinney talks about how the technology involved with persistence will change in the upcoming 3.0 update.

  • Persistence as it is now is a simple system that tracks player accounts, specifically loadouts and ships with their loadouts. While this works for the gameplay in 2.6, it limits what players can do: they can't pick up items and store them on their ship, trade, and so on, because the server doesn't track those things.

  • With 3.0, players will be able to start making changes to the world around them in subtle ways, such as picking up cargo from an outpost they found, or parking a Dragonfly in their Cutlass and having it remain there whether or not they log out. While it seems simple, it's actually fairly complicated and required a rework of the way persistence is handled in many backend systems.

  • The system has been revamped to track legal ownership and physical ownership of entities. Physical ownership is having the item in front of you, in your hands or on your ship, in your possession at that moment; legal ownership is your entitlement to a given entity, such as a ship.

  • Players can legally own a ship or an item, but another player can physically take it and claim it as their own, and it will remain in that other player's possession until someone else takes it from them in turn. It isn't without consequence: players who steal other players' ships, cargo or items can become wanted, and will be marked to the authorities as criminals.

  • This goes deeper, as gameplay for pirates can become very rewarding, but at a high cost. Players can steal cargo and roll the dice by leaving it off their manifest when they come across authorities, get creative by using secret compartments to store the cargo, or take the safest route and avoid authorities entirely.

  • The technology doesn't stop there. Diffusion, a service that allows a new way of interacting with all the backend services at the same time, will let more things in the world become persistent. In 3.0 the amount of persistent state will still be fairly limited, but in the future it will expand from affecting a small area to having the entire world affected by player actions, from supply and demand, to wars, to NPC activity and much more. A player will be able to take something from a planet and another player won't ever see what was taken; on the flip side, a player can leave something behind and another player will find it.

  • With 3.0, player health, stamina, ship health and ammunition also become persistent. Players will no longer be able to fire all their ammunition with no penalty, and if a wing gets blown off your ship, it will still be missing the next time you log in, until you get it repaired or make an insurance claim.

Production Schedule Update

The Evocati window now opens August 3rd. The PTU window opens August 18th. The Live window opens September 4th. Delamar and Levski were completed this week, along with UI for the Cargo Manifest App.
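
The legal-versus-physical ownership split described above maps naturally onto a small data model. Below is a minimal sketch of the concept in Python; the names and rules are hypothetical illustrations of the idea, not CIG's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of the legal vs. physical ownership split described
# above. All names and rules are illustrative, not CIG's implementation.

@dataclass
class Player:
    name: str
    wanted: bool = False  # flagged to the authorities after a theft

@dataclass
class Entity:
    entity_id: str
    legal_owner: Player     # entitlement, e.g. the ship you bought
    physical_owner: Player  # whoever holds it right now

def take_possession(entity: Entity, taker: Player) -> None:
    """Transfer physical possession; taking what you don't legally own has consequences."""
    if taker is not entity.legal_owner:
        taker.wanted = True  # marked as a criminal to the authorities
    entity.physical_owner = taker

if __name__ == "__main__":
    alice, bob = Player("Alice"), Player("Bob")
    cutlass = Entity("cutlass-01", legal_owner=alice, physical_owner=alice)
    take_possession(cutlass, bob)          # Bob steals the ship
    assert cutlass.legal_owner is alice    # Alice still legally owns it
    assert cutlass.physical_owner is bob   # ...but Bob physically has it
    assert bob.wanted                      # and Bob is now wanted
```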
          What makes a great chatbot? Laser focus on customers        

Few technologies have garnered more attention over the past year than chatbots, those virtual assistants that mimic human speech while facilitating tasks on behalf of humans, typically via a conversational messaging interface. At a time when software is driving unprecedented levels of automation, companies are using chatbots to help customers order anything from food to office supplies to additional computing capacity.

Chatbots are a big reason why corporate adoption of cognitive systems and AI will drive worldwide revenues from nearly $8 billion in 2016 to more than $47 billion in 2020, according to IDC. But what exactly makes a great chatbot? Perhaps more importantly given enterprises' investments in such tools, what makes a bad one? What precautions should CIOs take in building them?



          Surge in Mobile Users Spawns App Development, Testing Advances        
As mobile app test and development challenges increase, businesses turn to responsive design, test automation and DevOps to meet user demands. Published by: SearchSOA.com
          Requirements Engineering and Privacy        
A lot of travelling this month to conferences and speaking about privacy engineering (as usual). I just spent a week in Beijing at RE'16 (Requirements Engineering 2016) where I both presented a paper on privacy requirements and participated in a panel session on digitalisation and telecommunications - more on that later.

Anyway, here are the slides from the privacy paper:

Experiences in the Development and Usage of a Privacy Requirements Framework from Ian Oliver

And here is the abstract:

"Any reasonable implementation of privacy requirements can not be made through legal compliance alone. The belief that a software system can be developed without privacy being an integral concept, or that a privacy policy is sufficient as requirements or compliance check is at best dangerous for the users, customers and business involved. While requirements frameworks exist, the specialisation of these into the privacy domain have not been made in such a manner that they unify both the legal and engineering domains. In order to achieve this one must develop ontological structures to aid communication between these domains, provide a commonly acceptable semantics and a framework by which requirements expressed at different levels of abstractness can be linked together and support refinement. An effect of this is to almost completely remove the terms ‘personal data’ and ‘PII’ from common usage and force a deeper understanding of the data and information being processed. Once such a structure is in place - even if just partially or sparsely populated - provides a formal framework by which not only requirements can be obtained, their application (or not) be justified and a proper risk analysis made. This has further advantages in that privacy requirements and their potential implementations can be explored through the software development process and support ideas such as agile methods and ‘DevOps’ rather than being an ‘add-on’ exercise - a privacy impact assessment - poorly executed at inappropriate times."

Ian Oliver (2016) Experiences in the Development and Usage of a Privacy Requirements Framework. Requirements Engineering 2016 (RE'16), Beijing, China, September 12-17, 2016
          How to interview an amazon database expert        
via GIPHY Amazon releases a new database offering every other day. It sure isn’t easy to keep up. Join 35,000 others and follow Sean Hull on twitter @hullsean. Let’s say you’re hiring a devops & you want to suss out their database knowledge? Or you’re hiring a professional services firm or freelance consultant. Whatever the … Continue reading How to interview an amazon database expert
          Do we have to manage ops in the cloud?        
via GIPHY One of the things that is exciting about the cloud is the reduced need for operations staff. There seem to be two drivers of this trend. One is devops, and all the automation that comes with it. As we formalize configurations, things become repeatable, and fewer people can manage greater armies of servers. … Continue reading Do we have to manage ops in the cloud?
          Top questions to ask a devops expert when hiring or preparing for job & interview        
Whether you're a hiring manager, head of HR or recruiter, you are probably looking for a devops expert. These days good ones are not easy to find. The spectrum of tools & technologies is broad. To manage today’s cloud you need a generalist. Join 33,000 others and follow Sean Hull on twitter @hullsean. If you’re … Continue reading Top questions to ask a devops expert when hiring or preparing for job & interview
          The CloudBees Jenkins Platform - Private SaaS Edition is certified on Red Hat OpenStack        
Secuobs.com: 2016-04-25 17:04:49 - Global Security Mag Online - CloudBees announces that its CloudBees Jenkins Platform - Private SaaS Edition offering has achieved Red Hat OpenStack certification. CloudBees thus becomes the first DevOps and continuous deployment platform to be certified on Red Hat OpenStack. This certification strengthens the performance and compatibility of the CloudBees Jenkins Platform - Private SaaS Edition. The CloudBees Jenkins Platform Private SaaS Edition solution provides cloud-native capabilities. It enables - Products
          DevOps and your database        

I’m a consultant. That means I have to deal with whatever I come across at customer sites. I can recommend change, but when I’m called in to fix something, I generally don’t get to insist on it. I just have to get something fixed. That means dealing with developers (if they exist) and with DBAs, and making sure that anything that I try to fix somehow works for both sides. That means I often have to deal with the realm of DevOps, whether or not the customer knows it.

DevOps is the idea of having a development story which improves operations.

Traditionally, developers would develop code without thinking much about operations. They’d get some new code ready, deploy it somehow, and hope it didn’t break much. And the Operations team would brace themselves for a ton of pain, and start pushing back on change, and be seen as a “BOFH”, and everyone would be happy. I still see these kinds of places, although for the most part, people try to get along.

With DevOps, the idea is that developers work in a way that means that things don’t break.

I know, right.

If you’re doing the DevOps things at your organisation, you’re saying “Yup, that’s normal.” If you’re not, you’re probably saying “Ha – like that’s ever going to happen.”

But let me assure you – it can. For years now, developers have been doing Continuous Integration, Test-Driven Development, Automated Builds, and more. I remember seeing these things demonstrated at TechEd conferences in the middle of the last decade.

But somehow, these things are still considered ‘new’ in the database world. Database developers look at TDD and say “It’s okay for a stateless environment, but my database changes state with every insert, update, or delete. By its very definition, it’s stateful.”

The idea that a stored procedure with particular parameters should have a specific impact on a table with particular characteristics (values and statistics – I would assume structure and indexes would be a given) isn’t unreasonable. And it’s this that can lead to the understanding that whilst a database is far from stateless, state can be a controllable thing. Various states can become part of various tests: does the result still apply when there are edge-case rows in the table? Is the execution plan suitable when there are particular statistics in play? Is the amount of blocking reasonable when the number of transactions is at an extreme level?
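
To make "controllable state" concrete: a test can arrange a known table state (including edge cases), call the procedure, assert on the result, and roll everything back so the next test starts clean. Here is a minimal pytest sketch along those lines, assuming a disposable test database; the dbo.Customers table, the dbo.GetActiveCustomers procedure, and the DSN are all invented for illustration.

```python
import pyodbc
import pytest

CONN_STR = "DSN=TestDb"  # hypothetical DSN pointing at a disposable test database

@pytest.fixture
def conn():
    # Each test runs in a transaction that is rolled back afterwards,
    # so table state is controlled per test rather than accumulated.
    cn = pyodbc.connect(CONN_STR, autocommit=False)
    yield cn
    cn.rollback()
    cn.close()

def test_active_customers_excludes_zero_order_rows(conn):
    cur = conn.cursor()
    # Arrange: put the table into a known state, including an edge-case row.
    cur.execute("DELETE FROM dbo.Customers")
    cur.execute("INSERT INTO dbo.Customers (Id, Name, Orders) VALUES (1, 'A', 5)")
    cur.execute("INSERT INTO dbo.Customers (Id, Name, Orders) VALUES (2, 'B', 0)")
    # Act: call the (hypothetical) procedure under test.
    cur.execute("EXEC dbo.GetActiveCustomers @MinOrders = ?", 1)
    rows = cur.fetchall()
    # Assert: the zero-order customer must not appear in the result.
    assert [row.Id for row in rows] == [1]
```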

Test-driven development is a lot harder in the database-development world than in the web-development world. But it’s certainly not unreasonable, and to have confidence that changes won’t be breaking changes, it’s certainly worthwhile.

The investment to implement a full test suite for a database can be significant, depending on how thorough it needs to be. But it can be an incremental thing. Elements such as source control ought to be put in place first, but there is little reason why database development shouldn’t adhere to DevOps principles.

@rob_farley

(Thanks to Grant Fritchey (@gfritchey) - for hosting this month’s T-SQL Tuesday event)


          “Stored procedures don’t need source control…”        

Hearing this is one of those things that really bugs me.

And it’s not actually about stored procedures, it’s about the mindset that sits there.

I hear this sentiment in environments where there are multiple developers. Where they’re using source control for all their application code. Because, you know, they want to make sure they have a history of changes, and they want to make sure two developers don’t change the same piece of code, maybe they even want to automate builds, all those good things.

But checking out code and needing it to pass all those tests is a pain. So if there’s some logic that can be put in a stored procedure, then that logic can be maintained outside the annoying rigmarole of source control. I guess this is appealing because developers are supposed to be creative types, and should fight against the repression, fight against ‘the man’, fight against [source] control.

When I come across this mindset, I worry a lot.

I worry that code within stored procedures could be lost if multiple people decide to work on something at the same time.

I worry that code within stored procedures won’t be part of a test regime, and could potentially be failing to consider edge cases.

I worry that the history of changes won’t exist and people won’t be able to roll back to a good version.

I worry that people are considering that this is a way around source control, as if source control is a bad thing that should be circumvented.

I just worry.

And this is just talking about code in stored procedures. Let alone database design, constraints, indexes, rows of static data (such as lookup codes), and so on. All of which contribute to a properly working application, but which many developers don’t consider worthy of source control.

Luckily, there are good options available to change this behaviour. Red Gate’s Source Control is tremendously useful, of course, and the inclusion of many of Red Gate’s DevOps tools within VS2017 would suggest that Microsoft wants developers to take this more seriously than ever.
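
Even without a commercial tool, the essence is keeping each database object as a script in the same repository as the application code, then applying the scripts idempotently on deploy. A minimal sketch of such a runner follows, assuming SQL Server 2016 SP1 or later (for CREATE OR ALTER); the folder layout, DSN and one-statement-per-file convention are assumptions for illustration.

```python
import pathlib
import pyodbc

CONN_STR = "DSN=AppDb"                      # hypothetical connection
SCRIPT_DIR = pathlib.Path("db/procedures")  # .sql files tracked in source control

def deploy_procedures() -> None:
    """Apply every scripted object; CREATE OR ALTER keeps the run idempotent."""
    cn = pyodbc.connect(CONN_STR, autocommit=True)
    cur = cn.cursor()
    for script in sorted(SCRIPT_DIR.glob("*.sql")):
        # Each file is expected to hold a single CREATE OR ALTER PROCEDURE
        # statement (no GO separators), so re-running a deploy is safe and
        # the change history lives in version control alongside the app code.
        cur.execute(script.read_text())
        print(f"applied {script.name}")
    cn.close()

if __name__ == "__main__":
    deploy_procedures()
```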

For more on this kind of stuff, go read the other posts about this month’s T-SQL Tuesday!


@rob_farley


          Do you know DevOps? What about SRE? Learn More with Founder of RackN        
In this podcast, Rob Hirschfeld, Founder and CEO of RackN, discusses the latest trends in IT management at scale, including DevOps and the emergence of Site Reliability Engineering (SRE). SRE is a response to the limitations of DevOps that Google faced, providing an answer to the significant challenges of operating global hybrid IT infrastructure that continues to grow at a rapid rate. For more information on RackN visit www.rackn.com
          EPISODE86 - Intro to Docker #1        
First of 4 podcasts focused on Docker and how HP is leveraging this new DevOps technology.
          EPISODE82 - You (are) the change agent for driving cloud adoption in your b        
Join us to explore practical strategies to overcome cloud adoption challenges faced by traditional businesses. We will discuss planning a cloud proof of concept (PoC), cloud native application development, where to start with DevOps, and why hybrid cloud is the winning formula. Come find out how we'll equip you with a pragmatic approach towards starting your cloud journey - one step at a time.
          EPISODE80 - Making DevOps a Reality with PaaS        
DevOps for PaaS session recorded at the 2015 HP Discover event in Las Vegas.
          EPISODE70 - CloudSlang Introduction        
Learn about CloudSlang, the new open source project for DevOps automation from the HP team in Israel.
          Featured Resource: Tactics for Leading Change        

Leading up to DevOps Enterprise Summit 2017 San Francisco, we’ll be releasing our most valuable DevOps resources to our community on a weekly basis. Each of these resources comes to you straight from the DevOps Enterprise Forum, where more than 50 technology leaders and thinkers gather for three days to create written guidance on the best-known […]

The post Featured Resource: Tactics for Leading Change appeared first on IT Revolution.


          Featured Resource: DevOps Case Studies        

Leading up to DevOps Enterprise Summit 2017 San Francisco, we’ll be releasing our most valuable DevOps resources to our community on a weekly basis. Each of these resources comes to you straight from the DevOps Enterprise Forum, where more than 50 technology leaders and thinkers gather for three days to create written guidance on the best-known […]

The post Featured Resource: DevOps Case Studies appeared first on IT Revolution.


          Featured Resource: Test Automation for Legacy Code        

Leading up to DevOps Enterprise Summit 2017 San Francisco, we’ll be releasing our most valuable DevOps resources to our community on a weekly basis. Each of these resources comes to you straight from the DevOps Enterprise Forum, where more than 50 technology leaders and thinkers gather for three days to create written guidance on the best-known […]

The post Featured Resource: Test Automation for Legacy Code appeared first on IT Revolution.


          Join us July 27 for the Biggest DevOps Enterprise Summit CrowdChat Yet!        

We are thrilled to be hosting our largest DevOps Enterprise Summit CrowdChat to date! On July 27, more than 20 leaders in tech will join Gene Kim as co-hosts of an online discussion to dive deep into several of the hot topics surrounding DevOps, technology transformations and lessons learned from careers in IT. This live […]

The post Join us July 27 for the Biggest DevOps Enterprise Summit CrowdChat Yet! appeared first on IT Revolution.


          Featured Resource: DevOps and Audit        

Leading up to DevOps Enterprise Summit 2017 San Francisco, we’ll be releasing our most valuable DevOps resources to our community on a weekly basis. Each of these resources comes to you straight from the DevOps Enterprise Forum, where more than 50 technology leaders and thinkers gather for three days to create written guidance on the best-known […]

The post Featured Resource: DevOps and Audit appeared first on IT Revolution.


          Lessons Learned from Top Tech Leaders        

In the technology world, we are in a time of incredible disruption and innovation. Many years ago, my friend and DevOps Handbook co-author John Willis predicted that a decade from now, academics will call this period the “Cambrian explosion” — it was an incredibly vibrant 25-million year period that resulted in an incredible diversification of […]

The post Lessons Learned from Top Tech Leaders appeared first on IT Revolution.


          Featured Resource: Mythbusting DevOps in the Enterprise        

Leading up to DevOps Enterprise Summit 2017 San Francisco, we’ll be releasing our most valuable DevOps resources to our community on a weekly basis. Each of these resources comes to you straight from the DevOps Enterprise Forum, where more than 50 technology leaders and thinkers gather for three days to create written guidance on the best-known […]

The post Featured Resource: Mythbusting DevOps in the Enterprise appeared first on IT Revolution.


          Featured Resource: Metrics for DevOps Initiatives        

Leading up to DevOps Enterprise Summit 2017 in San Francisco, we’ll be releasing our most valuable DevOps resources to our community on a weekly basis. Each of these resources comes to you straight from the DevOps Enterprise Forum, where more than 50 technology leaders and thinkers gather for three days to create written guidance on […]

The post Featured Resource: Metrics for DevOps Initiatives appeared first on IT Revolution.


          An Imaginary Apology Letter From Your Airline CEO        

I spent this morning reflecting on everything I had learned and experienced in the past two days at the London DevOps Enterprise Summit, which I co-hosted. I was so inspired by all the amazing tales of courageous business transformation by the amazing technology leaders representing almost every industry vertical. And then I ran into this […]

The post An Imaginary Apology Letter From Your Airline CEO appeared first on IT Revolution.


          We're Hiring: Web Developer Positions        
[color=orange][b]We're happy to report that both positions listed in this article have now been filled. As such, please don't send us your CV unless you are interested in us keeping it on record for when we are next looking to hire.[/b][/color]

We’re looking for two mid-level to senior web developers to join our team at our [url=http://www.nexusmods.com/games/news/13212/]new office in Exeter[/url], UK. The ideal candidate will be multi-skilled with experience working on high traffic websites.


[b][size=4]The Role[/size][/b]

You’d be joining our established web team which currently consists of 3 programmers whose primary responsibility is the Nexus Mods website and everything around it. Our stack consists of a mixture of technologies - Linux, Apache, Nginx, PHP, MySQL, APC, Redis, Ruby, Rails, Bash, Git, Puppet, Vagrant, JavaScript, jQuery, CSS, HTML and everything in between.

We’re currently hard at work on the next iteration of the Nexus Mods website and we have a long list of features that we’d like you to help us realise. We work as closely as possible to an agile project management scheme and every team member’s input is highly valued - we’re looking for people who can constructively discuss ideas in our programming meetings.

You’ll need production experience with PHP/MySQL and be comfortable with all related technologies.

Ultimately, we’re looking for people who are keen to learn and flexible in their approach with a strong web background.


[b][size=4]Responsibilities[/size][/b]

[list]
[*][size=2]Working as part of the web team to maintain the Nexus Mods website, fixing bugs and adding new features.[/size]
[*][size=2]Participating in team meetings, keeping track of your workflow using project management tools.[/size]
[*][size=2]Working with everyone at Nexus Mods to shape the future of our platform.[/size]
[/list]


[size=4][b]Requirements and Skills[/b][/size]

[list]
[*][size=2]PHP[/size]
[*][size=2]MySQL[/size]
[*][size=2]JavaScript & JavaScript Frameworks[/size]
[*][size=2]CSS & HTML5[/size]
[*][size=2]Strong communication skills both verbally and written (English).[/size]
[*][size=2]Right to work in the UK[/size]
[/list]


[size=4][b]Bonus Skills[/b][/size]

[list]
[*][size=2]Comfortable using Linux[/size]
[*][size=2]Sysadmin / DevOps experience with LAMP[/size]
[*][size=2]Ruby / Rails[/size]
[*][size=2]Experience with code testing[/size]
[*][size=2]An understanding of games modding and knowledge of Nexus Mods[/size]
[*][size=2]A sense of humour[/size]
[*][size=2]A love of computer games[/size]
[/list]


[size=4][b]Other Information[/b][/size]

[list]
[*][size=2]We will offer a competitive market rate salary dependent on your level.[/size]
[*][size=2]We will provide high spec hardware for you to work from in the office.[/size]
[*][size=2]For the right candidates we may be able to assist with relocation expenses and logistics. [/size]
[/list]


[b][size=4]To Apply[/size][/b]

In order to apply, please send an email to jobs@nexusmods.com with your CV and why you’d be suitable for this role.
          DevOps Training in Ameerpet (Hyderabad)        
The DevOps training in Hyderabad is aimed at giving you an in-depth introductory understanding of DevOps. As one of the best DevOps training courses in Hyderabad, it dives into the tools of DevOps, which govern operations and provide a t...
          DevOps: developers and operations staff in the same boat        
          Firebrand Training launch Microsoft Software Tester Apprenticeship with Testhouse        
The demand for software testers is increasing but businesses across the UK are failing to get the testing and DevOps skills they need because of a lack of qualified professionals.

In response to this demand, Testhouse, a Microsoft Gold ALM Partner, and Firebrand Training, leaders in accelerated learning, have launched the Microsoft Software Tester Apprenticeship.



The launch of this programme aligns with Microsoft’s National Empowerment activity, which sees over 4,500 young people start a Microsoft-backed apprenticeship across over 2,500 Microsoft Partners and customers this year. This apprenticeship will also be delivered in time for the imminent arrival of the Apprenticeship Levy.

The Microsoft Software Tester apprenticeship is designed to bring new software testers and DevOps specialists into IT, at a time when these professionals are more essential than ever.

Built around the National Apprenticeship Standard for Software Testers, this apprenticeship programme is designed by Firebrand -- a Microsoft Gold Partner -- to align directly with job role demands of employers.

“We can’t wait to get new software testers and DevOps specialists into the industry to fill the digital skills gap and contribute to Microsoft’s target of 30,000 new apprenticeships by 2020” says Stefano Capaldo, Firebrand’s Managing Director.

“Firebrand are already receiving interest from employers for this new apprenticeship. We’re excited to have our first Azure apprentices starting soon.”

Get Microsoft Software Testing skills in your organisation


As apprentices learn how to design and execute tests on business applications and services, students on this programme will develop into software testing and DevOps professionals.

The rapid increase in popularity of Microsoft Visual Studio 2015 has led to a shortage of knowledge of the solution’s software testing tools. To meet this demand, apprentices will receive considerable training on this industry-standard platform.

Jim Cowdery, Head of Business Development at Testhouse, says:

“We are very proud to work with both Firebrand and Microsoft to provide much needed skills in the marketplace and offer businesses affordable testing solutions.”

Built by Testhouse and Firebrand, the programme also includes £10,000s worth of government-funded software testing training, including:


Firebrand’s Microsoft Software Tester programme is also fully customisable, so you can tailor it to suit you as an employer.

Plus, apprenticeships are not only for new staff. Savvy organisations can put existing employees on this Microsoft Software Tester Apprenticeship, boosting their testing skills with government-funded training.

Who is Firebrand Training?


Firebrand Training is the fastest way to learn. Firebrand’s apprentices train at twice the speed, getting industry-recognised certifications at an all-inclusive residential training centre. For more information on Firebrand’s unique accelerated learning take a look at the video below:


Since launching in 2001, Firebrand has trained more than 62,000 students and saved a total of 1 million hours. Firebrand Training is also a Microsoft Gold Training Partner and has provided accelerated apprenticeships since 2012.



Get more information on the new Microsoft Software Tester Apprenticeship, including how to access £10,000s of government-funded accelerated training. Or learn more about the Apprenticeship Levy with this guide.

          DevOps Engineer - Zartis - Madrid        
We are looking for a DevOps Specialist for one of our clients, a disruptive Big Data software startup based in the city center of Madrid. You would be part of a team in charge of supporting the company infrastructure and the associated systems. Requirements: A Bachelor's degree in Computer Science or similar. A passion for Unix/Linux (some experience with Ubuntu/Debian and MacOS required). Experience with cloud. Experience managing continuous delivery environments. Our client...
          Ep. #5, Continuous Security at Chef        

In the fifth installment of The Secure Developer, Guy talks with Chef CTO Adam Jacob about the role security can play in DevOps and continuous integration/deployment. They cover the differences between baked-in and bolted on security and how automation with Habitat can change the way developers approach secure coding.

The post Ep. #5, Continuous Security at Chef appeared first on Heavybit.


          The Benefits of an Application Policy Language in Cisco ACI: Part 4 – Application Policies for DevOps        
[Note: This is the last installment of a four-part series on the OpFlex protocol in Cisco ACI, how it enables an application-centric policy model, and why other SDN protocols do not.  Part 1 | Part 2 | Part 3] As noted earlier in this series, modern DevOps applications such as Puppet, Chef, and CFEngine have […]
          Episode 46 – Is DevOps a good career for me? - The Mentoring Developers Podcast with Arsalan Ahmed: Interviews with mentors and apprentices | Career and Technical Advice | Diversity in Software | Struggles, Anxieties, and Career Choices        
What about DevOps? When a listener asked Arsalan this question over email, he just had to create a podcast episode to answer that. And then another listener asked a question about her dilemma: remove a weak experience from her resume or show it and be ready for pointed questions during interviews. Listen to this episode to...
          Comment on How to Learn to Code For Free: 8 Essential Resources by #programming How to Learn to Code For Free: 7 Essential Resources – DevOps Infographics Resource Center – Programming and IT        
[…] from bargainbabe.com […]
          Comment on How to Learn to Code For Free: 8 Essential Resources by Learn how to code with these free educational websites. Finally, computer programming with reach – for you and your children…. – DevOps Infographics Resource Center – Programming and IT        
[…] from bargainbabe.com […]
          DevOps and Open Source Technologies        
As everybody adopts cloud solutions in some shape or form -- public, private, or hybrid -- and even changes architectures and approaches (for example, microservices), it is essential that we implement a very high degree of automation. With the traditional approach and architecture, we had few components, which makes […]
          High Performance DevOps: Insights and Tips        
Getting started in DevOps? Or have you already adopted a DevOps culture and are looking to apply advanced principles? No matter where you are in your DevOps journey, this eBook is a must read. Get insights and tips from DevOps ...
          Saving Your Infrastructure From DevOps        
Illustration showing DevOps as the intersection of Development (Software Engineering), Technology Operations and Quality Assurance (QA) (Photo credit: Wikipedia) There’s a dirty little secret when it comes to running data center infrastructure: it’s hard, really, really hard. Ok, maybe it’s not so much a secret as a fact of life for [...]
          DevOps specialist Chef Software opens an office in Germany        
The provider of a continuous automation platform plans to establish a presence for German companies as well, with an office in Berlin.
          We've got a new blog entry: DevOps and Microservices Architecture done right - IBM Netcool Agile Service Manager http://ift.tt/2ubUqPO         



          We've got a new blog entry: DevOps: Operations Can't Fail http://ift.tt/2nAfawr         



          400+ Free Resources for DevOps & Sysadmins        
As a Python advocate and educator, I’m always looking for ways to make my job (and yours) easier. This list put together by Morpheus Data offers a ton of great resources for Python users (more than 25 tools specific to Python) and other DevOps and Sysadmins. Enjoy. Table of Contents Source Code Repos Tools for […]
          Datanauts 089: SRE vs. Cloud Native vs. DevOps        
SRE, or Site Reliability Engineer, is an effort to create a job role that puts operations teams on equal footing with developers. Rob Hirschfeld joins the Datanauts to discuss just what it means. The post Datanauts 089: SRE vs. Cloud Native vs. DevOps appeared first on Packet Pushers.
          Datanauts 084: A Fireside Chat: PowerShell & DevOps Global Summit        
Today's episode is all about Chris's adventures at the 5th annual PowerShell and DevOps Global Summit. The post Datanauts 084: A Fireside Chat: PowerShell & DevOps Global Summit appeared first on Packet Pushers.
          Datanauts 054: Containers Won’t Fix Your Broken Culture        
The Datanauts talk with Bridget Kromhout about why containers, DevOps, and microservices aren't magic wands; teams still need to communicate about requirements & expectations to make things work The post Datanauts 054: Containers Won’t Fix Your Broken Culture appeared first on Packet Pushers.
          Datanauts 016: The Realities Of Hybrid Cloud        
The Datanauts talk with John Merline, a network architect at a Fortune 500 company. He shares his experiences moving data and applications to the cloud, working in a hybrid cloud model, implementing DevOps, and more. The post Datanauts 016: The Realities Of Hybrid Cloud appeared first on Packet Pushers.
          QualiTest Partners with QualiSystems to Automate Network Elements and Infrastructure Testing for SDN/NFV        

QualiTest, the world’s second largest independent software testing and QA company, is joining forces with QualiSystems, a leading provider of DevOps orchestration and automation solutions, to offer a joint solution for automating SDN/NFV network elements and infrastructure testing.

(PRWeb February 17, 2015)

Read the full story at http://www.prweb.com/releases/2015/02/prweb12500988.htm


          Devops Engineer - DMC consulting - Bangalore, Karnataka        
Google Kubernetes, Amazon ECS and Azure ACS. Applicants should have strong knowledge of the following areas and tools, and be able to show how these tools can be...
From Indeed - Wed, 12 Jul 2017 08:19:54 GMT - View all Bangalore, Karnataka jobs
          My Experiences with DevOps while Working in Bing Ads        

Up until a few months ago, the term DevOps was simply another buzzword that filled my Twitter feed; it evoked a particular idea but wasn’t really concrete to me. Similar to other buzzwords related to software development, such as NoSQL and Agile, it is hard to pin down the definitive definition of the term, only what it isn’t. If you aren’t familiar with DevOps, a simple definition is that the goal of DevOps is to address a common problem when building online services: the wall between development and operations.

The Big Switch

A couple of months ago, my work group took what many would consider a rather extreme step in eliminating this wall between developers and operations. Specifically, Bing Ads transitioned away from the traditional Microsoft engineering model of having software development engineers (aka developers), software development engineers in test (testers) and service operations (ops), and merged all of these roles into a single engineering role. As it states in the Wikipedia entry for DevOps, the adoption of DevOps was driven by the following trends

  1. Use of agile and other development processes and methodologies
  2. Demand for an increased rate of production releases from application and business unit stakeholders
  3. Wide availability of virtualized and cloud infrastructure from internal and external providers
  4. Increased usage of data center automation and configuration management tools

All of these trends already applied to our organization before we made the big switch to merge the three engineering disciplines into a DevOps role. We’d already embraced the Agile development model, complete with two to four week sprints, daily scrums, burn-down charts, and senior program managers playing the role of the product owner (although we use the term scenario owner). Given our market position as the underdog to Google in search and advertising, our business leaders always want to ship more features, more quickly, while maintaining high product quality. In addition, there’s a ton of peer pressure for all of us at Microsoft to leverage internal tools such as Windows Azure and Autopilot for as many of our cloud services needs as possible instead of rolling our own data centers and hardware configurations.

Technically, our organization was already committed to DevOps practices before we made the transition that eliminated roles. However, what the organization realized is that a bigger change to the culture was needed for us to get the most value out of these practices. The challenge we faced is that an organizational structure of separate roles for developers, testers and operations tends to create walls, where one role feels responsible for a certain part of the development cycle and then tosses the results of their efforts downstream to the next set of folks in the delivery pipeline. Developers tended to think their job was to write code and that quality was the role of testers. Testers felt their role was to create test frameworks and find bugs, while deployment was the role of the operations team. The operations team tended to think their role was keeping the live site running, without the ability to significantly change how the product was built. No matter how open and collaborative the people on your team are, these strictly defined roles create these walls. My favorite analogy for this situation is two families who are on a diet trying to lose weight: one of them has fruit, veggies and healthy snacks in the pantry while the other has pop tarts, potato chips, chocolate and ice cream in theirs. No matter how much willpower the latter family has, they are more likely to “cheat” on their diet than the first family, because they have created an environment that makes it harder for them to do the right thing.

Benefits

The benefits of fully embracing DevOps are fairly self-evident, so I won’t spend time discussing the obvious benefits that have been beaten to death elsewhere. I will talk about the benefits I’ve seen in our specific case of merging the 3 previous engineering roles into a single one. The most significant change is the cultural change towards how we view automation of every step related to deployment and monitoring. It turns out that there is a big difference between approaching a problem from the perspective of taking away people’s jobs (i.e. automating what the operations team does) versus making your team more effective (i.e. reducing the amount of time the engineering team spends on operational tasks that can be automated, thus giving us more time to work on features that move the business forward). This has probably been the biggest surprise, although obvious in hindsight, as well as the biggest benefit.
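
To make that concrete, here is a hypothetical sketch of the kind of deployment chore that gets automated in this culture: rolling a build out to servers in small batches and halting if health checks regress. The server names, batch size and /health endpoint are invented for illustration and are not Bing Ads internals:

    # Hypothetical staged-rollout script: deploy in batches, stop on failure.
    import time
    import urllib.request

    SERVERS = ["web01", "web02", "web03", "web04"]
    BATCH_SIZE = 2

    def is_healthy(server):
        # Probe a conventional health endpoint; any error counts as failure.
        try:
            with urllib.request.urlopen(f"http://{server}/health", timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False

    def deploy_to(server, build):
        # Placeholder for the real copy/restart step (e.g. rsync + service restart).
        print(f"deploying {build} to {server}")

    def staged_rollout(build):
        for i in range(0, len(SERVERS), BATCH_SIZE):
            batch = SERVERS[i:i + BATCH_SIZE]
            for server in batch:
                deploy_to(server, build)
            time.sleep(30)  # let the new build warm up before judging it
            if not all(is_healthy(s) for s in batch):
                raise RuntimeError(f"rollout halted: unhealthy batch {batch}")

    staged_rollout("build-1234")

Once a script like this exists, being on-call stops meaning babysitting manual deployments and starts meaning improving the automation.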

We’ve also begun to see faster time to resolve issues, from build breaks to features failing in production, due to the fact that the on-call person (we call them Directly Responsible Individuals or DRIs) is now a full member of the engineering team who is expected to be capable of debugging and fixing issues encountered as part of being on-call. This is an improvement from prior models where the operations team were the primary folks on-call and would tend to pull in the development team as a last resort outside of business hours.

As a program manager (or product manager if you’re a Silicon Valley company), I find it has made my job easier since I have fewer people to talk to because we’ve consolidated engineering managers. No longer having to talk to a development manager separately from the manager of systems engineers, separately from a test manager, has made communication far more efficient for me.

Challenges

There are a number of risks for any organization taking the steps that we have at Bing Ads. The biggest risk is definitely attrition, especially at a company like Microsoft where these well-defined roles have been a part of the culture for decades and are still part and parcel of how the majority of the company does business. A number of people may feel that this is a bait and switch on their career plans, with the new job definitions not aligning with how they saw their roles evolving over time. Others may not mind that as much but may simply feel that their skills may not be as valuable in the new world, especially as they now need to learn a set of new skills. I’ve had one simple argument when I’ve met people with this mindset: DevOps is here to stay. The industry trends that have had more and more companies, from Facebook and Amazon to Etsy and Netflix, blurring the lines between developers, test engineers and operations staff will not go away. Companies aren’t going to want to start shipping less frequently, nor will they want to bring back manual deployment processes instead of automating as much as possible. The skills you learn in a DevOps culture will make you more broadly valuable wherever you find your next role, whether it is a traditional specialized engineering structure or a DevOps-based organization.

Other places where we’re still figuring things out are best practices around ownership of testing. We currently try to follow a “you build it, you test it, you deploy it” culture as much as possible, although allowing any dev to deploy code has turned out to be a bit more challenging than we expected, since we had to ensure we do not run afoul of the structures we had in place to stay compliant with various regulations. Testing your own code is a practice that many in the industry have come out against as being generally a bad idea. I remember arguments in my college classes from software engineering professors about the blind spots developers have about their own software, which require dedicated teams to do testing. We do have mitigations in place, such as test plan reviews and code reviews, to ensure there are alternate pairs of eyes looking at the problem space, not just the developer who created the functionality. There is also the school of thought that since the person who wrote the code will likely be the person woken up in the middle of the night if it goes haywire at an inopportune moment, there is a sense of self-preservation that will cause more diligence to be applied to the problem than was the case in the previous eras of boxed software, which is when most of the arguments against developers testing their own code were made.

Further Reading

 

Now Playing: Eminem – Rap God


          How Do You Fit a Core Banking System into a Few Containers? Insight from DOES EU 17        

At the DevOps Enterprise Summit EU 2017, held in London, InfoQ sat down with Amine Boudali and Jose Quaresma and discussed key insights from their presentation “How Do You Fit a Core Banking System into a Few Containers?”

By Daniel Bryant
          PASS SQL Saturday #567 Slovenia Schedule        

The schedule for the last SQL Saturday this year in Europe is live. Hurry up with registrations; two months before the event we are already 70% full!

Also don’t forget our great pre-conference seminars:

See you in beautiful Ljubljana!


          PASS SQL Saturday #567 Slovenia 2016        

So we are back again!

The leading event dedicated to Microsoft SQL Server in Slovenia will take place on Saturday, 10th December 2016, at the Faculty of Computer and Information Science of the University of Ljubljana, Večna pot 113, Ljubljana (http://www.fri.uni-lj.si/en/about/how_to_reach_us/).

As always, this is an English-only event. We don’t expect the speakers and the attendees to understand Slovenian. However, this way, our SQL Saturday has become quite well known, especially in the neighboring countries. Therefore, expect not only international speakers, expect international attendees as well. There will be 30 top sessions, two original and interesting pre-conference seminars, a small party after the conference, an organized dinner for the speakers and sponsors… But first of all, expect a lot of good vibrations, mingling with friends, smiling faces, great atmosphere. You might also consider visiting Ljubljana and Slovenia for a couple of additional days. Ljubljana is a very beautiful and lively city, especially in December.

In cooperation with Kompas Xnet d.o.o. we are once again organizing two pre-conference seminars by three distinguished Microsoft SQL Server experts:

The seminars will take place the day before the main event, on Friday, 9th December 2016, at Kompas Xnet d.o.o., Stegne 7, Ljubljana. The attendance fee for each seminar is 149.00 € per person; until 31st October 2016 you can register for each seminar for 119.00 € per person.

Hope we meet at the event!


          DevOps / SiteReliability Engineer - IBM - Austin, TX        
Responsibilities: Support production systems in a 24/7 environment. Support development systems during normal office hours. Take turns in rotating on-call
From IBM - Tue, 08 Aug 2017 14:53:50 GMT - View all Austin, TX jobs
          DevOps Software Engineer - Adeas RRHH - Barcelona        
Our client is an American multinational corporation specializing in the outsourcing of payroll services worldwide, looking for a DevOps Software Engineer for its Barcelona offices to implement automation for continuous integration and continuous deployment processes. Responsibilities: Identify automation opportunities within the current SDLC process. Develop and deploy automation backlog items. Support and improve existing tools for continuous build, automated testing and release...
Microsoft arms Visual Studio 2017 with Database DevOps tools by integrating Redgate Data Tools into its IDE
for SQL and Azure SQL databases

Microsoft has announced that Redgate Data Tools, previously available as an extension, is now integrated into Visual Studio 2017. The tool suite, which comprises ReadyRoll Core, SQL Prompt Core and SQL Search, lets you extend the DevOps processes already in place for the application to SQL and Azure SQL databases.

ReadyRoll Core makes it possible to...
          Comment on PowerShell and DevOps Global Summit Recap by kishor        
Hi Jason/Don, the debugging video posted on YouTube has an issue: the screen capture shows only part of the slide. Can it be corrected? https://www.youtube.com/watch?v=3Z3BAldF_EE
          Comment on PowerShell and DevOps Global Summit Recap by Daniel Krebs        
A few videos have started to show up on YouTube over the last few days - https://www.youtube.com/user/powershellorg/videos
          Holiday reading that will change your life - 360 free ebooks to download        
It is quite common to come across interesting ebooks tied to particular Microsoft products, and plenty of them have been released recently. So it is time to put together an overview of what is currently available -- and there really is a lot of it: 360 books you can download for free and use to dive into worlds of technology you may not yet have explored. Which one will become your favourite?

Category | Title | Format
Azure | Introducing Windows Azure™ for IT Professionals | PDF MOBI EPUB
Azure | Microsoft Azure Essentials Azure Automation | PDF MOBI EPUB
Azure | Microsoft Azure Essentials Azure Machine Learning | PDF MOBI EPUB
Azure | Microsoft Azure Essentials Fundamentals of Azure | PDF MOBI EPUB
Azure | Microsoft Azure Essentials Fundamentals of Azure, Second Edition | PDF
Azure | Microsoft Azure Essentials Fundamentals of Azure, Second Edition (Mobile) | PDF
Azure | Microsoft Azure Essentials Migrating SQL Server Databases to Azure (Mobile) | PDF
Azure | Microsoft Azure Essentials Migrating SQL Server Databases to Azure (8.5x11) | PDF
Azure | Microsoft Azure ExpressRoute Guide | PDF
Azure | Overview of Azure Active Directory | DOC
Azure | Rapid Deployment Guide For Azure Rights Management | PDF
Azure | Rethinking Enterprise Storage: A Hybrid Cloud Model | PDF MOBI EPUB
BizTalk | BizTalk Server 2016 Licensing Datasheet | PDF
BizTalk | BizTalk Server 2016 Management Pack Guide | DOC
Cloud | Enterprise Cloud Strategy | PDF MOBI EPUB
Cloud | Enterprise Cloud Strategy (Mobile) | PDF
Developer | .NET Microservices: Architecture for Containerized .NET Applications | PDF
Developer | .NET Technology Guidance for Business Applications | PDF
Developer | Building Cloud Apps with Microsoft Azure™: Best practices for DevOps, data storage, high availability, and more | PDF MOBI EPUB
Developer | Containerized Docker Application Lifecycle with Microsoft Platform and Tools | PDF
Developer | Creating Mobile Apps with Xamarin.Forms, Preview Edition 2 | PDF MOBI EPUB
          IT 公论 Episode 131: “I still remember what the early Internet looked like.”        
  • Ubuntu Mono
  • Anaconda
  • Anaconda (the 1997 film, 《狂蟒之灾》)
  • Dr. Dobb’s
  • Michael Abrash’s Graphics Programming Black Book (print edition; electronic version on GitHub)
  • AnandTech
  • DevOps
  • Cisco CCIP
  • Digital Ocean
  • Linode
  • Ender’s Game
  • Fabric
  • Capistrano
  • Chef
  • Puppet
  • Pallet
  • Salt
  • Rex
  • 《集装箱改变世界》 (The Shipping Container Changed the World)
  • The English Kindle edition of 《集装箱改变世界》
  • cgroups
  • LXC
  • Docker
  • Vagrant
  • CoreOS
  • Rocket
  • Nix / NixOS
  • About the hosts

    • Rio: host of 《IT 公论》, co-founder of IPN, and programmer at Apple4us.
    • Wu Tao (吴涛): programmer at Type is Beautiful and host of 《内核恐慌》 (Kernel Panic).

          #7: Package Management        

    Extending last episode’s topic of lifecycle management, in this episode Wu Tao and Rio discuss package management. Topics include DLL hell on Windows, the packaging formats of the various Linux distributions, what makes Homebrew stand out, and the current chaos of Python packaging tools; other keywords include npm, Ruby Gems, Rake, Zope and Maven, and of course Rio’s beloved Go.

    Related links

    About the hosts

    • Rio: host of 《IT 公论》, co-founder of IPN, and programmer at Apple4us.
    • Wu Tao (吴涛): programmer at Type is Beautiful and host of 《内核恐慌》 (Kernel Panic).

          #3: Static Site Generators        

    Dynamic websites are too heavy. Lightweight static site generators have become quite the trend, at least in developer circles. What are they, what problem do they solve, why are they popular, and how well do they work?

    The Python code that Wu Tao wanted Rio to rewrite in Go:

    import datetime
    print(datetime.datetime.now().strftime('北京时间 %H 点 %M 分'))
    

    Rio’s reply:

    package main

    import (
        "fmt"
        "time"
    )
    
    func main() {
        fmt.Println(time.Now().Format("北京时间 15 点 04 分"))
    }
    

    Simply put, Go uses the literal values “2006”, “Jan”, “2”, “15” and “04” in place of strftime’s “%Y”, “%b”, “%d”, “%H” and “%M” as placeholders when formatting dates. Further reading: Parsing and formatting date/time in Go, http://golang.org/pkg/time/#pkg-constants.

    Related links


              Security Link Roundup - January 4, 2016        
    January 4, 2016 Oracle Consulting Security Link Roundup

    I'm Mark Wilcox, the Chief Technology Officer for Oracle Consulting - Security in North America, and this is my weekly roundup of security stories that interested me.

    Database of 191 million U.S. voters exposed on Internet: researcher

    So 2016 starts off with another headline of a database breach, in this case 191 million records of US voters. This is ridiculous. And it could have been prevented. It is also a sobering reminder to contact your Oracle representative and ask them for a database security assessment by Oracle Consulting.

    Secure Protocol for Mining in Horizontally Scattered Database Using Association Rule

    Data mining is a hot topic - it's essential to marketing, sales and innovation, because companies have lots of information on hand, but until you start mining it, you can't really do anything with it. And often that data is scattered across multiple databases. In this academic paper from the "International Journal on Recent and Innovation Trends in Computing and Communication", the authors describe a new protocol that they claim respects privacy better than other options. On the other hand, Oracle already has lots of security products (for example database firewall and identity governance) that you can implement today to help make sure only the proper people have access to the data. So make sure to call your Oracle representative and ask for a presentation by Oracle Consulting on how Oracle security can help protect your data mining databases.

    A Guide to Public Cloud Security Tools

    Cloud computing is happening, and most people are still new to the space. This is a good general article on the differences in security between public and private clouds, plus it has a list of tools to help you with cloud security. If you want to use the cloud to host Oracle software, please call your Oracle representative and ask them to arrange a meeting with Oracle Consulting Security to talk about how Oracle can help you do that securely.

    Survey: Cloud Security Still a Concern Heading into 2016

    Security continues to be the biggest concern when it comes to cloud. While there are challenges, I find securing cloud computing a lot simpler than on-premise, assuming your cloud hosting is with one of the major vendors such as Oracle or Amazon. Again, if you want to use the cloud to host Oracle software, ask your Oracle representative to arrange a meeting with Oracle Consulting Security.

    40% of Businesses Do Not Use Security Encryption for Storing Data in the Cloud

    "Holy crap, Marie." I watch a lot of reruns of "Everybody Loves Raymond" and I feel like this story is another rerun - except unlike Raymond, this is a rerun of a bad TV show. Encrypting a database is one of the best ways to secure your data from hackers. So before you start storing data in the cloud, particularly in an Oracle database, make sure you have Oracle Consulting do a security assessment for you. That way you know what potential problems you have before you start storing sensitive production data.

    Image credit: Unsplash.
              Enterprises look for partners to make the most of Microsoft Azure Stack apps        
    The next BriefingsDirect Voice of the Customer hybrid cloud advancements discussion explores the application development and platform-as-a-service (PaaS) benefits from Microsoft Azure Stack.

    We’ll now learn how ecosystems of solutions partners are teaming to provide specific vertical industries with applications and services that target private cloud deployments.

    Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

    Here to help us explore the latest in successful cloud-based applications development and deployment is our panel, Martin van den Berg, Vice President and Cloud Evangelist at Sogeti USA, based in Cleveland, and Ken Won, Director of Cloud Solutions Marketing at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

    Here are some excerpts:

    Gardner: Martin, what are some of the trends that are driving the adoption of hybrid cloud applications specifically around the Azure Stack platform?

    Van den Berg: What our clients are dealing with on a daily basis is an ever-expanding data center; they see ever-expanding private clouds in their data centers. They are trying to get into the hybrid cloud space to reap all the benefits from both an agility and compute perspective.


    They are trying to get out of the data center space, to see how the ever-growing demand can leverage the cloud. What we see is that Azure Stack will bridge the gap between the cloud that they have on-premises, and the public cloud that they want to leverage -- and basically integrate the two in a true hybrid cloud scenario.

    Gardner: What sorts of applications are your clients calling for in these clouds? Are these cloud-native apps, greenfield apps? What are they hoping to do first and foremost when they have that hybrid cloud capability?

    Van den Berg: We see a couple of different streams there. One is cloud-native development. More and more of our clients are going into cloud-native development. We recently brought out a white paper wherein we see that 30 percent of applications being built today are cloud-native already. We expect that trend to grow to more than 60 percent over the next three years for new applications.


    The issue that some of our clients have has to do with some of the data being consumed in these applications. Either due to compliance issues or because their information security divisions are not too happy, they don’t want to put this data in the public cloud. Azure Stack bridges that gap as well.
     
    Microsoft Azure Stack can bridge the gap between the on-premises data center and what they do in the cloud. They can leverage the whole Azure public cloud PaaS while still having their data on-premises in their own data center. That's a unique capability.

    On the other hand, what we also see is that some of our clients are looking at Azure Stack as a bridge to the infrastructure-as-a-service (IaaS) space. Even in that space, where clients are not willing to expand their own data center footprint, they can use Azure Stack as a means to seamlessly go to the Azure public IaaS cloud.

    Gardner: Ken, does this jibe with what you are seeing at HPE, that people are starting to creatively leverage hybrid models? For example, are they putting apps in one type of cloud and data in another, and then also using their data center and expanding capacity via public cloud means?


    Won: We see a lot of it. The customers are interested in using both private clouds and public clouds. In fact, many of the customers we talk to use multiple private clouds and multiple public clouds. They want to figure out how they can use these together -- rather than as separate, siloed environments. The great thing about Azure Stack is the compatibility between what’s available through Microsoft Azure public cloud and what can be run in their own data centers.

    The customer concerns are data privacy, data sovereignty, and security. In some cases, there are concerns about application performance. In all these cases, it's a great situation to be able to run part or all of the application on-premises, or on an Azure Stack environment, and have some sort of direct connectivity to a public cloud like Microsoft Azure.

    Because you can get full API compatibility, the applications that are developed in the Azure public cloud can be deployed in a private cloud -- with no change to the application at all.
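
    As a rough illustration of that compatibility claim, using the era's Azure SDK for Python: the same management code can target public Azure or an Azure Stack region just by switching the endpoint. Every identifier below is a placeholder, and real Azure Stack sign-in details vary by deployment:

        # Sketch: identical client code, different management endpoints.
        from azure.common.credentials import ServicePrincipalCredentials
        from azure.mgmt.resource import ResourceManagementClient

        credentials = ServicePrincipalCredentials(
            client_id="<app-id>", secret="<app-secret>", tenant="<tenant-id>"
        )

        # Public Azure uses the default endpoint; Azure Stack exposes its own
        # ARM endpoint (the development-kit default is shown here).
        public = ResourceManagementClient(credentials, "<subscription-id>")
        stack = ResourceManagementClient(
            credentials, "<subscription-id>",
            base_url="https://management.local.azurestack.external",
        )

        for rg in stack.resource_groups.list():
            print(rg.name)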

    Gardner: Martin, are there specific vertical industries gearing up for this more than others? What are the low-lying fruit in terms of types of apps?

    Hybrid healthcare files

    Van den Berg: I would say that hybrid cloud is of interest across the board, but I can name a couple of examples of industries where we truly see a business case for Azure Stack.

    One of them is a client of ours in the healthcare industry. They wanted to standardize on the Microsoft Azure platform. One of the things that they were trying to do is deal with very large files, such as magnetic resonance imaging (MRI) files. What they found is that in their environment such large files just do not work from a latency and bandwidth perspective in a cloud.

    With Microsoft Azure Stack, they can keep these larger files on-premises, very close to where they do their job, and they can still leverage the entire platform and still do analytics from a cloud perspective, because that doesn’t require the bandwidth to interact with things right away. So this is a perfect example where Azure Stack bridges the gap between on-premises and cloud requirements while leveraging the entire platform.

    Gardner: What are some of the challenges that these organizations are having as they move to this model? I assume that it's a little easier said than done. What's holding people back when it comes to taking full advantage of hybrid models such as Azure Stack?

    Van den Berg: The level of cloud adoption is not really yet where it should be. A lot of our clients have cloud strategies that they are implementing, but they don't have a lot of expertise yet on using the power that the platform brings.

    Some of the basic challenges that we need to solve with clients are that they are still dealing with just going to Microsoft Azure cloud and the public cloud services. Azure Stack simplifies that because they now have the cloud on-premises. With that, it’s going to be easier for them to spin-up workload environments and try this all in a secure environment within their own walls, their own data centers.

    Should a specific workload go in a private cloud, or should another workload go in a public cloud?
    Won: We see a similar thing with our client base as customers look to adopt hybrid IT environments, a mix of private and public clouds. Some of the challenges they have include how to determine which workload should go where. Should a specific workload go in a private cloud, or should another workload go in a public cloud?

    We also see some challenges around processes, organizational process and business process. How do you facilitate and manage an environment that has both private and public clouds? How do you put the business processes in place to ensure that they are being used in the proper way? With Azure Stack -- because of that full compatibility with Azure -- it simplifies the ability to move applications across different environments.

    Gardner: Now that we know there are challenges, and that we are not seeing the expected adoption rate, how are organizations like Sogeti working in collaboration with HPE to give a boost to hybrid cloud adoption?

    Strategic, secure, scalable cloud migration 

    Van den Berg: As the Cloud Evangelist with Sogeti, for the past couple of years I have been telling my clients that they don’t need a data center. The truth is, they probably still need some form of on-premises infrastructure. But the future is in the clouds, from a scalability and agility perspective -- and with the hyperscale at which Microsoft is building out its Azure cloud capabilities, there are no enterprise clients that can keep up with that.

    We try to help our clients define strategy, help them with governance -- how do they approach cloud and what workloads can they put where based on their internal regulations and compliance requirements, and then do migration projects.

    We have a service offering called the Sogeti Cloud Assessment, where we go in and evaluate their application portfolio on their cloud readiness. At the end of this engagement, we start moving things right away. We have been really successful with many of our clients in starting to move workloads to the cloud.

    Having Azure Stack will make that even easier. Now when a cloud assessment turns up some issues on moving the Microsoft Azure public cloud -- because of compliance or privacy issues or just comfort (sometimes the information security departments just don't feel comfortable moving certain types of data to a public cloud setting) -- we can move those applications to the cloud, leverage the full power and scalability of the cloud while keeping it within the walls of our clients’ data centers. That’s how we are trying to accelerate the cloud adoption, and we truly feel that Azure Stack bridges that gap.


    Gardner: Ken, same question, how are you and Sogeti working together to help foster more hybrid cloud adoption?

    Won: The cloud market has been maturing and growing. In the past, it’s been somewhat complicated to implement private clouds. Sometimes these private clouds have been incompatible with each other, and with the public clouds.

    In the Azure Stack area, now we have almost an appliance-like experience where we have systems that we build in our factories that we pre-configure, pretest, and get them into the customers’ environment so that they can quickly get their private cloud up and running. We can help them with the implementation, set it up so that Sogeti can help with the cloud-native applications work.
     
    With Sogeti and HPE working together, we make it much simpler for companies to adopt the hybrid cloud models and to quickly see the benefit of moving into a hybrid environment.

    Van den Berg: In talking to many of our clients, when we see the adoption of private cloud in their organizations -- if they are really honest -- it doesn't go very far past just virtualization. They truly haven't leveraged what cloud could bring, not even in a private cloud setting.

    So talking about hybrid cloud, it is very hard for them to leverage the power of hybrid clouds when their own private cloud is just virtualization. Azure Stack can help them to have a true private cloud within the walls of their own data centers and so then also leverage everything that Microsoft Azure public cloud has to offer.

    Won: I agree. When they talk about a private cloud, they are really talking about virtual  machines, or virtualization. But because the Microsoft Azure Stack solution provides built-in services that are fully compatible with what's available through Microsoft Azure public cloud, it truly provides the full cloud experience. These are the types of services that are beyond just virtualization running within the customers’ data center.

    Keep IT simple

    I think Azure Stack adoption will be a huge boost to organizations looking to implement private clouds in their data centers.

    Gardner: Of course your typical end-user worker is interested primarily in their apps; they don’t really care where they are running. But when it comes to getting new application development, rapid application development (RAD), these are some of the pressing issues that most businesses tell us concern them.

    So how does RAD, along with some DevOps benefits, play into this, Martin? How are the development people going to help usher in cloud and hybrid cloud models because it helps them satisfy the needs of the end-users in terms of rapid application updates and development?

    Van den Berg: This is also where we are talking about the difference between virtualization, private cloud, hybrid clouds, and definitely cloud services. The application development staff still work in the traditional model; they still run into issues provisioning their development environments and sometimes test environments.

    A lot of cloud-native application development projects are much easier because you can spin-up environments on the go. What Azure Stack is going to help with is having that environment within the client’s data center; it’s going to help the developers to spin up their own resources.

    There is going to be on-demand orchestration and provisioning, which is truly beneficial to application development -- and it's really beneficial to the whole DevOps suite.

    We need to integrate business development and IT operations to deliver value to our clients. If we are waiting multiple weeks for development and test environments to spin up -- that’s an issue our clients are still dealing with today. That’s where Azure Stack is going to bridge the gap, too.

    Won: There are a couple of things that we see happening that will make developers much more productive and able to bring new applications or updates quicker than ever before. One is the ability to get access to these services very, very quickly. Instead of going to the IT department and asking them to spin up services, they will be able to access these services on their own.

    The other big thing that Azure Stack offers is compatibility between private and public cloud environments. For the first time, the developer doesn't have to worry about what the underlying environment is going to be. They don’t have to worry about deciding, is this application going to run in a private cloud or a public cloud, and based on where it’s going, do they have to use a certain set of tools for that particular environment.

    Now that we have compatibility between the private cloud and the public cloud, the developer can just focus on writing code, focus on the functionality of the application they are developing, knowing that that application now can easily be deployed into a private cloud or a public cloud depending on the business situation, the security requirements, and compliance requirements.

    So it’s really about helping the developers become more effective and helping them focus more on code development and applications rather than having them worry about the infrastructure, or waiting for infrastructure to come from the IT department.

    HPE Partnership Case Studies
    of Flex Capacity Financing

    Gardner: Martin, for those organizations interested in this and want to get on a fast track, how does an organization like Sogeti working in collaboration with HPE help them accelerate adoption?

    Van den Berg: This is where we partner heavily with HPE to bring the best solutions to our clients. We have all kinds of proofs of concept, we have accelerators, and one of the things that we talked about already is making developers get up to speed faster. We can truly leverage those accelerators and help our clients adopt cloud, and adopt all the services that are available on the hybrid platform.

    We have all heard the stories about standardizing on micro-services, on a server fabric, or serverless computing, but developers have not had access to this up until now and IT departments have been slow to push this to the developers.

    The accelerators that we have, the approaches that we have, and the proofs of concept that we can do with our clients -- together with HPE -- are going to accelerate cloud adoption with our clientele. 

    Gardner: Any specific examples, some specific vertical industry use-cases where this really demonstrates the power of the true hybrid model?

    When the ship comes in

    Won: I can share a couple of examples of the types of companies that we are working with in the hybrid area, and where we see typical customers using Azure Stack.

    People want to implement disconnected applications or edge applications. These are situations where you may have a data center or an environment running an application that you may either want to run in a disconnected fashion or run to do some local processing, and then move that data to the central data center.

    One example of this is the cruise ship industry. All large cruise ships essentially have data centers running the ship, supporting the thousands of customers on board. What the cruise line vendors want to do is put an application on their many ships and run the same application on all of them. They want to be able to disconnect from the central data center while the ship is out at sea and do a lot of processing and analytics in the data center on the ship. Then, when the ship comes in and connects to port and to the central data center, it sends only the results of the analysis back to the central data center.

    This is a great example of having an application that can be developed once and deployed in many different environments; you can do that with Azure Stack. It’s ideal for running that same application in multiple different environments, in either disconnected or connected situations.
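
    An invented, minimal sketch of that disconnected-edge pattern: reduce the raw onboard telemetry to a small summary locally, and queue only the summary to send home once the ship reconnects. All names here are hypothetical:

        # Sketch of edge analytics: summarize locally, sync only the results.
        import json
        import statistics

        def analyze_locally(raw_events):
            # Reduce high-volume onboard telemetry to a compact summary.
            readings = [e["value"] for e in raw_events]
            return {
                "count": len(readings),
                "mean": statistics.mean(readings),
                "max": max(readings),
            }

        def queue_for_sync(summary, outbox="outbox.json"):
            # A real implementation would POST this to the central data center
            # when connectivity returns; here we just append to a local outbox.
            with open(outbox, "a") as f:
                f.write(json.dumps(summary) + "\n")

        events = [{"value": v} for v in (3.2, 4.8, 4.1)]
        queue_for_sync(analyze_locally(events))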

    Van den Berg: In the financial services industry, we know they are heavily regulated. We need to make sure that they are always in compliance.

    So one of the things that we did in the financial services industry drew on one of our accelerators: we have a tool called Sogeti OneShare. It’s a portal solution on top of Microsoft Azure that helps you with orchestration and with the whole DevOps concept. We were able to have the edge node be Azure Stack -- building applications, having some of the data reside within the data center on the Azure Stack appliance, but still leveraging the power of the clouds and all the analytics performance that was available there.

    We just did a project in this space and we were able to deliver functionality to the business from the start of the project in just eight weeks. They had never seen that before -- a project that lasts just eight weeks and truly delivers business value. That’s the direction that we should be taking. That’s what DevOps is supposed to deliver -- faster value to the business, leveraging the power of clouds.

    Gardner: Perhaps we could now help organizations understand how to prepare from a people, process, and technology perspective to be able to best leverage hybrid cloud models like Microsoft Azure Stack.

    Martin, what do you suggest organizations do now in order to be in the best position to make this successful when they adopt?

    Be prepared

    Van den Berg: Make sure that the cloud strategy and governance are in place. That's where this should always start.

    Then, start training developers, and make sure that the IT department is the broker of cloud services. Traditionally, the IT department brokers everything happening on-premises within the data center. In the cloud space, this doesn’t always happen; because it is so easy to spin things up, sometimes the line of business deploys on its own.

    We try to enable IT departments and operators within our clients to be the broker of cloud services and to help with the adoption of Microsoft Azure cloud and Azure Stack. That will help bridge the gap between the clouds and the on-premises data centers.

    Gardner: Ken, how should organizations get ready to be in the best position to take advantage of this successfully?

    Mapping the way

    Won: As IT organizations look at this transformation to hybrid IT, one of the most important things is to have a strong connection to the line of business and to the business goals, and to be able to map those goals to strategic IT priorities.

    Once you have done this mapping, the IT department can look at these goals and determine which projects should be implemented and how they should be implemented. In some cases, they should be implemented in private clouds, in some cases public clouds, and in some cases across both private and public cloud.

    The task then changes to understanding the workloads, the characterization of the workloads, and looking at things such as performance, security, compliance, risk, and determining the best place for that workload.

    Then, it’s finding the right platform to enable developers to be as successful and as impactful as possible, because we know ultimately the big game changer here is enabling the developers to be much more productive, to bring applications out much faster than we have ever seen in the past.

    Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.



              Hybrid cloud ecosystem readies for impact from arrival of Microsoft Azure Stack        
    The next BriefingsDirect cloud deployment strategies interview explores how hybrid cloud ecosystem players such as PwC and Hewlett Packard Enterprise (HPE) are gearing up to support the Microsoft Azure Stack private-public cloud continuum.

    We’ll now learn what enterprises can do to make the most of hybrid cloud models and be ready specifically for Microsoft’s solutions for balancing the boundaries between public and private cloud deployments.

    Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

    Here to explore the latest approaches for successful hybrid IT, we’re joined by Rohit “Ro” Antao, a Partner at PwC, and Ken Won, Director of Cloud Solutions Marketing at HPE. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

    Here are some excerpts:

    Gardner: Ro, what are the trends driving adoption of hybrid cloud models, specifically Microsoft Azure Stack? Why are people interested in doing this?

    Antao: What we have observed in the last 18 months is that a lot of our clients are now aggressively pushing toward the public cloud. In that journey there are a couple of things that are becoming really loud and clear to them.

    Journey to the cloud

    Number one is that there will always be some sort of a private data center footprint. There are certain workloads that are not appropriate for the public cloud; there are certain workloads that perform better in the private data center. And so the first acknowledgment is that there is going to be that private, as well as public, side of how they deliver IT services.

    Now, that being said, they have to begin building the capabilities and the mechanisms to be able to manage these different environments seamlessly. As they go down this path, that's where we are seeing a lot of traction and focus.

    The other trend in conjunction with that is in the public cloud space where we see a lot of traction around Azure. They have come on strong. They have been aggressively going after the public cloud market. Being able to have that seamless environment between private and public with Azure Stack is what’s driving a lot of the demand.

    Won: We at HPE are seeing that very similarly, as well. We call that “hybrid IT,” and we talk about how customers need to find the right mix of private and public -- and managed services -- to fit their businesses. They may put some services in a public cloud, some services in a private cloud, and some in a managed cloud. Depending on their company strategy, they need to figure out which workloads go where.

    We have conversations with many of our customers about how to determine the right placement for these different workloads -- taking into account things like security, performance, compliance, and cost -- and helping them evaluate this hybrid IT environment that they now need to manage.

    Gardner: Ro, a lot of what people have used public cloud for is greenfield apps -- beginning in the cloud, developing in the cloud, deploying in the cloud -- but there's also an interest in many enterprises about legacy applications and datasets. Is Azure Stack and hybrid cloud an opportunity for them to rethink where their older apps and data should reside?

    Antao: Absolutely. When you look at the broader market, a lot of these businesses are competing today in very dynamic markets. When companies today think about strategy, it's no longer the 5- and 10-year strategy. They are thinking about how to be relevant in the market this year, today, this quarter. That requires a lot of flexibility in their business model; that requires a lot of variability in their cost structure.

    When you look at it from that viewpoint, a lot of our clients look at the public cloud as more than, “Is the app suitable for the public cloud?” They are also seeking certain cost advantages from the variability in that cost structure. That’s where we see them looking at the public cloud beyond just the applications that are suitable for it.

    Public and/or private power

    Won: We help a lot of companies think about where the best place is for their traditional apps. Often they don’t want to restructure them, they don’t want to rewrite them, because they are already an investment; they don’t want to spend a lot of time refactoring them.

    If you look at these traditional applications, a lot of times when they are dealing with data – especially if they are dealing with sensitive data -- those are better placed in a private cloud.

    Antao: One of the great things about Microsoft Azure Stack is it gives the data center that public cloud experience -- where developers have the similar experience as they would in a public cloud. The only difference is that you are now controlling the costs as well. So that's another big advantage we see.

    Won: Yeah, absolutely, it's giving the developers the experience of a public cloud, but from the IT standpoint also providing the compliance, the control, and the security of a private cloud. Allowing applications to be deployed in either a public or private cloud -- depending on their requirements -- is incredibly powerful. There's no other environment out there that provides API-compatibility between private and public cloud deployments like Azure Stack does.

    Gardner: Clearly Microsoft recognizes that skill sets, platform affinity, and processes are all really important. If they can provide a private cloud and public cloud experience that’s common for the IT operators used to Microsoft platforms and frameworks -- that's a boon. It's also important for enterprises to be able to continue with the skills they have.

    Ro, is such a commonality of skills and processes not top of mind for many organizations? 

    Antao: Absolutely! There is always the risk, when you have different environments, of that “swivel chair” approach. You have one set of skills and processes for your private data center, and another set of skills and processes to manage your public cloud footprint.

    One of the big problems and challenges that this solves is being able to drive more of that commonality across consistent sets of processes. You can have a similar talent pool, and you have similar kinds of training and awareness that you are trying to drive within the organization -- because you now can have similar stacks on both ends.

    Won: That's a great point. We know that the biggest challenge to adopting new concepts is not the technology; it's really the people and process issues. So if you can address that, which is what Azure Stack does, it makes it so much easier for enterprises to bring on new capabilities, because they are leveraging the experience that they already have using Azure public cloud.

    Gardner: Many IT organizations are familiar with Microsoft Azure Stack. It's been in technical preview for quite some time. As it hits the market in September 2017, in seeking that total-solution, people-and-process approach, what is PwC bringing to the table to help organizations get the best value and advantage out of Azure Stack?

    Hybrid: a tectonic IT shift

    Antao: Ken made the point earlier in this discussion about hybrid IT. When you look at IT pivoting to more of the hybrid delivery mode, it's a tectonic shift in IT's operating model, in their architecture, their culture, in their roles and responsibilities – in the fundamental value proposition of IT to the enterprise.

    When we partner with HPE in helping organizations drive through this transformation, we work with HPE in rethinking the operating model, in understanding the new kinds of roles and skills, and in applying these changes in the context of the business drivers leading them. That's one of the typical ways that we work with HPE in this space.

    Won: It's a great complement. HPE understands the technology and the infrastructure, and that combines with the business-process knowledge and the higher-level strategic thinking that PwC has. It's a great partnership.

    Gardner: Attaining hybrid IT efficiency and doing it with security and control is not something you buy off the shelf. It's not a license. It seems to me that an ecosystem is essential. But how do IT organizations manage that ecosystem? Are there ways that you all are working together, HPE in this case with PwC, and with Microsoft to make that consumption of an ecosystem solution much more attainable?

    Won: One of the things that we are doing is working with Microsoft on their partnerships, so that we can look at all these companies that have offerings running on the Azure public cloud and ensure that those are all available and supported in Azure Stack, as well as running in the data center.

    We are spending a lot of time with Microsoft on their ecosystem to make sure those services, those companies, and those products are available on Azure Stack -- as well as fully supported on Azure Stack running on HPE gear.

    Gardner: They might not be concerned about the hardware, but they are concerned about the total value -- and the total solution. If the hardware players aren't collaborating well with the service providers and with the cloud providers -- then that's not going to work.

    Quick collaboration is key

    Won: Exactly! I think of it like a washing machine. No one wants to own a washing machine, but everyone wants clean clothes. It's the necessary evil -- super important, but you'd just as soon not have to do it.

    Gardner: I just don’t know what to take to the dry cleaner or not, right?

    Won: Yeah, there you go!
    Antao: From a consulting standpoint, clients no longer have the appetite for five- to six-year transformations. Their businesses are changing at a much faster pace. One of the ways we are working at the ecosystem level -- much as with the deep and longstanding relationship we have had with HPE -- is that we have also been working with Microsoft in the same context.

    And in a three-way fashion, we have focused on defining accelerators for deploying these solutions -- codifying a lot of our experience, the lessons learned, and a deep understanding of both the public and the private stacks to accelerate value for our customers, because that’s what they expect today.

    Won: One of the things, Ro, that you brought up, and I think is very relevant here, is these three-way relationships. Customers don't want to have to deal with all of these different vendors, these different pieces of stack or different aspects of the value chain. They instead expect us as vendors to be working together. So HPE, PwC, Microsoft are all working together to make it easier for the customers to ultimately deliver the services they need to drive their business.

    Low risk, all reward

    Gardner: So speed-to-value is super important; common solution cooperation and collaboration among the partners, super important. But another part of this is doing it at low risk, because no one wants to be in a transition across the public, private, or full hybrid spectrum -- and then suffer performance issues, lost data, and unhappy end customers.

    PwC has been focused on governance, risk management and compliance (GRC) in trying to bring about better end-to-end hybrid IT control. What is it that you bring to this particular problem that is unique? It seems that each enterprise is doing this anew, but you have done it for a lot of others and experience can be very powerful that way.

    Antao: Absolutely! The move to hybrid IT is a fundamental shift in governance models, in how you address certain risks, and in the emergence of new risks and new security challenges. A lot of what we have been doing in this space has been helping IT organizations accelerate that shift -- that paradigm shift -- that they have to make.

    In that context, we have been working very closely with HPE to understand what the requirements of that new world are going to look like. We can build and bring to the table solutions that support those needs.

    Won: It’s absolutely critical -- this experience that PwC has is huge. We always come up with new technologies; every few years you have something new. But it’s that experience that PwC has to bring to the table that's incredibly helpful to our customer base.

    There’s this whole journey getting to that hybrid IT state and having the governing mechanisms around it. 
    Antao: So often when we think of governance, it’s more in terms of the steady state and the runtime. But there's this whole journey between where we are today and that hybrid IT state -- and having the governing mechanisms around it -- so that they can do it in a way that doesn't expose their business to too much risk. There is always risk involved in these large-scale transformations, but how do you manage and govern the process of getting to that hybrid IT state? That’s where we also spend a lot of time as we help clients through this transformation.

    Gardner: For IT shops that are heavily Microsoft-focused, is there a way for them to master Azure Stack -- the people, process, and technology -- that will then be an accelerant for them to go to a broader hybrid IT capability? I’m thinking of multi-cloud, and even being able to develop with DevOps and SecOps across a multi-cloud continuum as a core competency.

    Is Azure Stack for many companies a stepping-stone to a wider hybrid capability, Ro?

    Managed multi-cloud continuum

    Antao: Yes. And I think in many cases that’s inevitable. When you look at most organizations today, generally speaking, they have at least two public cloud providers that they use. They consume several software-as-a-service (SaaS) applications. They have multiple data center locations. The role of IT now is to become the broker and integrator of multi-cloud environments, on-premises and in the public cloud. That's where we see a lot of them evolve their management practices, their processes, and their talent -- to be able to abstract these different pools and focus on the business.
    Won: We see that as well at HPE as this whole multi-cloud strategy is being implemented. More and more, the challenge that organizations are having is that they have these multiple clouds, each of which is managed by a different team or via different technologies with different processes.

    So as a way to bring these together, there is huge value to the customer in bringing together, for example, Azure Stack and Azure [public cloud]. They may have multiple Azure Stack environments, perhaps in different data centers, in different countries, in different locales. We need to help them align their processes to run much more efficiently and effectively. We need to engage with them not only from an IT standpoint, but also from the developer standpoint. They can use those common services to develop an application and deploy it in multiple places in the same way.

    Antao: What's making this whole environment even more complex these days is that a couple of years ago, when we talked about multi-cloud, it was really about the capability to deploy in one public cloud versus another.

    Within a given business workflow, how do you leverage different clouds, given their unique strengths and weaknesses?
    A few years later, it evolved into being able to port workloads seamlessly from one cloud to another. Today, the multi-cloud strategy that a lot of our clients are exploring is this: Within a given business workflow, depending on the unique characteristics of different parts of that business process, how do you leverage different clouds given their unique strengths and weaknesses?

    There might be portions of a business process that, to your point earlier, Ken, are highly confidential -- you are dealing with a lot of compliance requirements, so you may want to consume from an internal private cloud. For other parts, you are looking for immense scale, to deal with the peaks when that particular business process gets hit hard -- and that is where the public cloud has a strong history. In a third case, it might be enterprise-grade workloads.

    So that’s where we are seeing multi-cloud evolve: one business process could have multiple sources, so how does an IT organization manage that in a seamless way?
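    As a rough illustration of the placement logic Antao is describing -- our sketch, not a PwC or HPE tool -- the decision can be thought of as a simple rule table over workload traits; every name below is hypothetical:

        from dataclasses import dataclass

        # Toy placement rules for the steps of one business workflow, echoing the
        # traits mentioned above: regulated data stays private, bursty steps go
        # public, and commodity steps are brokered on cost.

        @dataclass
        class WorkloadStep:
            name: str
            regulated_data: bool
            needs_burst_scale: bool

        def place(step: WorkloadStep) -> str:
            """Pick a target environment for one step of the workflow."""
            if step.regulated_data:
                return "private cloud (e.g., on-premises Azure Stack)"
            if step.needs_burst_scale:
                return "public cloud"
            return "either -- broker on cost"

        workflow = [
            WorkloadStep("ingest customer records", True, False),
            WorkloadStep("seasonal demand analytics", False, True),
            WorkloadStep("report rendering", False, False),
        ]
        for step in workflow:
            print(f"{step.name} -> {place(step)}")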

    Gardner: It certainly seems inevitable that the choice of such a cloud continuum configuration model will vary and change. It could be one definition in one country or region, another definition in another country and region. It could even be contextual, such as by the type of end user who's banging on the app. As the Internet of Things (IoT) kicks in, we might be thinking about not just individuals, but machine-to-machine (M2M), app-to-app types of interactions.

    So quite a bit of complexity, but dealt with in such a way that the payoff could be monumental. If you do hybrid cloud and hybrid IT well, what could that mean for your business in three to five years, Ro?

    Nimble, quick and cost-efficient

    Antao: Clearly there is the agility aspect, of being able to seamlessly leverage these different clouds to allow IT organizations to be much more nimble in how they respond to the business.

    From a cost standpoint, there is a great example from a large-scale migration to the public cloud that we are currently doing. The IT organization found that it had consumed close to 70 percent of its migration budget for only 30 percent of the progress.

    A large part of that was because the minute you have workloads sitting in a public cloud -- even a development workload, or one you are still working through that technically isn’t yet providing value -- the clock is ticking. Allowing for a hybrid environment, where you do a lot of that development and get it almost production-ready, and then move to the public cloud when the time is right to drive value from that application -- those are huge cost savings right there.

    Clients that have managed to balance those two paradigms are the ones who are also seeing a lot of economic efficiencies.

    Won: The most important thing that people see value in is that agility. The ability to respond much faster to competitive actions or to new changes in the market, the ability to bring applications out faster, to be able to update applications in months -- or sometimes even weeks -- rather than the two years that it used to take.

    It's that agility to allow people to move faster and to shift their capabilities so much quicker than they have ever been able to do – that is the top reason why we're seeing people moving to this hybrid model. The cost factor is also really critical as they look at whether they are doing CAPEX or OPEX and private cloud or public cloud.

    One of the things we have been doing at HPE through our Flexible Capacity program is enabling customers who are getting hardware to run these private clouds to pay for it on a pay-as-you-go basis. This allows them to better align their costs to their usage -- taking that whole concept of pay-as-you-go that we see in the public cloud and bringing it into a private cloud environment.

    Antao: That’s a great point. From a cost standpoint, there is an efficiency discussion. But we are also seeing that today's world depends on edge computing a lot more. I was talking to the CIO of a large park the other day, and his comment to me was: yes, they would love to use the public cloud, but they cannot afford any kind of latency or disruption of services. He has thousands of visitors and guests in his park and, given the amount of dependency on technology, he cannot afford that kind of latency.

    And so part of it is also the revenue-impact discussion -- using the public cloud in a way that allows you to manage some of those risks, while keeping the analytical and computing power you need closer to the edge -- closer to your internal systems.

    Gardner: Microsoft Azure Stack is reinforcing the power and capability of hybrid cloud models, but Azure Stack is not going to be the same for each individual enterprise. How they differentiate, how they use and take advantage of a hybrid continuum will give them competitive advantages and give them a one-up in terms of skills.

    It seems to me that the continuum of Azure Stack, of a hybrid cloud, is super-important. But how your organization specifically takes advantage of that is going to be the key differentiator. And that's where an ecosystem solutions approach can be a huge benefit.

    Let's look at what comes next. What might we be talking about a year from now when we think about Microsoft Azure Stack in the market and the impact of hybrid cloud on businesses, Ken?

    Look at clouds from both sides now

    You will see that as a break in the boundary of private cloud versus public cloud, so think of it as a continuum. 
    Won: You will see organizations shifting from a world of using multiple clouds -- having different applications or services on different clouds -- to an environment where services are based on multiple clouds. With the new cloud-native applications, you'll be running different aspects of those services in different locations based on the requirements of that particular microservice.

    So a service may be partially running in Azure, part of it may be running in Azure Stack. You will certainly see that as a kind of break in the boundary of private cloud versus public cloud, and so think of it as a continuum, if you will, of different environments able to support whatever applications they need.

    Gardner: Ro, as people get more into the weeds with hybrid cloud, maybe using Azure Stack, how will the market adjust?

    Antao: I completely agree with Ken in terms of how organizations are going to evolve their architecture. At PwC we have this term called the Configurable Enterprise, which essentially focuses on how the IT organization consumes services from all of these different sources to be able to ultimately solve business problems.

    To that point, the market trend we see is in the hybrid IT space -- the adoption of that continuum. One of the big pressures IT organizations face is how they are going to evolve their operating model to be successful in this new world. CIOs, especially the forward-thinking ones, are starting to ask that question. We are going to see a lot more pressure in that space over the next 12 months.

    Gardner: These are, after all, still early days of hybrid cloud and hybrid IT. Before we sign off, how should organizations that might not yet be deep into this prepare themselves? Are there operations, culture, and skills considerations? How might you put yourself in a good position to take advantage of this when you do take the plunge?

    Plan to succeed with IT on board

    Won: One of the things we recommend is a workshop where we sit down with the customer and think through their company strategy. What is their IT strategy? How does that relate or map to the infrastructure that they need in order to be successful?

    This makes the connection between the value they want to offer as a company, as a business, to the infrastructure. It puts a plan in place so that they can see that direct linkage. That workshop is one of the things that we help a lot of customers with.

    We also have innovation centers that we've built with Microsoft where customers can come in and experience Azure Stack firsthand. They can see the latest versions of Azure Stack, they can see the hardware, and they can meet with experts. We bring in partners such as PwC to have a conversation in these innovation centers with experts.

    Gardner: Ro, how to get ready when you want to take the plunge and make the best and most of it?
    Antao: We are at a stage right now where these transformations can no longer be done to the IT organization; the IT organization has to come along on the journey. What we have seen work, especially in the early stages, is running pilot projects -- involving the developers, the infrastructure architects, and the operations folks in pilot workloads -- and learning how to manage them going forward in this new model.

    You want to drive that from a top-down perspective, tying it to where this adds the most value to the business. From a grassroots effort, you also need to create champions in the trenches who are going to manage this new environment. Combining those two efforts has been very successful for organizations as they embark on this journey.


              Logicalis chief technologist defines the new ideology of hybrid IT        
    The next BriefingsDirect thought leader interview explores how digital disruption demands that businesses develop a new ideology of hybrid IT.

    We'll hear how such trends as the Internet of Things (IoT), distributed IT, data sovereignty requirements, and pervasive security concerns are combining to challenge how IT operates. And we'll learn how IT organizations are shifting to become strategists and internal service providers, and how that supports adoption of hybrid IT. We will also delve into how converged and hyper-converged infrastructure (HCI) provides an on-ramp to hybrid cloud strategies and adoption.

    Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

    To help us define a new ideology for hybrid IT, we're joined by Neil Thurston, Chief Technologist for the Hybrid IT Practice at Logicalis Group in the UK. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

    Here are some excerpts:

    Gardner: Why don’t we start at this notion of a new ideology? What’s wrong with the old ideology of IT?

    Thurston: Good question. What we are facing now is a clash between what we've done for an awfully long time and what the emerging large hyper-scale cloud providers have been developing.

    The two clashing ideologies are these: Either we continue with the technologies that we've been developing (and the skills and processes that we've developed in-house) and push those out to the cloud, or we adopt the alternative ideology -- think of Microsoft Azure and the forthcoming Azure Stack -- in which those technologies are pulled from the cloud into our on-premises environments. The two opposing ideologies we have are: Do we push out, or do we pull in?

    The technologies allow us to operate in a true hybrid environment. By that we mean not having isolated islands of innovation anymore. It's not just standing things up in hybrid hyper-scale environments, or clouds, where you have specific skills, resources, teams and tools to manage those things. Moving forward, we want to have consistency in operations, security, and automation. We want to have a single toolset or control plane that we can put across all of our workloads and data, regardless of where they happen to reside.
    Gardner: One of the things I encounter, Neil, when I talk to chief information officers (CIOs), is their concern that as we move to a hybrid environment, they're going to be left with the responsibility -- but without the authority to control those different elements. Is there some truth to that?

    Thurston: I can certainly see where that viewpoint comes from. A lot of our own customers reflect that viewpoint. We're seeing a lot of organizations that have dabbled in and cherry-picked from service management practices such as ITIL. We're now seeing more pragmatic IT service management (ITSM) frameworks, such as IT4IT, coming to the fore. These are really about pushing that responsibility level up the stack.

    You're right in that people are becoming more of a supply-chain manager than the actual manager of the hardware, facilities, and everything else within IT. There definitely is a shift toward that, but there are also frameworks coming into play that allow you to deal with that as well. 

    Gardner: The notion of shadow IT becoming distributed IT was once a very dangerous and worrisome thing. Now, it has to be embraced and perhaps is positive. Why should we view it as positive?

    Out of the shadow

    Thurston: The term shadow IT is controversial. Within our organization, we prefer to say that the shadow IT users are the digital users of the business. You have traditional IT users, but you also have digital users. I don’t really think it’s a shadow IT thing; it's that they're a totally different use-case for service consumption. 

    But you're right. They definitely need to be serviced by the organizations. They deserve to have the same level of services applied, the same governance, security, and everything else applied to them. 

    Gardner: It seems that the new ideology of hybrid IT is about getting the right mix and keeping that mix of elements under some sort of control. Maybe it's simply on the basis of management, or an automation framework of some sort, but you allow that to evolve and see what happens. We don't know what this is going to be like in five years. 

    Thurston: There are two pieces of the puzzle. There's the workload, the actual applications and services, and then there's the data. There is more importance placed on the data. Data is the new commodity, the new cash, in our industry. Data is the thing you want to protect. 

    The actual workload and service consumption piece is the commodity piece that could be worked out. What you have to do moving forward is protect your data, but you can take more of a brokering approach to the actual workloads. If you can reach that abstraction, then you're fit-for-purpose and moving forward into the hybrid IT world.

    Gardner: It’s almost like we're controlling the meta-processes over that abstraction without necessarily having full control of what goes on at those lower abstractions, but that might not be a bad thing. 

    Thurston: I have a very quick use-case. A customer of ours for the last five years has been using Amazon Web Services (AWS), and they were getting the feeling they were getting tied into the platform. Their developers over the years had been using more and more of the platform services and they weren’t able to make all that code portable and take it elsewhere. 

    This year, they made the transformation and they've decided to develop against Cloud Foundry, an open Platform as a Service (PaaS). They have instances of Cloud Foundry across Pivotal on AWS, also across IBM Bluemix, and across other cloud providers. So, they're now coding once -- and deploying anywhere for the compute workload side. Then, they have a separate data fabric that regulates the data underneath. There are emerging new architectures that help you to deal with this.

    Gardner: It's interesting that you just described an ecosystem approach. You're no longer seeing as many organizations that are supplier “XYZ” shops, where 80 or 90 percent of everything would be one brand name. You just described a highly heterogeneous environment. 

    Thurston: People have used cloud services -- hyper-scale cloud services -- for specific use-cases, typically the more temporary types of workloads. Even companies born in the cloud, such as Uber and Netflix, reached inflection points where going on-premises was actually far cheaper and made compliance with regulations far easier. People are slowly realizing, through what others are doing -- and from their own good or bad experiences -- that hybrid IT really is the way forward.

    Gardner: And the good news is that if you do bring it back from the cloud or refactor what you're doing on-premises, there are some fantastic new infrastructure technologies. We're talking about converged infrastructure, hyper-converged infrastructure, and the software-defined data center (SDDC). At recent HPE Discover events, we've seen memory-driven computing, and we're seeing some interesting, powerful new speeds and feeds along those lines.

    So, on the economics and the price-performance equation, the public cloud is good for certain things, but there's some great attraction to some of these new technologies on-premises. Is that the mix that you are trying to help your clients factor?
    Thurston: Absolutely. We're pretty much in parallel with the way that HPE approaches things, with the right mix. We see that in certain industries there are always going to be things like regulated data. Regulated data is really hard to control in a public-cloud space, where you have no real idea where things are; you can't easily locate them physically.

    Being on-premises provides a far easier route to regulatory compliance, and today's technologies -- hyper-converged platforms, for example -- allow us to really condense the footprint. We don't need these massive data centers anymore.

    We're working with customers where we have taken 10 or 12 racks' worth of legacy equipment and, with a new hyper-converged platform, put in less than two racks' worth. The operational footprint and facilities cost are much lower. That makes a far more compelling argument for those types of use-cases than using the public cloud.

    Gardner: Then you can mirror that small footprint data center into a geography, if you need it for compliance requirements, or you could mirror it for reasons of business continuity and backup and recovery. So, there are lots of very interesting choices. 

    Neil, tell us a little bit about Logicalis. I want to make sure all of our listeners and readers understand who you are and how you fit into helping organizations make these very large strategic decisions.

    Cloud-first is not cloud-only 

    Thurston: Logicalis is essentially a digital business enabler. We take technologies across multiple areas and help our customers become digital-ready. We cover a whole breadth of technologies. 

    I look after the hybrid IT practice, but we also have the more digital-focused parts of our business, such as collaboration and analytics. The hybrid IT side is where we work with our customers through the pains they have and the decisions they have to make -- and very often board-level decisions have been made that mandate a "cloud-first" strategy.

    It's unfortunate when that gets interpreted as "cloud-only." There is a process to go through for cloud readiness, because some applications are not going to be fit for the cloud. Some cannot be virtualized -- most can -- but there are always regulations. Certainly, in Europe at present there is a lot of fear, uncertainty, and doubt (FUD) in the market, and a lot of uncertainty around the European Union General Data Protection Regulation (EU GDPR), for example, and data protection overall.

    There are a lot of reasons to take a more factored, measured approach to looking at where workloads and data are best placed moving forward, and at the models that you want to operate in.

    Gardner: I think HPE agrees with you. Their strategy is to put more emphasis on things like high-performance computing (HPC), with workloads that likely won't be virtualized and won't work well in a one-size-fits-all public cloud environment. It's also factoring in the importance of the edge -- even thinking about putting the equivalent of a data center at the edge to meet the demands around information for IoT, and the analytics, data, and compute requirements there.

    What's the relationship between HPE and Logicalis? How do you operate as an alliance or as a partnership?

    Thurston: We have a very strong partnership. We have a 15- or 16-year relationship with HPE in the UK. As everyone else did, we started out selling servers and storage, but we've taken the journey with HPE and with our customers. The great thing about HPE is that they've always managed to innovate and keep up with the curve, and that's enabled us to work with our customers and decide what the right technologies are. Today, this allows us to work out the right mix of on-premises and off-premises equipment for our customers.

    HPE is ahead of the curve in various technologies in our area, and one of those is HPE Synergy. We're now talking with a lot of our customers about the next curve that's coming with infrastructure-as-code, and about the possible benefits and outcomes of enabling that technology.

    The on-ramp to that is that we're using hyper-converged technologies to virtualize all the workloads and make them portable, so that we can then abstract them and place them either within platform services or within cloud platforms, as our security policies dictate.
    Gardner: Getting back to this ideology of hybrid IT, when you have disparate workloads and you're taking advantage of these benefits of platform choice, location, model and so forth, it seems that we're still confronted with that issue of having the responsibility without the authority. Is there an approach that HPE is taking with management, perhaps thinking about HPE OneView that is anticipating that need and maybe adding some value there?

    Thurston: With the HPE toolsets, we're able to set things such as policies. Today, we're really at Platform 2.5, and the inflection that takes us on to the third platform is policy automation. This is one thing that HPE OneView allows us to do across the board.

    It’s policies on our storage resources, policies on our compute resources, and policies on non-technology items -- quotas on public cloud, those types of things. It enables us to leverage the software-defined infrastructure underneath to set the policies that define the operational windows we want our infrastructure to work in and the decisions it's allowed to make itself within that -- and then we just let it go. We really want to take IT from "high touch" to "low touch," which we can do today with policy, and potentially, in the future with infrastructure-as-code, to "no touch."

    Gardner: As you say, we are at Platform 2.5, heading rapidly towards Platform 3. Do you have some examples you can point to, customers of yours and HPE’s, and describe how a hybrid IT environment translates into enablement and business benefits and perhaps even economic benefits? 

    Time is money

    Thurston: The University of Wolverhampton is one of our customers, where we've taken this journey with them with HPE, with hyper-converged platforms, and created a hybrid environment for them. 

    Today, the hybrid environment means that we're wholly virtualized on the HPE hyper-converged platform. We've rolled the solutions out across their campus. Where we normally would have had disparate clouds, we now have a single plane, controlled by OneView, that enables them to balance all the workloads across the whole campus and all of their departments. It brings them new capabilities, such as agility, so they can now react a lot more quickly.

    Before, a lot of the departments were coming to them with requirements, but those requirements were taking 12 to 16 weeks to actually fulfill. Now, we can do these things from the technology perspective within hours, and the whole process within days. We're talking a factor of 10 here in reduction of time to actually produce services. 

    As they say, success breeds success. Once someone sees what the other department is able to do, that generates more questions, more requests, and it becomes a self-fulfilling prophecy. 

    We're working with them to enable the next phase of this project. That is to leverage the hyper-scale of public clouds, but again, in a more controlled environment. Today, they're used to the platform; that's all embedded in. They are reaping the benefits mainly from an agility perspective. From an operational perspective, they are reaping the benefits of vastly reduced system administration and, more importantly, storage administration.

    Storage administrators have seen an 85 percent savings in the time required to administer storage by having it wholly virtualized, which is fantastic from their perspective. It means they can concentrate more on developing the next phase, which is taking this ideology out to the public cloud.

    Gardner: Let's look to the future before we wrap this up. What would you like to see, not necessarily from HPE, but what can the vendors, the suppliers, or the public-cloud providers do to help you make that hybrid IT equation work better? 

    Thurston: A lot of our mainstream customers always think that they're late into adoption, but typically, they're late into adoption because they're waiting to see what becomes either a de-facto standard that is winning in the market, or they're looking for bodies to create standards. Interoperability between platforms and standards is really the key to driving better adoption.

    Today with AWS, Azure, and the others, there's no real compatibility that we can draw on. We can only abstract things further up. This is why I think platform-as-a-service -- things like Cloud Foundry and other open platforms -- will become the future platforms of choice for the forward thinkers who want to adopt hybrid IT.

    Gardner: It sounds like what you are asking for is a multi-cloud set of options that actually works and is attainable. 

    Thurston: It’s like networking, with Ethernet. We have had a standard, everyone adheres to it, and it's a commodity. Everyone says public cloud is a commodity. It is, but unfortunately what we don't have is the interoperability that standards bring, such as we find in networking. That's what we need to drive better adoption moving forward.

    Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or Download a copy. Sponsor: Hewlett Packard Enterprise.



              Converged IoT systems: Bringing the data center to the edge of everything        
    The next BriefingsDirect thought leadership panel discussion explores the rapidly evolving architectural shift of moving advanced IT capabilities to the edge to support Internet of Things (IoT) requirements.

    The demands of data processing, real-time analytics, and platform efficiency at the intercept of IoT and business benefits have forced new technology approaches. We'll now learn how converged systems and high-performance data analysis platforms are bringing the data center to the operational technology (OT) edge.

    Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

    To hear more about the latest capabilities in gaining unprecedented measurements and operational insights where they’re needed most, please join me in welcoming Phil McRell, General Manager of the IoT Consortia at PTC; Gavin Hill, IoT Marketing Engineer for Northern Europe at National Instruments (NI) in London, and Olivier Frank, Senior Director of Worldwide Business Development and Sales for Edgeline IoT Systems at Hewlett Packard Enterprise (HPE). The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

    Here are some excerpts:

    Gardner: What's driving this need for a different approach to computing when we think about IoT and we think about the “edge” of organizations? Why is this becoming such a hot issue?

    McRell: There are several drivers, but the most interesting one is economics. In the past, the costs that would have been required to take an operational site -- a mine, a refinery, or a factory -- and do serious predictive analysis, meant you would have to spend more money than you would get back.

    For very high-value assets -- assets that are millions or tens of millions of dollars -- you probably do have some systems in place in these facilities. But once you get a little bit lower in the asset class, there really isn’t a return on investment (ROI) available. What we're seeing now is that's all changing based on the type of technology available.

    Gardner: So, in essence, we have this whole untapped tier of technologies that we haven't been able to get a machine-to-machine (M2M) benefit from for gathering information -- or the next stage, which is analyzing that information. How big an opportunity is this? Is this a step change, or is this a minor incremental change? Why is this economically a big deal, Olivier?

    Frank: We're talking about Industry 4.0, the fourth generation of change -- after steam, after the Internet, after the cloud, and now this application of IoT to the industrial world. It’s changing at multiple levels. It’s what's happening within the factories and within this ecosystem of suppliers to the manufacturers, and the interaction with consumers of those suppliers and customers. There's connectivity to those different parties that we can then put together.

    While our customers have been doing process automation for 40 years, what we're doing together is unleashing IT standardization -- taking technologies that were in the data centers and applying them to the world of process automation, opening it up.

    The analogy is what happened when mainframes were challenged by mini computers and then by PCs. It's now open architecture in a world that has been closed.

    Gardner: Phil mentioned ROI, Gavin. What is it about the technology price points and capabilities that have come down to the point where it makes sense now to go down to this lower tier of devices and start gathering information?


    Hill: There are two pieces to that. The first one is that we're seeing that understanding more about the IoT world is more valuable than we thought. McKinsey Global Institute did a study that said that by about 2025 we're going to be in a situation where IoT in the factory space is going to be worth somewhere between $1.2 trillion and $3.7 trillion. That says a lot.

    The second piece is that we're at a stage where we can make technology at a much lower price point. We can put that onto the assets that we have in these industrial environments quite cheaply.

    Then, you deal with the real big value, the data. All three of us are quite good at getting the value from our own respective areas of expertise.

    Look at someone that we've worked with, Jaguar Land Rover. In their production sites, in their power train facilities, they were at a stage where they created an awful lot of data but didn't do anything with it. About 90 percent of their data wasn't being used for anything. It doesn't matter how many sensors you put on something. If you can't do anything with the data, it's completely useless.

    They have been using techniques similar to what we've been doing in our collaborative efforts to gain insight from that data. Now, they're at a stage where probably 90 percent of their data is usable, and that's the big change.

    Collaboration is key

    Gardner: Let's learn more about your organizations and how you're working collaboratively, as you mentioned, before we get back into understanding how to go about architecting properly for IoT benefits. Phil, tell us about PTC. I understand you won an award in Barcelona recently.

    McRell: That was a collaboration that our three organizations did with a pump and valve manufacturer, Flowserve. As Gavin was explaining, there was a lot of learning that had to be done upfront about what kind of sensors you need and what kind of signals you need off those sensors to come up with accurate predictions.

    When we collaborate, we rely heavily on NI for their scientists and engineers to provide their expertise. We really need to consume digital data. We can't do anything with analog signals and we don't have the expertise to understand what kind of signals we need. When we obtain that, then with HPE, we can economically crunch that data, provide those predictions, and provide that optimization, because of HPE's hardware that now can live happily in those production environments.

    Gardner: Tell us about PTC specifically; what does your organization do?

    McRell: For IoT, we have a complete end-to-end platform that allows everything from the data acquisition gateway with NI all the way up to machine learning, augmented reality, dashboards, and mashups, any sort of interface that might be needed for people or other systems to interact.

    In an operational setting, there may be one, two, or dozens of different sources of information. You may have information coming from the programmable logic controllers (PLCs) in a factory and you may have things coming from a Manufacturing Execution System (MES) or an Enterprise Resource Planning (ERP) system. There are all kinds of possible sources. We take that, orchestrate the logic, and then we make that available for human decision-making or to feed into another system.

    Gardner: So the applications that PTC is developing are relying upon platforms and the extension of the data center down to the edge. Olivier, tell us about Edgeline and how that fits into this?
    Frank: We came up with this idea of leveraging the enterprise computing excellence that is our DNA within HPE. As our CEO said, we want to be the IT in the IoT.

    According to IDC, 40 percent of the IoT computing will happen at the edge. Just to clarify, it’s not an opposition between the edge and the hybrid IT that we have in HPE; it’s actually a continuum. You need to bring some of the workloads to the edge. It's this notion of time of insight and time of action. The closer you are to what you're measuring, the more real-time you are.

    We came up with this idea: What if we could bring the depth of computing we have in the data center into this sub-second environment, where I need to read the intelligent data created by my two partners here, but also act on it and do things with it?

    Take the example of an electrical short circuit that for some reason caught fire. You don’t want to send the data to the cloud; you want to take immediate action. This is the notion of real-time, immediate action.

    We take the deep compute. We integrate the connectivity with NI. We're the first platform that has integrated an industry standard called PXI, which allows NI to integrate the great portfolio of sensors and acquisition and analog-to-digital conversion technologies into our systems.

    Finally, we bring enterprise manageability. Since we have a proliferation of systems, system management at the edge becomes a problem. So we bring our award-winning Integrated Lights-Out (iLO) technology -- with millions of licenses sold across all our ProLiant servers -- to the edge as well.

    Gardner: We have the computing depth from HPE, we have insightful analytics and applications from PTC, what does NI bring to the table? Describe the company for us, Gavin?

    Working smarter

    Hill: NI is about a $1.2 billion company worldwide. We get involved in an awful lot of industries. But in the IoT space, where we see ourselves fitting within this collaboration with PTC and HPE is in our ability to make a lot of machines smarter.

    There are already some sensors on assets, machines, pumps, whatever they may be on the factory floor, but older -- or potentially even some newer -- devices don't natively have all the sensors you need to make really good decisions based on that data. To be able to feed into the PTC systems and the HPE systems, you need to have the right type of data to start off with.

    We have the data acquisition and control units that allow us to take that data in, but then do something smart with it. Using something like our CompactRIO System, or as you described, using the PXI platform with the Edgeline products, we can add a certain level of understanding and just a smart nature to these potentially dumb devices. It allows us not only to take in signals, but also potentially control the systems as well.

    We not only have some great information from PTC that lets us know when something is going to fail, but we could potentially use their data and their information to allow us to, let’s say, decide to run a pump at half load for a little bit longer. That means that we could get a maintenance engineer out to an oil rig in an appropriate time to fix it before it runs to failure. We have the ability to control as well as to read in.

    The other piece is that sensor data is great -- we like to be as open as possible in taking data from any sensor vendor or provider -- but you want to be able to find the needle in the haystack. We do feature extraction to make sure that we give the important pieces of digital data back to PTC, so that they can be processed by the HPE Edgeline system as well.
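    As an illustration of the kind of feature extraction Hill describes -- our sketch, not NI's actual implementation -- an edge node might reduce a raw vibration waveform to a few compact features before handing it upstream:

        import numpy as np

        def extract_features(samples: np.ndarray, sample_rate_hz: float) -> dict:
            """Condense one acquisition window into a few summary features."""
            rms = float(np.sqrt(np.mean(samples ** 2)))   # overall vibration energy
            peak = float(np.max(np.abs(samples)))         # worst-case excursion
            spectrum = np.abs(np.fft.rfft(samples))
            freqs = np.fft.rfftfreq(samples.size, d=1.0 / sample_rate_hz)
            dominant_hz = float(freqs[np.argmax(spectrum[1:]) + 1])  # skip the DC bin
            return {"rms": rms, "peak": peak, "dominant_hz": dominant_hz}

        # Hypothetical example: a 1 kHz vibration tone sampled at 25.6 kS/s.
        t = np.arange(0, 1.0, 1.0 / 25600.0)
        window = 0.5 * np.sin(2 * np.pi * 1000.0 * t)
        print(extract_features(window, 25600.0))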
    Explore
    HPE's Edgeline

    IoT Systems
    Frank: This is fundamental. Capturing the right data is an art and a science and that’s really what NI brings, because you don’t want to capture noise; it’s proliferation of data. That’s a unique expertise that we're very glad to integrate in the partnership.

    Gardner: We certainly understand the big benefit of IoT extending what people have done with operational efficiency over the years. We now know that we have the technical capabilities to do this at an acceptable price point. But what are the obstacles, what are the challenges that organizations still have in creating a true data-driven edge, an IoT rich environment, Phil?

    Economic expertise

    McRell: That’s why we're together in this consortium. The biggest obstacle is that because there are so many different requirements for different types of technology and expertise, people can become overwhelmed. They'll spend months or years trying to figure this out. We come to the table with end-to-end capability from sensors and strategy and everything in between, pre-integrated at an economical price point.

    Speed is important. Many of these organizations are seeing the future, where they have to be fast enough to change their business model. For instance, some OEM discrete manufacturers are going to have to move pretty quickly from just offering product to offering service. If somebody is charging $50 million for capital equipment, and their competitor is charging $10 million a year and the service level is actually better because they are much smarter about what those assets are doing, the $50 million guy is going to go out of business.

    We come to the table with the ability to quickly get that factory and those assets smart and connected, and to make sure the right people, parts, and processes are brought to bear at exactly the right time. That drives all the things people are looking for -- the uptime, the safety, the yield, and the performance of that facility. It comes down to this: if you don't have all the right parties together with that technology and expertise, you can very easily get stuck on something that takes a very long time to unravel.

    Gardner: That’s very interesting when you move from a Capital Expenditure (CAPEX) to an Operational Expenditure (OPEX) mentality. Every little bit of that margin goes to your bottom line and therefore you're highly incentivized to look for whole new categories of ways to improve process efficiency.

    Any other hurdles, Olivier, that you're trying to combat effectively with the consortium?

    Frank: The biggest hurdle is the level of complexity, and our customers don't know where to start. So, the promise of us working together is really to show the value of this kind of open architecture injected into a 40-year-old process-automation infrastructure, and to demonstrate -- as we did yesterday with our robot powered by HPE Edgeline -- that I can show immediate value to the plant manager, to the quality manager, to the operations manager, using the data that already resides in that factory, 70 percent or more of which is unused. That's the value.

    So how do you get that quickly and simply? That’s what we're working to solve so that our customers can enjoy the benefit of the technology faster and faster.

    Bridge between OT and IT

    Gardner: Now, this is a technology implementation, but it’s done in a category of the organization that might not think of IT in the same way as the business side -- back office applications and data processing. Is the challenge for many organizations a cultural one, where the IT organization doesn't necessarily know and understand this operational efficiency equation and vice versa, and how are we bridging that?

    Hill: I'm probably going to give you the high-level view from the operational technology (OT) side. These guys will definitely have more input from their own domains of expertise. But the fact that each of us knows one part of the system really well is exactly why this collaboration works.

    You have situations with the idea of the IoT where a lot of people stood up and said, "Yeah, I can provide a solution. I have the answer," but without having a plan -- never mind a solution. We've done a really good job of understanding that we can do one part of this solution really well, and if we partner with the people who are really good in the other aspects, we provide real solutions to customers. I don't think anyone can compete with us at this stage, and that is exactly why we're in this situation.

    Frank: Actually, the biggest hurdle is more on the OT side than with the company's IT. For many of our customers, the factory is a silo. At HPE, we haven't been selling much into that environment. That's also why, when working as a consortium, it's important to get to the right audience, which is in the factory. We also bring our IT expertise, especially in the area of security, because the moment you put an IT device in an OT environment, you potentially have problems that you didn't have before.

    We're living in a closed world, and now the value is to open up. Bringing our security expertise, our managed service, our services competencies to that problem is very important.

    Speed and safety out in the open

    Hill: There was a really interesting piece in the HPE Discover keynote in December, when HPE Aruba started to talk about how they had an issue when they started bringing conferencing and technology out, and then suddenly everything wanted to be wireless. They said, "Oh, there's a bit of a security issue here now, isn’t there? Everything is out there."

    We can see what HPE has contributed to helping them from that side. What we're talking about here on the OT side is a similar state from the security aspect, just a little bit further along in the timeline, and we are trying to work on that as well. Again, we have HPE here and they have a lot of experience in similar transformations.

    Frank: At HPE, as you know, we have our Data Center and Hybrid Cloud Group and then we have our Aruba Group. When we do OT or our Industrial IoT, we bring the combination of those skills.

    For example, in security, we have HPE Aruba ClearPass technology that’s going to secure the industrial equipment back to the network and then bring in wireless, which will enable the augmented-reality use cases that we showed onstage yesterday. It’s a phased approach, but you see the power of bringing ubiquitous connectivity into the factory, which is a challenge in itself, and then securely connecting the IT systems to this OT equipment, and you understand better the kind of the phases and the challenges of bringing the technology to life for our customers.

    McRell: It’s important to think about some of these operational environments. Imagine a refinery the size of a small city and having to make sure that you have the right kind of wireless signal that’s going to make it through all that piping and all those fluids, and everything is going to work properly. There's a lot of expertise, a lot of technology, that we rely on from HPE to make that possible. That’s just one slice of that stack where you can really get gummed up if you don’t have all the right capabilities at the table right from the beginning. 

    Gardner: We've also put this in the context of IoT not at the edge isolated, but in the context of hybrid computing and taking advantage of what the cloud can offer. It seems to me that there's also a new role here for a constituency to be brought to the table, and that’s the data scientists in the organization, a new trove of data, elevated abstraction of analytics. How is that progressing? Are we seeing the beginnings of taking IoT data and integrating that, joining that, analyzing that, in the context of data from other aspects of the company or even external datasets?

    McRell: There are a couple of levels. It’s important to understand that when we talk about the economics, one of the things that has changed quite a bit is that you can actually go in, get assets connected, and do what we call anomaly detection, pretty simplistic machine learning, but nonetheless, it’s a machine-learning capability.

    In some cases, we can get that going in hours. That's a ground-zero type capability. Over time, you learn about a line with multiple assets and how they all function together; you learn how the entire facility functions; and then you compare that across multiple facilities. At some point, you're not going to be at the edge anymore. You're going to be doing systems-type analytics, and that's a different, combined problem.

    At that point, you're talking about looking across weeks, months, years. You're going to go into a lot of your back-end and maybe some of your IT systems to do some of that analysis. There's a spectrum that goes back down to the original idea of simply looking for something to go wrong on a particular asset.

    The distinction I'm making here is that, in the past, you would have to get a team of data scientists to figure out almost asset by asset how to create the models and iterate on that. That's a lengthy process in and of itself. Today, at that ground-zero level, that’s essentially automated. You don't need a data scientist to get that set up. At some point, as you go across many different systems and long spaces of time, you're going to pull in additional sources and you will get data scientists involved to do some pretty in-depth stuff, but you actually can get started fairly quickly without that work.
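
    As a rough illustration of that "ground zero" capability -- not the consortium's actual implementation -- anomaly detection of this kind amounts to learning an asset's nominal operating band from a stretch of healthy data and flagging departures from it. A minimal sketch in Python, where the sensor readings and the three-sigma threshold are assumptions:

        import statistics

        def learn_baseline(readings):
            # Learn nominal parameters from "peacetime" data, e.g., a few
            # hours of vibration samples from one asset.
            return statistics.fmean(readings), statistics.pstdev(readings)

        def is_anomalous(value, mean, stdev, k=3.0):
            # Flag any reading more than k standard deviations from nominal.
            return abs(value - mean) > k * stdev

        # Hypothetical pump-vibration samples (arbitrary units).
        mean, stdev = learn_baseline([0.52, 0.49, 0.51, 0.50, 0.53, 0.48])
        print(is_anomalous(0.97, mean, stdev))  # True -- act before failure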

    The power of partnership

    Frank: To echo what Phil just said, at HPE we talk about the tri-hybrid architecture -- the edge, close to the things; the data center; and the cloud, which is essentially a data center whose location you don't know. It's these three dimensions.

    The great thing partnering with PTC is that the ThingWorx platform, the same platform, can run in any of those three locations. That’s the beauty of our HPE Edgeline architecture. You don't need to modify anything. The same thing works, whether we're in the cloud, in the data center, or on the Edgeline.

    To your point about the data scientists, it's time-to-insight. There are things you want to do immediately, and as Phil pointed out, the notion of anomaly detection that we're demonstrating on the show floor is understanding the nominal parameters after a few hours of running your thing, and simply detecting something going off normal. That doesn't require data scientists. That takes us into the ThingWorx platform.

    But then, for the industrial processes, we involve systems-integration partners and bring our own knowledge to the mix, along with our customers, because they own the intelligence of their data. That's where it creates a very powerful solution.

    Gardner: I suppose another benefit that the IT organization can bring to this is process automation and extension. If you're able to understand what's going on in the device, not only would you need to think about how to fix that device at the right time -- not too soon, not too late -- but you might want to look into the inventory of the part, or you might want to extend it to the supply chain if that inventory is missing, or you might want to analyze the correct way to get that part at the lowest price or under the RFP process. Are we starting to also see IT as a systems integrator or in a process integrator role so that the efficiency can extend deeply into the entire business process?

    McRell: It's interesting to see how this stuff plays out. Once you start to understand in your facility -- or maybe it’s not your facility, maybe you are servicing someone's facility -- what kind of inventory should you have on hand, what should you have globally in a multi-tier, multi-echelon system, it opens up a lot of possibilities.

    Today, PTC provides a lot of network visibility and spare-parts inventory management systems, but there's a limit to what those algorithms can do. They're the best that's possible at this point -- until you have everything connected. That feedback loop allows you to modify all your expectations in real time and get things on the move proactively, so the right person, parts, process, and kit all show up at the right time.

    Then, you have augmented reality and other tools, so that maybe somebody hasn't done this service procedure before, maybe they've never seen these parts before, but they have a guided walk-through and have everything showing up all nice and neat the day of, without anybody having to actually figure that out. That's a big set of improvements that can really change the economics of how these facilities run.

    Connecting the data

    Gardner: Any other thoughts on process integration?

    Frank: Again, the premise behind industrial IoT is indeed, as you're pointing out, connecting the consumer, the supplier, and the manufacturer. That's also why you see the emergence of low-power communication layers, like LoRa or Sigfox, that can bring these millions of connected devices together and inject them into the systems we're creating.

    Hill: Just from the conversation, I know that we’re all really passionate about this. IoT and the industrial IoT is really just a great topic for us. It's so much bigger than what we're talking about. You've talked a little bit about security, you have asked us about the cloud, you have asked us about the integration of the inventory and to the production side, and it is so much bigger than what we are talking about now.

    We could probably have a conversation twice this long on any one of these topics and still never get halfway to the end of it. It's a really exciting place to be right now. And the really interesting thing that all of us are now realizing -- and the way we've made advances as a partnership -- is that you don't know what you don't know. A lot of companies are waking up to that as well, and we're using our collaborations to allow us to know what we don't know.

    Frank: Which is why speed is so important. We can theorize and spend a lot of time in R&D, but the reality is, bring those systems to our customers, and we learn new use cases and new ways to make the technology advance.

    Hill: The way that technology has gone, no one releases a product anymore that's the finished piece and will stay there for 20 or 30 years. That's not what happens. Products and services are provided and constantly updated. How many times a week does your phone update with new firmware, or an app update? You have to be able to change and take the data that you get to adjust everything that's going on. Otherwise you will not stay ahead of the market.

    And that’s exactly what Phil described earlier when he was talking about whether you sell a product or a service that goes alongside a set of products. For me, one of the biggest things is that constant innovation -- where we are going. And we've changed. We were in kind of a linear motion of progression. In the last little while, we've seen a huge amount of exponential growth in these areas.

    We had a video at the end of the London HPE Discover keynote; it was one of HPE's visions of what the future could be. We looked at it and thought it was quite funny. There was an automated suitcase that would follow you after you left the airport. I started to laugh at that, but then I took a second and realized that maybe it's not as ridiculous as it sounds, because we as humans think linearly. That's inherent in us. But if the technology is changing in an exponential way, we simply cannot ignore some of the most ridiculous ideas out there, because that's what's going to change the industry.

    And even by having that video there and by seeing what PTC is doing with the development that they have and what we ourselves are doing in trying out different industries and different applications, we see three companies that are constantly looking through what might happen next and are ready to pounce on that to take advantage of it, each with their own expertise.

    Gardner: We're just about out of time, but I'd like to hear a couple of ridiculous examples -- pushing the envelope of what we can do with these sorts of technologies now. We don’t have much time, so less than a minute each, if you can each come up perhaps with one example, named or unnamed, that might have seemed ridiculous at the time, but in hindsight has proven to be quite beneficial and been productive. Phil?

    McRell: You can do this in engineering with us, you can do this in service, but we've been talking a lot about manufacturing. In a manufacturing journey, the opportunity, as Gavin and Olivier are describing here, is on the level of what happened between pre- and post-electricity: how fast things will run, the quality at which they will produce products, and therefore the business model that you can now have because of that capability. These are profound changes. You will see uptimes in some of the largest factories in the world go up by double digits. You will see lines run multiple times faster over time.

    If you walked into some of the hardest-running facilities today and then walked in again in a couple of years, it would be really hard to believe what your eyes were seeing -- just as somebody who was around before factories had electricity would be astounded by what they see today.

    Back to the Future

    Gardner: One of the biggest issues at the most macro level in economics is the fact that productivity has plateaued for the past 10 or 15 years. People want to get back to what productivity was -- 3 or 4 percent a year. This sounds like it might be a big part of getting there. Olivier, an example?

    Frank: Well, an example would be more about the impact on mankind and wealth for humanity. Think about it: with these technologies combined with 3D printing, you can have a new class of manufacturers anywhere in the world -- in Africa, for example -- doing real-time engineering with some of the concepts we're demonstrating today.

    Another part of PTC is Computer-Aided Design (CAD) systems and Product Lifecycle Management (PLM), and we're showing real-time engineering on the show floor. You design those products and you do quick prototyping with your 3D printing. That could be anywhere in the world. And you have your users testing the real thing, understanding whether your engineering choices were relevant and whether there are differences between the digital model and the physical model -- this digital-twin idea.

    Then, you're back to the drawing board. So, you get a new class of manufacturers that we don't even know yet, serving customers across the world and creating wealth in areas that are not yet industrialized.

    Gardner: It's interesting that if you have a 3D printer you might not need to worry about inventory or supply chain.

    Hill: Just to add one point: the bit that really excites me about where we are with technology as a whole, not just within this collaboration, is that you have 3D printing and you have the availability of open software. We all provide very software-centric products, stuff that you can adjust yourself, and that is the way of the future.

    That means that, among the changes we see in the manufacturing industry, the next great idea could come from someone who has been in the production plant for 20 years, or it could come from Phil, who works in the bank down the road, because at a really good price point he has access to that technology. That is one of the coolest things I can think about right now.

    Where we've seen these sorts of technologies and implementations make a massive difference, look at someone like Duke Energy in the US. We worked with them before we realized where our capabilities were, never mind how we could implement a great solution with PTC and with HPE. Even there, based on our own technology, the people on the power-production side of things decided to try this sort of application with some legacy equipment -- predictive maintenance, to be able to see what's going on in their assets, which are spread across the continent.

    They began this at the start of 2013, and they have seen savings estimated at $50 million up to this point. That's a number.

    Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


    IDOL-powered appliance delivers better decisions via comprehensive business information searches
    The next BriefingsDirect digital transformation case study highlights how a Swiss engineering firm created an appliance that quickly deploys to index and deliver comprehensive business information.

    By scouring thousands of formats and hundreds of languages, the approach then provides via a simple search interface unprecedented access to trends, leads, and the makings of highly informed business decisions.

    We will now explore how SEC 1.01 AG delivers a truly intelligent services solution -- one that returns new information to ongoing queries and combines internal and external information on all sorts of resources to produce a 360-degree view of end users’ areas of intense interest.

    Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

    To learn how to access the best available information in about half the usual time, we're joined by David Meyer, Chief Technology Officer at SEC 1.01 AG in Switzerland. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

    Here are some excerpts:

    Gardner: What are some of the trends driving the need for what you've developed -- the i5 appliance?

    Meyer: The most important thing is that we can provide instant access to company-relevant information. This is one of today’s biggest challenges that we address with our i5 appliance.

    Decisions are only as good as the information bases they are made on. The i5 provides the ability to access more complete information bases to make substantiated decisions. Also, you don’t want to search all the time; you want to be proactively informed. We do that with our agents and our automated programs that are searching for new information that you're interested in.

    Gardner: As an organization, you've been around for quite a while and involved with large packaged applications -- SAP R/3, for example -- but over time, more data sources and the ability to gather information came on board, and you saw a need in the market for this appliance. Tell us a little bit about what led you to create it.

    Accelerating the journey

    Meyer: We started to dive into big data about the time that HPE acquired Autonomy, December 2011, and we saw that it’s very hard for companies to start to become a data-driven organization. With the i5 appliance, we would like to help companies accelerate their journey to become such a company.

    Gardner: Tell us what you mean by a 360-degree view? What does that really mean in terms of getting the right information to the right people at the right time?

    Meyer: In a company's information scope, you don't just talk about internal information; you also have external information like news feeds, social media feeds, or even governmental or legal information that you need and don't have time to search for every day.

    So, you need to have a search appliance that can proactively inform you about things that happen outside. For example, if there's a legal issue with your customer, or if you're in a contract discussion and your partner loses his signature authority to sign that contract, how would you get this information if you don't have support from your search engine?

    Gardner: And search has become such a popular paradigm for acquiring information, asking a question, and getting great results. Those results are only as good as the data and content they can access. Tell us a little bit about your company, SEC 1.01 AG -- your size, your scope, your market. Give us a little bit of background.

    Meyer: We've been an HPE partner for 26 years, and we build business-critical platforms based on HPE hardware and also the HPE operating system, HP-UX. Since the merger of Autonomy and HPE in 2011, we started to build solutions based on HPE's big-data software, particularly IDOL and Vertica.

    Gardner: What was it about the environment that prevented people from doing this on their own? Why wouldn't you go and just do this yourself in your own IT shop?

    Meyer: The HPE IDOL software ecosystem is really an ecosystem of different software components, and these parts need to be packaged together into something that can be installed very quickly and provide quick results. That's what we did with the i5 appliance.

    We put all this good software from HPE IDOL together into one simple appliance, which is simple to install. We want to shorten the time needed to get started with big data, to get results from it, and to begin the analytical part of using your data and gaining value from it.

    Multiple formats

    Gardner: As we mentioned earlier, getting the best access to the best data is essential. There are a lot of APIs and a lot of tools that come with the IDOL ecosystem, as you described it, but you're able to dive into a thousand or more file formats, support 150 languages, and connect to 400 data sources. That's very impressive. Tell us how that came about.

    Meyer: When you start to work with unstructured data, you need some important functionality. For example, you need support for a lot of languages. Imagine all these social media feeds in different languages. How do you track that if you don't support sentiment analysis on those messages?

    On the other hand, you also need to understand any unstructured format. For example, if you have video broadcasts or radio broadcasts and you want to search for the content inside these broadcasts, you need to have a tool to translate the speech to text. HPE IDOL brings all the functionality that is needed to work with unstructured data, and we packed that together in our i5 appliance.

    Gardner: That includes digging into PDFs and using OCR. It's quite impressive how deep and comprehensive you can be in terms of all the types of content within your organization.

    How do you physically do this? If it's an appliance, you're installing it on-premises, you're able to access data sources from outside your organization, if you choose to do that, but how do you actually implement this and then get at those data sources internally? How would an IT person think about deploying this?

    Meyer: We've prepared installable packages. Mainly, you need connectors to connect to your repositories and data sources. For example, if you have a Microsoft Exchange server, there's a connector that understands very well how to communicate with the Exchange server. So, you have the ability to connect to that data source and get any content, including the metadata.

    Take the metadata of an e-mail, for example: the "From," the "To," the "Subject," and so on. You have the ability to put all that content and metadata into a centralized index, and then you're able to search and refine that information. Then, you have a reference to your original document.

    When you want to enrich the information you have in your company with external information, we developed the so-called SECWebConnector, which can capture any information from the Internet. For example, you just enter an RSS feed or a webpage, and then you can capture the content and the metadata that you want to search for or that is important for your company.
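
    As a rough sketch of what a web connector like that does -- fetch a feed, keep the content plus the metadata worth indexing -- here is a minimal example using only the Python standard library. The feed URL and the document structure are illustrative assumptions, not SEC 1.01's actual interfaces:

        import urllib.request
        import xml.etree.ElementTree as ET

        def capture_rss(feed_url):
            # Fetch the feed and extract content plus searchable metadata.
            with urllib.request.urlopen(feed_url) as response:
                root = ET.fromstring(response.read())
            docs = []
            for item in root.iter("item"):
                docs.append({
                    "title": item.findtext("title", default=""),
                    "link": item.findtext("link", default=""),
                    "published": item.findtext("pubDate", default=""),
                    "content": item.findtext("description", default=""),
                })
            return docs  # these would be handed to the central index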

    Gardner: So, it’s actually quite easy to tailor this specifically to an industry focus, if you wish, to a geographic focus. It’s quite easy to develop an index that’s specific to your organization, your needs, and your people.

    Informational scope

    Meyer: Exactly. In the crowded information landscape that we have with the Internet and everything, it's important that companies can choose which information matters to them. Do I need legal information, news information, social media information, broadcast information? It's very important to build your own informational scope -- the things you want to be informed about and able to search for.

    Gardner: And because of the way you've structured and engineered this appliance, you're not only able to proactively go out and request things, but you can get a programmatic benefit, where you can tell it to deliver results to you when they arise or are discovered. Tell us a little bit about how that works.

    Meyer: We call them agents. You can define which topics you're interested in, and when new documents are found for that search or topic, you get informed with an e-mail or a push notification in the mobile app.
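
    Conceptually, such an agent is a standing query plus a diff against what has already been reported. A minimal sketch, where the search and notification functions are assumed to be supplied by the platform:

        import time

        def run_agent(topic, search, notify, interval_seconds=3600):
            # Re-run the topic query periodically and alert only on
            # documents that have not been seen before.
            seen = set()
            while True:
                for doc_id, title in search(topic):
                    if doc_id not in seen:
                        seen.add(doc_id)
                        notify(f"New result for '{topic}': {title}")
                time.sleep(interval_seconds)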

    Gardner: Let’s dig into a little bit of this concept of an appliance. You're using IDOL and you're using Vertica, the column-based or high-performance analytics engine, also part of HPE, but soon to be part of Micro Focus. You're also using 3PAR StoreServ and ProLiant DL380 servers. Tell us how that integration happened and why you actually call this an appliance, rather than some other name?

    Meyer: Appliance means that all the software is packaged together. Every component can talk to the others, speaks the same language, and is configured the same way. We preconfigure a lot, we standardize a lot -- that's the appliance part.

    And it's not bound to particular hardware, so it doesn't need to be this DL380 or whatever; it also depends on how big your environment will be. It can just as well be a c7000 blade chassis.

    When we install an appliance, it takes one or two days until it's installed, and then the initial indexing run starts; it takes a while until all the data is in the index. So, the initial load is big, but after two or three days, you're able to search for information.

    You mentioned the HPE Vertica part. We use Vertica to log every action that happens on the appliance. On one hand, this is a security feature. You need to be able to prove that nobody has found the salary list, for example, and so you need to log it.

    On the other hand, you can analyze what users are doing. For example, if people keep searching for the same thing in the company and can't find it, perhaps there's some information you need to bring into the appliance.
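
    That second use -- spotting what people keep searching for and not finding -- reduces to a simple aggregation over the audit records. A sketch over an assumed in-memory log (in the appliance this would be a query against the Vertica tables):

        from collections import Counter

        def failed_search_report(audit_log, top_n=10):
            # audit_log: iterable of dicts such as
            # {"user": "jdoe", "query": "travel policy", "result_count": 0}
            misses = Counter(e["query"] for e in audit_log
                             if e["result_count"] == 0)
            # Frequent zero-hit queries point at sources worth adding.
            return misses.most_common(top_n)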

    Gardner: You mentioned security and privileges. How does the IT organization allow the right people to access the right information? Are you going to use some other policy engine? How does that work?

    Mapped security

    Meyer: It's included; it's called mapped security. The connector takes the security information along with the document and indexes that security information within the index. So, you will never be able to find a document that you don't have access to in your environment. It's important that this security is there by default.
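
    In other words, each document's access-control information travels into the index with the document, and every query result is filtered against it. A simplified sketch of the idea -- real connectors map the source system's ACLs, which is considerably more involved:

        def search(index, query, user, user_groups):
            # A document is only returned if the querying user could
            # also open it in the source system.
            for doc in index:
                if query.lower() not in doc["content"].lower():
                    continue
                acl = doc["acl"]
                if user in acl["users"] or user_groups & acl["groups"]:
                    yield doc

        # Hypothetical check: the salary list stays invisible.
        index = [{"content": "Salary list 2017",
                  "acl": {"users": set(), "groups": {"hr"}}}]
        print(list(search(index, "salary", "jdoe", {"engineering"})))  # []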

    Gardner: It sounds to me, David, like were, in a sense, democratizing big data. By gathering and indexing all the unstructured data that you can possibly want to, point at it, and connect to, you're allowing anybody in a company to get access to queries without having to go through a data scientist or a SQL query author. It seems to me that you're really opening up the power of data analysis to many more people on their terms, which are basic search queries. What does that get an organization? Do you have any examples of the ways that people are benefiting by this democratization, this larger pool of people able to use these very powerful tools?

    Meyer: Everything is more data-driven. The i5 appliance can give you access to all of that information. The appliance is here to simplify the beginning of becoming a data-driven organization and to find out what power is in the organization's data.

    For example, we enabled a Swiss company called Smartinfo to become a proactive news provider. That means they put lots of public information, newspapers, online newspapers, TV broadcasts, radio broadcasts into that index. The customers can then define the topics they're interested in and they're proactively informed about new articles about their interests.

    Gardner: In what other ways do you think this will become popular? I'm guessing that a marketing organization would really benefit from finding relationships within their internal organization, between product and service, go-to market, and research and development. The parts of a large distributed organization don't always know what the other part is doing, the unknown unknowns, if you will. Any other examples of how this is a business benefit?

    Meyer: You mentioned the marketing organization. How does a marketing organization listen to what customers are saying? They're communicating on social media, for example, and when you have an engine like the i5, you can capture those social media feeds, do sentiment analysis on them, and see an analyzed view of what's being said about your products, your company, or your competitors.

    You can detect, for example, a shitstorm about your company, a shitstorm about your competitor, or whatever. You need to have an analytic platform to see that, to visualize that, and this is a big benefit.

    On the other hand, it's also this proactive information you get from it, where you can see that your competitor has a new campaign and you get that information right now because you have an agent with the customer's name. You can see that there is something happening and you can act on that information.

    Gardner: When you think about future capabilities, are there other aspects that you can add on? It seems extensible to me. What would we be talking about a year from now, for example?

    Very extensible

    Meyer: It's very extensible. Think about all these different verticals. You can expand it for the health sector, for the transportation sector, whatever. It doesn't really matter.

    We do network analysis. That means when you prepare to visit a company, you can get a network picture: what relationships this company has, which employees work there, who the shareholders of that company are, and which contracts it has with other companies.

    This is a new way to get a holistic image of a company, a person, or of something that you want to know. It's thinking how to visualize things, how to visualize information, and that's the main part we are focusing on. How can we visualize or bring new visualizations to the customer?

    Gardner: In the marketplace, because it's an ecosystem, we're seeing new APIs coming online all the time. Many of them are very low cost and, in many cases, open source or free. We're also seeing the ability to connect more adequately to LinkedIn and Salesforce, if you have your license for that of course. So, this really seems to me a focal point, a single pane of glass to get a single view of a customer, a market, or a competitor, and at the same time, at an affordable price.

    Let's focus on that for a moment. When you have an appliance approach, what we're talking about used to be only possible at very high cost, and many people would need to be involved -- labor, resources, customization. Now, we've eliminated a lot of the labor, a lot of the customization, and the component costs have come down.

    We've talked about all the great qualitative benefits, but can we talk about the cost differential between what used to be possible five years ago with data analysis, unstructured data gathering, and indexing, and what you can do now with the i5?

    Meyer: You mentioned the price. We have an OEM contract, and that's something that makes us competitive in the market. Companies can build their own intelligence service. It's affordable even for small and medium businesses; you don't need to be a huge company with your own engineering and IT staff. It's affordable, it's automated, it's packaged together, and it's simple to install.

    Companies can increase workplace performance and shorten processes. Everybody has access to all the information they need in their daily work, and they can focus more on their core business. They don't lose time searching for information and not finding it.

    Gardner: For those folks who have been listening or reading, are intrigued by this, and want to learn more, where would you point them? How can they get more information on the i5 appliance and some of the concepts we have been discussing?

    Meyer: That's our company website, sec101.ch. There you can find any information you would like to have. And this is available now.

    Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


    Sumo Logic CEO on how modern apps benefit from 'continuous intelligence' and DevOps insights
    The next BriefingsDirect applications health monitoring interview explores how a new breed of continuous intelligence emerges by gaining data from systems infrastructure logs -- either on-premises or in the cloud -- and then cross-referencing that with intrinsic business metrics information.

    We’ll now explore how these new levels of insight and intelligence into what really goes on underneath the covers of modern applications help ensure that apps are built, deployed, and operated properly.

    Today, more than ever, how a company's applications perform equates with how the company itself performs and is perceived. From airlines to retail, from finding cabs to gaming, how the applications work deeply impacts how the business processes and business outcomes work, too.

    Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

    We’re joined by an executive from Sumo Logic to learn why modern applications are different, what's needed to make them robust and agile, and how the right mix of data, metrics and machine learning provides the means to make and keep apps operating better than ever.

    To describe how to build and maintain the best applications, welcome Ramin Sayar, President and CEO of Sumo Logic. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

    Here are some excerpts:

    Gardner: There’s no doubt that the apps make the company, but what is it about modern applications that makes them so difficult to really know? How is that different from the applications we were using 10 years ago?

    Sayar: You hit it on the head a little bit earlier. This notion of always-on, always-available, always-accessible applications -- delivered either through rich web and mobile interfaces or through traditional mechanisms served up via laptops, other access points, and point-of-sale systems -- is driving the next wave of technology architecture to support these apps.

    These modern apps are built around a modern stack, so they're using new platform services created by public-cloud providers and new development processes such as agile and continuous delivery, and they're expected to constantly be learning and iterating so they can improve not only the user experience, but also the business outcomes.

    Gardner: Of course, developers and business leaders are under pressure, more than ever before, to put new apps out more quickly, and to then update and refine them on a continuous basis. So this is a never-ending process.

    User experience

    Sayar: You're spot on. The obvious benefit of always-on is centered on rich user interaction and user experience. So, while a lot of the conversation around modern apps tends to focus on the technology and the components, there are actually fundamental challenges in the process of how these new apps are built and managed on an ongoing basis, and in what implications that has for security. A lot of times, those two aspects are left out when people are discussing modern apps.

    Gardner: That's right. We’re now talking so much about DevOps these days, but in the same breath, we’re taking about SecOps -- security and operations. They’re really joined at the hip.

    Sayar: Yes, they're starting to blend. You're seeing the technology decisions around public cloud, Docker and containers, and microservices and APIs being led not only by developers or DevOps teams; they're heavily influenced by, and made in partnership with, the SecOps and security teams and CISOs, because the data is distributed. Now there needs to be better visibility and instrumentation, not just for the access logs, but for the business process and a holistic view of the service and service-level agreements (SLAs).

    Gardner: What’s different from say 10 years ago? Distributed used to mean that I had, under my own data-center roof, an application that would be drawing from a database, using an application server, perhaps a couple of services, but mostly all under my control. Now, it’s much more complex, with many more moving parts.

    Sayar: We like to look at the evolution of these modern apps. For example, a lot of our customers have traditional monolithic apps that follow the more traditional waterfall approach for iterating and release. Often, those are run on bare-metal physical servers, or possibly virtual machines (VMs). They are simple, three-tier web apps.

    We see one of two things happening. The first is a need to replace the front end of those apps -- we refer to those as brownfield. They start to change from waterfall to agile, and they start to have more of an N-tier feel. It's really more around the front end; your web properties are a good example of that. And they start to componentize pieces of their apps, either on VMs or in private clouds, and that's often good for existing types of workloads.

    The other big trend is this new way of building apps, what we call greenfield workloads, versus the brownfield workloads, and those take a fundamentally different approach.

    Often it's centered on a new technology stack built entirely on microservices and an API-first development methodology, using modern containers like Docker, Mesosphere, and CoreOS, and using public-cloud infrastructure and services from Amazon Web Services (AWS) or Microsoft Azure. As a result, the technology decisions made there require different skill sets, and teams have to come together to be able to deliver on the DevOps and SecOps processes that we just mentioned.

    Gardner: Ramin, it’s important to point out that we’re not just talking about public-facing business-to-consumer (B2C) apps, not that those aren't important, but we’re also talking about all those very important business-to-business (B2B) and business-to-employee (B2E) apps. I can't tell you how frustrating it is when you get on the phone with somebody and they say, “Well, I’ll help you, but my app is down,” or the data isn’t available. So this is not just for the public facing apps, it's all apps, right?

    It's a data problem

    Sayar: Absolutely. Regardless of whether it's enterprise or consumer, if it's mid-market small and medium business (SMB) or enterprise that you are building these apps for, what we see from our customers is that they all have a similar challenge, and they’re really trying to deal with the volume, the velocity, and the variety of the data around these new architectures and how they grapple and get their hands around it. At the end of day, it becomes a data problem, not just a process or technology problem.

    Gardner: Let's talk about the challenges then. If we have many moving parts, if we need to do things faster, if we need to consider the development lifecycle and processes as well as ongoing security, if we’re dealing with outside third-party cloud providers, where do we go to find the common thread of insight, even though we have more complexity across more organizational boundaries?

    Sayar: From a Sumo Logic perspective, we’re trying to provide full-stack visibility, not only from code and your repositories like GitHub or Jenkins, but all the way through the components of your code, to API calls, to what your deployment tools are used for in terms of provisioning and performance.

    We spend a lot of effort to integrate to the various DevOps tool chain vendors, as well as provide the holistic view of what users are doing in terms of access to those applications and services. We know who has checked in which code or which branch and which build created potential issues for the performance, latency, or outage. So we give you that 360-view by providing that full stack set of capabilities.

    Gardner: So, the more information the better, no matter where in the process, no matter where in the lifecycle. But then, that adds its own level of complexity. I wonder is this a fire-hose approach or boiling-the-ocean approach? How do you make that manageable and then actionable?

    Sayar: We've invested quite a bit of our intellectual property (IP) not only in providing integration with these various sources of data, but also in the machine learning and algorithms, so that we can take advantage of the architecture of being a true cloud-native, multitenant, fast, and simple solution.

    So, unlike others that are out there and available for you, Sumo Logic's architecture is truly cloud native and multitenant, but it's centered on the principle of near real-time data streaming.

    As the data is coming in, our data-streaming engine allows developers, IT ops administrators, sysadmins, and security professionals each to have their own view, coarse-grained or fine-grained, through the access controls we have in the system, and to leverage the same data for different purposes -- versus having to wait for someone to create a dashboard or a view, or to get access to a system when something breaks.

    Gardner: That’s interesting. Having been in the industry long enough, I remember when logs basically meant batch. You'd get a log dump, and then you would do something with it. That would generate a report, many times with manual steps involved. So what's the big step to going to streaming? Why is that an essential part of making this so actionable?

    Sayar: It’s driven based on the architectures and the applications. No longer is it acceptable to look at samples of data that span 5 or 15 minutes. You need the real-time data, sub-second, millisecond latency to be able to understand causality, and be able to understand when you’re having a potential threat, risk, or security concern, versus code-quality issues that are causing potential performance outages and therefore business impact.

    The old way -- hoping and praying, when I deployed code, that I would find problems only when a user complained -- is no longer acceptable. You lose business and credibility, and at the end of the day, there's no real way to hold developers, operations folks, or security folks accountable with the legacy tools and process approach.

    Center of the business

    Those expectations have changed, because of the consumerization of IT and the fact that apps are the center of the business, as we’ve talked about. What we really do is provide a simple way for us to analyze the metadata coming in and provide very simple access through APIs or through our user interfaces based on your role to be able to address issues proactively.

    Conceptually, there’s this notion of wartime and peacetime as we’re building and delivering our service. We look at the problems that users -- customers of Sumo Logic and internally here at Sumo Logic -- are used to and then we break that down into this lifecycle -- centered on this concept of peacetime and wartime.

    Peacetime is when nothing is wrong, but you want to stay ahead of issues and you want to be able to proactively assess the health of your service, your application, your operational level agreements, your SLAs, and be notified when something is trending the wrong way.

    Then, there's this notion of wartime, and wartime is all hands on deck. Instead of being alerted 15 minutes or an hour after an outage has happened or security risk and threat implication has been discovered, the real-time data-streaming engine is notifying people instantly, and you're getting PagerDuty alerts, you're getting Slack notifications. It's no longer the traditional helpdesk notification process when people are getting on bridge lines.

    Because the teams are often distributed and it’s shared responsibility and ownership for identifying an issue in wartime, we're enabling collaboration and new ways of collaboration by leveraging the integrations to things like Slack, PagerDuty notification systems through the real-time platform we've built.

    So, the always-on application expectations that customers and consumers have, have now been transformed to always-on available development and security resources to be able to address problems proactively.

    Gardner: It sounds like we're able to not only take the data and information in real time from the applications to understand what’s going on with the applications, but we can take that same information and start applying it to other business metrics, other business environmental impacts that then give us an even greater insight into how to manage the business and the processes. Am I overstating that or is that where we are heading here?

    Sayar: That’s exactly right. The essence of what we provide in terms of the service is a platform that leverages the machine logs and time-series data from a single platform or service that eliminates a lot of the complexity that exists in traditional processes and tools. No longer do you need to do “swivel-chair” correlation, because we're looking at multiple UIs and tools and products. No longer do you have to wait for the helpdesk person to notify you. We're trying to provide that instant knowledge and collaboration through the real-time data-streaming platform we've built to bring teams together versus divided.

    Gardner: That sounds terrific if I'm the IT guy or gal, but why should this be of interest to somebody higher up in the organization, at a business process, even at a C-table level? What is it about continuous intelligence that cannot only help apps run on time and well, but help my business run on time and well?

    Need for agility

    Sayar: We talked a little bit about the whole need for agility. From a business point of view, the line-of-business folks who are associated with any of these greenfield projects or apps want to be able to increase the cycle times of the application delivery. They want to have measurable results in terms of application changes or web changes, so that their web properties have either increased or potentially decreased in terms of user satisfaction or, at the end of the day, business revenue.

    So, we're able to help the developers, the DevOps teams, and ultimately, line of business deliver on the speed and agility needs for these new modes. We do that through a single comprehensive platform, as I mentioned.

    At the same time, what’s interesting here is that no longer is security an afterthought. No longer is security in the back room trying to figure out when a threat or an attack has happened. Security has a seat at the table in a lot of boardrooms, and more importantly, in a lot of strategic initiatives for enterprise companies today.

    At the same time we're helping with agility, we're also helping with prevention. And so a lot of our customers often start with the security teams that are looking for a new way to be able to inspect this volume of data that's coming in -- not at the infrastructure level or only the end-user level -- but at the application and code level. What we're really able to do, as I mentioned earlier, is provide a unifying approach to bring these disparate teams together.

    Gardner: And yet individuals can extract the intelligence view that best suits what their needs are in that moment.

    Sayar: Yes. And ultimately what we're able to do is improve customer experience, increase revenue-generating services, increase efficiencies and agility of actually delivering code that’s quality and therefore the applications, and lastly, improve collaboration and communication.

    Gardner: I’d really like to hear some real world examples of how this works, but before we go there, I’m still interested in the how. As to this idea of machine learning, we're hearing an awful lot today about bots, artificial intelligence (AI), and machine learning. Parse this out a bit for me. What is it that you're using machine learning  for when it comes to this volume and variety in understanding apps and making that useable in the context of a business metric of some kind?

    Sayar: This is an interesting topic, because of a lot of noise in the market around big data, machine learning, and advanced analytics. Since Sumo Logic was started six years ago, we've built this platform to ensure not only that we have best-in-class security and encryption capabilities, but that it is centered on the fundamental purpose of democratizing analytics -- making it simpler to allow more than just a subset of folks to get access to information for their roles and responsibilities, whether they're on security, ops, or development teams.

    To answer your question a little more succinctly, our platform is predicated on multiple levels of machine-learning and analytics capabilities. Starting at the lowest level, something we refer to as LogReduce is meant to separate the signal from the noise. Ultimately, it helps a lot of our users and customers reduce mean time to identification by upwards of 90 percent, because they're no longer searching the irrelevant data; they're searching the relevant, infrequently occurring data, versus what's constantly occurring in their environment.

    In doing so, it’s not just about mean time to identification, but it’s also how quickly we're able to respond and repair. We've seen customers using LogReduce reduce the mean time to resolution by upwards of 50 percent.
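
    Sumo Logic has not published LogReduce's internals here, but the effect Sayar describes -- collapsing a flood of log lines into a handful of recurring patterns so the rare ones stand out -- can be approximated by masking the variable parts of each line and counting the resulting signatures. A toy sketch:

        import re
        from collections import Counter

        def signature(line):
            # Mask the parts that vary (IPs, hex IDs, numbers) so lines
            # from the same code path collapse into one pattern.
            line = re.sub(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", "<IP>", line)
            line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
            return re.sub(r"\d+", "<NUM>", line)

        def reduce_logs(lines):
            # Rare signatures -- the interesting ones -- end up last.
            return Counter(signature(l) for l in lines).most_common()

        logs = ["GET /orders/8841 200 12ms",
                "GET /orders/9107 200 9ms",
                "OutOfMemoryError in worker 0x7f3a at 10.0.0.12"]
        for pattern, count in reduce_logs(logs):
            print(count, pattern)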

    Predictive capabilities

    Our core analytics, at the lowest level, is helping solve operational metrics and value. Then, we start to become less reactive. When you've had an outage or a security threat, you start to leverage some of our other predictive capabilities in our stack.

    For example, I mentioned this concept of peacetime and wartime. In the notion of peacetime, you're looking at changes over time when you've deployed code and/or applications to various geographies and locations. A lot of times, developers and ops folks that use Sumo want to use the log-compare or outlier-prediction operators in our machine-learning capabilities to compare branches of code, and the quality of that code, against the performance and availability of the service and app.

    We allow them, with a click of a button, to compare this window of events and metrics for the last hour, day, week, or month against other time slices of data, and show how much better or worse it is. This is before deploying to production. When they look at production, we're able to let them use predictive analytics to look at anomalies and abnormal behavior and get more proactive.
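    As a rough illustration of that kind of window-over-window comparison and anomaly flagging -- using hypothetical latency numbers and a simple k-sigma rule, not Sumo Logic's actual operators:

```python
from statistics import mean, stdev

# Illustrative sketch only: compare a metric across two time windows
# ("last hour" vs. "last week") and flag points outside a k-sigma band.

def percent_change(current, baseline):
    """Percent change of the current window's mean vs. the baseline's."""
    return 100.0 * (mean(current) - mean(baseline)) / mean(baseline)

def outliers(series, k=1.5):
    """Points more than k standard deviations from the series mean."""
    mu, sigma = mean(series), stdev(series)
    return [x for x in series if abs(x - mu) > k * sigma]

latency_last_hour = [110, 240, 118, 122, 900]   # ms, hypothetical
latency_last_week = [100, 105, 98, 101, 99]     # ms, hypothetical

print(f"{percent_change(latency_last_hour, latency_last_week):+.1f}% vs. baseline")
print("anomalous points:", outliers(latency_last_hour))   # flags the 900ms spike
```

    A real platform would do this continuously over streaming data rather than small in-memory lists, but the comparison logic is the same in spirit.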

    So, reactive, to proactive, all the way to predictive is the philosophy that we've been trying to build in terms of our analytics stack and capabilities.

    Gardner: How are some actual customers using this and what are they getting back for their investment?

    Sayar: We have customers that span retail and e-commerce, high-tech, media, entertainment, travel, and insurance. We're well north of 1,200 unique paying customers, and they include Airbnb, Anheuser-Busch, Adobe, Metadata, Marriott, Twitter, Telstra, and Xora -- modern companies as well as traditional companies.

    What do they all have in common? Often, what we see is a digital transformation project or initiative. They either have to build greenfield or brownfield apps and they need a new approach and a new service, and that's where they start leveraging Sumo Logic.

    Second, what we see is that it’s not always a digital transformation; it's often a cost-reduction and/or consolidation project. Consolidation could be tools or infrastructure and data center, or it could be migration to co-los or public-cloud infrastructures.

    The nice thing about Sumo Logic is that we can connect anything from your top-of-rack switch, to your discrete storage arrays, to network devices, operating systems, and middleware, through to your content-delivery network (CDN) providers and your public-cloud infrastructures.

    Whether it’s a migration or a consolidation project, we’re able to help them compare performance, availability, and the SLAs they have associated with those, as well as differences in the delivery of infrastructure services to developers or users.

    So whether it's agility-driven or cost-driven, Sumo Logic is very relevant for all these customers that are spanning the data-center infrastructure consolidation to new workload projects that they may be building in private-cloud or public-cloud endpoints.

    Gardner: Ramin, how about a couple of concrete examples of what you were just referring to.

    Cloud migration

    Sayar: One good example is in the media space or media and entertainment space, for example, Hearst Media. They, like a lot of our other customers, were undergoing a digital-transformation project and a cloud-migration project. They were moving about 36 apps to AWS and they needed a single platform that provided machine-learning analytics to be able to recognize and quickly identify performance issues prior to making the migration and updates to any of the apps rolling over to AWS. They were able to really improve cycle times, as well as efficiency, with respect to identifying and resolving issues fast.

    Another example would be JetBlue. We do a lot in the travel space. JetBlue is also another AWS and cloud customer. They provide a lot of in-flight entertainment to their customers. They wanted to be able to look at the service quality of the revenue model for the in-flight entertainment system and be able to ascertain what movies are being watched, what the quality of service is, whether it's being degraded, and whether customers are being charged more than once for any type of service outage. That’s how they're using Sumo Logic to better assess and manage customer experience. It's not too dissimilar from Alaska Airlines or others that are also providing in-flight notification and wireless types of services.

    The last one is someone that we're all pretty familiar with and that’s Airbnb. We're seeing a fundamental disruption in the travel space and how we reserve hotels or apartments or homes, and Airbnb has led the charge, like Uber in the transportation space. In their case, they're taking a lot of credit-card and payment-processing information. They're using Sumo Logic for payment-card industry (PCI) audit and security, as well as operational visibility in terms of their websites and presence.

    Gardner: It’s interesting. Not only are you giving them benefits along the lines of insight, but it sounds to me like you're giving them a green light to go ahead and experiment and then learn very quickly whether that experiment worked or not, so that they can refine. That’s so important in our digital business and agility drive these days.

    Sayar: Absolutely. And if I were to think of another interesting example, Anheuser-Busch is another one of our customers. In this case, the CISO wanted to have a new approach to security and not one that was centered on guarding the data and access to the data, but providing a single platform for all constituents within Anheuser-Busch, whether security teams, operations teams, developers, or support teams.

    We did a pilot for them, and as they're modernizing a lot of their apps, as they start to look at the next generation of security analytics, the adoption of Sumo started to become instant inside AB InBev. Now, they're looking at not just their existing real estate of infrastructure and apps for all these teams, but they're going to connect it to future projects such as the Connected Path, so they can understand what the yield is from each pour in a particular keg in a location and figure out whether that’s optimized or when they can replace the keg.

    So, you're going from a reactive approach for security and processes around deployment and operations to next-gen connected Internet of Things (IoT) and devices to understand business performance and yield. That's a great example of an innovative company doing something unique and different with Sumo Logic.

    Gardner: So, what happens as these companies modernize and start to avail themselves of more public-cloud infrastructure services, and ultimately more and more of their apps are going to be of, by, and for somebody else’s public cloud? Where do you fit in that scenario?

    Data source and location

    Sayar: Whether you’re running on-premises, in co-los, through CDN providers like Akamai, on AWS, Azure, or Heroku, or on SaaS platforms, we provide a single platform that can manage and ingest all that data for you. Interestingly enough, about half our customers’ workloads run on-premises and half of them run in the cloud.

    We’re agnostic to where the data is or where their applications or workloads reside. The benefit we provide is the single ubiquitous platform for managing the data streams that are coming in from devices, from applications, from infrastructure, from mobile to you, in a simple, real-time way through a multitenant cloud service.

    Gardner: This reminds me of what I heard 10 or 15 years ago about business intelligence (BI) -- drawing on data, analyzing it, getting close to being proactive in its ability to help the organization. How is continuous intelligence different, or even better, and something that would replace what we refer to as BI?

    Sayar: The issue that we faced with the first generation of BI was that it was very rear-view-mirror centric, meaning that it was looking at data and things in the past. Where we're at today, with this need for speed and the necessity to be always on and always available, the expectation is sub-millisecond latency to understand what's going on, from a security, operational, or user-experience point of view.

    I'd say that we're on V2, or the next generation, of what was traditionally called BI, and we refer to that as continuous intelligence, because you're continuously adapting and learning. It's not based only on what humans know -- the rules and correlations they presuppose, and the alarms and filters they create around those. It’s what machines and machine intelligence need to supplement that with to provide the best-in-class type of capability, which is what we refer to as continuous intelligence.

    Gardner: We’re almost out of time, but I wanted to look to the future a little bit. Obviously, there's a lot of investing going on now around big data and analytics as it pertains to many different elements of many different businesses, depending on their verticals. Then, we're talking about some of the logic benefit and continuous intelligence as it applies to applications and their lifecycle.

    Where do we start to see crossover between those? How do I leverage what I’m doing in big data generally in my organization and more specifically, what I can do with continuous intelligence from my systems, from my applications?

    Business Insights

    Sayar: We touched a little bit on that in terms of the types of data that we integrate and ingest. At the end of the day, when we talk about full-stack visibility, it's from everything with respect to providing business insights to operational insights, to security insights.

    We have some customers that are in credit-card payment processing, and they actually use us to understand activations for credit cards, so they're extracting value from the data coming into Sumo Logic to understand and predict business impact and relevant revenue associated with these services that they're managing; in this case, a set of apps that run on a CDN.
    At the same time, the fraud and risk team are using us for threat detection and prevention. The operations team is using us to identify issues proactively and address any application or infrastructure problems, and that’s what we refer to as full stack.

    Full stack isn’t just the technology; it's providing business visibility and insights to line-of-business users who are looking at metrics around user experience and service quality, through operational-level impacts that help you become more proactive -- or, in some cases, reactive to wartime issues, as we've talked about. And lastly, it helps the security team take a different security posture around threat detection and risk, both reactive and proactive.

    In a nutshell, where we see these things starting to converge is what we refer to as full stack visibility around our strategy for continuous intelligence, and that is technology to business to users.

    Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Sumo Logic.



              OCSL sets its sights on the Nirvana of hybrid IT—attaining the right mix of hybrid cloud for its clients        
    The next BriefingsDirect digital transformation case study explores how UK IT consultancy OCSL has set its sights on the holy grail of hybrid IT -- helping its clients to find and attain the right mix of hybrid cloud.

    We'll now explore how each enterprise -- and perhaps even units within each enterprise -- determines the path to a proper mix of public and private cloud. Closer to home, they're looking at the proper fit of converged infrastructure, hyper-converged infrastructure (HCI), and software-defined data center (SDDC) platforms.

    Implementing such a services-attuned architecture may be the most viable means to dynamically apportion applications and data support among and between cloud and on-premises deployments.

    Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

    To describe how to rationalize the right mix of hybrid cloud and hybrid IT services along with infrastructure choices on-premises, we are joined by Mark Skelton, Head of Consultancy at OCSL in London. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

    Here are some excerpts:

    Gardner: People increasingly want to have some IT on premises, and they want public cloud -- with some available continuum between them. But deciding the right mix is difficult and probably something that’s going to change over time. What drivers are you seeing now as organizations make this determination?
    Skelton: It’s a blend of a lot of things. We've been working with enterprises for a long time on their hybrid and cloud messaging. Our clients have been struggling just to understand what hybrid really means, but also how to make hybrid a reality and how to get started, because it really is a minefield. You look at what Microsoft is doing, what AWS is doing, and what HPE is doing with their technologies. There's so much out there. How do they get started?

    We've been struggling in the last 18 months to get customers on that journey and get started. But now, because technology is advancing, we're seeing customers starting to embrace it and starting to evolve and transform into those things. And, we've matured our models and frameworks as well to help customer adoption.

    Gardner: Do you see the rationale for hybrid IT shaking down to an economic equation? Is it to try to take advantage of technologies that are available? Is it about compliance and security? You're probably tempted to say all of the above, but I'm looking for what's driving the top-of-mind decision-making now.

    Start with the economics

    Skelton: The initial decision-making process begins with the economics. I think everyone has bought into the marketing messages from the public cloud providers saying, "We can reduce your costs, we can reduce your overhead -- and not just from a culture perspective, but from management, from personal perspective, and from a technology solutions perspective."


    CIOs, and even financial officers, are seeing economics as the tipping point that pushes them into hybrid cloud, or even all-in on public cloud. But it’s not always cheap to put everything into a public cloud. When we look at business cases with clients, it’s the long-term investment we look at, and over time public cloud isn't always the cheaper option. That’s where hybrid started to come back to the front of people’s minds.

    We can use public cloud for the right workloads, where they want to be flexible, burst, and be a bit more agile, or even give global reach to large global businesses, but then keep the crown jewels back inside secured data centers, where they're known and trusted and closer to some of the key, critical systems.

    So, it starts with the finance side of the things, but quickly evolves beyond that, and financial decisions aren't the only reasons why people are going to public or hybrid cloud.

    Gardner: In a more perfect world, we'd be able to move things back and forth with ease and simplicity, where we could take the A/B testing-type of approach to a public and private cloud decision. We're not quite there yet, but do you see a day where that choice about public and private will be dynamic -- and perhaps among multiple clouds or multi-cloud hybrid environment?

    Skelton: Absolutely. I think multi-cloud is the Nirvana for every organization, just because there isn't a one-size-fits-all for every type of workload. We've been talking about it for quite a long time. The technology hasn't really been there to underpin multi-cloud and truly make it easy to move from on-premises to public or vice versa. But I think now we're getting there with the technology.

    Are we there yet? No, there are still a few big releases coming, things that we're waiting to be released to market, which will help simplify that multi-cloud and the ability to migrate up and back, but we're just not there yet, in my opinion.

    Gardner: We might be tempted to break this out between applications and data. Application workloads might be a bit more flexible across a continuum of hybrid cloud, but other considerations are brought to the data. That can be security, regulation, control, compliance, data sovereignty, GDPR, and so forth. Are you seeing your customers looking at this divide between applications and data, and how they are able to rationalize one versus the other?

    Skelton: Applications, as you have just mentioned, are the simpler things to move into a cloud model, but the data is really the crown jewels of the business, and people are nervous about putting it into public cloud. So what we're seeing a lot of is putting applications into the public cloud for the agility, elasticity, and global reach, and trying to keep data on-premises, because they're nervous about breaches in the service providers’ data centers.

    That's what we're seeing, but we're also seeing an uprising of things like object storage. We're working with Scality, for example, and they have a unique solution for blending public and on-premises storage, so we can pin things to certain platforms in a secure data center and then, where the data is less critical, move it into a public-cloud environment.
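    The policy behind that kind of blended placement can be pictured as a simple tiering rule. This sketch is hypothetical -- it is not Scality's API -- and the metadata fields and thresholds are invented for illustration:

```python
# Hypothetical data-placement policy: pin sensitive data to a secure
# on-premises tier and tier less-critical objects to a public cloud.

ON_PREM = "onprem-secure-tier"
PUBLIC_CLOUD = "public-cloud-tier"

def place(metadata: dict) -> str:
    """Pin regulated or sensitive objects on-premises; tier out the rest."""
    if metadata.get("classification") in {"pii", "regulated"}:
        return ON_PREM
    if metadata.get("reads_per_day", 0) > 100:   # keep hot data close
        return ON_PREM
    return PUBLIC_CLOUD

print(place({"classification": "pii"}))   # -> onprem-secure-tier
print(place({"reads_per_day": 3}))        # -> public-cloud-tier
```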

    Gardner: It sounds like you've been quite busy. Please tell us about OCSL, an overview of your company and where you're focusing most of your efforts in terms of hybrid computing.

    Rebrand and refresh

    Skelton: OCSL has been around for 26 years as a business. Recently, we've been through a rebrand and a refresh of what we're focusing on, and we're moving more toward being a services organization, leading with our people and our consultants.

    We're focusing on transforming customers and clients into the cloud environment, whether that's applications or, if it's data center, cloud, or hybrid cloud. We're trying to get customers on that journey of transformation and engaging with business-level people and business requirements and working out how we make cloud a reality, rather than just saying there's a product and you go and do whatever you want with it. We're finding out what those businesses want, what are the key requirements, and then finding the right cloud models that to fit that.

    Gardner: So many organizations are facing not just a retrofit or a rethinking around IT, but truly a digital transformation for the entire organization. There are many cases of sloughing off business lines, and other cases of acquiring. It's an interesting time in terms of a mass reconfiguration of businesses and how they identify themselves.

    Skelton: What's changed for me is that when I go and speak to a customer, I'm no longer just speaking to the IT guys; I'm actually engaging with the finance officers, the marketing officers, the digital officers -- that's the common one that is creeping up now. And it's a very different conversation.
    We're looking at business outcomes now, rather than focusing on, "I need this disk, this product." It's more: "I need to deliver this service back to the business." That's how we're changing as a business. It's doing that business consultancy, engaging with that, and then finding the right solutions to fit requirements and truly transform the business.

    Gardner: Of course, HPE has been going through transformations itself for the past several years, and that doesn't seem to be slowing up much. Tell us about the alliance between OCSL and HPE. How do you come together as a whole greater than the sum of the parts?

    Skelton: HPE is transforming and becoming a more agile organization, with some of the spinoffs that we've had recently aiding that agility. OCSL has worked in partnership with HPE for many years, and it's all about going to market together and working together to engage with the customers at right level and find the right solutions. We've had great success with that over many years.

    Gardner: Now, let’s go to the "show rather than tell" part of our discussion. Are there some examples that you can look to, clients that you work with, that have progressed through a transition to hybrid computing, hybrid cloud, and enjoyed certain benefits or found unintended consequences that we can learn from?

    Skelton: We've had a lot of successes in the last 12 months in taking clients on the journey to hybrid cloud. One of the key ones that resonates with me is a legal firm that we've been working with. They were in a bit of a state. They had an infrastructure that was aging, was unstable, and wasn't delivering quality service back to the lawyers who were trying to embrace technology -- mobile devices, dictation software, those kinds of things.

    We came in with a proposal on how we would actually address some of those problems. We challenged them, and said that we needed to go through a stabilization phase. Public cloud was not going to be the immediate answer. They were being courted by the big vendors, as everyone is, about public cloud, and they were saying it was the Nirvana for them.

    We challenged that, and we got them to a stable platform first, built on HPE hardware. We got instant stability for them, so the business saw immediate returns in delivery of service. It’s all about getting that impact back to the business, first and foremost.

    Building cloud model

    Now, we're working through each of their service lines, looking at how we can break them up and transform them into a cloud model. That involves deconstructing the apps and thinking about how we can use pockets of public cloud alongside the hybrid on-premises infrastructure in our data center.

    They've now started to see real innovative solutions taking that business forward, but they got instant stability.

    Gardner: Were there any situations where organizations were very high-minded and fanciful about what they were going to get from cloud, and that may have led to some disappointment -- unintended consequences? Maybe others might benefit from hindsight. What do you look out for, now that you have been doing this for a while in terms of hybrid cloud adoption?

    Skelton: One of the things I've seen a lot of with cloud is that people have bought into the messaging from the big public cloud vendors about how they can just turn on services and keep consuming, consuming, consuming. A lot of people have gotten themselves into a state where bills have been rising and rising, and the economics are looking ridiculous. The finance officers are now coming back and saying they need to rein that back in. How do they put some control around that?

    That’s where hybrid is helping, because you can start to bring some of those workloads back into a dedicated data center. But the key for me is that it comes down to putting some thought into what you're putting into the cloud. Think through how you can transform and use the services properly. Don't just turn everything on because it’s there and a click of a button away; actually put some design and planning into adopting cloud.

    Gardner: It also sounds like the IT people might need to go out and have a pint with the procurement people and learn a few basics about good contract writing -- terms and conditions, and putting in clauses that allow you to back out if needed. Is that something we should be mindful of -- IT being in procurement mode as well as technology-specification mode?

    Skelton: Procurement definitely needs to be involved in the initial set-up with the cloud provider, whenever they're committing to a consumption number, but once that’s done, it’s IT’s responsibility in terms of how they're consuming it. Procurement needs to be involved all the way through, keeping constant track of what’s going on, and that’s not happening.

    The IT guys don’t really care about the cost; they care about the widgets, turning things on, and playing around with them. I don’t think they really realize how much this is going to cost. So yeah, there is a bit of a disconnect in lots of organizations: procurement is in the upfront piece, then it goes away, and then IT comes in and spends all of the money.

    Gardner: In the complex service delivery environment, that procurement function probably should be constant and vigilant.

    Big change in procurement

    Skelton: Procurement departments are going to change. We're starting to see that in some of the bigger organizations. They're closer to the IT departments. They need to understand that technology and what’s being used, but that’s quite rare at the moment. I think that probably over the next 12 months, that’s going to be a big change in the larger organizations.

    Gardner: Before we close, let's take a look to the future. A year or two from now, if we sit down again, I imagine that more micro services will be involved and containerization will have an effect, where the complexity of services and what we even think of as an application could be quite different, more of an API-driven environment perhaps.

    So the complexity of managing your cloud and hybrid cloud -- finding the right mix, pricing it, and being vigilant about whether you're getting your money’s worth or not -- seems to be something where we should start thinking about applying artificial intelligence (AI) and machine learning, what I like to call BotOps: something that is going to be there for you automatically, without human intervention.

    Does that sound on track to you, and do you think that we need to start looking to advanced automation and even AI-driven automation to manage this complex divide between organizations and cloud providers?

    Skelton: You hit a lot of key points there in terms of where the future is going. I think we're still in the phase of trying to build the right platforms to be ready for the future. We see the recent releases of HPE Synergy, for example, being able to support these modern platforms, and that’s really allowing us to embrace things like microservices. Docker and Mesosphere are two types of platforms that will disrupt organizations and the way we do things, but you need to find the right platform first.

    Hopefully, in 12 months, we can have those platforms and we can then start to embrace some of this great new technology and really rethink our applications. And it’s a challenge to the ISVs. They've got to work out how they can take advantage of some of these technologies.
    We're seeing a lot of talk about serverless computing. It's where there is no standing infrastructure and you spin up resources as and when you need them. The classic use case for that is Uber; they have built a whole business on that serverless type of model. I think that in 12 months' time, we're going to see a lot more of that in more of the enterprise-type organizations.
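    For readers unfamiliar with the model, a serverless function is just a handler that the platform invokes on demand and scales to zero in between. A minimal sketch using AWS Lambda's Python handler signature, with a hypothetical ride-request event:

```python
import json

# Minimal serverless-style handler sketch. The (event, context) signature
# matches AWS Lambda's Python convention; the "rider_id" event field is
# hypothetical. The platform provisions compute only when an event arrives,
# so there is no always-on server to manage.

def handler(event, context):
    rider = event.get("rider_id", "unknown")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"ride request received for {rider}"}),
    }
```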

    I don’t think we have it quite clear in our minds how we're going to embrace that, but it's the ISV community that really needs to start driving it. Beyond that, it's absolutely AI and bots. We're all going to be talking to computers, and they're going to be responding with very human sorts of reactions. That's the next wave.

    I'm bringing that into enterprise organizations to see how we can solve some business challenges. Service-desk management is one of the use cases where we're seeing, in some of our clients, whether they can get immediate responses from bots to common queries, so they don’t need as many support staff. It’s already starting to happen.

    Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.



          DevOps, for a successful digital transformation        

    The post DevOps, for a successful digital transformation appeared first on Bull.


              With DevOps security must work differently        
    Because “software is eating the world,” as Mark Andreessen famously noted, appli…
              DevOps in the enterprise with new Shippable Server        
    Shippable announces the general availability of Shippable Server, the enterprise version of its po…
              Adopting DevOps should be a top priority for you right now        
    As DevOps becomes more mainstream, there has been a rush for companies to implement it and agile w…
              Many fintech DevOps are not enforcing security        
    Venafi has announced the results of a study on the cryptographic security practices of DevOps teams…
              The power of community in DevOps        
    We interviewed Jason Hand, a DevOps evangelist for VictorOps, to learn about how much the co…
              A DevOps framework for federal customers        
    Last Thursday, President Trump signed an executive order designed to strengthen the cybersecurity…
              Welcome Drew Lafferty and Lizelotte Green!        

    Hello, friends of Balsamiq! Our not-so-little-anymore team keeps on growing! Today I would like to introduce to you our two new team members: Drew Lafferty and Liz Green! Drew Lafferty Drew is a jack-of-all-trades Developer / DevOps, based in Chicago, Illinois. He loves working full stack and diving into both front-end and back-end code, as […]

    The post Welcome Drew Lafferty and Lizelotte Green! appeared first on Life @ Balsamiq.


              292 RR Bootcamps        

    1:25 - Is a bootcamp the best way for someone to enter programming?

    7:25 - Learning social skills for working with development teams

    9:25 - Getting a well-rounded education

    20:00 - Learning how to find a job and have a career

    27:20 - The responsibility of code schools on helping you find a job

    30:55 - Job searches for the programmer

    32:30 - Picking the right bootcamp

    35:50 - Placement as a junior dev

    45:30 - Finding the time

    52:40 - Deciding if bootcamp is right for you

    Picks:

    The 7 Habits of Highly Effective People (Jason)

    How To Win Friends and Influence People (Jason)

    Corporate Confidential (Brian)

    Land the Tech Job You Love by Andy Lester (Brian)

    The Way of the Fight by George St. Pierre (Jerome)

    ID.me (Jerome)

    10 Ways to Get Noticed by Potential Employers free email course (Charles)

    Devops Remote Conf (Charles)

    JS Remote Conf (Charles)

    Freelance Remote Conf (Charles)

    Free Code Camp (Charles)

    Flatiron School (Charles)


              273 RR Contempt Culture with Aurynn Shaw        

    01:11 - Aurynn Shaw Introduction

    01:56 - Contempt Culture

    07:32 - “But PHP is objectively bad….”; True Objectivity

    10:35 - The History of The Contempt Culture in Tech Spaces

    12:40 - Reinventing Tools

    15:00 - “Intent is not magic.”

    20:09 - Contempt Culture in the Ruby Community Towards PHP

    21:56 - Why Contempt Culture Forms

    29:08 - DevOps and the Disruption of Culture

    32:34 - Open Source vs Free Software

    36:33 - Cultural Implications/Ramifications Around Open Source

    41:32 - Service Culture

     

    Picks


              243 RR Books That Aren't POODR        

    02:36 - Software Development and Reality Construction by Christiane Floyd

    05:42 - Peter Naur: Programming as Theory Building  

    07:55 - The Art of Empathy: A Complete Guide to Life's Most Essential Skill by Karla McLaren

    13:14 - Programming Elixir: Functional |> Concurrent |> Pragmatic |> Fun by Dave Thomas

    14:32 - ng-book 2

    16:09 - Paper Reading Group

    19:58 - Mindset: The New Psychology of Success by Carol Dweck

    20:29 - Cracking the Coding Interview, 6th Edition: 189 Programming Questions and Solutions by Gayle Laakmann McDowell

    22:01 - Ruby Rogues Book Club Books Episodes

    22:43 - Books to Learn When You’re Learning to Become a Software Developer

    33:07 - Technical Programming Books

    41:17 - Programming and Business Books

    Picks

    Mark Manson: The Most Important Question of Your Life (Jessica)
    Dan Luu: Normalization of Deviance in Software: How Completely Messed Up Practices Become Normal (Coraline)
    The Noun Project (Avdi)
    Lies My Teacher Told Me: Everything Your American History Textbook Got Wrong by James W. Loewen (Avdi)
    CES (Chuck)
    Bill Buxton: Avoiding the Big Crash (Jessica)


              238 RR Refactoring        

    Check out JS Remote Conf and submit a Ruby Remote Conf CFP!

     

    02:26 - Refactoring (Definition) and Where People Fail

    03:55 - Tests and Refactoring

    • How do you decide when your codebase is untestable?

    10:59 - Managing Scope

    11:42 - Why We Refactor; The Value of Refactoring

    17:13 - Refactoring Tools

    20:40 - When Refactoring Gets Put Off; Establishing a Code Culture

    26:23 - Refactoring Strategies

    37:38 - Performance Tradeoffs

    41:42 - Generative Testing

    50:33 - Measurement

    53:08 - Examples and Resources

    Picks

    Longmire (Avdi)
    Clash of Clans (Chuck)
    Star Wars Commander (Chuck)
    Cleaning your office (Chuck)
    Hsing-Hui Hsu: Time flies like an arrow; Fruit flies like a banana: Parsers for Great Good @ RubyConf 2015 (Coraline)
    Betsy Haibel: s/regex/DSLs/: What Regex Teaches Us About DSL Design @ RubyConf 2015 (Coraline)
    Velocity 2016 Call for speakers (Saron)
    RailsConf 2016 (Saron)


              199 RR Deployments with Noah Gibbs        

    02:08 - Noah Gibbs Introduction

    02:38 - Rebuilding Rails: Understand Rails by Building a Ruby Web Framework by Noah Gibbs

    03:06 - Sinatra

    03:47 - Rack

    07:32 - Deploying Apps

    12:22 - Support, Operations, and Monitoring

    20:36 - Social Differences Between Communities: Ruby vs Python

    27:18 - Deployment Tools Targeting Polyglot Architectures

    28:39 - Ease of Deployment

    32:26 - The Success of a Language = The Deployment Story

    33:51 - Feedback Cycle

    34:57 - Reproducibility

    35:44 - Docker and Configuration Management Tools

    44:06 - Deployment Problems

    46:45 - Ruby Mad Science

    • madscience_gem
    • Community Feedback
    • The Learning Curve
    • Roadmap
      • Multiple VM Setups

    Picks

    TuneMyGC (Coraline)
    Bear Metal: Rails Garbage Collection: Tuning Approaches (Coraline)
    Rbkit (Coraline)
    Get out and jump in a mud puddle! (Jessica)

    Release It!: Design and Deploy Production-Ready Software by Michael T. Nygard (Noah)
    Ruby DSL Handbook by Jim Gay (Noah)


              195 RR Building Your Technology Radar with Neal Ford        

    02:25 - Neal Ford Introduction

    02:20 - The Thoughtworks Technology Radar

    06:28 - Quadrants

    • Techniques
    • Tools
    • Languages & Frameworks
    • Platforms

    07:01 - Categories (Rings)

    • Hold
    • Assess
    • Trial
    • Adopt

    09:23 - Adopting New Technologies

    14:42 - Providing Familiarity Resources

    15:24 - Radars as Resources and Lifecycle Assessment Tools

    18:36 - Themes

    22:17 - Making Decisions

    • Diversify
    • Testability

    27:40 - Jamming Radars

    31:53 - Hireability?

    • Paying Developers to Learn

    36:54 - Financial Portfolios and Planning Your Career

    • Specialization vs Generalization

    42:03 - Software Architecture & Engineering Practices

    • Microservices

    43:57 - Functional Programming

    44:16 - Estimation

    46:03 - Creating Your Own Radar

    Picks

    All Watched Over by Machines of Loving Grace (Avdi)
    The Project Euler Sprint (Coraline)
    Gloom (Coraline)
    The Bad Plus: Inevitable Western (Jessica)
    tmate (Jessica)
    Screenhero (Chuck)
    Slack (Chuck)
    DevOps Bookmarks (Neal)
    Elvis has left the ivory tower by Neal Ford (Neal)
    Culture Series (Neal)


              113 RR DevOps with Nathen Harvey        
    In this episode, the Rogues talk about DevOps with Nathen Harvey of Chef.
              New Post: Ongoing support and updates?        
    That's OK. If it's not being actively updated I guess I'll look for another solution(?). Some of the issues reported, like breaking the line breaks, nobody seems interested in addressing. I know people offered some alternative solutions and you pointed the OP to where to look at the code, but I'd really prefer that the output be the same as the input except for the "documented" updates via the transform files. I can't get this past the technical evaluation folks if it dramatically reformats the config file and I don't have the coding skill set to update the tool. I'm just a DevOps guy looking for a tool to do XML Transforms outside MSBuild.

    Thanks.

              Building a Business Case for DevSecOps?  Our New Dashboard Can Help.        

    Many DevOps practices have implemented tools to deliver applications faster, while minimizing risk related to open source and third party components. Our Nexus Lifecycle solution was designed to do just that, scale DevOps early and everywhere with precise intelligence about the hygiene of the open source components you are consuming. With this intelligence, you can automate the enforcement of your open source policies and deliver secure applications at scale.


              The Difference Between DevOps and Everything Else        

     

    In my role I get to attend several conferences, meet with customers, give talks, and sit on a lot of panel discussions where the main topic is DevOps. I can report that while there has been a decline in folks asking, "what is DevOps," it is a question that still lingers. For many, the conversation has moved on to discussing the challenges others have encountered in their DevOps adoptions. 


          [DSP 2017] 27# BUILD 2017 ep. 4 (Windows, XAML, Fluent Design, ARM, desktop)        

    This time I'm returning once again to this year's BUILD. Today the topics revolve around Windows development. Besides headline subjects such as Fluent Design, the new features in UWP XAML, and the Windows Bridge, there are a few gems here, like Windows 10 on ARM and UWP XAML hosted in WPF (and it's even coming to Windows Forms). What's more, it turns out that thanks to .NET Standard 2.0 you can breathe new life into aging Windows Forms code with DataSets and reuse that logic directly in Xamarin! More and more integration and reuse is possible between technologies from different generations -- keep it up!

     

    What's new and coming for Windows UI: XAML and composition

    Semantic animations (future)

    aka.ms/windowsui/inwin32

    finally!


     

    Introducing Fluent Design

    Documentation and samples, plus design tools

     

    Windows 10 on ARM

    a prototype device

    7zip with an unmodified old-school installer

    universal apps get an ARM package from the Windows Store

     

    Developer's Guide to the Galaxy #WinDev, Part 2


    Surface Dial support in JavaScript

    A recipe for animating menu items out one after another using composition

    Animations using composition and InteractionTracker


    UWP Community Toolkit

    a sample app

    controls: MasterDetailsView, Expander, HeaderedTextBox

    sharing with social networking services (e.g., Facebook)

    Bing

     

    Bring your desktop apps to UWP and the Windows Store using the Desktop Bridge


    the full Office suite in the Windows Store


    Packaging Project: manifest editor, Deploy

    Demo: calling UWP APIs from a Windows Forms application

    UWP components (e.g., tasks)

    we can also build just a new UI and have it communicate with the .exe of the old desktop application…


    XmlDocument in UWP


     

    Modernize WinForms and WPF apps with maximum code reuse, cross-platform reach, and efficient DevOps


    Work is underway on a tool that will make it easier to add references from UWP.


    Demo: Cortana launches a Windows Forms window. Arguments from the link are passed as application arguments.

    UWP XAML embedded inside WPF! (prototype) Ultimately, hosting will be available in both WPF and Windows Forms.


    DataSets in a .NET Standard 2.0 project that we can plug into Xamarin


    .NET Standard 2.0 supports the SQL client


    VS Mobile Center - support for UWP and Desktop Bridge


              Episode 28: Azure Functions for Mobile Apps with Donna Malayeri | The Xamarin Show        

    This week, James is joined by friend of the show Donna Malayeri, Program Manager at Microsoft in Azure Functions, who introduces us to serverless compute with Azure Functions. We discuss what Azure Functions is, how they work, and why they matter for mobile developers. Donna walks us through several mobile focused scenarios that Azure Functions are ideal for.

    Segments:

    • [01:00] What are Azure Functions and Serverless Compute
    • [03:30] Where to Get Started
    • [07:30] Mobile Use Cases for Azure Functions
    • [10:00] Our First Azure Function
    • [15:00] Content Moderation Sample
    • [22:30] App Insights Statistics
    • [28:00] Durable Azure Functions

    Show Links:

    Useful Links:

              Asian Financial Services Congress 2016        

    VirtusaPolaris is delighted to be an Exhibitor Partner at the Asian Financial Services Congress on March 3-4 at Marina Bay Sands in Singapore.

    The 12th Annual Asian Financial Services Congress is a two-day event organized by IDC, showcasing a comprehensive research-based agenda blended with technology innovations and business and operational best practices. It is one of Asia's most influential gatherings, attended by the world's leading economists, practitioners, and regulators from the financial services sector.

    At the event, IDC Financial Insights once again puts together a panel of industry leaders and champions, this time to align the concepts of Beyond-Digital and the New Standards of Customer Experience.

    Join us and meet our experts at booth #E04 to discover how we enable BFS firms to reimagine banking in the digital era.

    Our key focus areas at the event include:

    • Tokenization and Mobile Wallets
    • SMART Banking
    • Robotic Process Automation and Artificial Intelligence
    • Blockchain and Smart Contracts
    • DevOps

    To schedule a 1-on-1 meeting with our experts, click here to set an appointment.

    Learn more on our Banking and Financial Services.


              IAOP: OWS16        

    Come meet our experts at booth #6 and learn how VirtusaPolaris' IT outsourcing approach focuses on improving IT efficiency through Agile DevOps, increased automation, and transforming production operations from reactive to pre-emptive.

    https://www.iaop.org/summit


              Tip #936: Grow up to Azure Logic Apps        
    It was never a secret that Microsoft Flow was built on top of Azure Logic Apps. However, until recently, the two platforms were independent: one designed to attract end-users and customizers, and the other, developers and DevOps. The day has come to grow up – it is now possible for Flow users to save any flow […]
              Senior DevOps Engineer - Dealer-FX - Markham, ON        
    To support you in this organized chaos we give you gorgeous new offices in Markham and:. What can you expect as our next Senior DevOps Engineer?...
    From Dealer-FX - Wed, 12 Apr 2017 19:41:08 GMT - View all Markham, ON jobs
              Microsoft Ignite New Zealand, Microsoft Surface Studio        
    Earlier this week I had the chance to attend this year's Microsoft Ignite New Zealand. This was the ninth year I attended the event, previously known as Microsoft TechEd.

    Many, many things changed over the years and while Microsoft Ignite is still a technology event at heart, things changed, just the same as Microsoft did over the years.

    If you were one of the couple of thousand attendees, you had the chance to learn not only from technical sessions but also from personal development sessions -- sessions created to help people progress in their careers not only through their geeky prowess, but by getting better at how that prowess is applied in relating to yourself, to other people in your job, and to your life.

    If you attended the keynote session you'd have heard from local Microsoft people and international guests who showed how to use technology to "empower every person and every organisation on the planet to achieve more."

    The message was powerful and easy to understand: technology for technology's sake is not the point. Today's technology is used to make lives better -- from using Internet of Things and Big Data to make better, intelligent wheelchairs, to solving global water challenges with cloud technology.

    I had the chance to talk to Microsoft experts from different areas, from Dona Sarkar (Microsoft Windows 10 and Windows Insider) to Donovan Brown (on how Microsoft is making DevOps an integral part of its stack), and all of them had the passion to make this mission come to life.

    During the same week, Microsoft announced its new Surface device. A long-term project that goes back to the first Surface concept (remember the Surface table?), this all-in-one computer integrates design and functionality, plus extra accessories that can help developers create new interfaces and experiences, making it a dream - one that will be here in early 2017.

    Watch the video below and tell me it's not a work of art?


              Cloud operating system deployment: WinPE in Azure        
    Jason Ryberg is a Consultant for Microsoft, where he writes PowerShell code and provides DevOps support.  Have you ever wanted to boot to WinPE in Azure and select a Microsoft Deployment Toolkit (MDT) Task Sequence?  As part of an informal cloud-readiness evaluation, I was asked to deploy a server image to Azure. The image that...
              Picking brains, foregoing severance in order to speak out, and hiring for diversity takes work        
    I scored a last minute invitation to DevOpsDays DC which takes place Monday and Tuesday! I'm really excited to see my DevOps friends, and I'll be taking a portable microphone with me to hopefully conduct some on-site interviews! Tweet me if you're going? Also, due to a family emergency, there's still time to pre-order hard … Continue reading Picking brains, foregoing severance in order to speak out, and hiring for diversity takes work
              Making conferences inclusive for women, parents, and people with dietary restrictions, and an armor-plated cat feeder (for fun!)        
    We still have copies for Issue 7: Security for sale!  We are also looking for sponsors to put those articles online. Reach out to audrey@recompilermag.com if you or someone you know may be interested. Shoutouts: DevOpsDays: Portland is offering 10% off registration for Recompiler supporters, readers and listeners! Use the code RECOMPILERSUPPORTER. Platform.sh is PolyConf's platinum sponsor and … Continue reading Making conferences inclusive for women, parents, and people with dietary restrictions, and an armor-plated cat feeder (for fun!)
          LogiGear on Technology & Opportunities        
    Time: 8:30am – 12:30pm, May 6th – Training room, 6th Floor. Location: 1A Phan Xích Long, Ward 2, Phú Nhuận, Ho Chi Minh City. DevOps: DevOps is the hot term of the moment. Many companies are looking for ways to bring DevOps into Agile or Scrum. Adjusting Scrumbutt is becoming a [...]
              #WIT hashtags, thoughts on panels, and livable code        
    We still have copies for Issue 7: Security for sale!  We are also looking for sponsors to put those articles online. Reach out to audrey@recompilermag.com if you or someone you know may be interested. Shoutouts: DevOpsDays: Portland is offering 10% off registration for Recompiler supporters, readers and listeners! Use the code RECOMPILERSUPPORTER. Empowering Instagram Hashtags for Women … Continue reading #WIT hashtags, thoughts on panels, and livable code
              DevOps Jobs: 10 ways to spot a great DevOps shop        

    EnterprisersProject: When interviewing, how do you tell a fantastic DevOps organization from a mediocre one?


              Team Foundation Server 2015 Now Available        
    Today, we are making available the final release of Team Foundation Server 2015.  Team Foundation Server 2015 provides a complete on-premises Application Lifecycle Management and DevOps solution for any development team, and represents a substantial update in both breadth and depth of capabilities for Team Foundation Server. TFS2015 makes it easier than ever for development...
              Docker Java App Deployment – 4 App Stacks: Tomcat, GlassFish, Jetty and JBoss        
    Sign Up for FREE on http://DCHQ.io to get access to out-of-box multi-tier Java application templates (including the Movie Store app on Tomcat, JBoss, GlassFish and Jetty) along with application lifecycle management functionality like monitoring, container updates, scale in/out and continuous delivery. Background Java developers and DevOps professionals have long struggled to automate the deployment of ...
              The Value of Your QA Department        
    Becoming an agile company means changes to the internal structure as well as the team structure. The new formula should focus on having DevOps and QA departments that work together closely. If there is already a QA department, its role needs to be analyzed, and the focus should lie on how much value QA can add overall. Testing should not be the last stage before release, like in the waterfall [...]
              You Can't Do CI/CD Without Automated Testing        
    If you ask most DevOps experts what goes into a Continuous Integration or Continuous Delivery chain, they’ll mention components like CI servers and code repositories. They’re less likely to discuss automated testing tools, despite the fact that automated testing is just as crucial in order to achieve complete CI/CD. Below, I explain just how important automated testing is for a CI/CD [...]
              Devops Engineer (Remote or Local) - ICON Health & Fitness, Inc. - Logan, UT        
    Experience with MongoDB. Manage and query SQL and MongoDB databases. Remote and/or On-Site....
    From ICON Health & Fitness, Inc. - Fri, 09 Jun 2017 00:38:28 GMT - View all Logan, UT jobs
              DevOps Software Engineer, iOS Systems - Apple - Santa Clara Valley, CA        
    5+ years working with load balancers and databases (Oracle, MySQL, Cassandra, Elasticsearch, MongoDB). Imagine what you could do here....
    From Apple - Thu, 06 Jul 2017 12:59:49 GMT - View all Santa Clara Valley, CA jobs
              Senior Devops Software Engineer - Crypto Services - Apple - Santa Clara Valley, CA        
    5+ years working with load balancers and databases (Oracle, MySQL, Cassandra, Elasticsearch, MongoDB). We are seeking a talented and motivated software engineer...
    From Apple - Sun, 25 Jun 2017 12:47:21 GMT - View all Santa Clara Valley, CA jobs
              eCube Systems Announces NXTmonitor 9.0 for DevOps and Application Performance Management        

    NXTmonitor expands APM features for multi-platform application management and DevOps on Windows, OpenVMS, UNIX and Linux.

    (PRWeb April 24, 2017)

    Read the full story at http://www.prweb.com/releases/2017/04/prweb14252934.htm


              eCube Systems Announces New Release of NXTware Remote 4.6.5 for OpenVMS        

    Latest NXTware Remote includes NXTware Remote Builder integration to Jenkins plugin, DevOps remote deploy, and audit functions for development events on OpenVMS

    (PRWeb March 27, 2017)

    Read the full story at http://www.prweb.com/releases/2017/03/prweb14183833.htm


              eCube Systems Announces DevOps Visual Solution NXTmonitor at DevOps Summit/Cloud Expo in New York        

    eCube Systems is attending its first DevOps Summit at the Cloud Expo and will give a presentation on NXTmonitor, the APM Visual tool for DevOps.

    (PRWeb June 08, 2016)

    Read the full story at http://www.prweb.com/releases/2016/06/prweb13463939.htm


              eCube Systems Announces New DevOps Solution NXTmonitor        

    The new name, NXTmonitor, conveys the visual nature of eCube’s Application Performance Management system, the successor product to NXTminder.

    (PRWeb August 31, 2015)

    Read the full story at http://www.prweb.com/releases/2015/07/prweb12877087.htm


              pCloudy        
    pCloudy provides the simplest mobile app testing and digital assurance platform. Apart from providing the most comprehensive set of quality assurance tools for development, quality assurance (QA), and DevOps, it also offers automation, manual testing, and on-device performance testing solutions.
              Herding Code 217: Nick Craver on Stack Overflow Engineering        
    The guys talk to Nick Craver about all the magic behind the scenes at StackExchange global headquarters. Download / Listen: Herding Code 217: Nick Craver on Stack Overflow Engineering Show Notes: Hello (00:15) Nick explains his job: software development, sysadmin (site reliability engineer), and sometime DBA. Just not devops. (01:30) Jon introduces Nick’s recent blog […]
              Red Hat Certified Architect (RHCA) - Aja dwc , United Arab Emirates, Dubai         
    Self-titled "the capstone certificate", RHCA is the most complete certificate in the program, adding an enterprise-level focus.
    There are concentrations inside the RHCA:
    • Datacenter: skills with tasks common in an on-premise datacenter
    • Cloud: skills with tasks common to cloud infrastructure
    • Devops: skills and knowledge in technologies and practices that can accelerate the process of moving applications and updates from development through the build and test processes and on to production
    • Application Development: skills in enterprise application development, integration, and architecture
    • Application Platform: skills with tasks common for building and managing tools and applications

    Cost:

    Certified


              C#.NET - Institutes , Bangalore         
    Sgraph Infotech is a Bangalore-based complete training academy with strong placement support. IT training has been provided by software professionals with rich industry experience since 2005. Our core training services include Oracle DBA, SQL, PL/SQL, MSBI, Power BI, and DevOps. Our training courses are the latest in demand and high quality; we provide real-time training with an excellent training lab at an affordable cost.



              DevOps Primer: Getting Started with VMware Photon Platform and Cloud Native Applications        

    For some time now I’ve been trying to free up some time to get stuck into the Photon Platform and gain a better understanding of Cloud Native Applications. Container technology (e.g. Docker) is starting to gain traction in production environments and it’s a popular topic amongst the developer community. I am particularly interested in End […]

    The post DevOps Primer: Getting Started with VMware Photon Platform and Cloud Native Applications appeared first on Ray Heffer.


              Redhat Exams - Information Technology , Cairo         
    RHCA Exams only in Egypt at ValueSYS. 

    RHCA: What does it mean to be an RHCA? 
    What is an RHCA?
    A Red Hat Certified Architect (RHCA) is a Red Hat Certified Engineer (RHCE®) or Red Hat Certified JBoss Developer (RHCJD) who attained Red Hat’s highest level of certification by passing and keeping current five additional hands-on exams on Red Hat technologies.
    IT professionals can select what they wish to focus on, or choose a suggested concentration: DevOps, Cloud, Datacenter, Enterprise application development, or Application platform.


     



              Sr. DevOps Engineer - Veritas Technologies - Remote, OR        
    Working experience with Linux, UNIX and Windows systems. Job Description and Responsibilities:....
    From Veritas Technologies - Sat, 22 Jul 2017 14:42:16 GMT - View all Remote, OR jobs
              Architect SI - paris (ESB, SOA, Big Data, Microservices, DevOps, Agile) H/F - Anson McCade - Paris        
    A prestigious French consultancy is recruiting an IT Architect (ESB, SOA, Big Data, Microservices, DevOps, Agile) for its unit specializing in Architecture (ESB, SOA, Big Data, Microservices, DevOps, Agile). You will join a team of Architecture experts that supports large companies in streamlining their information systems and helping them innovate. The projects will mainly be strategically important engagements around Microservices, APIs, consulting on...
              10 Ways To Attract Tech Talent As A Growing Startup        

    Hiring tech talent — software engineers, data scientists, DevOps, and related — is hard, especially when your business is new […]

    The post 10 Ways To Attract Tech Talent As A Growing Startup appeared first on Cox Blue.


              Job-Börse: Linux- und IT-Jobs vom 07.08.2017 von bitblokes.de        

    Current job listings for you from bitblokes.de, from the IT industry, as of 07.08.2017. Have fun browsing and good luck! Heise Medien Gruppe GmbH & Co. KG – DevOps Engineer (m/f) (Hanover, Germany) DevOps Engineer (m/f) for the Hanover location DevOps Engineer (m/f) at Heise You have professional experience in the role of DevOps Engi… 7 August 2017 | 4:52 pm Daimler AG – Employee (m/f) IT Support (Berlin, Germany) Daimler AG Mercedesstraße 137 … 7 August 2017 | 4:52 pm ERWEKA GmbH – […]

    The post Job-Börse: Linux- und IT-Jobs vom 07.08.2017 von bitblokes.de appeared first on Linux | Games | Open Source | Server | Desktop | Cloud | Android.


              Xamarin DevOps        
    Xamarin DevOps   Short introduction Application development – when you hear this you imagine some IDE where you can write code and build your app. Nowadays it looks different (and it should). When starting a mobile app project, not only with Xamarin, you have to do some analysis, a development plan, and design research. There is … More Xamarin DevOps
              DevOps Engineer AWS and Windows        

              #110 Amo programar        

    Well, if you like programming and it is your passion, stick with it. It is not easy to be a true programmer: start learning things like patterns, SOLID principles, TDD, BDD, Domain-Driven Design, CQRS and a long etcetera. Pay no attention to those who say a programmer is a "code monkey" and that "engineers" do not program and instead do who-knows-what. The fallacy of the software architect who does not program only makes sense in old-fashioned organizations where dozens of categories are created (analyst, analyst-programmer, organic analyst, senior programmer, junior programmer, architect...) simply to fit the hierarchical organizational structure.

    Just pick any relevant personality in the software world - any of them - and all, without exception, are excellent programmers. Or take any company that makes "successful" software and look at how they do it: do you think at Google there is an architect making little drawings and a bunch of code monkeys building Android? Or at Microsoft itself, do you think an operating system is "designed" in UML? Or that the AI of the latest FIFA had a requirements document written by an analyst according to Metrica 3? Do you think LinkedIn was built by subcontracting the programmers to a body shop and handing them a design? The central and most important task on the technical side of any project is the code. A software engineer must not only know how to program; to lead a project they have to be the best programmer on the team, guiding the rest along the way, doing code reviews, doing pair programming, helping the team select the libraries and frameworks that best fit the project, and so on.

    That is why in many places a programmer earns good money: because programming is not considered mechanical work done from a design document - programming is the design! (Jack Reeves said it ages ago: www.developerdotstar.com/mag/articles/reeves_design_main.html). A team usually consists of a technical lead (the most experienced programmer on the team) and several other programmers (and, depending on the project, sometimes usability experts, domain experts, design experts); instead of a thousand rungs, the hierarchies are flat and the teams self-organizing.

    That said, the technical side is not just programming; there is more. You need to know about persistence (SQL and NoSQL), how to organize a version control workflow (branches, merges, etc.), a bit about continuous integration, the types of testing, something about DevOps and continuous deployment, something about organizing a team (Scrum, Kanban, XP...), and many more things. (Of course you need to know UML, but it is not one of the most important skills either.)

    If you really want to become a good technologist, look for these two books, which might make you see this world of programmers differently: The Passionate Programmer and Apprenticeship Patterns. Check whether there are local groups in your area that meet for talks or practice sessions (in Madrid there are now plenty: Java, Ruby, Python, JavaScript, agile, etc.), start attending the meetups, and find an open-source project to contribute to...

    That said, in Spain it is hard to have a technical career. It is possible, but there are few opportunities (for now I am still here, though I do not rule out going abroad or working remotely). If you go abroad, to the US or Europe (especially the north), you will realize that the view of "programming" held around here is very far from the reality of the companies producing REAL software (not yet another CRUD app for the insurance company or bank of the day).

    » author: alfredo_casado


              DevOps Cloud Engineer        
    VA-Arlington, If you are a DevOps Cloud Engineer with experience, please read on! We are a top tier, global healthcare start-up whose mission is to revolutionize the web/cloud healthcare industry for the better with our cutting edge technologies! We are growing due to the success we are experiencing and currently we are seeking a DevOps Cloud Engineer to join our team! The ideal candidate will have solid experi
              Linux Kernel and Linux Foundation's Cloud Native Computing Foundation (CNCF)        

              DevOpsBy 2016        
    On October 16, the DevOpsBy 2016 conference takes place, dedicated to organizing a professional DevOps process and to reviewing the best DevOps practices and tools. You will strengthen your knowledge of the advantages of DevOps over more traditional development methods and become fully convinced of the need for a dedicated specialist to oversee these processes. There will be … Continue reading
              DevOps: SUSE’s Innovative Tools Driving your CI/CD Future        
    Speakers: Jeff Price – SUSE Cameron Seader – SUSE Recorded at SUSECON 2016 as session TUT91504 Whether you think you have DevOps or you’re thinking of having a more DevOps-focused infrastructure, there are always gaps to fill and processes that can be more refined to adapt and respond to your business needs and growth. Come […]
              DevOps and the Open Build Service        
    Speaker: Adrian Schroeter – SUSE Recorded at SUSECON 2016 as session TUT89692 The Open Build Service continues as the central tool by which all SUSE products are built and distributed. It is the very essence of DevOps. OBS 2.7 was recently released which contains a few new features, and sets the stage for future improvements […]
              DevOps From a DevOps Manager’s Perspective        
    Speaker: Craig Gardner – SUSE Linux GmbH Recorded at SUSECON 2016 as session TUT89691 You’ve heard that DevOps (or ITOps, or WhateverOps) will solve all your development-to-deployment problems and how agile processes can increase the velocity of your projects. But you also likely know that it’s not a silver bullet that solves all problems. This […]
              Ingénieur Cloud / DevOps (H/F) - SYAGE - Paris        
    Position: You will work closely with the team's five developers and play a key role in the evolution and migration of their web platform to cloud hosting. Your responsibilities will be: Cloud / Systems engineering: migrating the web platform to the Amazon Web Services (AWS) cloud; designing and evolving the production environment of the new service; setting up monitoring tools (efficiency, capacity, security and...
              System/DevOps Administrator - CMHWorks, LLC - Purcellville, VA        
    *Position Details: * * 100% Tele-commute * Part-Time Flexible Hours * Contract to Hire Role *Responsibilities* *: * * Build and support a DevOps pipeline and ... $30 - $40 an hour
    From Indeed - Wed, 26 Jul 2017 19:53:43 GMT - View all Purcellville, VA jobs
              After all, it might not matter - A commentary on the status of .NET        

    Do you know what was the most menacing nightmare for a peasant soldier in Medieval wars? The approach of a knight.

    The approach of a knight - a peasant soldier's nightmare [image source]

    Famous for gallantry and bravery, armed to the teeth and having many years of training and battle experience, knights were the ultimate war machine for the better part of the Medieval period. The likelihood of survival for a peasant soldier in an encounter with a knight was very small: he had to somehow deflect or evade the attack of the knight’s sword or lance while wielding a heavy sword himself, landing his blow at exactly the right moment as the knight passed. Not many peasants had the training or prowess to do so.


    Appearing around 1000 AD, the dominance of knights grew following the conquest of William of Normandy in the 11th century and reached its height in the 14th century:
    “When the 14th century began, knights were as convinced as they had always been that they were the topmost warriors in the world, that they were invincible against other soldiers and were destined to remain so forever… To battle and win renown against other knights was regarded as the supreme knightly occupation” [Knights and the Age of Chivalry,1974]
    And then something happened. Something that changed military combat for centuries to come: projectile weapons.
    “During the fifteenth century the knight was more and more often confronted by disciplined and better equipped professional soldiers who were armed with a variety of weapons capable of piercing and crushing the best products of the armourer’s workshop: the Swiss with their halberds, the English with their bills and long-bows, the French with their glaives and the Flemings with their hand guns” [Arms and Armor of the Medieval Knight: An Illustrated History of Weaponry in the Middle Ages, 1988]
    The development of the longsword had made the knight's attack more effective, but no degree of training or improved plate armour could stop the rise of projectile weapons:
    “Armorers could certainly have made the breastplates thick enough to withstand arrows and bolts from longbows and crossbows, but the knights could not have carried such a weight around all day in the summer time without dying of heat stroke.”
    And the final blow was the handguns:
    “The use of hand guns provided the final factor in the inevitable process which would render armor obsolete” [Arms and Armor of the Medieval Knight: An Illustrated History of Weaponry in the Middle Ages, 1988]
    And with the advent of arbalests, the importance of lifelong training disappeared, since “an inexperienced arbalestier could use one to kill a knight who had a lifetime of training”.

    Projectile weapons [image source]

    Over the course of the century, knighthood gradually disappeared from the face of the earth.

    A paradigm shift. A disruption.

    *       *       *

    After the big promise of web 1.0 went undelivered, resulting in the .com crash of 2000-2001, the development of robust RPC technologies combined with better languages and tooling gradually rose to fulfill the same promise in web 2.0. On the enterprise front, the need to reduce cost by automating business processes led to the growth of IT departments in virtually any company that had a chance to survive the 2000s.

    In small-to-medium enterprises, the solutions almost invariably involved some form of a database in the backend, storing CRUD operations performed on data entry forms. The need for reporting on those databases resulted in Business Intelligence functions employing more and more SQL experts.

    With the rise of e-Commerce, most companies needed an online presence and the ability to offer some form of shopping experience online. On the other hand, to reduce the cost of postage and paper, companies started offering account management online.

    Whether SOA or not, these systems functioned pretty well for the limited functionality they were offering. The important skills the developers of these systems needed were a good command of the language used, object-oriented design principles (e.g. SOLID, etc), TDD and knowledge of agile principles and process. In terms of scalability and performance, these systems were rarely, if ever, pressed hard enough to break - even with sticky sessions they could work as long as you had enough servers (it was often said “we are not Google or Facebook”). Obviously availability suffered, but downtime was something businesses were used to, and it was accepted as the general failure of IT.

    True, some of these systems were actually “lifted and shifted” to the cloud, but in reality not much had changed from the naive solutions of the early 2000s. And I call these systems The Simpleton Swamps.

    Did you see what was lacking in all of the above? Distributed Computing.

    *       *       *

    It is a fair question to ask ourselves: what were we, as the .NET community, doing during the last 10 years of innovations? The first wave of innovations was the publication of the revolutionary papers on BigTable and Dynamo, which later resulted in the emergence of the NoSQL movement with Apache Cassandra, Riak and Redis (and later Elasticsearch). [During this time I guess we were busy with WPF and Silverlight. Where are they now?]

    The second wave was the Big Data revolution with Apache Hadoop ecosystem (HDFS, Pig, Hive, Mahout, Flume, HBase). [I guess we were doing Windows Phone development building Metro UI back then. Where are they now?]

    The third wave started with Kafka (and streaming solutions that followed), Grid Computing platforms with YARN and Mesos and also the extended Big Data family such as Spark, Storm, Impala, Drill, too many to name. In the meantime, Machine Learning became mainstream and the success of Deep Learning brought yet another dimension to the industry. [I guess we were rebuilding our web stack with Katana project. Where is it now?]

    And finally we have the Docker family and extended Grid Computing (registry, discovery and orchestration) software such as DCOS, Kubernetes, Marathon, Consul, etcd… Also the logging/monitoring stacks such as Kibana, Grafana, InfluxDB, etc, which started along the way as essential ingredients of any such serious venture. The point is that neither the creators nor the consumers of these frameworks could do any of this without in-depth knowledge of Distributed Computing. These platforms are not built to shield you from it, but merely to empower you to make the right decisions without having to implement a consensus algorithm from scratch or deal with the subtleties of building a gossip protocol.


    And what have we been doing recently? Well, I guess we were rebuilding our stacks again with #vNext aka #DNX aka #aspnetcore. Where are they now? Well, actually a release is coming soon: the 27th of June, to be exact. But anyone who has been following the events closely knows that, due to recent changes in direction, we are still - give or take - 9 to 18 months away from a stable platform that can be built upon.

    So a big storm of paradigm shifts swept the whole industry, and we are still tinkering with our simpleton swamps. Just have a look at this big list; only a single one of them is C#: Greg Young’s EventStore. And looking at the list you see the same pattern, the same shifts in focus.

    The .NET ecosystem is dangerously oblivious to distributed computing. True, we have recent exceptions such as Akka.NET (a JVM port) or Orleans, but they have not really penetrated and infused the ecosystem. If all we want to do is simply build front-end APIs (akin to nodejs) or cross-platform native apps (using Xamarin Studio), that is not a problem. But if we are not supposed to build a sizeable chunk of the backend services, let’s make that clear here.

    *       *       *

    Actually there is a fair amount of distributed computing happening in .NET. Over the last 7 years Microsoft has built a significant number of services that are out to compete with the big list mentioned above: Azure Table Storage (arguably a BigTable implementation), Azure Blob Storage (Amazon Dynamo?) and EventHub (rubbing shoulders with Kafka). Also a highly-available RDBMS (SQL Azure), a message broker (Azure Service Bus) and a consensus implementation (Service Fabric). There is plenty of Machine Learning as well, and, although slowly, Microsoft is picking up on Grid Computing - the alliance with Mesosphere and the DCOS offering on Azure.

    But none of these have been open sourced. True, Amazon does not open source its bread-and-butter cloud either. But AWS has mainly been an IaaS offering, while Azure is banking on its PaaS capabilities - on making Distributed Computing easy for its predominantly .NET consumers. It feels as if Microsoft is saying: you know, let me deal with the really hard stuff, but for sure, I will leave a button in Visual Studio so you can deploy it to Azure.


    At points it feels as if Microsoft, as the lords of the .NET stack fiefdom, having discovered gunpowder, are sending us knights and peasant soldiers to attack with our lances, axes and swords, while keeping the gunpowder weapons and their science safely locked away for the protection of the castle. The .NET community is to a degree contributing to #dotnetcore, while also waiting for the Silver Bullet that #dotnetcore has been promised to be, revolutionising and disrupting the entire stack. But ask yourself: when was the last time that better abstractions and tooling brought about disruption? The knight is dead, gunpowder has changed the horizon, yet there seem to be no ears to hear.

    Fiefdom of .NET stack
    We cannot fault any business entity for keeping its trade secrets. But if the soldiers fall, ultimately the castle will fall too.

    In fact, a single company is not able to pull the weight of re-inventing the emerging innovations. While the quantity of technologies that have emerged from Azure is astounding, quality has not always followed. After complaining to Microsoft about the performance of Azure Table Storage, I found others hitting it too, some abandoning the Azure ship completely.


    No single company is big enough to do it all by itself. Not even Microsoft.

    *       *       *

    I remember when we used to make fun of Java and Java developers (uninspiring, slow, Eclipse was a nightmare). They actually built most of the innovations of the last decade, from Hadoop to Elasticsearch to Storm to Kafka... In fact, looking at the top 100 Java repositories on GitHub (minus Android Java), you find 24 distributed computing projects, 4 machine learning repos and 2 languages. In C# you get only 3 with claims to distributed computing: ServiceStack, Orleans and Akka.NET.

    But maybe it is fine, we have our jobs and we focus on solving different kinds of problems? Errrm... let's look at some data.

    The market share of the IIS web server has halved over the last 6 years - according to multiple independent sources [this source confirms the share was >20% in 2010].

    IIS share of the market has almost halved in the last 6 years [source]

    Now the market share of C# ASP.NET developers is decreasing towards half too, from a peak of 4%:

    Job trend for C# ASP.NET developer [source]
    And if you do not believe that, see another comparison with other stacks from another source:

    Comparing trend of C# (dark blue) and ASP.NET (red) jobs with that of Python (yellow), Scala (green) and nodejs (blue). C# and ASP.NET dropping while the rest growing [source]

    OK, that was actually nothing; what I care about more is OSS. The Open Source revolution in .NET, which had grown at a steady pace since 2008-2009, almost reached a peak in 2012 with the ASP.NET Web API excitement and then grew at a slower pace (almost a plateau, visible on the 4M chart - see appendix). [By the way, I have had my share of these repos: 7 of those are mine.]

    OSS C# project creation in Github over the last 6 years (10 stars or more). Growth slowed since 2012 and there is a marked drop after March 2015 probably due to "vNext". [Source of the data: Github]

    What is worse, the data shows that with the announcement of #vNext aka #DNX aka #dotnetcore there was a sharp decline in new OSS C# projects - the community is in limbo waiting for the release. People find it pointless to create OSS projects on the current platform, and the future platform is so much in flux that it is not stable enough for innovation. With the recent changes announced, it will practically take another 12-18 months for it to stabilise (some might argue 6-12 months; fair enough, take what you like). For me this is the most alarming of all.

    So all is lost?

    All is never lost. You still find good COBOL or FoxPro developers and since it is a niche market, they are usually paid very well. But the danger is losing relevance…

    Practically, can Microsoft pull it off? Perhaps. I do not believe it is hopeless; I feel that with a radical change, taking the steps below, Microsoft could materially reverse the decay:
    1. Your best community brains in Distributed Computing and Machine Learning are in the F# community; they have already built many OSS projects on both - sadly remaining obscure and used by only a few. Support and promote F# not just as a first-class language but as THE preferred language of the .NET stack (and by the way, wherever I said .NET stack, I meant C# and VB). Ask everyone to gradually move. I don’t know why you have not done it; I think someone somewhere in Redmond does not like it, and he/she is your biggest enemy.
    2. Open source a good part of the distributed services of Azure. Let the community help you improve them. Believe me, you are behind the state of the art; frankly, no one will look to copy it. Would someone copy from Azure Table Storage and not Cassandra?!
    3. Stop promoting deployment to Azure from Visual Studio with the click of a button, making Distributed Computing look trivial. Tell them the truth: tell them it is hard, tell them so few succeed, hence they need to go back and study, and forever forget about the one-button-click stuff. You are not doing them a favour, nor yourself. No one should be expected to deploy anything in distributed fashion without sound knowledge of Distributed Computing.

    Last word

    So when I am asked whether I am optimistic about the future of .NET, or about the progress of dotnetcore, I usually keep silent: we seem to be missing the point of where we need to go with .NET - a paradigm shift has been ignored by our ecosystem. True, dotnetcore will be released on the 27th, but after all, it might not matter as much as we care about it. One of the reasons we are losing to other stacks is that we are losing our relevance. We do not have all the time in the world. Time is short...

    Appendix

    Github Data

    Gathering the data from GitHub is possible, but due to search results being limited to 1000 and to rate-limiting, it takes a while to process. The best approach I found was to list repos by update date and keep moving up. I used a Python script to gather the data.
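
    The original Python script is not included in this copy of the post. For illustration only, here is a minimal C# sketch of the same paging idea against the public GitHub search API; the query qualifiers, the client name and the fixed page count are assumptions for the example, not the author's exact script:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    // Sketch: page through C# repos ordered by update date.
    // GitHub caps any one search at 1000 results (10 pages x 100),
    // hence the trick of moving the date window and re-querying.
    class GithubStats
    {
        static async Task Main()
        {
            using (var client = new HttpClient())
            {
                // GitHub rejects requests without a User-Agent header.
                client.DefaultRequestHeaders.UserAgent.ParseAdd("oss-stats-sketch");

                for (var page = 1; page <= 10; page++)
                {
                    var url = "https://api.github.com/search/repositories" +
                              "?q=language:C%23+stars:>=10" +
                              "&sort=updated&order=asc&per_page=100&page=" + page;
                    var json = await client.GetStringAsync(url);
                    Console.WriteLine($"page {page}: {json.Length} bytes"); // parse and store as needed
                }
            }
        }
    }

    Because the search API caps any single query at 1000 results, the real script narrows the query window and repeats, as described above.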

    It is sensible to use the number of stars as the bar for the quality and importance of GitHub projects. But choosing the threshold is not easy, and there is usually a lag between the creation of a project and its gaining popularity. That is why the threshold has been chosen very low. But if you think the drop in creation of C# projects on GitHub was due to this lag, think again. Here is the chart of all C# projects regardless of their stars (0 stars and more):


    All C# projects in github (0 stars and more) - marked drop in early 2015 and beyond

    F# is showing healthy growth, but the numbers of projects and stars are much lower than C#'s. Hence here we look at projects with 3 stars or more:


    OSS F# projects in Github - 3 stars or more
    Projects with 0 stars and more (possibly showing people starting to pick it up and play with it) look very healthy:


    All F# projects regardless of stars - steady rise.


    Data is available for download: C# here and F# here

    My previous predictions

    This is actually my second post of this nature. I wrote one 2.5 years ago, raising alarm bells for the lack of innovation in .NET and predicting 4 things that would happen in 5 years (2.5 years from now):
    1. All Data problems will be Big Data problems
    2. Any server-side code that cannot be horizontally scaled is gonna die
    3. Data locality will still be an issue so technologies closer to data will prevail
    4. We need 10x or 100x more data scientists and AI specialists
    Judge for yourself...


    Deleted section

    For the sake of brevity, I had to delete this section but this puts in context how we have many more hyperscale companies:

    "In the 2000s, not many had the problem of scale. We had Google, Yahoo and Amazon, and later Facebook and Twitter. These companies had to solve serious computing problems in terms of scalability and availability that on one hand lead to the Big Data innovations and on the other hand made Grid Computing more accessible.

    By commoditising the hardware, Cloud computing allowed companies to experiment with problems of scale and innovate for high availability. The results have been completely re-platformed enterprises (such as Netflix) and the emergence of a new breed of hyperscale startups such as LinkedIn, Spotify, Airbnb, Uber, Gilt and Etsy. The rise of companies building software to solve problems related to these architectures, such as Hashicorp, Docker, Mesosphere, etc, has added another dimension to all this.

    And last but not least is the importance of a close relationship between academia and the industry, which seems to be happening after a long (and sad) hiatus. This has led many academic lecturers, acting as Chief Scientists and the like, to influence the science underlying the disruptive changes.

    There was a paradigm shift here. Did you see it?"


              QCon London 2015: from hype to trendsetting - Part 1        
    Level [C3]

    This year I made it to QCon London, and I felt it might be useful to write up a summary for those who would have liked to be there but did not make it for whatever reason. This is also an opportunity to get my head together and summarise a couple of themes inspired by the conference.

    The quality of the talks varied: initially pretty disappointing on the first day, but rising to a real high on the last. Not surprisingly, Microservices and Docker were the buzzwords of the conference and many of the talks had one or the other in their title. It was as if the hungry folks were being served Microservices with ketchup, next with mayonnaise, and yet nothing was as good as Docker with salsa. In fact it is very easy to be sceptical and sarcastic about Microservices or Docker and disregard them as pure hype.

    After listening to the talks, especially the ones on the last day, I was convinced that, with or without me, this train is set to take the industry forward. Yes, the granularity of Microservices (MS) has not been crisply defined yet, and there is a stampede to download and install Microservices on old systems and fix the world. The industry will abuse it, as it reduced SOA to Web Services by adding just a P to the end. Yes, very few people are talking about the cost of moving to MS and exploring the cases where you should stay put. But if your Monolith (even though it pays lip service to SOA) has ground the development cycle to a halt and is killing you and your company, there is a thing or two to learn here.

    Disclaimer: This post by no means is a comprehensive account of the conference. This is my personal take on QCon London 2015 and topics discussed, peppered with some of my own views, presented as a technical writing.

    Microservices

    Yeah, I know you are fed up with hearing the word - but bear with me for a few minutes. Microservices reminded me of my past life: it is a syndrome. A medical syndrome, when first described, does not need its aetiology and pathophysiology all clear and explained - it is just a syndrome, a collection of signs and symptoms that occur together. In the medical world, there can be years between describing a syndrome and finding the what and the why.

    And this is what we are dealing with here, in a different discipline: Microservices are an emerging pattern, a solution to a contextual problem that has indeed occurred. It is a phenomenon that we are still trying to figure out - a lot of head scratching is going on. So bear with it; I think we are in for a good ride beyond all the hype.

    Its two basic benefits are: smaller deployment granularity, enabling you to iterate faster, and a smaller domain to focus on, understand and improve. For me the first is the key.

    So here is a breakdown of a few key aspects of Microservices.

    Conway, Conway, Where Art Thou

    A recurring theme (and at points, ad nauseam) was that MS is the result of reversing cause and effect in Conway's law and using it to your advantage: build smaller teams and your software will take their shape. So in essence, turning Conway's law on its head and using it as a tool that naturally results in a more loosely coupled architecture.



    This by no means is new; Amazon has been doing it for a decade. The size of the teams is nicely defined by Jeff Bezos as "two-pizza teams". But what is the makeup of these teams and how do they operate? As again described by Amazon, they are made up of the elements of a small company, a start-up: developers, testers, BAs, a business representative and, more importantly, operations - aka DevOps.

    Another point, stressed by Yoni Goldberg from Gilt and Randy Shoup, was that the teams charge other teams for using their services and need to look after their finances. They found that doing this reduced team costs by 90% - mainly by optimising cloud and computing costs.

    Granularity: "fits in my head" (does it?)

    One of the key challenges of Microservices is to define the granularity of a Microservice, differentiating it from traditional SOA. And it seems we now have a definition: "its complexity fits in one's head".

    What? To me this is a non-definition and, on any account, a poor one (sorry Dan). After all, there is nothing more subjective than what fits in one's head, is there? And whose head, by the way? If it is mine, I cannot keep track of what I ate for breakfast and lunch at the same time (if you know me personally, you must have noticed my small head), and then there are those giants who can master several disciplines or understand the whole of an uber-domain.

    One of the key properties of a good definition is that it is tangible, unambiguous and objectively prescriptive. Jeff Bezos did not define Amazon team sizes by pizza because he was necessarily a pizza lover.

    In the absence of any tangible definition, I am going to bring my own - why not? Having built one or two, this is really how I feel the granularity of an MS should be, and I am using tangible metrics to define it.

    Granularity of Microservices - my definition

    As is evident, the cross-cutting concerns of a Microservice are numerous: from security, availability and performance to routing, versioning, discovery, logging and monitoring. For a lot of these concerns, you can rely on the existing platform or on common platform-wide guidelines, tools and infrastructure. So the crux of the sizing of a Microservice is its core business functionality; with regard to non-functional requirements, it shares the same concerns as traditional services.

    When not to Microservice

    Yoni Goldberg from Gilt covered this subject to some extent. He basically said: do not start with Microservices; build them when your domain complexity warrants it. He went through his own experience, how they improved upon the ball of mud to nice discrete services, and then how they exploded the number of services when their
    So the takeaways (with some personal salt and pepper added): I would say do NOT consider Microservices if:
    • you do not have the organisation structure (small cross functional teams)
    • you are not practising Devops, automated build and deployment
    • you do not have (or cannot have) an uber monitoring system telling you exactly what is happening
    • you have to carry along a legacy database
    • your domain is not too big

    Microservices is an evolutionary process

    Randy Shoup explained how the process towards Microservices has been an evolutionary one, usually starting with the Monolith. He stressed "evolution, not intelligent design", and how in such an environment governance (oh yeah, all ye Enterprise Architects, listen up) is not the same as in traditional SOA: it is decentralised, and adoption of a practice is based purely on how useful it is.

    Optimised message protocols now a must

    Mentioned in more than a couple of talks, moving to ProtoBuf, Avro, Thrift or similar seems to be a must in all but trivial Microservice setups. One of the main performance challenges in MS scenarios is network latency and the cost of serialising/deserialising over and over across multiple hops, and JSON simply does not cut it anymore.

    Source: Thrift vs Protobuf comparison (http://devres.zoomquiet.io/data/20091111011019/index.html)
    Be ready to move your APIs to these message protocols - yes, you lose some simplicity, but trading it off for performance is always a necessary evil. Rest assured, nothing stops you from using JSON while developing and testing, but if your game is serious, start changing your protocols now - and I am too; the item is already added to my technical backlog.
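
    To make the trade-off concrete, here is a minimal sketch of a binary message contract using protobuf-net, one of the popular Protocol Buffers libraries for .NET (the Order type and its fields are invented for the example):

    using System.IO;
    using ProtoBuf; // protobuf-net NuGet package

    // A hypothetical message contract; field numbers, not property names,
    // go on the wire, which is part of what keeps the payload small.
    [ProtoContract]
    public class Order
    {
        [ProtoMember(1)] public int Id { get; set; }
        [ProtoMember(2)] public string Customer { get; set; }
        [ProtoMember(3)] public decimal Total { get; set; }
    }

    public static class OrderWire
    {
        public static byte[] Serialise(Order order)
        {
            using (var ms = new MemoryStream())
            {
                Serializer.Serialize(ms, order); // compact binary instead of JSON text
                return ms.ToArray();
            }
        }

        public static Order Deserialise(byte[] payload)
        {
            using (var ms = new MemoryStream(payload))
            {
                return Serializer.Deserialize<Order>(ms);
            }
        }
    }

    Across multiple service hops, shaving both payload size and (de)serialisation CPU in this way compounds quickly.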

    What I was hoping to hear about and did not

    Microservice registry and versioning best practices were not mentioned at all. I tried to quiz a few speakers on these but did not quite get a good answer. I suppose the space is up for grabs.

    Need for Composition Services/APIs

    As experienced personally, in an MS environment you end up with two different types of services: functional Microservices, which own their data and are the authority in their business domain, and composition APIs, which do not normally own any data and bring value by composing data from several other services - normally involving some level of business logic affecting the end user. In DDD terms, you could find some similarity with facade services; Yoni used the term "mid-tier services".

    Composition services can bring a lot of value when it comes to caching, pagination of data and enriching the information. They practically scatter the requests, gather the results and compose them into a response - fan-out is another term used here.

    By inherently depending on many services, they are notoriously susceptible to performance outliers (to be discussed in the second post) and failure scenarios, which might warrant a layered cache backed by soft storage with a longer expiry, as a fallback in case a dependent service is down.
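
    A minimal sketch of such a scatter-gather in C# follows; the three service URLs and the naive string composition are hypothetical:

    using System.Net.Http;
    using System.Threading.Tasks;

    public class ProductPageComposer
    {
        private readonly HttpClient _client = new HttpClient();

        // Scatter requests to several owning services in parallel,
        // then gather and compose the results into one response.
        public async Task<string> ComposeAsync(string productId)
        {
            var detailsTask = _client.GetStringAsync($"http://catalogue/api/products/{productId}");
            var priceTask   = _client.GetStringAsync($"http://pricing/api/prices/{productId}");
            var stockTask   = _client.GetStringAsync($"http://inventory/api/stock/{productId}");

            await Task.WhenAll(detailsTask, priceTask, stockTask);

            // A real service would merge the three payloads properly; note that
            // a slow outlier or failure in any dependency hits the whole page,
            // hence the case for per-dependency timeouts and cached fallbacks.
            return "{ \"details\": " + detailsTask.Result +
                   ", \"price\": "   + priceTask.Result +
                   ", \"stock\": "   + stockTask.Result + " }";
        }
    }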

    In the next post, we will look into the topics below. We will discover why Docker is in fact closely related to Microservices - and it is not what you think! [Do I qualify now to become a Business Insider journalist?]
    • Those pesky performance outliers
    • Containers, containers
    • Don't beat the dead Agile
    • Extra large memory computing is now a thing

              Performance Counters for your HttpClient        
    Level [T2]

    Pure HTTP APIs (aka REST APIs) are very popular at the moment. If you are building or maintaining one, you have probably learnt (perhaps the hard way) that monitoring your API is one of your top cross-cutting concerns. This monitoring involves different aspects of the API, one of which is performance.

    There have been many approaches to solving cross-cutting concerns on APIs. Proxying has been a popular one, and there has been a proliferation of proxy-type services such as Mashery or Apigee which basically sit in front of your API and provide an abstraction which can solve your security and access control, monetising or performance monitoring.

    This is a popular approach but comes with its headaches. One of these problems is that if you already have a security mechanism in place, it can clash with the one provided by the service. Also, the geographic distribution of these services, although getting better, is not as good as that provided by many cloud vendors. This can mean your traffic bounces across the Atlantic a couple of times before getting to your end users - and this is bad, really really bad. On the other hand, these services will not tell you what is happening inside your application, which you have to solve differently using classic monitoring approaches. So in a sense, I would say you might as well just byte! the bullet and implement it yourself.

    PerfIt is a library I built a couple of years ago to provide performance counters for ASP.NET Web API. Creating and managing performance counters for Windows is not rocket science, but it is clumsy, and once you do it over and over for every service, it is just a bit too much overhead. So PerfIt was designed to make it really simple for you... well, actually for myself :) I have been using PerfIt for the last two years - in production - and it serves the purpose.

    Now, it is all well and good to know the performance characteristics of your API. But things get more complicated when you have taken a dependency on other APIs and the degradation of your API is the result of performance issues in those dependencies.




    This is really a blame game: considering that each Microservice is managed by a single team and, in an ideal DevOps world, developers must support their services, you would love to blame another team rather than yourself - especially if the problem truly is the direct result of performance degradation in a dependent service.

    One solution is to have access to the performance metrics of the dependent APIs, but this might not be possible, and it kind of goes against the DevOps model of operations. On the other hand, what if the degradation is due to an issue in an intermediary - such as a proxy?

    The real solution is to benchmark and monitor the calls you are making out of your API. And I have implemented a new DelegatingHandler to do that measurement for you!


    PerfIt! for HttpClient

    So HttpClient is the de facto class for accessing HTTP APIs. If you are using something else, then either you have a really, really good reason to do so or you are just doing it wrong.

    PerfIt for the client provides 4 standard counters out of the box:

    • Total # of operations
    • Average time taken (in seconds)
    • Time taken for the last operation (in ms)
    • # of operations per second

    These are the 4 counters that you would normally need. If you need another, just get in touch with me, but remember these counters must be business-independent.

    First step is to install PerfIt using NuGet:
    PM> Install-Package PerfIt
    And then you just need to install the counters for your application. This can be done by running this simple code (categoryName is the performance counter grouping):
    PerfItRuntime.InstallStandardCounters("<categoryName>");
    Or by using an installer class as explained on the GitHub page and then running InstallUtil.exe.
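
    For reference, such an installer class could look like the sketch below; this assumes the standard System.Configuration.Install pattern, so see the PerfIt GitHub page for the exact recommended shape:

    using System.Collections;
    using System.ComponentModel;
    using System.Configuration.Install;
    using PerfIt;

    // InstallUtil.exe discovers classes marked with [RunInstaller(true)]
    // and runs their Install/Uninstall methods.
    [RunInstaller(true)]
    public class PerfItCounterInstaller : Installer
    {
        public override void Install(IDictionary stateSaver)
        {
            base.Install(stateSaver);
            // Same call as the inline snippet above; "ClientTest" is the category name.
            PerfItRuntime.InstallStandardCounters("ClientTest");
        }

        // Uninstall is omitted here; check the GitHub page for counter removal.
    }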

    Now, just add PerfitClientDelegatingHandler to your HttpClient and make some requests against a couple of websites:
    using System;
    using System.Net.Http;
    using PerfIt;
    using RandomGen;

    namespace PerfitClientTest
    {
        class Program
        {
            static void Main(string[] args)
            {
                // Wrap the standard HttpClientHandler with PerfIt's delegating handler;
                // "ClientTest" is the performance counter category name.
                var httpClient = new HttpClient(new PerfitClientDelegatingHandler("ClientTest")
                {
                    InnerHandler = new HttpClientHandler()
                });

                // RandomGen returns a delegate that picks a random site per call.
                var randomSites = Gen.Random.Items(new[]
                {
                    "http://google.com",
                    "http://yahoo.com",
                    "http://github.com"
                });

                for (int i = 0; i < 100; i++)
                {
                    // Blocking on .Result is fine for this little demo.
                    var httpResponseMessage = httpClient.GetAsync(randomSites()).Result;
                    Console.Write("\r" + i);
                }
            }
        }
    }
    And now you should be seeing this (we have chosen "ClientTest" for the category name):

    So as you can see, instance names are the host names of the APIs, and this should provide you with enough information to monitor your dependencies. Any deeper information than this and you really need tracing rather than monitoring - which is a completely different thing...
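
    Incidentally, if you want to consume these counters from code (say, a monitoring agent) rather than through perfmon, the standard System.Diagnostics API is enough. The counter and instance names below are assumptions based on the description above - verify the exact registered names in perfmon first:

    using System;
    using System.Diagnostics;

    class CounterReader
    {
        static void Main()
        {
            // Read-only handle on one of PerfIt's counters; "ClientTest" is the
            // category used in this post, "google.com" a sample instance name.
            using (var counter = new PerformanceCounter(
                "ClientTest", "# of Operations Per Second", "google.com", readOnly: true))
            {
                Console.WriteLine(counter.NextValue());
            }
        }
    }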



    So as you can see, it is extremely easy to set this up and run it. I might expose the part of the code that defines the instance name, which will probably come in one of the next versions.

    Please use the GitHub page to ask questions or provide feedback.



              The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win        
    The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win
    author: Gene Kim
    name: Brent
    average rating: 4.19
    book published: 2013
    rating: 0
    read at:
    date added: 2017/05/25
    shelves: techie
    review:


              The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations        
    The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations
    author: Gene Kim
    name: Brent
    average rating: 4.41
    book published: 2015
    rating: 0
    read at:
    date added: 2017/05/25
    shelves: techie
    review:


              Camp DevOps! (at Gluecon 2014)        
    One of the big uptrends in technology is the rise of DevOps. Whether your organization is a large enterprise or a fledgling startup, DevOps can help.  We have seen this first hand in many of our portfolio companies and the market in general.  This is why we are excited to be working with Eric Norlin...
              DevOps, Containers and Next Generation Microservices        
    DevOps, Containers and Next Generation Microservices Microsoft Reactor at Grand Central Tech 335 Madison Ave, New York, NY (map) Register on Meetup https://www.meetup.com/nysoftware/events/240246817/   AGENDA ·  6:30 PM – PIZZA & NETWORKING: Doors open, head to the Microsoft Reactor on the 4th floor. Food & drinks will be served, courtesy of Microsoft. Take the opportunity to mingle […]
              Devops Online Training | Devops Online Course | Online IT Guru        
    DevOps improves performance through better communication and collaboration in the IT industry. Check out Onlineitguru, which provides DevOps online training.
              AWS re:Invent 2016 Video & Slide Presentation Links with Easy Index        
    As with last year, here is my quick index of all re:Invent sessions. I'll keep running the tool to fill in the index. It usually takes Amazon a few weeks to fully upload all the videos and presentations. This year it looks like Amazon got the majority of the content onto YouTube and SlideShare very quickly, with a few SlideShares still trickling in.

    See below for how I created the index (with code):


    ALX201 - How Capital One Built a Voice-Based Banking Skill for Amazon Echo
    As we add thousands of skills to Alexa, our developers have uncovered some basic and more complex tips for building better skills. Whether you are new to Alexa skill development or if you have created skills that are live today, this session helps you understand how to create better voice experiences. Last year, Capital One joined Alexa on stage at re:Invent to talk about their experience building an Alexa skill. Hear from them one year later to learn from the challenges that they had to overcome and the results they are seeing from their skill. In this session, you will learn the importance of flexible invocations, better VUI design, how OAuth and account linking can add value to your skill, and about Capital One's experience building an Alexa skill.
    ALX202 - How Amazon is enabling the future of Automotive
    The experience in the auto industry is changing. For both the driver and the car manufacturer, a whole new frontier is on the near horizon. What do you do with your time while the car is driving itself? How do I have a consistent experience while driving shared or borrowed cars? How do I stay safer and more aware in the ever increasing complexity of traffic, schedules, calls, messages and tweets? In this session we will discuss how the auto industry is facing new challenges and how the use of Amazon Alexa, IoT, Logistics services and the AWS Cloud is transforming the Mobility experience of the (very near) future.
    ALX203 - Workshop: Creating Voice Experiences with Alexa Skills: From Idea to Testing in Two Hours
    This workshop teaches you how to build your first voice skill with Alexa. You bring a skill idea and we'll show you how to bring it to life. This workshop will walk you through how to build an Alexa skill, including Node.js setup, how to implement an intent, deploying to AWS Lambda, and how to register and test a skill. You'll walk out of the workshop with a working prototype of your skill idea. Prerequisites: Participants should have an AWS account established and available for use during the workshop. Please bring your own laptop.
    ALX204 - Workshop: Build an Alexa-Enabled Product with Raspberry Pi
    Fascinated by Alexa, and want to build your own device with Alexa built in? This workshop will walk you through to how to build your first Alexa-powered device step by step, using a Raspberry Pi. No experience with Raspberry Pi or Alexa Voice Service is required. We will provide you with the hardware and the software required to build this project, and at the end of the workshop, you will be able to walk out with a working prototype of Alexa on a Pi. Please bring a WiFi capable laptop.
    ALX301 - Alexa in the Enterprise: How JPL Leverages Alexa to Further Space Exploration with Internet of Things
    The Jet Propulsion Laboratory designs and creates some of the most advanced space robotics ever imagined. JPL IT is now innovating to help streamline how JPLers will work in the future in order to design, build, operate, and support these spacecraft. They hope to dramatically improve JPLers' workflows and make their work easier for them by enabling simple voice conversations with the room and the equipment across the entire enterprise. What could this look like? Imagine just talking with the conference room to configure it. What if you could kick off advanced queries across AWS services and kick off AWS Kinesis tasks by simply speaking the commands? What if the laboratory could speak to you and warn you about anomalies or notify you of trends across your AWS infrastructure? What if you could control rovers by having a conversation with them and ask them questions? In this session, JPL will demonstrate how they leveraged AWS Lambda, DynamoDB and CloudWatch in their prototypes of these use cases and more. They will also discuss some of the technical challenges they are overcoming, including how to deploy and manage consumer devices such as the Amazon Echo across the enterprise, and give lessons learned. Join them as they use Alexa to query JPL databases, control conference room equipment and lights, and even drive a rover on stage, all with nothing but the power of voice!
    ALX302 - Build a Serverless Back End for Your Alexa-Based Voice Interactions
    Learn how to develop voice-based serverless back ends for Alexa Voice Service (AVS) and Alexa devices using the Alexa Skills Kit (ASK), which allows you to add new voice-based interactions to Alexa. We'll code a new skill, implemented by a serverless backend leveraging AWS services such as Amazon Cognito, AWS Lambda, and Amazon DynamoDB. Often, your skill needs to authenticate your users and link them back to your backend systems and to persist state between user invocations. User authentication is performed by leveraging OAuth compatible identity systems. Running such a system on your back end requires undifferentiated heavy lifting or boilerplate code. We'll leverage Login with Amazon as the identity provider instead, allowing you to focus on your application implementation and not on the low-level user management parts. At the end of this session, you'll be able to develop your own Alexa skills and use Amazon and AWS services to minimize the required backend infrastructure. This session shows you how to deploy your Alexa skill code on a serverless infrastructure, leverage AWS Lambda, use Amazon Cognito and Login with Amazon to authenticate users, and leverage AWS DynamoDB as a fully managed NoSQL data store.
    ALX303 - Building a Smarter Home with Alexa
    Natural user interfaces, such as those based on speech, enable customers to interact with their home in a more intuitive way. With the VUI (Voice User Interface) smart home, now customers don't need to use their hands or eyes to do things around the home they only have to ask and it's at their command. This session will address the vision for the VUI smart home and how innovations with Amazon Alexa make it possible.
    ALX304 - Tips and Tricks on Bringing Alexa to Your Products
    Ever wonder what it takes to add the power of Alexa to your own products? Are you curious about what Alexa partners have learned on their way to a successful product launch? In this session you will learn about the top tips and tricks on how to go from VUI newbie to an Alexa-enabled product launch. Key concepts around hardware selection, enabling far field voice interaction, building a robust Alexa Voice Service (AVS) client and more will be discussed along with customer and partner examples on how to plan for and avoid common challenges in product design, development and delivery.
    ALX305 - From VUI to QA: Building a Voice-Based Adventure Game for Alexa
    Hitting the submit button to publish your skill is similar to sending your child to their first day of school. You want it to be set up for a successful launch day and for many days thereafter. Learn how to set your skill up for success from Andy Huntwork, Alexa Principal Engineer and one of the creators of the popular Alexa skill The Magic Door. You will learn the most common reasons why skills fail and also some of the more unique use cases. The purpose of this session is to help you build better skills by knowing what to look out for and what you can test for before submitting. In this session, you will learn what most developers do wrong, how to successfully test and QA your skill, how to set your skill up for successful certification, and the process of how a skill gets certified.
    ALX306 - State of the Union: Amazon Alexa and Recent Advances in Conversational AI
    The way humans interact with machines is at a turning point, and conversational artificial intelligence (AI) is at the center of the transformation. Learn how Amazon is using machine learning and cloud computing to fuel innovation in AI, making Amazon Alexa smarter every day. Alexa VP and Head Scientist Rohit Prasad presents the state of the union for Alexa and recent advances in conversational AI. He addresses Alexa's advances in spoken language understanding and machine learning, and shares Amazon's thoughts about building the next generation of user experiences.
    ALX307 - Voice-enabling Your Home and Devices with Amazon Alexa and AWS IoT
    Want to learn how to Alexa-power your home? Join Brookfield Residential CIO and EVP Tom Wynnyk and Senior Solutions Architect Nathan Grice for an overview of building the next generation of integrated smart homes using Alexa to create voice-first experiences. Understand the technologies used and how best to expose voice experiences to users through Alexa. They cover the difference between custom Alexa skills and Smart Home Skill API skills, and build a home automation control from the ground up using Alexa and AWS IoT.
    ARC201 - Scaling Up to Your First 10 Million Users
    Cloud computing gives you a number of advantages, such as the ability to scale your web application or website on demand. If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
    ARC202 - Accenture Cloud Platform Serverless Journey
    Accenture Cloud Platform helps customers manage public and private enterprise cloud resources effectively and securely. In this session, learn how we designed and built new core platform capabilities using a serverless, microservices-based architecture that is based on AWS services such as AWS Lambda and Amazon API Gateway. During our journey, we discovered a number of key benefits, including a dramatic increase in developer velocity, a reduction (to almost zero) of reliance on other teams, reduced costs, greater resilience, and scalability. We describe the (wild) successes we've had and the challenges we've overcome to create an AWS serverless architecture at scale. Session sponsored by Accenture. AWS Competency Partner
    ARC203 - Achieving Agility by Following Well-Architected Framework Principles on AWS
    The AWS Well-Architected Framework enables customers to understand best practices around security, reliability, performance, and cost optimization when building systems on AWS. This approach helps customers make informed decisions and weigh the pros and cons of application design patterns for the cloud. In this session, you'll learn how National Instruments used the Well-Architected Framework to follow AWS guidelines and best practices. By developing a strategy based on the AWS Well-Architected Framework, National Instruments was able to triple the number of applications running in the cloud without additional head count, significantly increase the frequency of code deployments, and reduce deployment times from two weeks to a single day. As a result, National Instruments was able to deliver a more scalable, dynamic, and resilient LabVIEW platform with agility.
    ARC204 - From Resilience to Ubiquity - #NetflixEverywhere Global Architecture
    Building and evolving a pervasive, global service requires a multi-disciplined approach that balances requirements with service availability, latency, data replication, compute capacity, and efficiency. In this session, we'll follow the Netflix journey of failure, innovation, and ubiquity. We'll review the many facets of globalization and then delve deep into the architectural patterns that enable seamless, multi-region traffic management; reliable, fast data propagation; and efficient service infrastructure. The patterns presented will be broadly applicable to internet services with global aspirations.
    ARC205 - Born in the Cloud; Built Like a Startup
    This presentation provides a comparison of three modern architecture patterns that startups are building their business around. It includes a realistic analysis of cost, team management, and security implications of each approach. It covers AWS Elastic Beanstalk, Amazon ECS, Docker, Amazon API Gateway, AWS Lambda, Amazon DynamoDB, and Amazon CloudFront.
    ARC207 - NEW LAUNCH! Additional transparency and control for your AWS environment through AWS Personal Health Dashboard
    When your business is counting on the performance of your cloud solutions, having relevant and timely insights into events impacting your AWS resources is essential. AWS Personal Health Dashboard serves as the primary destination for you to receive personalized information related to your AWS infrastructure, guiding you through scheduled changes and accelerating the troubleshooting of issues impacting your AWS resources. The service, powered by AWS Health APIs, integrates with your in-house event management systems and can be programmatically configured to proactively get the right information into the right hands at the right time. The service is integrated with the Splunk App for AWS to enhance Splunk's dashboards, reports, and alerts, delivering real-time visibility into your environment.
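    For readers who want to poke at the underlying AWS Health API directly, here is a minimal polling sketch with boto3. Note the Health API is only available to accounts with a Business or Enterprise support plan; the status codes are real API constants, and us-east-1 is the service's single endpoint.

        # Poll the AWS Health API for open and upcoming events.
        import boto3

        health = boto3.client("health", region_name="us-east-1")

        paginator = health.get_paginator("describe_events")
        pages = paginator.paginate(filter={"eventStatusCodes": ["open", "upcoming"]})
        for page in pages:
            for event in page["events"]:
                print(event["arn"], event["service"], event["eventTypeCode"])
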
    ARC208 - Hybrid Architectures: Bridging the Gap to the Cloud
    AWS provides many services to assist customers with their journey to the cloud. Hybrid solutions offer customers a way to continue leveraging existing investments on-premises, while expanding their footprint into the public cloud. This session covers the different technologies available to support hybrid architectures on AWS. We discuss common patterns and anti-patterns for solving enterprise workloads across a hybrid environment.
    ARC209 - Attitude of Iteration
    In today's world, technology changes at a breakneck speed. What was new this morning is outdated at lunch. Working in the AWS Cloud is no different. Every week, AWS announces new features or improvements to current products. As AWS technologists, we must assimilate these new technologies and make decisions to adopt, reject, or defer. These decisions can be overwhelming: we tend to either reject everything and become stagnant, or adopt everything and never get our project out the door. In this session we will discuss the attitude of iteration. The attitude of iteration allows us to face the challenges of change without overwhelming our technical teams with a constant tug-o-war between implementation and improvement. Whether you're an architect, engineer, developer, or AWS newbie, prepare to laugh, cry, and commiserate as we talk about overcoming these challenges. Session sponsored by Rackspace.
    ARC210 - Workshop: Addressing Your Business Needs with AWS
    Come and participate with other AWS customers as we focus on the overall experience of using AWS to solve business problems. This is a great opportunity to collaborate with existing and prospective AWS users to validate your thinking and direction with AWS peers, discuss the resources that aid AWS solution design, and give direct feedback on your experience building solutions on AWS.
    ARC211 - Solve common problems with ready to use solutions in 5 minutes or less
    Regularly, customers at AWS assign resources to create solutions that address common problems shared between businesses of all sizes. Often, this results in taking resources away from products or services that truly differentiate the business in the marketplace. The Solutions Builder team at AWS focuses on developing and publishing a catalog of repeatable, standardized solutions that can be rapidly deployed by customers to overcome common business challenges. In this session, the Solutions Builder team will share ready-to-use solutions that make it easy for anyone to create a transit VPC, centralized logging, a data lake, scheduling for Amazon EC2, and VPN monitoring. Along the way, the team reveals the architectural tenets and best practices they follow for the development of these solutions. In the end, customers are introduced to a catalog of freely available solutions with a peek into the architectural approaches used by an internal team at AWS.
    ARC212 - Salesforce: Helping Developers Deliver Innovations Faster
    Salesforce is one of the most innovative enterprise software companies in the world, delivering 3 major releases a year with hundreds of features in each release. In this session, come learn how we enable thousands of engineers within Salesforce to utilize a flexible development environment to deliver these innovations to our customers faster. We show you how we enable engineers at Salesforce to test not only individual services they are developing but also large scale service integrations. Also learn how we can achieve setup of a representative production environment in minutes and teardown in seconds, using AWS.
    ARC213 - Open Source at AWS—Contributions, Support, and Engagement
    Over the last few years, we have seen a dramatic increase in the use of open source projects as the mainstay of architectures in both startups and enterprises. Many of our customers and partners also run their own open source programs and contribute key technologies to the industry as a whole (see DCS201). At AWS we engage with open source projects in a number of ways. We contribute bug fixes and enhancements to popular projects including our work with the Hadoop ecosystem (see BDM401), Chromium (see BAP305) and (obviously) Boto. We have our own standalone projects including the security library s2n (see NET405) and the machine learning project MXNet (see MAC401). We also have services that make open source easier to use like ECS for Docker (see CON316), and RDS for MySQL and PostgreSQL (see DAT305). In this session you will learn about our existing open source work across AWS, and our next steps.
    ARC301 - Architecting Next Generation SaaS Applications on AWS
    AWS provides a broad array of services, tools, and constructs that can be used to design, operate, and deliver SaaS applications. In this session, Tod Golding, an AWS Partner Solutions Architect, shares the wisdom and lessons learned from working with dozens of customers and partners building SaaS solutions on AWS. We discuss key architectural strategies and patterns that are used to deliver multi-tenant SaaS models on AWS and dive into the full spectrum of SaaS design and architecture considerations, including tenant isolation models, tenant identity management, serverless SaaS, and multi-tenant storage strategies. This session connects the dots between general SaaS best practices and what it means to realize these patterns on AWS, weighing the architectural tradeoffs of each model and assessing its influence on the agility, manageability, and cost profile of your SaaS solution.
    ARC302 - From One to Many: Evolving VPC Design
    As more customers adopt Amazon VPC architectures, the features and flexibility of the service are squaring off against evolving design requirements. This session follows this evolution of a single regional VPC into a multi-VPC, multi-region design with diverse connectivity into on-premises systems and infrastructure. Along the way, we investigate creative customer solutions for scaling and securing outbound VPC traffic, securing private access to Amazon S3, managing multi-tenant VPCs, integrating existing customer networks through AWS Direct Connect, and building a full VPC mesh network across global regions.
    ARC303 - Cloud Monitoring - Understanding, Preparing, and Troubleshooting Dynamic Apps on AWS
    Applications running in a typical data center are static entities. Dynamic scaling and resource allocation are the norm in AWS. Technologies such as Amazon EC2, Docker, AWS Lambda, and Auto Scaling make tracking resources and resource utilization a challenge. The days of static server monitoring are over. In this session, we examine trends we've observed across thousands of customers using dynamic resource allocation and discuss why dynamic infrastructure fundamentally changes your monitoring strategy. We discuss some of the best practices we've learned by working with New Relic customers to build, manage, and troubleshoot applications and dynamic cloud services. Session sponsored by New Relic. AWS Competency Partner
    ARC304 - Effective Application Data Analytics for Modern Applications
    IT is evolving from a cost center to a source of continuous innovation for business. At the heart of this transition are modern, revenue-generating applications, based on dynamic architectures that constantly evolve to keep pace with end-customer demands. This dynamic application environment requires a new, comprehensive approach to monitoring, one based on real-time, end-to-end visibility and analytics across the entire application lifecycle and stack, instead of piecemeal monitoring. This presentation highlights practical advice on how developers and operators can leverage data and analytics to glean critical information about their modern applications. In this session, we will cover the types of data important for today's modern applications. We'll discuss visibility and analytics into data sources such as AWS services (e.g., Amazon CloudWatch, AWS Lambda, VPC Flow Logs, Amazon EC2, Amazon S3, etc.), the development tool chain, and custom metrics, and describe how to use analytics to understand business performance and behaviors. We discuss a comprehensive approach to monitoring, troubleshooting, and customer usage insights, provide examples of effective data analytics to improve software quality, and describe an end-to-end customer use case that highlights how analytics applies to the modern app lifecycle and stack. Session sponsored by Sumo Logic. AWS Competency Partner
    ARC305 - From Monolithic to Microservices: Evolving Architecture Patterns in the Cloud
    Gilt, a global e-commerce company, implemented a sophisticated microservices architecture on AWS to handle millions of customers visiting their site at noon every day. The microservices architecture pattern enables independent service scaling, faster deployments, better fault isolation, and graceful degradation. In this session, Emerson Loureiro, Sr. Software Engineer at Gilt, will share Gilt's experiences and lessons learned during their evolution from a single monolithic Rails application in a traditional data center to more than 300 Scala/Java microservices deployed in the cloud. Derek Chiles, AWS Solutions Architect, will review best practices and recommended architectures for deploying microservices on AWS.
    ARC306 - Event Handling at Scale: Designing an Auditable Ingestion and Persistence Architecture for 10K+ events/second
    How does McGraw-Hill Education use the AWS platform to scale and reliably receive 10,000 learning events per second? How do we provide near-real-time reporting and event-driven analytics for hundreds of thousands of concurrent learners in a reliable, secure, and auditable manner that is cost effective? MHE designed and implemented a robust solution that integrates Amazon API Gateway, AWS Lambda, Amazon Kinesis, Amazon S3, Amazon Elasticsearch Service, Amazon DynamoDB, HDFS, Amazon EMR, Amazon EC2, and other technologies to deliver this cloud-native platform across the US and soon the world. This session describes the challenges we faced, architecture considerations, how we gained confidence for a successful production roll-out, and the behind-the-scenes lessons we learned.
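    One building block of a pipeline like this is a Lambda function that drains the Kinesis stream and persists events idempotently. The sketch below is illustrative rather than MHE's actual code: the table name and the event's "id" attribute are assumptions, and the conditional put is what supplies duplicate suppression when records are replayed.

        # Lambda consumer: decode Kinesis records, persist each event once.
        import base64
        import json

        import boto3

        table = boto3.resource("dynamodb").Table("learning-events")  # hypothetical table

        def lambda_handler(event, context):
            for record in event["Records"]:
                payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
                try:
                    # The condition rejects writes for an "id" we already stored,
                    # making replays of the stream safe.
                    table.put_item(
                        Item=payload,
                        ConditionExpression="attribute_not_exists(id)",
                    )
                except table.meta.client.exceptions.ConditionalCheckFailedException:
                    pass  # duplicate delivery; already ingested
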
    ARC307 - Accelerating Next Generation Healthcare Business on the AWS Cloud
    Hear Geneia's design principles for using multiple technologies like Elastic Load Balancing and Auto Scaling in end-to-end solutions to meet regulatory requirements. Explore how to meet HIPAA regulations by using native cloud services like Amazon EC2, Amazon EBS volumes, encryption services, and monitoring features in addition to third-party tools to ensure end-to-end data protection, privacy, and security for protected health information (PHI) data hosted in the AWS Cloud. Learn how Geneia leveraged multiregion and multizone backup and disaster recovery solutions to address the recovery time objective (RTO) and recovery point objective (RPO) requirements. Discover how automated build, deployment, provisioning, and virtual workstations in the cloud enabled Geneia's developers and data scientists to quickly provision resources and work from any location, expediting the onboarding of customers, getting to market faster, and capturing bigger market share in healthcare analytics while minimizing costs. Session sponsored by Cognizant. AWS Competency Partner
    ARC308 - Metering Big Data at AWS: From 0 to 100 Million Records in 1 Second
    Learn how AWS processes millions of records per second to support accurate metering across AWS and our customers. This session shows how we migrated from traditional frameworks to AWS managed services to support a large processing pipeline. You will gain insights on how we used AWS services to build a reliable, scalable, and fast processing system using Amazon Kinesis, Amazon S3, and Amazon EMR. Along the way we dive deep into use cases that deal with scaling and accuracy constraints. Attend this session to see AWS's end-to-end solution that supports metering at AWS.
    ARC309 - Moving Mission Critical Apps from One Region to Multi-Region active/active
    In gaming, low latencies and connectivity are bare minimum expectations users have while playing online on PlayStation Network. Alex and Dustin share key architectural patterns to provide low-latency, multi-region services to global users. They discuss the testing methodologies and how to programmatically map out the dependencies of a large multi-region deployment with data-driven techniques. The patterns shared show how to adapt to changing bottlenecks and sudden spikes of several million requests. You'll walk away with several key architectural patterns that can service users at global scale while being mindful of costs.
    ARC310 - Cost Optimizing Your Architecture: Practical Design Steps For Big Savings
    Did you know that AWS enables builders to architect solutions for price? Beyond the typical challenges of function, performance, and scale, you can make your application cost effective. Using different architectural patterns and AWS services in concert can dramatically reduce the cost of systems operation and per-transaction costs. This session uses practical examples aimed at architects and developers. Using code and AWS CloudFormation in concert with services such as Amazon EC2, Amazon ECS, Lambda, Amazon RDS, Amazon SQS, Amazon SNS, Amazon S3, CloudFront, and more, we demonstrate the financial advantages of different architectural decisions. Attendees will walk away with concrete examples, as well as a new perspective on how they can build systems economically and effectively. Attendees at this session will receive a free 30-day trial of AWS Trusted Advisor.
    ARC311 - Evolving a Responsive and Resilient Architecture to Analyze Billions of Metrics
    Nike+ is at the core of the Nike digital product ecosystem, providing services to enhance your athletic experience through quantified activity tracking and gamification. As one of the first teams at Nike to migrate out of the data center to AWS, they share the evolution of building a reactive platform on AWS to handle large, complex data sets. They provide a deep technical view of how they process billions of metrics a day in their quantified-self platform, supporting millions of customers worldwide. You'll leave with ideas and tools to help your organization scale in the cloud. Come learn from experts who have built an elastic platform using Java, Scala, and Akka, leveraging the power of many AWS technologies like Amazon EC2, ElastiCache, Amazon SQS, Amazon SNS, DynamoDB, Amazon ES, Lambda, Amazon S3, and a few others that helped them (and can help you) get there quickly.
    ARC312 - Compliance Architecture: How Capital One Automates the Guard Rails for 6,000 Developers
    What happens when you give 6,000 developers access to the cloud? Introducing Cloud Custodian, an open source project from Capital One, which provides a DSL for AWS fleet management that operates in real-time using CloudWatch Events and Lambda. Cloud Custodian is used for the gamut of compliance, encryption, and cost optimization. What can it do for you?
    ARC313 - Running Lean Architectures: How to Optimize for Cost Efficiency
    Whether you're a cash-strapped startup or an enterprise optimizing spend, it pays to run cost-efficient architectures on AWS. This session reviews a wide range of cost planning, monitoring, and optimization strategies, featuring real-world experience from AWS customers. We cover how to effectively combine Amazon EC2 On-Demand, Reserved, and Spot instances to handle different use cases; leveraging Auto Scaling to match capacity to workload; choosing the optimal instance type through load testing; taking advantage of Multi-AZ support; and using Amazon CloudWatch to monitor usage and automatically shut off resources when they are not in use. We discuss taking advantage of tiered storage and caching, offloading content to Amazon CloudFront to reduce back-end load, and getting rid of your back end entirely by leveraging AWS high-level services. We also showcase simple tools to help track and manage costs, including Cost Explorer, billing alerts, and AWS Trusted Advisor. This session is your pocket guide for running cost effectively in the Amazon Cloud. Attendees of this session receive a free 30-day trial of enterprise-level Trusted Advisor.
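    To make "automatically shut off resources when they are not in use" concrete, here is a hedged sketch that stops running instances tagged env=dev whose CPU never averaged above 5% over the past day. The tag filter, threshold, and lookback window are illustrative choices, not figures from the session.

        # Stop dev instances that look idle based on CloudWatch CPU metrics.
        from datetime import datetime, timedelta

        import boto3

        ec2 = boto3.client("ec2")
        cw = boto3.client("cloudwatch")

        reservations = ec2.describe_instances(
            Filters=[
                {"Name": "instance-state-name", "Values": ["running"]},
                {"Name": "tag:env", "Values": ["dev"]},  # never touch prod this way
            ]
        )["Reservations"]

        for reservation in reservations:
            for instance in reservation["Instances"]:
                datapoints = cw.get_metric_statistics(
                    Namespace="AWS/EC2",
                    MetricName="CPUUtilization",
                    Dimensions=[{"Name": "InstanceId", "Value": instance["InstanceId"]}],
                    StartTime=datetime.utcnow() - timedelta(days=1),
                    EndTime=datetime.utcnow(),
                    Period=3600,
                    Statistics=["Average"],
                )["Datapoints"]
                if datapoints and max(dp["Average"] for dp in datapoints) < 5.0:
                    ec2.stop_instances(InstanceIds=[instance["InstanceId"]])
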
    ARC314 - Enabling Enterprise Migrations: Creating an AWS Landing Zone
    With customers migrating workloads to AWS, we are starting to see a need for the creation of a prescribed landing zone, which uses native AWS capabilities and meets or exceeds customers' security and compliance objectives. In this session, we will describe an AWS landing zone and will cover solutions for account structure, user configuration, provisioning, networking, and operation automation. This solution is based on AWS native capabilities such as AWS Service Catalog, AWS Identity and Access Management, AWS Config Rules, AWS CloudTrail, and AWS Lambda. We will provide an overview of AWS Service Catalog and how it can be used to provide self-service infrastructure to application users, including various options for automation. After this session you will be able to configure an AWS landing zone for successful large-scale application migrations. Additionally, Philips will explain their cloud journey and how they have applied their guiding principles when building their landing zone.
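    As a taste of the guardrails a landing zone bakes in, this sketch registers a single AWS Config managed rule that flags unencrypted EBS volumes. It assumes a Config recorder is already running in the account; the rule name is arbitrary, while ENCRYPTED_VOLUMES is a real managed-rule identifier.

        # Register one AWS Config managed rule as a landing-zone guardrail.
        import boto3

        config = boto3.client("config")

        config.put_config_rule(
            ConfigRule={
                "ConfigRuleName": "ebs-volumes-encrypted",
                "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
                "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
            }
        )
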
    ARC315 - The Enterprise Fast Lane - What Your Competition Doesn't Want You To Know About Enterprise Cloud Transformation
    Fed up with stop and go in your data center? Shift into overdrive and pull into the fast lane! Learn how AutoScout24, the largest online car marketplace Europe-wide, is building its Autobahn in the Cloud. The secret ingredient? Culture! Because Cloud is only one half of the digital transformation story: the other half is how your organization deals with cultural change as you transition from the old world of IT into building microservices on AWS with agile DevOps teams in a true "you build it, you run it" fashion. Listen to stories from the trenches, powered by Amazon Kinesis, Amazon DynamoDB, AWS Lambda, Amazon ECS, Amazon API Gateway, and much more, backed by AWS Partners, AWS Professional Services, and AWS Enterprise Support. Key takeaways: how to become Cloud native, evolve your architecture step by step, drive cultural change across your teams, and manage your company's transformation for the future.
    ARC316 - Hybrid IT: A Stepping Stone to All-In
    This session demonstrates how customers can leverage hybrid IT as a transitional step on the path to going all-in on AWS. We provide a step-by-step walk-through focusing on seamless migration to the cloud, with consideration given to existing data centers, equipment, and staff retraining. Learn about the suite of capabilities AWS provides to ease and simplify your journey to the cloud.
    ARC318 - Busting the Myth of Vendor Lock-In: How D2L Embraced the Lock and Opened the Cage
    When D2L first moved to the cloud, we were concerned about being locked in to one cloud provider. We were compelled to explore the opportunities of the cloud, so we overcame our perceived risk, and turned it into an opportunity by self-rolling tools and avoiding AWS native services. In this session, you learn how D2L tried to bypass the lock but eventually embraced it and opened the cage. Avoiding AWS native tooling and pure lifts of enterprise architecture caused a drastic inflation of costs. Learn how we shifted away from a self-rolled lift into an efficient and effective shift while prioritizing cost, client safety, AND speed of development. Learn from D2L's successes and missteps, and convert your own enterprise systems into the cloud both through native cloud births and enterprise conversions. This session discusses D2L's use of Amazon EC2 (with a guest appearance by Reserved Instances), Elastic Load Balancing, Amazon EBS, Amazon DynamoDB, Amazon S3, AWS CloudFormation, AWS CloudTrail, Amazon CloudFront, AWS Marketplace, Amazon Route 53, AWS Elastic Beanstalk, and Amazon ElastiCache.
    ARC319 - Datapipe Open Source: Image Development Pipeline
    For an IT organization to be successful in rapid cloud assessment or iterative migration of their infrastructure and applications to AWS, they need to effectively plan and execute on a strategic cloud strategy that focuses not only on cloud, but also big data, DevOps, and security. Session sponsored by Datapipe. AWS Competency Partner
    ARC320 - Workshop: AWS Professional Services Effective Architecting Workshop
    The AWS Professional Services team will be facilitating an architecture workshop exercise for certified AWS Architects. Class size will be limited to 48. This workshop will be a highly interactive architecture design exercise where the class will be randomly divided into teams and given a business case for which they will need to design an effective AWS solution. Past participants have found the interaction with people from other organizations and the creative brainstorming that occurs across 6 different teams greatly enhances the learning experience. Flipcharts will be provided and students are encouraged to bring their laptops to document their designs. Each team will be expected to present their solution to the class.
    ARC402 - Serverless Architectural Patterns and Best Practices
    As serverless architectures become more popular, AWS customers need a framework of patterns to help them deploy their workloads without managing servers or operating systems. This session introduces and describes four re-usable serverless patterns for web apps, stream processing, batch processing, and automation. For each, we provide a TCO analysis and comparison with its server-based counterpart. We also discuss the considerations and nuances associated with each pattern and have customers share similar experiences. The target audience is architects, system operators, and anyone looking for a better understanding of how serverless architectures can help them save money and improve their agility.
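    At its thinnest, the web-app pattern in that list is a single Lambda function behind an API Gateway proxy integration. A minimal sketch (the greeting logic is a stand-in for real business code):

        # Lambda handler for an API Gateway proxy integration.
        import json

        def lambda_handler(event, context):
            name = (event.get("queryStringParameters") or {}).get("name", "world")
            return {
                "statusCode": 200,
                "headers": {"Content-Type": "application/json"},
                "body": json.dumps({"message": "hello, " + name}),
            }
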
    ARC403 - Building a Microservices Gaming Platform for Turbine Mobile Games
    Warner Bros.' Turbine team shares lessons learned from their enhanced microservices game platform, which uses Docker, Amazon EC2, Elastic Load Balancing, and Amazon ElastiCache to scale up in anticipation of massive game adoption. Learn about their Docker-based microservices architecture, tuned and optimized to support the demands of the massively popular Batman: Arkham Underworld and other franchises. Turbine also simplified microservices persistence by consolidating their previous NoSQL database solution onto highly performant PostgreSQL on Amazon EC2 and Amazon EBS, and describes other innovative strategies, including integrated analytic techniques to anticipate and predict their scaling operations.
    ARC404 - Migrating a Highly Available and Scalable Database from Oracle to Amazon DynamoDB

    I recently spoke at three events. The first was the S3E2 meetup at Netflix. The second was a great conference in Raleigh - All Things Open (ATO) 2015. The third was the Research Triangle Park area Triangle DevOps meetup. At all three of these I talked about why we do open source at Netflix. At the ATO 2015 conference I heard from people doing open source better than I personally am. These meetings inspired me to write this blog post, to be more open about one of my personal projects that is struggling, and motivated me to do better on that project - Prana. Prana is our sidecar process that runs alongside non-Java code to provide access to our cloud platform services. Prana(-Zuul) runs on thousands of servers in the Netflix cloud and is a key technology that allows Netflix to work across multiple language implementations.

    When we released Prana to open source, we released a version that we expected to be adopted internally before it actually was. This was at a time when a majority of the Netflix cloud platform was looking towards adoption of RxNetty networking and reactive RPC and service hosting mechanisms. Because both RxNetty and reactive RPC were still evolving, all of the services the open source version of Prana was based on were, at the time, prototype codebases. Let's call this version Prana-OSS.

    While all of this external work was going on, there was a different version of Prana used internally that originated out of our Edge team. Given its origin, it was based on Zuul and Jetty for service hosting and an older internal version of Ribbon for RPC. This version, while a bit crufty due to its age, had more functionality with regard to reliability and was more battle tested in production. Let's call this version Prana-Zuul.

    Prana-Zuul is based upon our cloud platform libraries (configuration, service registration and discovery, RPC, etc.). Prana-Zuul is maintained by the same cloud platform team as these base libraries. In thinking strategically about what Prana (and Prana-OSS) should become, we decided that Prana should be based on the newer supported versions of both cloud platform services and RPC. This is an evolving space in our cloud platform today, and therefore we are waiting to update Prana-OSS until this cloud platform update occurs. In the meantime, we have been continuously battle-hardening Prana-Zuul for our wealth of internal non-JVM customers.

    This has meant that external users of Prana-OSS are using an OSS project that, while valuable, isn't used internally. Because the code isn't the code we use internally, PRs and issues on GitHub have gone unanswered. It was Christine Abernathy's talk at ATO 2015 that reminded me that such behavior isn't acceptable for an open source project. At Facebook, such a project would likely come to the attention of the open source project office and then executives, and they would take action. To be more responsible with this project, I will post this blog article to each of the open PRs as well as on the front README, asking people to read it to understand the status. I will also address as many of the PRs as I easily can to keep the code alive for external contributors, knowing that while we are resolving them for open source, they will not be tested internally.

    Longer term, as our cloud platform updates land in Netflix open source themselves, we will update Prana-OSS based on the roll-out of a reworked Prana version inside of Netflix. We could decide to release the current Prana-Zuul in open source instead, but the cleanup required and the timing in relation to cloud platform updates make this a less attractive option. At the point where we release this updated Prana in open source, people will see a much healthier Prana externally.

    I do apologize for the confusion and pain this has caused our Prana open source users. I hope to do better in the future. Keep me accountable, please.
              AWS re:Invent 2015 Video & Slide Presentation Links with Easy Index        
    As with last year, here is my quick index of all re:Invent sessions.  Please wait for a few days and I'll keep running the tool to fill in the index.  It usually takes Amazon a few weeks to fully upload all the videos and slideshares.

    See below for how I created the index (with code):
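    A hypothetical sketch of one way such an index could be assembled (this is not the actual tool used for these posts): fetch a listing page, pair session codes with their video links, and emit HTML anchors. The URL and markup patterns are assumptions and would need adjusting to the real pages.

        # Hypothetical index builder: scrape session-code/video-link pairs.
        import re

        import requests

        LISTING_URL = "https://www.youtube.com/results?search_query=reinvent+2015"  # assumed

        html = requests.get(LISTING_URL).text
        # Pair "ARC301"-style session codes with the watch links next to them.
        pairs = re.findall(r'href="(/watch\?v=[\w-]+)"[^>]*>\s*([A-Z]{3}\d{3}[^<]*)<', html)

        for path, title in sorted(set(pairs), key=lambda p: p[1]):
            print('<a href="https://www.youtube.com%s">%s</a><br/>' % (path, title.strip()))
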


    WRK307 - A Well-Architected Workshop: Working with the AWS Well-Architected Framework
    This workshop describes the AWS Well-Architected Framework, which enables customers to assess and improve their cloud architectures and better understand the business impact of their design decisions. It addresses general design principles, best practices, and guidance in four pillars of the Well-Architected Framework. We will work in teams, assisted by AWS Solutions Architects, to review an example architecture, identify issues, and discuss how to improve the system. You will need to have architecture experience to get the most from this workshop. After attending this workshop you will be able to review an architecture and identify potential issues across the four pillars of Well-Architected: security, performance efficiency, reliability, and cost optimization. Prerequisites: Architecture experience. Optional - review the AWS Well-Architected Framework whitepaper. Capacity: To encourage the interactive nature of this workshop, the session capacity is limited to approximately 70 attendees. Attendance is based on a first come, first served basis once onsite. Scheduling tools in the session catalog are for planning purposes only.
    WRK306 - AWS Professional Services Architecting Workshop
    The AWS Professional Services team will be facilitating an architecture workshop exercise for certified AWS architects, with a class size limited to 40. In this highly interactive architecture design exercise, the class will be randomly divided into teams and given a business case for which to design an effective AWS solution. Flipcharts will be provided, and students are encouraged to bring their laptops to document their designs. Each team will be expected to present their solution to the class. Prerequisites: Participants should be certified AWS Architects. Bring your laptop. Capacity: To encourage the interactive nature of this workshop, the session capacity is limited to approximately 40 attendees. The session will be offered twice on October 7 and twice on October 8, using the same case study for each to allow for scheduling flexibility. Attendance is based on a first come, first served basis once onsite. Scheduling tools in the session catalog are for planning purposes only.
    ARC403 - From One to Many: Evolving VPC Design
    As more customers adopt Amazon VPC architectures, the features and flexibility of the service are squaring off against evolving design requirements. This session follows this evolution of a single regional VPC into a multi-VPC, multiregion design with diverse connectivity into on-premises systems and infrastructure. Along the way, we investigate creative customer solutions for scaling and securing outbound VPC traffic, securing private access to S3, managing multitenant VPCs, integrating existing customer networks through AWS Direct Connect and building a full VPC mesh network across global regions.
    ARC402 - Double Redundancy with AWS Direct Connect
    AWS Direct Connect provides low latency and high performance connectivity to the AWS cloud by allowing the provision of physical fiber from the customer's location or data center into AWS Direct Connect points of presence. This session covers design considerations around AWS Direct Connect solutions. We will discuss how to design and configure physical and logical redundancy using both physically redundant fibers and logical VPN connectivity, and includes a live demo showing both the configuration and the failure of a doubly redundant connectivity solution. This session is for network engineers/architects, technical professionals, and infrastructure managers who have a working knowledge of Amazon VPC, Amazon EC2, general networking, and routing protocols.
    ARC401 - Cloud First: New Architecture for New Infrastructure
    What do companies with internal platforms have to change to succeed in the cloud? The five pillars at the heart of IT solutions in the cloud are automation, fault tolerance, horizontal scalability, security, and cost-effectiveness. This talk discusses tools that facilitate the development and automate the deployment of secure, highly available microservices. The tools were developed using AWS CloudFormation, AWS SDKs, AWS CLI, Amazon RDS, and various open-source software such as Docker. The talk provides concrete examples of how these tools can help developers and architects move from beginning/intermediate AWS practitioners to cloud deployment experts.
    ARC348 - Seagull: How Yelp Built a Highly Fault-tolerant Distributed System for Concurrent Task Execution
    Efficiently parallelizing mutually exclusive tasks can be a challenging problem when done at scale. Yelp's recent in-house product, Seagull, demonstrates how an intelligent scheduling system can use several open-source products to provide a highly scalable and fault-tolerant distributed system. Learn how Yelp built Seagull with a variety of Amazon Web Services to concurrently execute thousands of tasks, which can greatly improve performance. Seagull combines open-source software like Elasticsearch, Mesos, Docker, and Jenkins with Amazon Web Services (AWS) to parallelize Yelp's testing suite. Our current use case of Seagull involves running Yelp's test suite, which has over 55,000 test cases, in a distributed fashion. Using our smart scheduling, we can run one of our largest test suites to process 42 hours of serial work in less than 10 minutes using 200 r3.8xlarge instances from Amazon Elastic Compute Cloud (Amazon EC2). Seagull consumes and produces data at very high rates. On a typical day, Seagull writes 60 GB of data and consumes 20 TB of data. Although we are currently using Seagull to parallelize test execution, it can efficiently parallelize other types of independent tasks.
    ARC346-APAC - Scaling to 25 Billion Daily Requests Within 3 Months: Building a Global Big Data Distribution Platform on AWS (APAC track)
    What if you were told that within three months, you had to scale your existing platform from 1,000 req/sec (requests per second) to handle 300,000 req/sec with an average latency of 25 milliseconds? And that you had to accomplish this with a tight budget, expand globally, and keep the project confidential until officially announced by well-known global mobile device manufacturers? That's exactly what happened to us. This session explains how The Weather Company partnered with AWS to scale our data distribution platform to prepare for unpredictable global demand. We cover the many challenges that we faced as we worked on architecture design, technology and tools selection, load testing, deployment and monitoring, and how we solved these challenges using AWS. This is a repeat session that will be translated simultaneously into Japanese, Chinese, and Korean.
    ARC346 - Scaling to 25 Billion Daily Requests Within 3 Months: Building a Global Big Data Distribution Platform on AWS
    What if you were told that within three months, you had to scale your existing platform from 1,000 req/sec (requests per second) to handle 300,000 req/sec with an average latency of 25 milliseconds? And that you had to accomplish this with a tight budget, expand globally, and keep the project confidential until officially announced by well-known global mobile device manufacturers? That's exactly what happened to us. This session explains how The Weather Company partnered with AWS to scale our data distribution platform to prepare for unpredictable global demand. We cover the many challenges that we faced as we worked on architecture design, technology and tools selection, load testing, deployment and monitoring, and how we solved these challenges using AWS.
    ARC344 - How Intuit Improves Security and Productivity with AWS Virtual Networking, Identity, and Account Services
    Intuit has an "all in" strategy in adopting the AWS cloud. We have already moved some large workloads supporting some of our flagship products (TurboTax, Mint) and are expecting to launch hundreds of services in AWS over the coming years. To provide maximum flexibility for product teams to iterate on their services, as well as provide isolation of individual accounts from logical errors or malicious actions, Intuit is deploying every application into its own account and virtual private cloud (VPC). This talk discusses both the benefits and challenges of designing to run across hundreds or thousands of VPCs within an enterprise. We discuss the limitations of connectivity, sharing data, strategies for IAM access across accounts, and other nuances to keep in mind as you design your organization's migration strategy. We share our design patterns that can help guide your team in developing a plan for your AWS migration. This talk is helpful for anyone who is planning or in the process of moving a large enterprise to AWS and facing the difficult decisions and tradeoffs in structuring your deployment.
    ARC342 - Closing the Loop: Designing and Building an End-to-End Email Solution Using AWS
    Email continues to be a critical medium for communications between businesses and customers and remains an important channel for building automation around sending and receiving messages. Email automation enables use cases like updating a ticketing system or a forum via email, logging and auditing an email conversation, subscribing and unsubscribing from email lists via email, transferring small files via email, and updating email contents before delivery. This session implements and presents live code that covers a use case supported by Amazon.com's seller business: how to protect your customers' privacy by anonymizing email for third-party business-to-business communication on your platform. With Amazon SES and the help of Amazon S3, AWS Lambda, and Amazon DynamoDB, we cover architecture, walk through code as we build an application live, and present a demonstration of the final implementation.
    ARC340 - Multi-tenant Application Deployment Models
    Shared pools of resources? Microservices in containers? Isolated application stacks? You have many architectural models and AWS services to consider when you deploy applications on AWS. This session focuses on several common models and helps you choose the right path or paths to fit your application needs. Architects and operations managers should consider this session to help them choose the optimal path for their application deployment needs for their current and future architectures. This session covers services such as Amazon Elastic Compute Cloud (Amazon EC2), Amazon EC2 Container Service (Amazon ECS), AWS Lambda, and AWS CodeDeploy.
    ARC313 - Future Banks Live in the Cloud: Building a Usable Cloud with Uncompromising Security
    Running today's largest consumer bitcoin startup comes with a target on your back and requires an uncompromising approach to security. This talk explores how Coinbase is learning from the past and pulling out all the stops to build a secure infrastructure behind an irreversibly transferrable digital good for millions of users. This session will cover cloud architecture, account and network isolation in the AWS cloud, disaster recovery, self-service consensus-based deployment, real-time streaming insight, and how Coinbase is leveraging practical DevOps to build the bank of the future.
    ARC311 - Decoding the Genetic Blueprint of Life on a Cloud Connected Ecosystem
    Thermo Fisher Scientific, a world leader in biotechnology, has built a new polymerase chain reaction (PCR) system for DNA sequencing. Designed for low- to midlevel throughput laboratories that conduct real time PCR experiments, the system runs on individual QuantStudio devices. These devices are connected to Thermo Fisher's cloud computing platform, which is built on AWS using Amazon EC2, Amazon DynamoDB, and Amazon S3. With this single platform, applied and clinical researchers can learn, analyze, share, collaborate, and obtain support. Researchers worldwide can now collaborate online in real time and access their data wherever and whenever necessary. Laboratories can also share experimental conditions and results with their partners while providing a uniform experience for every user and helping to minimize training and errors. The net result is increased collaboration, faster time to market, fewer errors, and lower cost. We have architected a solution that uses Amazon EMR, DynamoDB, Amazon ElastiCache, and S3. In this presentation, we share our architecture, lessons learned, best design patterns for NoSQL, strategies for leveraging EMR with DynamoDB, and a flexible solution that our scientists use. We also share our next step in architecture evolution.
    ARC310-APAC - Amazon.com: Solving Amazon's Catalog Contention and Cost with Amazon Kinesis (APAC track)
    The Amazon.com product catalog receives millions of updates each hour across billions of products, and many of the updates involve comparatively few products. In this session, hear how Amazon.com has used Amazon Kinesis to build a pipeline orchestrator that provides sequencing, optimistic batching, and duplicate suppression while at the same time significantly lowering costs. This session covers the architecture of that solution and draws out the key enabling features that Amazon Kinesis provides. This is a repeat session that will be translated simultaneously into Japanese, Chinese, and Korean.
    ARC310 - Amazon.com: Solving Amazon's Catalog Contention and Cost with Amazon Kinesis
    The Amazon.com product catalog receives millions of updates an hour across billions of products with many of the updates concentrated on comparatively few products. In this session, hear how Amazon.com has used Amazon Kinesis to build a pipeline orchestrator that provides sequencing, optimistic batching, and duplicate suppression whilst at the same time significantly lowering costs. This session covers the architecture of that solution and draws out the key enabling features that Amazon Kinesis provides. This talk is intended for those who are interested in learning more about the power of the distributed log and understanding its importance for enabling OLTP just as DHT is for storage.
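    The coalescing idea is easy to sketch independently of the Kinesis plumbing: within a batch of records from one shard, keep only the newest update per product, using the numeric, increasing sequence numbers for ordering. The record shape here is an assumption.

        # Keep only the newest update per product within a batch.
        def coalesce(records):
            """records: dicts with 'product_id' and 'sequence_number' fields."""
            latest = {}
            for rec in records:
                pid = rec["product_id"]
                if (pid not in latest or
                        int(rec["sequence_number"]) > int(latest[pid]["sequence_number"])):
                    latest[pid] = rec
            return list(latest.values())

        batch = [
            {"product_id": "B00X", "sequence_number": "100", "price": 9},
            {"product_id": "B00X", "sequence_number": "101", "price": 8},
        ]
        assert coalesce(batch) == [batch[1]]  # only the newest update survives
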
    ARC309 - From Monolithic to Microservices: Evolving Architecture Patterns in the Cloud
    Gilt, a billion dollar e-commerce company, implemented a sophisticated microservices architecture on AWS to handle millions of customers visiting their site at noon every day. The microservices architecture pattern enables independent service scaling, faster deployments, better fault isolation, and graceful degradation. In this session, Derek Chiles, AWS solutions architect, will review best practices and recommended architectures for deploying microservices on AWS. Adrian Trenaman, SVP of engineering at Gilt, will share Gilt's experiences and lessons learned during their evolution from a single monolithic Rails application in a traditional data center to more than 300 Scala/Java microservices deployed in the cloud.
    ARC308-APAC - The Serverless Company with AWS Lambda: Streamlining Architecture with AWS (APAC track)
    In today's competitive environment, startups are increasingly focused on eliminating any undifferentiated heavy lifting. Come learn about various architectural patterns for building scalable, function-rich data processing systems using AWS Lambda and other AWS managed services. Find out how PlayOn! Sports went from a multi-layered architecture for video streaming to a streamlined and serverless system by using AWS Lambda and Amazon S3. This is a repeat session that will be translated simultaneously into Japanese, Chinese, and Korean.
    ARC308 - The Serverless Company Using AWS Lambda: Streamlining Architecture with AWS
    In today's competitive environment, startups are increasingly focused on eliminating any undifferentiated heavy lifting. Come learn about various architectural patterns for building scalable, function-rich data processing systems using AWS Lambda and other AWS managed services. Come see how PlayOn! Sports went from a multi-layered architecture for video streaming to a streamlined and serverless system using Lambda and Amazon S3.
    ARC307 - Infrastructure as Code
    While many organizations have started to automate their software development processes, many still engineer their infrastructure largely by hand. Treating your infrastructure just like any other piece of code creates a "programmable infrastructure" that allows you to take full advantage of the scalability and reliability of the AWS cloud. This session will walk through practical examples of how AWS customers have merged infrastructure configuration with application code to create application-specific infrastructure and a truly unified development lifecycle. You will learn how AWS customers have leveraged tools like CloudFormation, orchestration engines, and source control systems to enable their applications to take full advantage of the scalability and reliability of the AWS cloud, create self-reliant applications, and easily recover when things go seriously wrong with their infrastructure.
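    A toy illustration of the idea: keep the CloudFormation template in the same repository as the application and deploy it through the same pipeline, so infrastructure changes are reviewed exactly like code. The bucket and stack names below are placeholders.

        # Deploy a minimal CloudFormation template with boto3.
        import json

        import boto3

        template = {
            "AWSTemplateFormatVersion": "2010-09-09",
            "Resources": {
                "AppBucket": {
                    "Type": "AWS::S3::Bucket",
                    "Properties": {"BucketName": "example-app-artifacts-12345"},
                }
            },
        }

        cfn = boto3.client("cloudformation")
        cfn.create_stack(StackName="example-app", TemplateBody=json.dumps(template))
        # Tear down the same way you stood it up: cfn.delete_stack(StackName="example-app")
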
    ARC305 - Self-service Cloud Services: How J&J Is Managing AWS at Scale for Enterprise Workloads
    Johnson & Johnson is a global health care leader with 270 operating companies in 60 countries. Operating at this scale requires a decentralized model that supports the autonomy of the different companies under the J&J umbrella, while still allowing knowledge and infrastructure frameworks to be shared across the different businesses. To address this problem, J&J created an Amazon VPC, which provides simplified architecture patterns that J&J's application teams leveraged throughout the company using a self-service model while adhering to critical internal controls. Hear how J&J leveraged Amazon S3, Amazon Redshift, Amazon RDS, Amazon DynamoDB, and Amazon Kinesis to develop these architecture patterns for various use cases, allowing J&J's businesses to use AWS for its agility while still adhering to all internal policies automatically. Learn how J&J uses this model to build advanced analytic platforms to ingest large streams of structured and unstructured data, which minimizes the time to insight in a variety of areas, including physician compliance, bioinformatics, and supply chain management.
    ARC304 - Designing for SaaS: Next-Generation Software Delivery Models on AWS
    SaaS architectures can be deployed onto AWS in a number of ways, and each optimizes for different factors from security to cost optimization. Come learn more about common deployment models used on AWS for SaaS architectures and how each of those models are tuned for customer specific needs. We will also review options and tradeoffs for common SaaS architectures, including cost optimization, resource optimization, performance optimization, and security and data isolation.
    ARC303 - Pure Play Video OTT: A Microservices Architecture in the Cloud
    An end-to-end, over-the-top (OTT) video system is built of many interdependent architectural tiers, ranging from content preparation, content delivery, and subscriber and entitlement management, to analytics and recommendations. This talk will provide a detailed exploration of how to architect a media platform that allows for growth, scalability, security, and business changes at each tier, based on real-world experiences delivering over 100 Gbps of concurrent video traffic with 24/7/365 linear TV requirements. Finally, learn how Verizon uses AWS, including Amazon Redshift and Amazon Elastic MapReduce, to power its recently launched mobile video application Go90. Using a mixture of AWS services and native applications, we address the following scaling challenges:
    - Content ingest, preparation, and distribution
    - Operation of a 24x7x365 linear OTT playout platform
    - Common pitfalls with transcode and content preparation
    - Multi-DRM and packaging to allow cross-platform playback
    - Efficient delivery and multi-CDN methodology to allow for a perfect experience globally
    - Amazon Kinesis as a dual-purpose system for both analytics and concurrency access management
    - Integration with machine learning for an adaptive recommendation system, with real-time integration between content history and advertising data
    - User, entitlement, and content management
    - General best practices for "cloud architectures" and their integration with Amazon Web Services: infrastructure as code, disposable and immutable infrastructure, code deployment and release management, DevOps, and microservices architectures
    This session is great for architects, engineers, and CTOs within media and entertainment, or others simply interested in decoupled architectures.
    ARC302 - Running Lean Architectures: How to Optimize for Cost Efficiency
    Whether you're a cash-strapped startup or an enterprise optimizing spend, it pays to run cost-efficient architectures on AWS. This session reviews a wide range of cost planning, monitoring, and optimization strategies, featuring real-world experience from AWS customers. We'll cover how you can effectively combine EC2 On-Demand, Reserved, and Spot instances to handle different use cases, leveraging auto scaling to match capacity to workload, choosing the most optimal instance type through load testing, taking advantage of multi-AZ support, and using CloudWatch to monitor usage and automatically shut off resources when not in use. We'll discuss taking advantage of tiered storage and caching, offloading content to Amazon CloudFront to reduce back-end load, and getting rid of your back end entirely, by leveraging AWS high-level services. We will also showcase simple tools to help track and manage costs, including the AWS Cost Explorer, Billing Alerts, and Trusted Advisor. This session will be your pocket guide for running cost effectively in the Amazon cloud.
    ARC301 - Scaling Up to Your First 10 Million Users
    Cloud computing gives you a number of advantages, such as the ability to scale your web application or website on demand. If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
    ARC201 - Microservices Architecture for Digital Platforms with AWS Lambda, Amazon CloudFront and Amazon DynamoDB
    Digital platforms are by nature resource intensive, expensive to build, and difficult to manage at scale. What if we can change this perception and help AWS customers architect a digital platform that is low cost and low maintenance? This session describes the underlying architecture behind dam.deep.mg, the Digital Asset Management system built by Mitoc Group and powered by AWS abstracted services like AWS Lambda, Amazon CloudFront, and Amazon DynamoDB. Eugene Istrati, the CTO of Mitoc Group, will dive deep into their approach to microservices architecture on serverless environments and demonstrate how anyone can architect AWS abstracted services to achieve high scalability, high availability, and high performance without huge efforts or expensive resources allocation.
    WRK304 - Build a Recommendation Engine and Use Amazon Machine Learning in Real Time
    Build an exciting machine learning model for recommending top restaurants for a customer in real time based on past orders and viewing history. In this guided session, you will get hands-on with data cleansing, building an Amazon Machine Learning model, and doing real-time predictions. A dataset will be provided. Prerequisites: Participants should have an AWS account established and available for use during the workshop. Participants should bring their own laptop. Capacity: To encourage the interactive nature of this workshop, the session capacity is limited to approximately 70 attendees. Attendance is based on a first come, first served basis once onsite. Scheduling tools in the session catalog are for planning purposes only.
    WRK303 - Real-World Data Warehousing with Amazon Redshift and Big Data Solutions from AWS Marketplace
    In this workshop, you will work with other attendees as a small team to build an end-to-end data warehouse using Amazon Redshift and by leveraging key AWS Marketplace partners. Your team will learn how to build a data pipeline using an ETL partner from the AWS Marketplace to perform common validation and aggregation tasks in a data ingestion pipeline. Your team will then learn how to build dashboards and reports using a data visualization partner from AWS Marketplace, for interactive analysis of large datasets in Amazon Redshift. In less than 2 hours your team will build a fully functional solution to discover meaningful insights from raw datasets. The session also showcases how you can extend this solution further to create a near real-time solution by leveraging Amazon Kinesis and other AWS big data services. Prerequisites: Hands-on experience with AWS. Some prior experience with databases and SQL, and familiarity with data-warehousing concepts. Capacity: To encourage the interactive nature of this workshop, the session capacity is limited to approximately 70 attendees. Attendance is based on a first come, first served basis once onsite. Scheduling tools in the session catalog are for planning purposes only.
    WRK301 - Implementing Twitter Analytics Using Spark Streaming, Scala, and Amazon EMR
    Over the course of this workshop, we will launch a Spark cluster and deploy a Spark Streaming application written in Scala that analyzes popular tags flowing out of Twitter. Along the way we will learn about Amazon EMR, Spark, Spark Streaming, Scala, and how to deploy applications into Spark clusters on Amazon EMR. Prerequisites: Participants are expected to be familiar with building modest-size applications in Scala. Participants should have an AWS account established and available for use during the workshop. Please bring your laptop. Capacity: To encourage the interactive nature of this workshop, the session capacity is limited to approximately 70 attendees. Attendance is based on a first come, first served basis once onsite. Scheduling tools in the session catalog are for planning purposes only.
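    The workshop itself is in Scala, but the shape of the job translates directly; for consistency with the other sketches in this post it is shown below in PySpark, counting hashtags arriving on a local socket (standing in for the bridged Twitter feed) in 10-second batches.

        # Count hashtags in 10-second micro-batches with Spark Streaming.
        from pyspark import SparkContext
        from pyspark.streaming import StreamingContext

        sc = SparkContext(appName="popular-tags")
        ssc = StreamingContext(sc, batchDuration=10)

        lines = ssc.socketTextStream("localhost", 9999)  # assumed Twitter bridge
        tags = (lines.flatMap(lambda line: line.split())
                     .filter(lambda word: word.startswith("#"))
                     .map(lambda tag: (tag, 1))
                     .reduceByKey(lambda a, b: a + b))
        tags.pprint()  # print the per-batch counts

        ssc.start()
        ssc.awaitTermination()
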
    BDT404 - Building and Managing Large-Scale ETL Data Flows with AWS Data Pipeline and Dataduct
    As data volumes grow, managing and scaling data pipelines for ETL and batch processing can be daunting. With more than 13.5 million learners worldwide, hundreds of courses, and thousands of instructors, Coursera manages over a hundred data pipelines for ETL, batch processing, and new product development. In this session, we dive deep into AWS Data Pipeline and Dataduct, an open source framework built at Coursera to manage pipelines and create reusable patterns to expedite developer productivity. We share the lessons learned during our journey: from basic ETL processes, such as loading data from Amazon RDS to Amazon Redshift, to more sophisticated pipelines to power recommendation engines and search services. Attendees learn:
    - Do's and don'ts of Data Pipeline
    - Using Dataduct to streamline your data pipelines
    - How to use Data Pipeline to power other data products, such as recommendation systems
    - What's next for Dataduct
    BDT403 - Best Practices for Building Real-time Streaming Applications with Amazon Kinesis
    Amazon Kinesis is a fully managed, cloud-based service for real-time data processing over large, distributed data streams. Customers who use Amazon Kinesis can continuously capture and process real-time data such as website clickstreams, financial transactions, social media feeds, IT logs, location-tracking events, and more. In this session, we first focus on building a scalable, durable streaming data ingest workflow, from data producers like mobile devices, servers, or even a web browser, using the right tool for the right job. Then, we cover code design that minimizes duplicates and achieves exactly-once processing semantics in your elastic stream-processing application, built with the Kinesis Client Library. Attend this session to learn best practices for building a real-time streaming data architecture with Amazon Kinesis, and get answers to technical questions frequently asked by those starting to process streaming events.
    BDT402 - Delivering Business Agility Using AWS
    Wipro is one of India's largest publicly traded companies and the seventh largest IT services firm in the world. In this session, we showcase the structured methods that Wipro has used in enabling enterprises to take advantage of the cloud. These cover identifying workloads and application profiles that could benefit, re-structuring enterprise application and infrastructure components for migration, rapid and thorough verification and validation, and modifying component monitoring and management. Several of these methods can be tailored to the individual client or functional context, so specific client examples are presented. We also discuss the enterprise experience of enabling many non-IT functions to benefit from the cloud, such as sales and training. More functions included in the cloud increase the benefit drawn from a cloud-enabled IT landscape. Session sponsored by Wipro.
    BDT401 - Amazon Redshift Deep Dive: Tuning and Best Practices
    Get a look under the covers: Learn tuning best practices for taking advantage of Amazon Redshift's columnar technology and parallel processing capabilities to improve your delivery of queries and improve overall database performance. This session explains how to migrate from existing data warehouses, create an optimized schema, efficiently load data, use workload management, tune your queries, and use Amazon Redshift's interleaved sorting features. Finally, learn how TripAdvisor uses these best practices to give their entire organization access to analytic insights at scale.
    BDT324 - Big Data Optimized for the AWS Cloud
    Apache Hadoop is now a foundational platform for big data processing and discovery that drives next-generation analytics. While Hadoop was designed when cloud models were in their infancy, the open source platform works remarkably well in production environments in the cloud. This talk will cover use cases for running big data in the cloud and share examples of organizations that have experienced real-world success on AWS. We will also look at new software and hardware innovations that are helping companies get more value from their data. Session sponsored by Intel.
    BDT323 - Amazon EBS and Cassandra: 1 Million Writes Per Second on 60 Nodes
    With the introduction of Amazon Elastic Block Store (EBS) GP2 and recent stability improvements, EBS has gained credibility in the Cassandra world for high-performance workloads. By running Cassandra on Amazon EBS, you can run denser, cheaper Cassandra clusters with just as much availability as ephemeral storage instances. This talk walks through a highly detailed use case and configuration guide for a multi-petabyte, million-write-per-second cluster that needs to be high performing and cost efficient. We explore the instance type choices, configuration, and low-level tuning that allowed us to hit 1.3 million writes per second with a replication factor of 3 on just 60 nodes.
    BDT322 - How Redfin and Twitter Leverage Amazon S3 to Build Their Big Data Platforms
    Analyzing large data sets requires significant compute and storage capacity that can vary in size based on the amount of input data and the analysis required. This characteristic of big data workloads is ideally suited to the pay-as-you-go cloud model, where applications can easily scale up and down based on demand. Learn how Amazon S3 can help scale your big data platform. Hear from Redfin and Twitter about how they build their big data platforms on AWS and how they use S3 as an integral piece of their big data platforms.
    BDT320 - NEW LAUNCH! Streaming Data Flows with Amazon Kinesis Firehose and Amazon Kinesis Analytics
    Amazon Kinesis Firehose is a fully-managed, elastic service to deliver real-time data streams to Amazon S3, Amazon Redshift, and other destinations. In this session, we start with overviews of Amazon Kinesis Firehose and Amazon Kinesis Analytics. We then discuss how Amazon Kinesis Firehose makes it even easier to get started with streaming data, without writing a stream processing application or provisioning a single resource. You learn about the key features of Amazon Kinesis Firehose, including its companion agent that makes emitting data from data producers even easier. We walk through capture and delivery with an end-to-end demo, and discuss key metrics that will help developers and architects understand their streaming data flow. Finally, we look at some patterns for data consumption as the data streams into S3. We show two examples: using AWS Lambda, and how you can use Apache Spark running within Amazon EMR to query data directly in Amazon S3 through EMRFS.
    BDT319 - NEW LAUNCH! Amazon QuickSight: Very Fast, Easy-to-Use, Cloud-native Business Intelligence
    Amazon QuickSight is a very fast, cloud-powered business intelligence (BI) service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from your data. In this session, we demonstrate how you can point Amazon QuickSight to AWS data stores, flat files, or other third-party data sources and begin visualizing your data in minutes. We also introduce SPICE, a new Super-fast, Parallel, In-memory Calculation Engine in Amazon QuickSight, which performs advanced calculations and renders visualizations rapidly without requiring any additional infrastructure, SQL programming, or dimensional modeling, so you can seamlessly scale to hundreds of thousands of users and petabytes of data. Lastly, you will see how Amazon QuickSight provides you with smart visualizations and graphs that are optimized for your different data types, to ensure the most suitable and appropriate visualization to conduct your analysis, and how to share these visualization stories using the built-in collaboration tools.
    BDT318 - Netflix Keystone: How Netflix Handles Data Streams Up to 8 Million Events Per Second
    In this session, Netflix provides an overview of Keystone, their new data pipeline. The session covers how Netflix migrated from Suro to Keystone, including the reasons behind the transition and the challenges of zero loss while processing over 400 billion events daily. The session covers in detail how they deploy, operate, and scale Kafka, Samza, Docker, and Apache Mesos in AWS to manage 8 million events and 17 GB per second at peak.
    BDT317 - Building a Data Lake on AWS
    Conceptually, a data lake is a flat data store to collect data in its original form, without the need to enforce a predefined schema. Instead, new schemas or views are created "on demand", providing a far more agile and flexible architecture while enabling new types of analytical insights. AWS provides many of the building blocks required to help organizations implement a data lake. In this session, we introduce key concepts for a data lake and present aspects related to its implementation. We discuss critical success factors and pitfalls to avoid, as well as operational aspects such as security, governance, search, indexing, and metadata management. We also provide insight on how AWS enables a data lake architecture. Attendees get practical tips and recommendations to get started with their data lake implementations on AWS.
    BDT316 - Offloading ETL to Amazon Elastic MapReduce
    Amgen discovers, develops, manufactures, and delivers innovative human therapeutics, helping millions of people in the fight against serious illnesses. In 2014, Amgen implemented a solution to offload ETL data across a diverse data set (U.S. pharmaceutical prescriptions and claims) using Amazon EMR. The solution has transformed the way Amgen delivers insights and reports to its sales force. To support Amgen's entry into a much larger market, the ETL process had to scale to eight times its existing data volume. We used Amazon EC2, Amazon S3, Amazon EMR, and Amazon Redshift to generate weekly sales reporting metrics. This session discusses highlights in Amgen's journey to leverage big data technologies and lay the foundation for future growth: benefits of ETL offloading in Amazon EMR as an entry point for big data technologies; benefits and challenges of using Amazon EMR vs. expanding on-premises ETL and reporting technologies; and how to architect an ETL offload solution using Amazon S3, Amazon EMR, and Impala.
    BDT314 - Running a Big Data and Analytics Application on Amazon EMR and Amazon Redshift with a Focus on Security
    No matter the industry, leading organizations need to closely integrate, deploy, secure, and scale diverse technologies to support workloads while containing costs. Nasdaq, Inc., a leading provider of trading, clearing, and exchange technology, is no exception. After migrating more than 1,100 tables from a legacy data warehouse into Amazon Redshift, Nasdaq, Inc. is now implementing a fully integrated, big data architecture that also includes Amazon S3, Amazon EMR, and Presto to securely analyze large historical data sets in a highly regulated environment. Drawing from this experience, Nasdaq, Inc. shares lessons learned and best practices for deploying a highly secure, unified, big data architecture on AWS. Attendees learn: architectural recommendations to extend an Amazon Redshift data warehouse with Amazon EMR and Presto; tips to migrate historical data from an on-premises solution and Amazon Redshift to Amazon S3, making it consumable; and best practices for securing critical data and applications leveraging encryption, SELinux, and VPC.
    BDT313 - Amazon DynamoDB for Big Data
    NoSQL is an important part of many big data strategies. Attend this session to learn how Amazon DynamoDB helps you create fast ingest and response data sets. We demonstrate how to use DynamoDB for batch-based query processing and ETL operations (using a SQL-like language) through integration with Amazon EMR and Hive. Then, we show you how to reduce costs and achieve scalability by connecting data to Amazon ElastiCache for handling massive read volumes. We'll also discuss how to add indexes on DynamoDB data for free-text searching by integrating with Elasticsearch using AWS Lambda and DynamoDB streams. Finally, you'll find out how you can take your high-velocity, high-volume data (such as IoT data) in DynamoDB and connect it to a data warehouse (Amazon Redshift) to enable BI analysis.
    BDT312 - Application Monitoring in a Post-Server World: Why Data Context Is Critical
    The move toward microservices on Docker, EC2, and Lambda points to a shift toward shorter-lived resources. These new application architectures drive new agility and efficiency, but while they provide developers with inherent scalability, elasticity, and flexibility, they also present new challenges for application monitoring. The days of static server monitoring with a single health and status check are over. These days you need to know how your entire ecosystem of AWS EC2 instances is performing, especially since many of them are short-lived and may only exist for a few minutes. With such ephemeral resources, there is no server to monitor; you need to understand performance along the lines of computation intent. And for this, you need the context in which these resources are performing. Join Kevin McGuire, Director of Engineering at New Relic, as he discusses trends in computing that we've gleaned from monitoring Docker and how they'v
              AWS re:Invent 2014 Video & Slide Presentation Links with Easy Index        
    As with last year, here is my quick index of all re:Invent sessions.  Check back over the next few days; I'll keep running the tool to fill in the index.  It usually takes Amazon a few weeks to fully upload all the videos and slideshares.

    See below for how I created the index (with code):


    ADV403 - Dynamic Ad Performance Reporting with Amazon Redshift: Data Science and Complex Queries at Massive Scale
    by Vidhya Srinivasan - Senior Manager, Software Development with Amazon Web Services; Timon Karnezos - Director, Infrastructure with NeuStar
    Delivering deep insight on advertising metrics and providing customers easy data access becomes a challenge as scale increases. In this session, Neustar, a global provider of real-time analytics, shows how they use Redshift to help advertisers and agencies reach the highest-performing customers using data science at scale. Neustar dives into the queries they use to determine how best to target ads based on their real reach, how much to pay for ads using multi-touch attribution, and how frequently to show ads. Finally, Neustar discusses how they operate a fleet of Redshift clusters to run workloads in parallel and generate daily reports on billions of events within hours. Session includes how Neustar provides daily feeds of event-level data to their customers for ad-hoc data science.
    ADV402 - Beating the Speed of Light with Your Infrastructure in AWS
    by Siva Raghupathy - Principal Solutions Architect with Amazon Web Services; Valentino Volonghi - CTO with AdRoll
    With Amazon Web Services it's possible to serve the needs of modern high performance advertising without breaking the bank. This session covers how AdRoll processes more than 60 billion requests per day in less than 100 milliseconds each using Amazon DynamoDB, Auto Scaling, and Elastic Load Balancing. This process generates more than 2 GB of data every single second, which will be processed and turned into useful models over the following hour. We discuss designing systems that can answer billions of latency-sensitive global requests every day and look into some tricks to pare down the costs.
    ADV303 - MediaMath's Data Revolution with Amazon Kinesis and Amazon EMR
    by Aditya Krishnan - Sr. Product Manager with Amazon Web Services; Edward Fagin - VP, Engineering with MediaMath; Ian Hummel - Sr. Director, Data Platform with MediaMath
    Collecting and processing terabytes of data per day is a challenge for any technology company. As marketers and brands become more sophisticated consumers of data, enabling granular levels of access to targeted subsets of data from outside your firewalls presents new challenges. This session discusses how to build scalable, complex, and cost-effective data processing pipelines using Amazon Kinesis, Amazon EC2 Spot Instances, Amazon EMR, and Amazon Simple Storage Service (S3). Learn how MediaMath revolutionized their data delivery platform with the help of these services to empower product teams, partners, and clients. As a result, a number of innovative products and services are delivered on top of terabytes of online user behavior. MediaMath covers their journey from legacy batch processing and vendor lock-in to a new world where the raw materials to build advanced lookalike models, optimization algorithms, or marketing attribution models are readily available to any engineering team in real time, substantially reducing the time and cost of innovation.
    AFF302 - Responsive Game Design: Bringing Desktop and Mobile Games to the Living Room
    by Jesse Freeman - Developer Evangelist, HTML5 & Games with Amazon Web Services
    In this session, we cover what's needed to bring your Android app or game to Fire TV. We walk you through controller support for a game scenario (buttons and analog sticks), controller support for UI (selection, moving between menu items, invoking the keyboard), and how to account for the form factor (overscan, landscape, device and controller detection). By the end of this session, you'll be able to understand what you need to do if you want to build or modify your own app to work on a TV.
    AFF301 - Fire Phone: The Dynamic Perspective API, Under the Hood
    by Bilgem Cakir - Senior Software Development Engineer with Amazon Web Services; Peter Heinrich - Developer Evangelist with Amazon Web Services
    Fire phone's Dynamic Perspective adds a whole new dimension to UI and customer interaction, combining dedicated hardware and advanced algorithms to enable real-time head-tracking and 3D effects. This session goes behind the scenes with the Dynamic Perspective team, diving into the unique technical challenges they faced during development. Afterward, we explore the Dynamic Perspective SDK together so you leave the session knowing how to add innovative features like Peek, Tilt, 3D controls, and Parallax to your own apps.
    AFF202 - Everything You Need to Know about Building Apps for the Fire Phone
    by David Isbitski - Developer Evangelist, Amazon Mobile Apps & Games with Amazon Web Services
    Fire is the first phone designed by Amazon. We show you the new customer experiences it enables and how top developers have updated their Android apps to take advantage of Fire phone. Learn more about the hardware, the services, and the development SDK including Enhanced Carousel, Firefly and Dynamic Perspective, Appstore Developer Select, submitting to the Amazon Appstore, and Best Practices for developing great Fire apps.
    AFF201 - What the Top 50 Games Do with In-App Purchasing That the Rest of Us Don't
    by Mike Hines - Developer Evangelist with Amazon Web Services; Salim Mitha - EVP Product, Marketing & Monetization with Playtika - Caesars Interactive
    Not sure when (or if) to run a sale? Not sure what IAP items to offer? In this session, Playtika EVP Salim Mitha and Amazon show you what works. We share best practices and analytics data that we've aggregated from the top 50 in-app purchase (IAP) grossing games in the Amazon Appstore.  We cover user retention and engagement data comparisons and examine several purchasing UI layouts to learn how to manage and present IAP item selection. We also cover how to manage IAP price points and how and when to tailor price variety, sales, and offers for customers. You get actionable data and suggestions that you can use on your current as well as future projects to help maximize IAP revenue.
    APP402 - Serving Billions of Web Requests Each Day with Elastic Beanstalk
    by Mik Quinlan - Director, Engineering, Mobile Advertising with ThinkNear; John Hinnegan - VP of Software Engineering with Thinknear
    AWS Elastic Beanstalk provides a number of simple and flexible interfaces for developing and deploying your applications. Follow Thinknear's rapid growth from inception to acquisition, scaling from a few dozen requests per hour to billions of requests served each day with AWS Elastic Beanstalk.  Thinknear engineers demonstrate how they extended the AWS Elastic Beanstalk platform to scale to billions of requests while meeting response times below 100 ms, discuss tradeoffs they made in the process, and what did and did not work for their mobile ad bidding business.
    APP315-JT - Coca-Cola: Migrating to AWS - Japanese Track
    by Michael Connor - Senior Platform Architect with Coca-Cola
    This session details Coca-Cola's effort to migrate hundreds of applications from on-premises to AWS. The focus is on migration best practices, security considerations, helpful tools, automation, and business processes used to complete the effort. Key AWS technologies highlighted will be AWS Elastic Beanstalk, Amazon VPC, AWS CloudFormation, and the AWS APIs. This session includes demos and code samples. Participants should walk away with a clear understanding of how AWS Elastic Beanstalk compares to "platform as a service," and why it was chosen to meet strict standards for security and business intelligence. This is a repeat session that will be translated simultaneously into Japanese.
    APP315 - Coca-Cola: Migrating to AWS
    by Michael Connor - Senior Platform Architect with Coca-Cola
    This session details Coca-Cola's effort to migrate hundreds of applications from on-premises to AWS. The focus is on migration best practices, security considerations, helpful tools, automation, and business processes used to complete the effort. Key AWS technologies highlighted will be AWS Elastic Beanstalk, Amazon VPC, AWS CloudFormation, and the AWS APIs. This session includes demos and code samples. Participants should walk away with a clear understanding of how AWS Elastic Beanstalk compares to "platform as a service," and why it was chosen to meet strict standards for security and business intelligence.
    APP313 - NEW LAUNCH: Amazon EC2 Container Service in Action
    by Daniel Gerdesmeier - Software Development Engineer, EC2 with Amazon Web Services; Deepak Singh - Principal Product Manager with Amazon Web Services
    Container technology, particularly Docker, is all the rage these days. At AWS, our customers have been running Linux containers at scale for several years, and we are increasingly seeing customers adopt Docker, especially as they build loosely coupled distributed applications. However, to do so they have to run their own cluster management solutions, deal with configuration management, and manage their containers and associated metadata. We believe that those capabilities should be a core building block technology, just like EC2. Today, we are announcing the preview of Amazon EC2 Container Service, a new AWS service that makes it easy to run and manage Docker-enabled distributed applications using powerful APIs that allow you to launch and stop containers, get complete cluster state information, and manage linked containers. In this session we will discuss why we built the EC2 Container Service, some of the core concepts, and walk you through how you can use the service for your applications.
    APP311 - Lessons Learned From Over a Decade of Deployments at Amazon
    by Andy Troutman - Senior Manager, Software Development with Amazon Web Services
    Amazon made the transition to a service-oriented architecture over a decade ago. That move drove major changes to the way we release updates to our applications and services. We learned many lessons over those years, and we used that experience to refine our internal tools as well as the services that we make available to our customers. In this session, we share that learning with you, and demonstrate how to optimize for agility and reliability in your own deployment process.
    APP310 - Scheduling Using Apache Mesos in the Cloud
    by Sharma Podila - Senior Software Engineer with Netflix
    How can you reliably schedule tasks in an unreliable, autoscaling cloud environment? This presentation talks about the design of our Fenzo scheduler, built on Apache Mesos, that serves as the core of our stream-processing platform, Mantis, designed for real-time insights. We focus on the following aspects of the scheduler: resource granularity; fault tolerance; bin packing, task affinity, and stream locality; autoscaling of the cluster and of individual service jobs; and constraints (hard and soft) for individual tasks, such as zone balancing and unique and exclusive instances. This talk also includes detailed information on a holistic approach to scheduling in a distributed, autoscaling environment to achieve both speed and advanced scheduling optimizations.
    APP309 - Running and Monitoring Docker Containers at Scale
    by Alexis Lê-Quôc - CTO with Datadog
    If you have tried Docker but are unsure about how to run it at scale, you will benefit from this session. Like virtualization before, containerization (à la Docker) is increasing the elastic nature of cloud infrastructure by an order of magnitude. But maybe you still have questions: How many containers can you run on a given Amazon EC2 instance type? Which metric should you look at to measure contention? How do you manage fleets of containers at scale? Datadog is a monitoring service for IT, operations, and development teams who write and run applications at scale. In this session, the cofounder of Datadog presents the challenges and benefits of running containers at scale and how to use quantitative performance patterns to monitor your infrastructure at this magnitude and complexity. Sponsored by Datadog.
    APP308 - Chef on AWS: Deep Dive
    by Michael Ducy - Global Partner Evangelist with Chef; John Keiser - Developer Lead with Chef
    When your infrastructure scales, you need to have the tooling and knowledge to support that scale. Chef is one of the commonly used tools for deploying and managing all kinds of infrastructure at any scale. In this session, we focus on how you can get your existing infrastructure robustly represented in Chef. We dive deep on all the specifics that make deploying with Chef on AWS easy: authentication management, versioning, recipe testing, and leveraging AWS resources in your recipes. Whether you're building new infrastructure with no existing operations management software or deploying existing Chef recipes into AWS, this session will outline all the tips and tricks you need to be a master Chef in the cloud.
    APP307 - Leverage the Cloud with a Blue/Green Deployment Architecture
    by Sean Berry - Principal Engineer with CrowdStrike; Jim Plush - Sr. Director of Engineering with CrowdStrike
    Minimizing customer impact is a key feature in successfully rolling out frequent code updates. Learn how to leverage the AWS cloud so you can minimize bug impacts, test your services in isolation with canary data, and easily roll back changes. Learn to love deployments, not fear them, with a blue/green architecture model. This talk walks you through the reasons it works for us and how we set up our AWS infrastructure, including package repositories, Elastic Load Balancing load balancers, Auto Scaling groups, internal tools, and more to help orchestrate the process. Learn to view thousands of servers as resources at your command to help improve your engineering environment, take bigger risks, and not spend weekends firefighting bad deployments.
    APP306 - Using AWS CloudFormation for Deployment and Management at Scale
    by Tom Cartwright - Exec. Product Manager with BBC; Yavor Atanasov - Senior Software Engineer with BBC
    With AWS CloudFormation you can model, provision, and update the full breadth of AWS resources. You can manage anything from a single Amazon EC2 instance to a multi-tier application. The British Broadcasting Corporation (BBC) uses AWS and CloudFormation to help deliver a range of services, including BBC iPlayer. Learn straight from the BBC team how they developed these services with a multitude of AWS features and how they operate at scale. Get insight into the tooling and best practices developed by the BBC team and how they used CloudFormation to form an end-to-end deployment and management pipeline. If you are new to AWS CloudFormation, get up to speed for this session by completing the Working with CloudFormation lab in the self-paced Labs Lounge.
    APP304 - AWS CloudFormation Best Practices
    by Chris Whitaker - Senior Manager of Software Development with Amazon Web Services; Chetan Dandekar - Senior Product Manager with Amazon Web Services
    With AWS CloudFormation you can model, provision, and update the full breadth of AWS resources. You can manage anything from a single Amazon EC2 instance to a multi-tier application. If you are familiar with AWS CloudFormation or using it already, this session is for you. If you are familiar with AWS CloudFormation, you may have questions such as 'How do I plan my stacks?', 'How do I deploy and bootstrap software on my stacks?', and 'Where does AWS CloudFormation fit in a DevOps pipeline?' If you are using AWS CloudFormation already, you may have questions such as 'How do I manage my templates at scale?', 'How do I safely update stacks?', and 'How do I audit changes to my stack?' This session is intended to answer those questions. If you are new to AWS CloudFormation, get up to speed for this session by completing the Working with CloudFormation lab in the self-paced Labs Lounge.
    APP303 - Lightning Fast Deploys with Docker Containers and AWS
    by Nathan LeClaire - Solutions Engineer with Docker
    Docker is an open platform for developers to build, ship, and run distributed applications in Linux containers. In this session, Nathan LeClaire, a Solutions Engineer at Docker Inc., will be demonstrating workflows that can dramatically accelerate the development and deployment of distributed applications with Docker containers. Through in-depth demos, this session will show how to achieve painless deployments that are both readily scalable and highly available by combining AWS's strengths as an infrastructure platform with those of Docker's as a platform that transforms the software development lifecycle.
    APP301 - AWS OpsWorks Under the Hood
    by Reza Spagnolo - Software Development Engineer with Amazon Web Services; Jonathan Weiss - Senior Software Development Manager with Amazon Web Services
    AWS OpsWorks helps you deploy and operate applications of all shapes and sizes. With AWS OpsWorks, you can model your application stack with layers that define the building blocks of your application: load balancers, application servers, databases, etc. But did you know that you can also extend AWS OpsWorks layers or build your own custom layers? Whether you need to perform a specific task or install a new software package, AWS OpsWorks gives you the tools to install and configure your instances consistently and help them evolve in an automated and predictable fashion. In this session, we dive into the development process including how to use attributes, recipes, and lifecycle events; show how to develop your environment locally; and provide troubleshooting steps that reduce your development time.
    APP204 - NEW LAUNCH: Introduction to AWS Service Catalog
    by Ashutosh Tiwary - General Manager, Cloud Formation with Amazon Web Services; Abhishek Lal - Senior Product Manager with Amazon Web Services
    Running an IT department in a large organization is not easy. Providing your internal users with access to the latest and greatest technology so that they can be as efficient and as productive as possible needs to be balanced with the need to set and maintain corporate standards, collect and disseminate best practices, and provide some oversight to avoid runaway spending and technology sprawl. Introducing AWS Service Catalog, a service that allows end users in your organization to easily find and launch products using a personalized portal. You can manage catalogs of standardized offerings and control which users have access to which products, enabling compliance with business policies. Your organization can benefit from increased agility and reduced costs. Attend this session to be one of the first to learn about this new service.
    APP203 - How Sumo Logic and Anki Build Highly Resilient Services on AWS to Manage Massive Usage Spikes
    by Ben Whaley - Director of Infrastructure with Anki; Christian Beedgen - CTO with Sumo Logic
    In just two years, Sumo Logic's multitenant log analytics service has scaled to query over 10 trillion logs each day. Christian, Sumo Logic's cofounder and CTO, shares the three most important lessons he has learned in building such a massive service on AWS. Ben Whaley is an AWS Community Hero who works for Anki as an AWS cloud architect. Ben uses hundreds of millions of logs to troubleshoot and improve Anki Drive, the coolest battle robot racing game on the planet. This is an ideal session for cloud architects constantly looking to improve scalability and application performance on AWS. Sponsored by Sumo Logic.
    APP202 - Deploy, Manage, and Scale Your Apps with AWS OpsWorks and AWS Elastic Beanstalk
    by Abhishek Singh - Senior Product Manager, AWS Elastic Beanstalk with Amazon Web Services; Chris Barclay - Senior Product Manager with Amazon Web Services
    AWS offers a number of services that help you easily deploy and run applications in the cloud. Come to this session to learn how to choose among these options. Through interactive demonstrations, this session shows you how to get an application running using AWS OpsWorks and AWS Elastic Beanstalk application management services. You also learn how to use AWS CloudFormation templates to document, version control, and share your application configuration. This session covers application updates, customization, and working with resources such as load balancers and databases.
    APP201 - Going Zero to Sixty with AWS Elastic Beanstalk
    by Abhishek Singh - Senior Product Manager, AWS Elastic Beanstalk with Amazon Web Services
    AWS Elastic Beanstalk provides an easy way for you to quickly deploy, manage, and scale applications in the AWS cloud. This session shows you how to deploy your code to AWS Elastic Beanstalk, easily enable or disable application functionality, and perform zero-downtime deployments through interactive demos and code samples for both Windows and Linux. Are you new to AWS Elastic Beanstalk? Get up to speed for this session by first completing the 60-minute Fundamentals of AWS Elastic Beanstalk lab in the self-paced Lab Lounge.
    ARC403 - From One to Many: Evolving VPC Design
    by Yinal Ozkan - Principal Solutions Architect with Amazon Web Services
    As more customers adopt Amazon VPC architectures, the features and flexibility of the service are squaring off against increasingly complex design requirements. This session follows the evolution of a single regional VPC into a multi-VPC, multiregion design with diverse connectivity into on-premises systems and infrastructure. Along the way, we investigate creative customer solutions for scaling and securing outbound VPC traffic, managing multitenant VPCs, conducting VPC-to-VPC traffic, running multiple hybrid environments over AWS Direct Connect, and integrating corporate multiprotocol label switching (MPLS) clouds into multiregion VPCs.
    ARC402 - Deployment Automation: From Developers' Keyboards to End Users' Screens
    by Chris Munns - Solutions Architect with Amazon Web Services
    Some of the best businesses today are deploying their code dozens of times a day. How? By making heavy use of automation, smart tools, and repeatable patterns to get process out of the way and keep the workflow moving. Come to this session to learn how you can do this too, using services such as AWS OpsWorks, AWS CloudFormation, Amazon Simple Workflow Service, and other tools. We'll discuss a number of different deployment patterns, and what aspects you need to focus on when working toward deployment automation yourself.
    ARC401 - Black-Belt Networking for the Cloud Ninja
    by Steve Morad - Principal Solutions Architect with Amazon Web Services
    Do you need to get beyond the basics of VPC and networking in the cloud? Do terms like virtual addresses, integrated networks and network monitoring get you motivated? Come discuss black-belt networking topics including floating IPs, overlapping network management, network automation, network monitoring, and more. This expert-level networking discussion is ideally suited for network administrators, security architects, or cloud ninjas who are eager to take their AWS networking skills to the next level.
    ARC318 - Continuous Delivery at a Rate of 500 Deployments a Day!
    by Elias Torres - VP of Engineering with Driftt
    Every development team would love to spend more time building products and less time shepherding software releases. What if you had the ability to repeatably push any version of your code and not have to worry about the optimal server allocation for your services? This talk will cover how HubSpot's team of 100 engineers deploys 500 times a day with very minimal effort. Singularity, an open-source project which HubSpot built from scratch, works with Apache Mesos to manage a multipurpose cluster in AWS to support web services, cron jobs, map/reduce tasks, and one-off processes. This talk will discuss the HubSpot service architecture and cultural advantages and the costs and benefits of the continuous delivery approach.
    ARC317 - Maintaining a Resilient Front Door at Massive Scale
    by Daniel Jacobson - VP of Engineering, Edge and Playback with Netflix; Benjamin Schmaus - Director, Edge Systems with Netflix
    The Netflix service supports more than 50 million subscribers in over 40 countries around the world. These subscribers use more than 1,000 different device types to connect to Netflix, resulting in massive amounts of traffic to the service. In our distributed environment, the gateway service that receives this customer traffic needs to be able to scale in a variety of ways while simultaneously protecting our subscribers from failures elsewhere in the architecture. This talk will detail how the Netflix front door operates, leveraging systems like Hystrix, Zuul, and Scryer to maximize the AWS infrastructure and to create a great streaming experience.
    ARC313 - So You Think You Can Architect?
    by Constantin Gonzalez - Solutions Architect with Amazon Web Services; Jan Metzner - Solutions Architect with Amazon Web Services; Michael Sandbichler - CTO with ProSiebenSat.1 Digital GmbH
    TV talent shows with online and mobile voting options pose a huge challenge for architects: How do you handle millions of votes in a very short time, while keeping your system robust, secure, and scalable? Attend this session and learn from AWS customers who have solved the architectural challenges of setting up, testing, and operating mobile voting infrastructures. We will start with a typical, standard web application, then introduce advanced architectural patterns along the way that help you scale, secure, and simplify your mobile voting infrastructure and make it bulletproof for show time! We'll also touch on topics like testing and support during the big event.
    ARC312 - Processing Money in the Cloud
    by Sri Vasireddy - President with REAN Cloud; Soofi Safavi - CTO, SVP with Radian
    Financial transactions need to be processed and stored securely and in real time. Together with a giant in the mortgage insurance industry, we have developed an elastic, secure, and compliant data processing framework on AWS that meets these processing requirements and drastically improves the time it takes to make a decision on a loan. This session will discuss what we've learned along the way, how we have overcome multiple security and compliance hurdles, and how other organizations in regulated industries can do the same. This session is targeted at business decision-makers and solutions architects working in regulated industries with high security and compliance requirements.
    ARC311 - Extreme Availability for Mission-Critical Applications
    by Eduardo Horai - Manager, Solutions Architecture with Amazon Web Services; Raul Frias - Solutions Architect with Amazon Web Services; Andre Fatala - CDO with Magazine Luiza
    More and more businesses are deploying their mission-critical applications on AWS, and one of their concerns is how to improve the availability of their services, going beyond traditional availability concepts. In this session, you will learn how to architect different layers of your application, beginning with an extremely available front-end layer with Amazon EC2, Elastic Load Balancing, and Auto Scaling, and going all the way to a protected multitiered information layer, including cross-region replicas for relational and NoSQL databases. The concepts that we will share, using services like Amazon RDS, Amazon DynamoDB, and Amazon Route 53, will provide a framework you can use to keep your application running even with multiple failures. Additionally, you will hear from Magazine Luiza, in an interactive session, on how they run a large e-commerce application with a multiregion architecture using a combination of features and services from AWS to achieve extreme availability.
    ARC309 - Building and Scaling Amazon Cloud Drive to Millions of Users
    by Ashish Mishra - Sr. Software Dev Engineer with Amazon Web Services; Tarlochan Cheema - Software Development Manager, Amazon Cloud Drive with Amazon Web Services
    Learn from the Amazon Cloud Drive team how Amazon Cloud Drive services are built on top of AWS core services using Amazon S3, Amazon DynamoDB, Amazon EC2, Amazon SQS, Amazon Kinesis, and Amazon CloudSearch. This session will cover design and implementation aspects of large-scale data uploads, metadata storage and query, and consistent and fault-tolerant services on top of the AWS stack. The session will provide guidance and best practices about how and when to leverage and integrate AWS infrastructure and managed services for scalable solutions. This session will also cover how Cloud Drive services teams innovated to attain high throughputs.
    ARC308 - Nike's Journey into Microservices
    by Amber Milavec - Sr. Technical Architect - Infrastructure with Nike, Inc.; Jason Robey - Director of Database and Data Services with Nike, Inc.
    Tightly coupled monolithic stacks can present challenges for companies looking to take full advantage of the cloud. In order to move to a 100 percent cloud-native architecture, the Nike team realized they would need to rewrite all of the Nike Digital sites (Commerce, Sport, and Brand) as microservices. This presentation will discuss this journey and the architecture decisions behind making this happen. Nike presenters will talk about adopting the Netflix operations support systems (OSS) stack for their deployment pipeline and application architecture, covering the problems this solved and the challenges this introduced.
    ARC307-JT - Infrastructure as Code - Japanese Track
    by Alex Corley - SA with Amazon Web Services; David Winter - Enterprise Sales with Amazon Web Services; Tom Wanielista - Chief Engineer with Simple
    While many organizations have started to automate their software development processes, many still engineer their infrastructure largely by hand. Treating your infrastructure just like any other piece of code creates a "programmable infrastructure" that allows you to take full advantage of the scalability and reliability of the AWS cloud. This session will walk through practical examples of how AWS customers have merged infrastructure configuration with application code to create application-specific infrastructure and a truly unified development lifecycle. You will learn how AWS customers have leveraged tools like CloudFormation, orchestration engines, and source control systems to enable their applications to take full advantage of the scalability and reliability of the AWS cloud, create self-reliant applications, and easily recover when things go seriously wrong with their infrastructure. This is a repeat session that will be translated simultaneously into Japanese.
    ARC307 - Infrastructure as Code
    by David Winter - Enterprise Sales with Amazon Web Services; Alex Corley - SA with Amazon Web Services; Tom Wanielista - Chief Engineer with Simple
    While many organizations have started to automate their software development processes, many still engineer their infrastructure largely by hand. Treating your infrastructure just like any other piece of code creates a "programmable infrastructure" that allows you to take full advantage of the scalability and reliability of the AWS cloud. This session will walk through practical examples of how AWS customers have merged infrastructure configuration with application code to create application-specific infrastructure and a truly unified development lifecycle. You will learn how AWS customers have leveraged tools like CloudFormation, orchestration engines, and source control systems to enable their applications to take full advantage of the scalability and reliability of the AWS cloud, create self-reliant applications, and easily recover when things go seriously wrong with their infrastructure.
    ARC306 - IoT: Small Things and the Cloud
    by Brett Francis - Strategic Account Solutions Architect with Amazon Web Services
    Working with fleets of "Internet of Things" (IoT) devices brings about distinct challenges. In this session, we will explore four of these challenges: telemetry, commands, device devops, and audit and authorization, and how they transform when deploying hundreds of thousands of resource-constrained devices. We'll explore high-level architectural patterns that customers use to meet these challenges through the functionality and ubiquity of a globally accessible cloud platform. If you consider yourself a device developer; an electrical, industrial, or hardware engineer; a hardware incubator class member; a new device manufacturer; an existing device manufacturer who wants to smarten up their next-gen devices; or a software developer working with people who identify as part of these tribes, you'll want to participate in this session.
    ARC304 - Designing for SaaS: Next-Generation Software Delivery Models on AWS
    by Matt Tavis - Principal Solutions Architect with Amazon Web Services
    SaaS architectures can be deployed onto AWS in a number of ways, and each optimizes for different factors from security to cost optimization. Come learn more about common deployment models used on AWS for SaaS architectures and how each of those models are tuned for customer specific needs. We will also review options and tradeoffs for common SaaS architectures, including cost optimization, resource optimization, performance optimization, and security and data isolation.
    ARC303 - Panning for Gold: Analyzing Unstructured Data
    by Ganesh Raja - Solutions Architect with Amazon Web Services; Krishnan Venkata - Director with LatentView Analytics
    Mining unstructured data for valuable information has historically been frustrating and difficult. This session will walk through practical examples of how multiple AWS services can be leveraged to provide extremely flexible, scalable, and available systems to successfully analyze massive amounts of data. Come learn how an application was adapted to leverage Elastic MapReduce and Amazon Kinesis to collect and analyze terabytes of web log data a day. Learn how Amazon Redshift can be used to clean up and visualize data and how AWS CloudFormation enables this analytical framework to be deployed in multiple regions while honoring privacy laws.
    ARC302 - Running Lean Architectures: How to Optimize for Cost Efficiency
    by Constantin Gonzalez - Solutions Architect with Amazon Web Services; Yimin Jiang - Cloud Performance & Reliability Lead with Adobe
    Whether you're a startup getting to profitability or an enterprise optimizing spend, it pays to run cost-efficient architectures on AWS. Building on last year's popular foundation of how to reduce waste and fine-tune your AWS spending, this session reviews a wide range of cost planning, monitoring, and optimization strategies, featuring real-world experience from AWS customer Adobe Systems. With the massive growth of subscribers to Adobe's Creative Cloud, Adobe's footprint in AWS continues to expand. We will discuss the techniques used to optimize and manage costs, while maximizing performance and improving resiliency. We'll cover effectively combining EC2 On-Demand, Reserved, and Spot instances to handle different use cases, leveraging auto scaling to match capacity to workload, choosing the most optimal instance type through load testing, taking advantage of multi-AZ support, and using CloudWatch to monitor usage and automatically shutting off resources when not in use. Other techniques we'll discuss include taking advantage of tiered storage and caching, offloading content to Amazon CloudFront to reduce back-end load, and getting rid of your back end entirely, by leveraging AWS high-level services. We will also showcase simple tools to help track and manage costs, including the AWS Cost Explorer, Billing Alerts, and Trusted Advisor. This session will be your pocket guide for running cost effectively in the Amazon cloud.
    ARC206 - Architecting Reactive Applications on AWS
    by Revanth Talari - Systems Analyst with USEReady; Atul Shukla - Platform Architect with USEReady
    Application requirements have changed dramatically in recent years, requiring millisecond or even microsecond response times and 100 percent uptime. This change has led to a new wave of "reactive applications" with archite
              A NetflixOSS sidecar in support of non-Java services        
    In working on supporting our next round of IBM Cloud Service Fabric service tenants, we found that the service implementers came from very different backgrounds.  Some were skilled in Java, some in Ruby, and others were C/C++ focused, so their service implementations were just as diverse.  Given the size of the teams behind the services we're on-boarding and the timeframe for going public, recoding all of these services to use the NetflixOSS Java libraries that bring operational excellence (like Archaius, Karyon, Eureka, etc.) seemed pretty unlikely.

    For what it is worth, we faced a similar challenge with earlier services (mostly due to existing C/C++ applications), and we created what was called a "sidecar".  By sidecar, what I mean is a second process on each node/instance that did Cloud Service Fabric operations on behalf of the main process (the side-managed process).  Unfortunately, those sidecars were all one-offs built for their particular service.  In this post, I'll describe a more general sidecar that doesn't force users to create these one-offs.

    Sidenote:  For those not familiar with sidecars, think of the motorcycle sidecar below.  Snoopy would be the main process with Woodstock being the sidecar process.  The main work on the instance would be the motorcycle (say serving your users' REST requests).  The operational control is the sidecar (say serving health checks and management plane requests of the operational platform).


    Before we get started, we should note that there are two main types of sidecars.  There are sidecars that manage durable and/or storage tiers.  These sidecars need to manage things that other sidecars do not (like joining a stateful ring of servers, or joining a set of slaves and discovering masters, or backup and recovery of data).  Some sidecars that exist in this space are Priam (for Cassandra) and Exhibitor (for Zookeeper).  The other type manages stateless mid-tier services like microservices.  An example of this is AirBNB's Synapse and Nerve.  You'll see in the announcement of Synapse and Nerve on AirBNB's blog that they are trying to solve some (but not all) of the issues I will mention in this blog post.

    What are some things that a microservice sidecar could do for a microservice?

    1. Service discovery registration and heartbeat

    This registration with service discovery would have to happen only after the sidecar detects that the side-managed process is ready to receive requests.  This isn't necessarily the same as the instance being "healthy", as an instance might be healthy well before it is ready to handle requests (consider an instance that needs to pre-warm caches, etc.).  Also, all dynamic configuration of this function (where and if to register) should be considered.
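
    As a rough illustration, delayed registration might look like the following, using the Eureka 1.x client APIs (sideManagedProcessIsReady is a hypothetical readiness probe against the side-managed process, not part of any library):

    import com.netflix.appinfo.ApplicationInfoManager;
    import com.netflix.appinfo.InstanceInfo.InstanceStatus;
    import com.netflix.appinfo.MyDataCenterInstanceConfig;
    import com.netflix.discovery.DefaultEurekaClientConfig;
    import com.netflix.discovery.DiscoveryManager;

    public class SidecarRegistration {
        public static void registerWhenReady() throws InterruptedException {
            // Initialize the Eureka client; the instance is not yet marked UP.
            DiscoveryManager.getInstance().initComponent(
                    new MyDataCenterInstanceConfig(), new DefaultEurekaClientConfig());
            // Wait until the side-managed process can actually serve requests
            // (ready is not the same as healthy; caches may still be warming).
            while (!sideManagedProcessIsReady()) {
                Thread.sleep(1000);
            }
            // Only now advertise the instance as UP in service discovery.
            ApplicationInfoManager.getInstance().setInstanceStatus(InstanceStatus.UP);
        }

        private static boolean sideManagedProcessIsReady() {
            return true; // placeholder: e.g. curl the side-managed process's ready URL
        }
    }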

    2.  Health check URL

    Every instance should have a health check URL that can communicate the health of an instance out of band.  The sidecar would need to query the health of the side-managed process and expose this URL on its behalf.  Various systems (like auto scaling groups, front-end load balancers, and service discovery queries) would query this URL and take sick instances out of rotation.
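
    As a minimal sketch, assuming Karyon 1.x's HealthCheckHandler SPI, the delegation could look like this (the probe of the side-managed process is a hypothetical helper):

    import com.netflix.karyon.spi.HealthCheckHandler;

    public class DelegatingHealthCheckHandler implements HealthCheckHandler {
        @Override
        public int getStatus() {
            // Report the side-managed process's health as this instance's health;
            // callers like load balancers and discovery see an HTTP status code.
            return sideManagedProcessHealthy() ? 200 : 500;
        }

        private boolean sideManagedProcessHealthy() {
            return true; // placeholder: e.g. curl the side-managed health URL
        }
    }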

    3.  Service dependency load balancing

    In a NetflixOSS-based microservice, routing can be done intelligently based upon information from service discovery (Eureka) via smart client-side load balancing (Ribbon).  Once you move this function out of the microservice implementation, as AirBNB noted as well, it is likely unnecessary, and in some cases problematic, to move back to centralized load balancing.  Therefore it would be nice if the sidecar would perform load balancing on behalf of the side-managed process.  Note that Zuul (on the instance, in the sidecar) could fill this role in NetflixOSS.  In AirBNB's stack, the combination of service discovery and this item is done through Synapse.  Also, all dynamic configuration of this function (states of routes, timeouts, retry strategy, etc.) should be considered.

    One other area to consider here (especially in the NetflixOSS space) is whether the sidecar should provide advanced devops filters in load balancing that go beyond basic round-robin load balancing.  Netflix has talked about the advantages of Zuul for this in the front/edge tier, but we could consider doing something similar in between microservices.
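
    To make that concrete, here is a minimal sketch of a Zuul 1.x pre filter making a simple devops-style routing decision; the canary header and host name are purely illustrative:

    import com.netflix.zuul.ZuulFilter;
    import com.netflix.zuul.context.RequestContext;
    import java.net.URL;

    public class CanaryRoutingFilter extends ZuulFilter {
        @Override public String filterType() { return "pre"; }
        @Override public int filterOrder() { return 10; }

        @Override public boolean shouldFilter() {
            // Only act on requests explicitly flagged for the canary cluster.
            return RequestContext.getCurrentContext()
                    .getRequest().getHeader("X-Canary") != null;
        }

        @Override public Object run() {
            try {
                // Route this request to the canary instances instead of the default.
                RequestContext.getCurrentContext()
                        .setRouteHost(new URL("http://canary.myservice.internal:8080/"));
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
            return null;
        }
    }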

    4.  Microservice latency/health metrics

    Having operational visibility into the error rates on calls to dependent services, as well as the latency and overall state of dependencies, is important to knowing how to operate the side-managed process.  In NetflixOSS, by using the Hystrix pattern and API, you can get such visibility through the exported Hystrix streams.  Again, Zuul (on the instance, in the sidecar) can provide this functionality.

    5.  Eureka discovery

    We have found service implementations in IBM that already have their own client-side load balancing or cluster technologies.  Also, Netflix has talked about other OSS systems, such as Elasticsearch.  For these systems it would be nice if the sidecar could provide a way to expose Eureka discovery outside of load balancing.  Then the client could ingest the discovery information and use it however it felt necessary.  Also, all dynamic configuration of this function should be considered.
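
    As a sketch of this using the Eureka 1.x DiscoveryClient (the VIP name and output path are illustrative), the sidecar could periodically dump instance lists to a local file for the side-managed process to consume however it likes:

    import com.netflix.appinfo.InstanceInfo;
    import com.netflix.discovery.DiscoveryManager;
    import java.io.PrintWriter;
    import java.util.List;

    public class DiscoveryExporter {
        public static void export() throws Exception {
            // Look up all instances currently registered under a dependency's VIP.
            List<InstanceInfo> instances = DiscoveryManager.getInstance()
                    .getDiscoveryClient()
                    .getInstancesByVipAddress("mydependency", false);
            // Write host:port pairs where a non-Java process can read them.
            try (PrintWriter out = new PrintWriter("/opt/sidecars/mydependency.hosts")) {
                for (InstanceInfo info : instances) {
                    out.println(info.getHostName() + ":" + info.getPort());
                }
            }
        }
    }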

    6.  Dynamic configuration management

    It would be nice if the sidecar could expose dynamic configuration to the side-managed process.  While I have mentioned the need to have the previous sidecar functions dynamically configured, it is important that the side-managed process's configuration be considered as well.  Consider the case where you want the side-managed process to use a common dynamic configuration management system but all it can do is read from property files.  In NetflixOSS this is managed via Archaius, but that requires using the NetflixOSS libraries.
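
    For comparison, this is roughly what native Archaius usage looks like in Java (the property name is hypothetical); the point of the sidecar is to deliver the same no-restart updates to processes that cannot link these libraries:

    import com.netflix.config.DynamicIntProperty;
    import com.netflix.config.DynamicPropertyFactory;

    public class TimeoutConfig {
        // The property object always reflects the latest dynamic value,
        // with no restart required when configuration changes upstream.
        private static final DynamicIntProperty TIMEOUT_MS =
                DynamicPropertyFactory.getInstance()
                        .getIntProperty("myservice.dependency.timeout.ms", 2000);

        public static int currentTimeoutMs() {
            return TIMEOUT_MS.get();
        }
    }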

    7.  Circuit breaking for fault tolerance to dependencies

    It would be nice if the sidecar could provide an approximation of circuit breaking.  I believe this is impossible to do as "cleanly" as using NetflixOSS Hystrix natively (as this wouldn't require the user to write specific business logic to handle failures that reduce calls to the dependency), but it might be nice to have some level of guarantee of fast failure in scenarios using #3.  Also, all dynamic configuration of this function (timeouts, etc.) should be considered.
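
    For reference, the native Hystrix pattern looks roughly like this (the dependency call and fallback are placeholders); a sidecar approximation would have to return a generic failure response rather than this per-call business logic:

    import com.netflix.hystrix.HystrixCommand;
    import com.netflix.hystrix.HystrixCommandGroupKey;

    public class DependencyCommand extends HystrixCommand<String> {
        public DependencyCommand() {
            super(HystrixCommandGroupKey.Factory.asKey("MyDependency"));
        }

        @Override
        protected String run() throws Exception {
            // The actual remote call; errors and timeouts here trip the circuit.
            return callDependency(); // placeholder for the real HTTP/RPC call
        }

        @Override
        protected String getFallback() {
            // Business-specific fast-fail answer used while the circuit is open.
            return "cached-or-default-response";
        }

        private String callDependency() throws Exception {
            return "ok";
        }
    }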

    8.  Application level metrics

    It would be nice if the sidecar could allow the side-managed process to more easily publish application-specific metrics to the metrics pipeline.  While every language likely already has a nice binding to systems like statsd/collectd, it might be worth making the interface to these systems common through the sidecar.  For NetflixOSS, this is done through Servo.
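
    A small sketch of the Servo side of this, with an illustrative metric name; the sidecar could own monitors like this one and increment them on events relayed by the side-managed process:

    import com.netflix.servo.DefaultMonitorRegistry;
    import com.netflix.servo.monitor.Counter;
    import com.netflix.servo.monitor.Monitors;

    public class AppMetrics {
        // Register a named counter with the Servo registry so the metrics
        // pipeline picks it up alongside the sidecar's own metrics.
        private static final Counter EVENTS = Monitors.newCounter("sideManagedEvents");
        static {
            DefaultMonitorRegistry.getInstance().register(EVENTS);
        }

        public static void recordEvent() {
            EVENTS.increment();
        }
    }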

    9. Manual GUI and programmatic control

    We have found the need to sometimes quickly dive into a specific instance with human eyes.  Having a private web-based UI is far easier than loading up ssh.  Also, if you want to script access to the functions and data collected by the sidecar, we would like a REST or even JMX interface to the control offered in the sidecar.

    All that said, I started a quick project last week to create a sidecar that does some of these functions using NetflixOSS so that it integrates cleanly into our existing IBM Cloud Services Fabric environment.  I decided to do it on github, so others can contribute.

    By using Karyon as a base for the sidecar, I was able to get a few of the items on the list automatically (specifically #1, partially #2, and #9).  I started with the most basic sidecar in the trunk project.  Then I added two more things:


    Consul style health checks:


    In work leading up to this, Spencer Gibb pointed me to the agent checks that Consul uses (which they say are based on Nagios).  I built a similar set of checks for my sidecar.  You can see in this Archaius config file how you'd configure them:

    com.ibm.ibmcsf.sidecar.externalhealthcheck.enabled=true
    com.ibm.ibmcsf.sidecar.externalhealthcheck.numchecks=2

    com.ibm.ibmcsf.sidecar.externalhealthcheck.1.id=local-ping-healthcheckurl
    com.ibm.ibmcsf.sidecar.externalhealthcheck.1.description=Runs a script that curls the healthcheck url of the sidemanaged process
    com.ibm.ibmcsf.sidecar.externalhealthcheck.1.interval=10000
    com.ibm.ibmcsf.sidecar.externalhealthcheck.1.script=/opt/sidecars/curllocalhost.sh 8080 /
    com.ibm.ibmcsf.sidecar.externalhealthcheck.1.workingdir=/tmp

    com.ibm.ibmcsf.sidecar.externalhealthcheck.2.id=local-killswitch
    com.ibm.ibmcsf.sidecar.externalhealthcheck.2.description=Runs a script that tests if /opt/sidecarscripts/killswitch.txt exists
    com.ibm.ibmcsf.sidecar.externalhealthcheck.2.interval=30000
    com.ibm.ibmcsf.sidecar.externalhealthcheck.2.script=/opt/sidecars/checkKillswitch.sh

    Specifically, you define a check as an external script that the sidecar executes, and if the script returns a code of 0, the check is marked as healthy (1 = warning, otherwise unhealthy).  If all defined checks come back healthy for more than three iterations, the instance is healthy.  I have coded up some basic shell scripts that we'll likely give to all of our users (like curllocalhost.sh and checkkillswitchtxtfile.sh).  Once I had these checks being executed by the sidecar, it was pretty easy to change the Karyon/Eureka HealthCheckHandler class to query the CheckManager logic I added.
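
    While the actual CheckManager code lives in the project, the core of running one such check is small.  Here is a minimal Java sketch (not the project's code) of executing a configured script and mapping its exit code using the convention above:

    import java.io.File;

    public class ExternalCheck {
        public enum Status { HEALTHY, WARNING, UNHEALTHY }

        // Runs one configured check script and maps its exit code:
        // 0 = healthy, 1 = warning, anything else = unhealthy.
        public static Status run(String scriptWithArgs, String workingDir) {
            try {
                ProcessBuilder pb = new ProcessBuilder(scriptWithArgs.split(" "));
                if (workingDir != null) {
                    pb.directory(new File(workingDir));
                }
                int code = pb.start().waitFor();
                if (code == 0) return Status.HEALTHY;
                if (code == 1) return Status.WARNING;
                return Status.UNHEALTHY;
            } catch (Exception e) {
                // A script that cannot be run at all counts as unhealthy.
                return Status.UNHEALTHY;
            }
        }

        public static void main(String[] args) {
            System.out.println(run("/opt/sidecars/curllocalhost.sh 8080 /", "/tmp"));
        }
    }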


    Integration with Dynamic Configuration Management


    We believe most languages can easily register events based on files changing and can easily read properties files.  Based on this, I added another feature configured via this Archaius config file:

    com.ibm.ibmcsf.sidecar.dynamicpropertywriter.enabled=true
    com.ibm.ibmcsf.sidecar.dynamicpropertywriter.file.template=/opt/sidecars/appspecific.properties.template
    com.ibm.ibmcsf.sidecar.dynamicpropertywriter.file=/opt/sidecars/appspecific.properties

    What this says is that a user of the sidecar puts all of the properties they care about in the .template properties file; then, as configuration is dynamically updated in Archaius, the sidecar sees this and writes out a copy to the main properties file with the current values filled in.
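
    On the consuming side, the side-managed process just watches the rewritten file.  As a hedged illustration (sketched in Java, though the whole point is that a process in any language can do the same), reacting to updates might look like:

    import java.io.FileInputStream;
    import java.nio.file.FileSystems;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardWatchEventKinds;
    import java.nio.file.WatchEvent;
    import java.nio.file.WatchKey;
    import java.nio.file.WatchService;
    import java.util.Properties;

    public class PropertyFileWatcher {
        public static void main(String[] args) throws Exception {
            Path dir = Paths.get("/opt/sidecars");
            Path file = dir.resolve("appspecific.properties");

            WatchService watcher = FileSystems.getDefault().newWatchService();
            dir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);

            while (true) {
                WatchKey key = watcher.take(); // blocks until something changes
                for (WatchEvent<?> event : key.pollEvents()) {
                    if (file.getFileName().equals(event.context())) {
                        // Re-read the file the sidecar just rewrote.
                        Properties props = new Properties();
                        try (FileInputStream in = new FileInputStream(file.toFile())) {
                            props.load(in);
                        }
                        System.out.println("reloaded " + props.size() + " properties");
                    }
                }
                key.reset();
            }
        }
    }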

    With these changes, I think we now have a pretty solid story for #1, #2, #6 and #9.  I'd like to focus next on #3, #4, and #7, adding a Zuul and Hystrix based sidecar process, but I don't have users (yet) pushing for these functions.  Also, I should note that the code is a proof of concept and needs to be hardened, as it was just a side project for me.

    PS.  I do want to make it clear that while this sidecar approach could be used for Java services (as opposed to languages that don't have NetflixOSS bindings), I do not advocate moving these functions external to your Java implementation.  There are places where offering this function in a sidecar isn't as "excellent" operationally and is closer to "good enough".  I'll leave it to the reader to understand these tradeoffs.  However, I hope that work in this microservice sidecar space leads to easier NetflixOSS adoption in non-Java environments.

    PPS.  This sidecar might also be useful in the container space at a host level.  Taking the sidecar and making it work across multiple single-process instances on a host would be an interesting extension of this work.



              How is a multi-host container service different from a multi-host VM service?        
    Warning:  I am writing this blog post without knowing the answer to the question I am asking in the title.  I am writing this post to force myself to articulate a question I've personally been struggling with as we move towards what we all want - containers with standard formats changing how we handle many cases in the cloud.  Also, I know there are folks that have thought about this for FAR longer than myself and I hope they comment or write alternative blogs so we can all learn together.

    That said, I have seen throughout the time leading up to Dockercon and since what seems to be divergent thoughts that when I step back aren't so divergent.  Or maybe they are?  Let's see.

    On one hand, we have existing systems on IaaS clouds using virtual machines that have everything controlled by APIs, with cloud infrastructural services that help build up an IaaS++ environment.  I have specifically avoided using the word PaaS, as I define PaaS as something that tends to abstract IaaS to a point where IaaS concepts can't be directly seen and controlled.  I know that everyone doesn't accept such a definition of PaaS, but I use it as a means to help explain my thoughts (please don't comment exclusively on this definition, as it's not the main point of this blog post).  By IaaS++ I mean an environment that adds to IaaS services like continuous delivery workflows, high availability fault domains/automatic recovery, cross-instance networking with software defined networking security, and operational visibility through monitoring.  And by not calling it PaaS, I suggest that the level of visibility into this environment includes IaaS concepts such as (VM) instances accessed through ssh or other commonly used *nix tools, full TCP network stack access, full OSes with process and file system control, etc.

    On the other hand, we have systems growing around resource management systems and schedulers using "The Datacenter as a Computer" approach that are predominantly tied to containers.  I'll admit that I'm only partially through the book on the subject (now in its 2nd edition).  Some of the open source systems that implement such datacenter-as-a-computer/warehouse-scale machines are YARN (for Hadoop), CoreOS/Fleet, Mesos/Marathon and Google Kubernetes.

    At Dockercon, IBM (and yours truly) demoed a Docker container deployment option for the IBM SoftLayer cloud.  We used our cloud services fabric (partially powered by NetflixOSS technologies) on top of this deployment option as the IaaS++ layer.  Given that IBM SoftLayer and its current API don't support containers as a deployment option, we worked to implement some of the ties to the IaaS technologies as part of the demo, reusing the Docker API.  Specifically, we showcased an autoscaling service for automatic recovery, cross availability zone placement, and SLA based scaling.  Next we used the Docker private registry alongside the Dockerhub public index for image management.  Finally we did specific work to natively integrate the networking from containers into the SoftLayer network.  Doing this networking work was important as it allowed us to leverage existing IaaS provided networking constructs such as load balancers and firewalls.

    Last night I watched the Kubernetes demo at Google I/O by Brendan Burns and Craig McLuckie.  The talk kicks off with an overview of the Google Compute Engine VM optimized for containers and then covers the Kubernetes container cluster management open source project which includes a scheduler for long running processes, a labeling system that is important for operational management, a replication controller to scale and auto recover labeled processes, and a service abstraction across labeled processes.

    I encourage you to watch the two demo videos before proceeding, as I don't want to force you into thinking only from my conclusions.  Ok, so now that you've watched the videos yourself, let me use the two videos to look at use case comparison points (the links below jump to the similar place in each video):

    Fast development and deployment at scale



    Brendan demonstrated rolling updates on the cloud.  In the IBM demo, we showed the same, but as an initial deployment on a laptop.  As you see later in the demo, due to the use of Docker, running on the cloud is exactly the same as on the laptop.  The IBM cloud services fabric devops console, NetflixOSS Asgard, also has the concept of rolling updates as well as the demonstrated initial deployment.  Due to Docker, both demos use essentially the same approach to image creation/baking.

    Automatic recovery


    I like how Brendan showed the failure and recovery through a nice UI, as compared to me watching log files of the health manager.  Other than presentation, the use case and functionality were the same.  The system discovered a failed instance and recovered it.

    Service registration

    Brendan talked about how Kubernetes offers the concept of services based on tagging.  Under the covers this is implemented by a process that runs selects against the tagged containers and updates an etcd service registry.  In the cloud services fabric demo we talked about how this was done with NetflixOSS Eureka in a more intrusive (but perhaps more valuable, app-centric) way.  I have also hinted at how important it is to consider availability in your service discovery system.

    Service discovery and load balancing across service implementations

    Brendan talked about how in Kubernetes this is handled by, currently, a basic round robin load balancer.  Under the covers each Kubernetes node starts this load balancer, and any defined service gets started on the load balancer across the cluster, with information being passed to client containers via two environment variables: one for the address of the Kubernetes local node load balancer, and one for the port assigned to a specific service.  In the cloud services fabric this is handled by Eureka-enabled clients (for example NetflixOSS Ribbon for REST), which do not require a separate load balancer and are more direct, and/or by the similar NetflixOSS Zuul load balancer in cases where the existing clients can't be used.
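
    To illustrate the Kubernetes side, a client container might consume those two variables as sketched below.  The exact variable names for a hypothetical "authservice" service follow the convention described in the talk but are assumptions here:

    public class KubernetesServiceClient {
        public static void main(String[] args) {
            // Hypothetical variable names injected for a service named "authservice".
            String host = System.getenv("AUTHSERVICE_SERVICE_HOST");
            String port = System.getenv("AUTHSERVICE_SERVICE_PORT");
            if (host == null || port == null) {
                System.err.println("service environment variables not injected");
                return;
            }
            // All calls then flow through the node-local round robin load balancer.
            String base = "http://" + host + ":" + port;
            System.out.println("auth service reachable at " + base);
        }
    }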

    FWIW, I haven't seen end to end service registration/discovery/load balancing specifically supported in non-Kubernetes resource managers/schedulers.  I'm sure you could build something similar on top of Mesos/Marathon (or people already have) and CoreOS/etcd, but I think Kubernetes' concepts of labels and services (much like Eureka) are right in starting to integrate the concept of services into the platform, as they are so critical in microservices based devops.

    I could continue to draw comparison points for other IaaS++ features like application centric metrics, container level metrics, dynamic configuration management, other devops workflows, remote logging, service interaction monitoring, etc., but I'll leave that to the reader.  My belief is that many of these concepts will be implemented in both approaches, as they are required to run an operationally competent system.

    Also, I think we need to consider tougher points like how this approach scales (in both demos, under the covers networking was implemented via a subnet per Docker host, which wouldn't necessarily scale well), approach to cross host image propagation (again, both demos used a less than optimal way to push images across every node), and integration with other important IaaS networking concepts (such as external load balancers and firewalls).

    What is different?

    The key difference that I see in these systems is terminology and implementation.

    In the IBM demo, we based the concept of a cluster on what Asgard defines as a cluster.  That cluster definition and state is based on multiple separate, but connected by version naming, auto scaling groups.  It is then the autoscaler that decides placement based on not only "resource availability", but also high availability (spread deployments across distinct failure domains) and locality policies.  Most everyone is familiar with the concept of high availability in these policies in existing IaaS - in SoftLayer we use datacenters or pods; in other clouds the concept is called "availability zones".  Also, in public clouds, the policy for co-location is usually called "placement groups".

    Marathon (a long running scheduler on top of the Mesos resource manager) offers these same concepts through the concept of constraints.  Kubernetes doesn't seem, today, to offer these concepts, likely due to its initial focus on smaller scenarios.  Given its roots in Google Omega/Borg, I'm sure there is no reason why Kubernetes couldn't eventually expose the same policy concepts within its replication controller.  In fact, at the end of the Kubernetes talk, there is a question from the crowd on how to make Kubernetes scale across multiple Kubernetes configurations, which could have been asked from a more high-availability angle.

    So to me, the concept of an autoscaler and its underlying implementation seems very similar to the concept of a resource manager and scheduler.  I wonder if public cloud auto scalers were open sourced if they would be called resource managers and long running schedulers?

    The reason I ask all of this is that, as we move forward with containers, I think we might be tempted to build another cloud within our existing clouds.  I also think the Mesos and Kubernetes technologies will have people building clouds within clouds until cloud providers natively support containers as a deployment option.  At that point, will we have duplication of resource management and scheduling if we don't combine the concepts?  Also, what will people do to integrate these new container deployments with other IaaS features like load balancers, security groups, etc.?

    I think others are asking the same question as well.  As shown in the IBM Cloud demo, we are thinking through this right now.  We have also experimented internally with OpenStack deployments of Docker containers as the IaaS layer under a similar IaaS++ layer.  The experiments led to a similar cloud container IaaS deployment option leveraging existing OpenStack approaches for resource management and scheduling, as compared to creating a new layer on top of OpenStack.  Also, there is a public cloud that likely considered this a long time ago - Joyent.  Joyent has had SmartOS zones, which are similar to containers, under its IaaS API for a long time without the need to expose the formal concepts of resource management and scheduling to its users.  Also, right at the end of the Kubernetes demo, someone in the crowd asks the same question.  I took this question to ask when the compute engine will support container deployment this way without having a user set up their own private set of Kubernetes systems (and possibly without having to consider resource management/scheduling with anything more than policy).

    As I said in the intro, I'm still learning here.  What are your thoughts?
              Docker SoftLayer Cloud Talk at Dockercon 2014        

    The overall concept



    Today at Dockercon, Jerry Cuomo went over the concept of borderless cloud and how it relates to IBM's strategy.  He talked about how Docker is one of the erasers of the lines between various clouds with regard to openness.  He then talked about how, regardless of vendor, deployment option and location, we need to focus on the following things:

    Fast


    Especially in the age of devops and continuous delivery, lack of speed is a killer.  Even worse, and actually unforgivable, manual steps that introduce error are not acceptable any longer.  Docker helps with this by having layered file systems that allow just the updates to be pushed and loaded.  Also, with its process model it starts as fast as you'd expect your applications to start.  Finally, Docker helps by having a transparent (all the way to source) description model for images, which guarantees you run what you coded, not some mismatch between dev and ops.

    Optimized


    Optimized means not only price/performance but also optimization of the location of workloads.  In the price/performance area, IBM technologies (like our IBM Java read-only memory class sharing) can provide for much faster application startup and less memory when similar applications are run on a single node.  Also, getting the hypervisor out of the way can help I/O performance significantly (still a large challenge in VM based approaches), which will help data oriented applications like Hadoop and databases.

    Open


    Openness of cloud is very important to IBM, just like it was for Java and Unix/Linux.  Docker can provide the same write once, run anywhere experience for cloud workloads.  It is interesting how this openness combined with the fast/small also allows for advances in devops not possible before with VMs.  It is now possible to run production-like workload configurations on premise (and on developers' laptops) in almost the exact same way as deployed in production, due to the reduction in overhead vs. running a full virtual machine.

    Responsible


    Moving fast isn't enough.  You have to move fast with responsibility.  Specifically you need to make sure you don't ignore security, high availability, and operational visibility when moving so fast.  With the automated and repeatable deployment possible with Docker (and related scheduling systems), combined with micro-service application design, high availability and automatic recovery become easier.  Also, enterprise deployments of Docker will start to add to the security and operational visibility capabilities.

    The demo - SoftLayer cloud running Docker



    After Jerry covered these areas, I followed up with a live demo.

    On Monday, I showed how the technology we've been building to host IBM public cloud services, the Cloud Services Fabric (CSF), works on top of Docker.  We showed how the kernel of the CSF, based in part on NetflixOSS and powered by IBM technologies, was fully open source and easily run on a developer's laptop.  I talked about how this can even allow developers to Chaos Gorilla test their micro-service implementations.

    I showed how building the sample application and its microservice was extremely fast.  Building an update to the war file took more time than containerizing the same war for deployment.  Both were done in seconds.  While we haven't done it yet, I could imagine eventually optimizing this into container generation as part of an IDE auto-compile.


    In the demo today, I followed this up by showcasing how we could take the exact same environment and marry it with the IBM SoftLayer public cloud.  I took the exact same sample application container image and, instead of loading it locally, pushed it through a Docker registry to the SoftLayer cloud.  The power of this portability (and openness) is very valuable to our teams, as it will allow local testing to mirror more closely the production deployment.

    Finally, I demonstrated how adding SoftLayer to Docker added to the operational excellence.  Specifically I showed how, once we told Docker to use a non-default bridge (that was assigned a SoftLayer portable subnet attached to the host private interface), I could have Docker assign IPs out of a routable subnet within the SoftLayer network.  This networking configuration means that the containers spun up would work in the same networks as SoftLayer bare metal and virtual machine instances transparently around the global SoftLayer cloud.  Also, advanced SoftLayer networking features such as load balancers and firewalls would work just as well with the containers.  I also talked about how we deployed this across multiple hosts in multiple datacenters (availability zones), further adding to the high availability options for deployment.  To prove this, I unleashed targeted Chaos Army-like testing.  I showed how I could emulate a failure of a container (by doing a docker rm -f) and how the overall CSF system would auto recover by replacing the container with a new container.

    Some links



    You can see the slides from Jerry's talk on slideshare.

    The video:

    Direct Link (HD Version)

              Open Source Release of IBM Acme Air / NetflixOSS on Docker        
    In a previous blog, I discussed the Docker "local" (on laptop) IBM Cloud Services Fabric powered in part by NetflixOSS prototype.

    One big question on twitter and my blog went unanswered.  The question was ... How can someone else run this environment?  In the previous blog post, I mentioned how there was no plan to make key components open source at that point in time.

    Today, I am pleased to announce that all of the components to build this environment are now open source and anyone can reproduce this run of IBM Acme Air / NetflixOSS on Docker.  All it takes is about an hour, a decent internet connection, and a laptop with VirtualBox (or boot2docker, or vagrant) installed.

    Specifically, the aspects that we have added to open source are:

    1. Microscaler - a small scale instance health manager and auto recovery/scaling agent that works against the Docker remote API.  Specifically we have released the Microscaler service (that implements a REST service), a CLI to make calling Microscaler easier, and a Microscaler agent that is designed to manage clusters of Docker nodes.
    2. The Docker port of the NetflixOSS Asgard devops console.  Specifically we ported Asgard to work against the Docker API for managing IaaS objects such as images and instances as well as the Microscaler API for clusters.  The port handles some of the most basic CRUD operations in Asgard.  Some scenarios (like canary testing, red/black deployment) are yet to be fully implemented.
    3. The Dockerfiles and build scripts that enable anyone to build all of the containers required to run this environment.  The Dockerfiles build containers of the Microscaler, the NetflixOSS infrastructural servers (Asgard, Eureka and Zuul), as well as the full microservices sample application Acme Air (web app, microservice and cassandra data tier).  The build scripts help you build the containers and give easy commands to do the end to end deployment and common administration tasks.
    If you want to understand what this runtime showcases, please refer to the previous blog entry.  There is a video that shows the Acme Air application and basic chaos testing that proves the operational excellence of the environment.

    Interesting compare:


    It is interesting to note that the scope of what we released (the core of the NetflixOSS cloud platform + the Acme Air cloud sample/benchmark application) is similar to what we previously released back at the Netflix Cloud Prize in the form of Amazon EC2 AMI's.  I think it is interesting to consider the difference when using Docker in this release as our portable image format.  Using Docker, I was able to easily release the automation of building the images (Dockerfiles) in source form, which makes the images far more transparent than an AMI in the Amazon marketplace.  Also, the containers built can be deployed anywhere that Docker containers can be hosted.  Therefore, this project is going to be valuable to far more than a single cloud provider -- likely more on that later as Dockercon 2014 happens next week.

    If you want to learn how to run this yourself, check out the following video.  It shows building the containers for open source, starting an initial minimal environment and starting to operate the environment.  After that go back to the previous blog post and see how to perform advanced operations.


    Direct Link (HD Version)






              Cloud Services Fabric (and NetflixOSS) on Docker        
    At IBM Impact 2014 last week we showed the following demo:

    Direct Link (HD Version)


    The demo showed an end to end NetflixOSS based environment running on Docker on a laptop.  The components running in Docker containers shown included:
    1. Acme Air Web Application - The front end web application that is NetflixOSS enabled.  In fact, this was run as a set of containers within an auto scaling group.  The web application looks up (in Eureka) ephemeral instances of the auth service micro-service and performs on-instance load balancing via Netflix Ribbon.
    2. Acme Air Auth Service - The back end micro-service application that is NetflixOSS enabled.  In fact, this was run as a set of containers within an auto scaling group.
    3. Cassandra - This was the Acme Air Netflix port that runs against Cassandra.  We didn't do much with the data store in this demo, other than making it into a container.
    4. Eureka - The NetflixOSS open source service discovery server.  The ephemeral instances of both the web application and auth service automatically register with this Eureka service.
    5. Zuul - The NetflixOSS front end load balancer.  This load balancer looks up (in Eureka) ephemeral instances of the front end web application instances to route all incoming traffic across the rest of the topology.
    6. Asgard - The NetflixOSS devops console, which allows an application or micro-service implementer to configure versioned clusters of instances.  Asgard was ported to talk to the Docker remote API as well as the Auto scaler and recovery service API.
    7. Auto scaler and recovery service.  Each of the instances ran an agent that communicates via heartbeats to this service.  Asgard is responsible for calling API's on this Auto scaler to create clusters.  The auto scaler then called Docker API's to create instances of the correct cluster size.  Then if any instance died (stopped heartbeating), the auto scaler would create a replacement instance.  Finally, we went as far as implementing the idea of datacenters (or availability zones) when launching instances by tagging this information in a "user-data" environment variable (run -e) that had an "az_name" field.
    You can see the actual setup in the following slides:


    Docker Demo IBM Impact 2014 from aspyker

    Once we had this setup, we can locally test "operational" scenarios on Docker including the following scenarios:
    1. Elastic scalability.  We can easily test if our services can scale out and automatically be discovered by the rest of the environment and application.
    2. Chaos Monkey.  As shown in the demo, we can test if killing single instances impacted overall system availability and if the system auto recovered a replacement instance.
    3. Chaos Gorilla.  Given we have tagged the instances with their artificial data center/availability zone, we can kill all instances within 1/3 of the deployment emulating a datacenter going away.  We showed this in the public cloud SoftLayer back at dev@Pulse.
    4. Split Brain Monkey.  We can use the same datacenter/availability tagging to isolate instances via iptables based firewalling (similar to Jepsen).
    We want to use this setup to a) help our Cloud Service Fabric users understand the Netflix based environment more quickly, b) allow our users to do simple localized "operational" tests as listed above before moving to the cloud, and c) use this in our continuous integration/delivery pipelines to do mock testing on a closer-to-production environment than is possible on bare metal or memory hungry VM based setups.  More strategically, this work shows that if clouds supported containers and the Docker API, we could move easily between a NetflixOSS powered virtual machine and container based approach.

    Some details of the implementation:

    The Open Source

    Updated 2014/06/09 - This project is now completely open source.  For more details see the following blog entry.

    The Auto Scaler and agent

    The auto scaler and on instance agents talking to the auto scaler being used here are working prototypes from IBM research.  Right now we do not have plans to open source this auto scaler which makes open sourcing the entire solution impossible.  The work to implement an auto scaler is non-trivial and was a large piece of work.

    The Asgard Port

    In the past, we had already ported Asgard to talk to IBM's cloud (SoftLayer) and its auto scaler (RightScale).  We extended this porting work to instead talk to our Auto scaler and Docker's remote API.  The work was pretty similar and therefore easily achieved in a week or so of work.

    The Dockerfiles and their containers

    Other than the aforementioned auto scaler and our Asgard port, we were able to use the latest CloudBees binary releases of all of the NetflixOSS technologies and Acme Air.  If we could get the auto scaler and Asgard port moved to public open source, anyone in the world could replicate this demo themselves easily.  We have a script to compile all of our Dockerfiles (15 in all, including some base images) and it takes about 15 minutes on a decent Macbook.  This time is spent mostly in download time and compile steps for our autoscaler and agent.

    Creation of these Dockerfiles took about a week to get the basic functionality.  Making them work with the autoscaler and required agents took a bit longer.

    We chose to run our containers as "fuller" OS's vs. single process.  On each node we ran the main process for the node, an ssh daemon (to allow more IaaS like access to the filesystem) and the auto scaling agent.  We used supervisord to allow for easy management of these processes inside of Ubuntu on Docker.

    The Network

    We used the Eureka based service location throughout with no changes to the Eureka registration client.  In order to make this easy for humans (hostnames vs. IP's), we used skydock and skydns to give each tier of the application its own domain name, using the --dns and --name options when running containers to associate incremental names for each cluster.  For example, when starting two cassandra nodes, they would show up in skydns as cass1.cassandra.dev.docker and cass2.cassandra.dev.docker.  We also used routing and bridging to make the entire environment easy to access from the guest laptop.

    The Speed

    The fact that I can start this all on a single laptop isn't the only impressive aspect.  I ran this with VirtualBox set to three gigs of memory for the boot2docker VM.  Running the demo spins the cooling fan, as this required a good bit of CPU, but in terms of memory it was far lighter than I've seen in other environments.
    The really impressive aspect is that I can, in 90 seconds (including a 20 second sleep waiting for Cassandra to peer), restart an entire environment including two auto scaling clusters of two nodes each and the other five infrastructural services.  This includes all the staggered starts required: starting the database, loading it with data, starting service discovery and dns, starting an autoscaler, defining the clusters to the auto scaler, and the final step of them all launching and interconnecting.

    Setting this up in a traditional cloud would have taken at least 30 minutes based on my previous experience.

    I hope this explanation will be of enough interest to you to consider future collaboration.  I also hope to get up to Dockercon in June in case you also want to talk about this in person.

    The Team

    I wanted to give credit where credit is due.  The team of folks working on this included folks across IBM developer and research including Takahiro Inaba, Paolo Dettori, and Seelam Seetharami.



              Chaos Gorilla High Availability Tests on IBM Cloud with NetflixOSS        
    At the IBM cloud conference this week, IBM Pulse, I presented on how we are using the combination of IBM Cloud (SoftLayer), IBM middleware, and the Netflix Open Source cloud platform technologies to operationalize IBM's own public cloud services.  I focused on high availability, automatic recovery, elastic and web scale, and continuous delivery devops.  At Dev@Pulse, I gave a very quick overview of the entire platform.  You can grab the charts on slideshare.

    The coolest part of the talk was the fact that it included a live demo of Chaos Gorilla testing.  Chaos Gorilla is a type of chaos testing that emulates an entire datacenter (or availability zone) going down.  While our deployment of our test application (Acme Air) and runtime technologies was already set up to survive such a test, it was our first time doing such a test.  It was very interesting to see how our system reacted (the workload itself and the alerting/monitoring systems).  Knowing how this type of failure manifests will help us as we roll this platform and other IBM hosted cloud services into production.  The goal of doing such Chaos testing is to prove to yourself that you can survive failure before the failure occurs.  However, knowing how the system operates in this degraded state of capacity is truly valuable as well.

    To be fair, when compared to Netflix (who pioneered Chaos Gorilla and other more complicated testing), so far this was only killing all instances of the mid tier services of Acme Air.  It was not killing the data tier, which would have more interesting stateful implications.  With a properly partitioned and memory replicated data tier that includes failure recovery automation, I believe the data tier would also have survived such an attack, but that is work remaining within each of our services today and will be the focus of follow-on testing.

    Also, as noted in a quick review by Christos Kalantzis from Netflix, this was more targeted destruction.  The true Netflix Chaos Gorilla is automated such that it randomly decides what datacenter to kill.  Until we automate Chaos Gorilla testing, we had to pick a specific datacenter to kill.  The application and deployment approach demonstrated is architected in a way that should have worked for any datacenter going down.  Dallas 05 was chosen arbitrarily in targeted testing until we have more advanced automation.

    Finally, we need to take this to the next level beyond basic automation.  Failing an entire datacenter is impressive, but it is mostly a "clean failure".  By clean I mean the datacenter availability goes from 100% available to 0% available.  There is more interesting Netflix chaos testing, like Split Brain Monkey and Latency Monkey, that presents cases of availability that are worse than perfect systems but not as clean as gone (0%).  These are also places where we want to continue to test our systems going forward.  You can read more about the entire suite of Chaos testing on the Netflix techblog.

    Take a look at the following video, which is a recording of the Chaos Gorilla testing.

    Direct Link (HD Version)

              Devops Online Training course        
    This Devops Online Training course includes topics for registering sources and targets, creating and working with jobs, and working with transformations.  About Skillbricks Online Trainings: Skillbricks is one of the best institutes providing quality training through an e-learning process.  We also provide corporate training if a group of people is interested in the same technology, as well as support for client interviews and resume preparation.  Contact us for detailed course content on Devops Online training.  Contact: +1-510 509 7542 (USA), +91-9030596677 (India).  Email: info@skillbricks(dot)com.  Web URL: http://skillbricks.com/devops-online-training.html
              Episode 60. All your Containers Are Belong to Us (An intro to Docker)        

    So you have heard about it, and probably run into it already.  Docker is a super cool tech that lets us create, manage, and deploy applications (it is really what would come out if Devs and Ops decided to have a kid).  Come hear how you too can master the art of Docker, and more importantly why it is so "accepted" and revered.



    A Big Thanks to LaunchDarkly for sponsoring our podcast!  Feature flagging is easy; feature flag management is hard.  What LaunchDarkly has done is essentially take a system like Google or Facebook has made in-house and bring it to the masses.  With features like percentage rollouts, audit logging, and flag statuses, teams have complete control over features at scale.  When you effectively separate business logic from code, you can build better software, faster, without the risk.

    Don't forget to SUBSCRIBE to our cool new NewsCast! Java Off Heap



    Do you like the episodes? Want more? Help us out! Buy us a beer!


              DEVOPS ONLINE TRAINING IN UK        
    DEVOPS ONLINE TRAINING IN UK.  JGTHUB.COM offers the best DEVOPS online training by real time experts, who will train and guide you with live projects.  We will help you achieve your dream with our DEVOPS workshop program.  Learn from us to become an expert in the DEVOPS module.  Be an early player and join us.  We provide Online Trainings in India, USA, United Kingdom, Canada, Australia, etc.  JGTHUB.COM provides the most comprehensive training program classes by real time consultants.  100% Job Oriented.  Register Now!!  For more details attend a free online DEMO.  Call for Free Demo: + (91) 9293949797, + (91) 9246449191, USA: +1-678-389-6789.  Email: info@jgthub.com, Web: www.jgthub.com
              F5 takes stock of application usage within enterprises        
    Secuobs.com: 2016-04-05 18:23:11 - Global Security Mag Online - F5 Networks unveils the results of its second annual State of Application Delivery customer survey.  It incorporates responses from more than 3,000 customers worldwide, including 980 in EMEA.  This year the report details how users keep their applications running, how they secure data and users, and how hybrid cloud, SDN (software-defined networking) and DevOps have changed IT.  Key points - Investigations
              DevOps Engineer (m/w) - AKDB - München         
    We develop software that serves people.  Our customers in the public sector use our products, for example, to issue ID cards, driver's licenses, or marriage certificates.  We make sure that salaries and child benefits are paid out.  We offer software solutions, services, and consulting for our customers and support administrations in being service-oriented and citizen-focused.  Do your part and join us!  As part of the municipal...
              Best Online Training on Dot Net, MVC, Sharepoint, Job Support        
    Greetings!  Value Online Training Academy provides the best corporate trainings, online trainings, online assignment help, online project support, and online job support on all IT technologies.  Courses: .Net & Advanced .Net; Java, J2EE (Spring, EJB, Hibernate, etc.); Oracle, OBIEE, Oracle DBA, Oracle Apps Technical, Financials; SQL Server, MS BI, SQL DBA; Tableau; testing tools (Selenium, LoadRunner, QTP); Agile, Scrum; SAP (all modules); SharePoint; Informatica, Cognos, DataStage, Teradata; PMP; iPhone, Android; Hadoop; WAD (HTML, XML, JavaScript, jQuery, Angular JS, Ext JS, React JS); MongoDB, Bootstrap; SAS; DB2, Netezza; TFS, Build Release; DevOps (Jenkins, GitHub, Chef, Puppet, Nagios, Jira); Cloud Computing; Salesforce; and more, from basics to the latest high end courses.  We run customized batches for individuals, students, working professionals & corporate (onsite) learners from India, USA, Canada, Australia, UK, France, Germany, Singapore, Malaysia, Saudi Arabia as per their suitable time zones.  All courses are taught by well qualified, certified & real time experienced professionals.  New batches start every week.  Exclusive training for OPTs / placement consultants.  We also provide technical support on resume preparation, interviews & post training guidance.  Interested participants please reach us at 91-8106914377 / 1 732-788-3636, Skype Id: valueonline99, Mail id: valueonlinetraining@gmail.com, WhatsApp: 91-8106914377.  For more details please visit www.valueonlinetraining.com
              JFrog Acquires CloudMunch        
    JFrog has acquired CloudMunch, a universal DevOps Intelligence platform, to expand its product offerings for developers.  This is the third strategic acquisition by JFrog in eight months, following Dimon, experts in CI/CD, and Conan, the fast-gr ...
              BMC Control-M Workbench Enables a Jobs-as-Code Approach for Application Developers        
    As part of its strategy to empower developers and continue to drive DevOps innovation in multi-cloud environments, BMC, a global leader in IT solutions for the digital enterprise, today introduced Control-M Workbench, a no-cost, self-service, sta ...
              What Are Other I&O Folks Asking Us        
    Do you wonder what other organizations are working on?  Do you feel challenged around topics such as DevOps, hybrid cloud, infrastructure transformations and how to enable your workforce?  I took a look at