iPhone 8: An Insane Amount of Effort Went into This Dummy Video ...
An insane amount of effort went into this iPhone 8 video, so we don't want to withhold it from you. The well-known leaker Steve Hemmerstoffer, alias OnLeaks, had a CNC-milled dummy produced from CAD files leaked out of a factory, which should give a very realistic impression of the production model.

          Comment on Google On The Verge Of A Major Quantum Computing Breakthrough (GOOG) by John   
Many of the problems this article is claiming quantum computing can solve are problems of will, of the body politic. We can solve them now. People just have to *want* to. Regular computers were going to make life Jim dandy half a century ago too, remember?
          Paul Messina: Race to develop the first exascale computer   

The more supercomputing capacity the world has, the more it seems to need. Now the Energy Department has awarded contracts to six companies in a push to develop the first exascale computer, a machine capable of performing a quintillion calculations per second. Program director Paul Messina joined Federal Drive with Tom Temin to discuss the ins and outs of the project.
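For scale (a quick gloss on the arithmetic, not from the interview): a quintillion is $10^{18}$, so

$$1~\text{exaFLOPS} = 10^{18}~\text{floating-point operations per second} = 1{,}000~\text{petaFLOPS},$$

roughly a thousand times the throughput of the first petascale machines that appeared in 2008.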

The post Paul Messina: Race to develop the first exascale computer appeared first on FederalNewsRadio.com.


          Non-Boring #Tech4Good Meetings for Nonprofits   
NetSquared organizers bring together the nonprofit technology community for face-to-face meetings ... but we all know that meetings are boring! Admit it, you sometimes dread going to those all-staff assemblies. Luckily for you, our NetSquared leaders are super creative when planning their #Tech4Good events. "Meetings" come in diverse and innovative formats like Mississauga, Canada's Geek Talk — Coffee and Convo; Pangani, Tanzania's Social Media Surgery; and Birmingham, United Kingdom's Summer Tech for Good Social in a local pub. Join us at your local group. It will be fun — we promise!

Find your closest NetSquared group.

Upcoming Tech for Good Events

This roundup of face-to-face nonprofit tech events includes meetups from NetSquared, NTEN's Tech Clubs, and other awesome organizations. If you're holding monthly events gathering the #nptech community, let me know, and I'll include you in the next community calendar. Or, apply today to start your own NetSquared group.

North America

Monday, July 3, 2017 - Mississauga, Ontario: Geek Talk — Coffee and Convo
Wednesday, July 5, 2017 - San Francisco, California: Code for America Civic Hack Night
Thursday, July 6, 2017 - San Francisco, California: Tech for Good Monthly Mixer
Friday, July 7, 2017 - Saint Paul, Minnesota: Tips and Tools to Doll Up Your Data | Minnesota Council of Nonprofits (Free)
Tuesday, July 11, 2017 - Naples, Florida: How to Use Technology to Communicate and Manage Volunteers; Vancouver, British Columbia: Full Spectrum Civic Engagement; Columbus, Ohio: Nonprofit IT Forum; Boston, Massachusetts: Tech Networks of Boston Roundtable: Nonprofit Organizations, Civic Data, and Civic Faith; Phoenix, Arizona: QuickBooks Made Easy
Wednesday, July 12, 2017 - Los Angeles, California: Summer Social; Phoenix, Arizona: Data Management: What Nonprofits Need to Know; San Francisco, California: Code for America Civic Hack Night
Monday, July 17, 2017 - Kitchener, Ontario: Mail Management
Tuesday, July 18, 2017 - Greensburg, Pennsylvania: Bagels and Bytes — Westmoreland; Marietta, Georgia: Easy SEO Fixes for Your Nonprofit; Jasper, Indiana: Social Media
Wednesday, July 19, 2017 - San Francisco, California: Code for America Civic Hack Night; Portland, Oregon: QuickBooks Made Easy
Thursday, July 20, 2017 - Seattle, Washington: QuickBooks Made Easy
Friday, July 21, 2017 - San Francisco, California: Mobile Apps for Change Demo Day at the Salvation Army
Monday, July 24, 2017 - Nanaimo, British Columbia: "Free Money" (Microsoft Volume Licensing and Google for Nonprofits)
Tuesday, July 25, 2017 - Buffalo, New York: Why Nonprofits Should Use TechSoup and NetSquared; Houston, Texas: Net2Houston Refresh!
Wednesday, July 26, 2017 - San Francisco, California: Code for America Civic Hack Night
Friday, July 28, 2017 - Seattle, Washington: Roundtable for New Nonprofit Executives
Tuesday, August 1, 2017 - Naples, Florida: Tech4Good SWFL Meeting; Pittsburgh, Pennsylvania: Bagels and Bytes — Allegheny
Wednesday, August 2, 2017 - San Francisco, California: Code for America Civic Hack Night; Phoenix, Arizona: Defining and Targeting Your Audience: Marketing for Nonprofits
Thursday, August 3, 2017 - Cleveland, Ohio: How to Remarket to Website Visitors via Facebook and Twitter
Monday, August 7, 2017 - Mississauga, Ontario: Geek Talk — Coffee and Convo
Tuesday, August 8, 2017 - Columbus, Ohio: Nonprofit IT Forum; Ottawa, Ontario: Review Progress on Data Analysis Projects
Wednesday, August 9, 2017 - San Francisco, California: Code for America Civic Hack Night; Los Angeles, California: Web Accessibility: Designing Inclusive User Experiences
Friday, August 11, 2017 - Saint Paul, Minnesota: Optimizing Your Communications for Mobile | Minnesota Council of Nonprofits (Free)
Wednesday, August 16, 2017 - San Francisco, California: Code for America Civic Hack Night; Research Triangle Park, North Carolina: The Internet of Things: You Only Live Twice?
Tuesday, August 22, 2017 - Houston, Texas: Net2Houston Refresh!
Wednesday, August 23, 2017 - San Francisco, California: Code for America Civic Hack Night

Asia and Pacific Rim

Saturday, July 15, 2017 - Jakarta, Indonesia: Strategy of Data Collection for Nonprofits
Tuesday, August 15, 2017 - Jakarta, Indonesia: YouTube for Nonprofits

Africa and Middle East

Saturday, July 1, 2017 - Bunda, Tanzania: Microsoft Cloud Computing
Monday, July 3, 2017 - Beirut, Lebanon: Lebanon's Digital Big Bang — An AltCity Info Session
Friday, July 7, 2017 - Mukono, Uganda: Second Term 2017 Solar Mobile Computer Training Meetup for Kibiribiri Primary School
Saturday, July 8, 2017 - Bunda, Tanzania: Microsoft Cloud Computing
Wednesday, July 12, 2017 - Bamenda, Cameroon: How to Create Digital Stories
Friday, July 14, 2017 - Mukono, Uganda: Second Term 2017 Solar Mobile Computer Training Meetups for Kibiribiri Primary School and Saint John Kaama Primary
Saturday, July 15, 2017 - Bunda, Tanzania: Microsoft Cloud Computing
Friday, July 21, 2017 - Mukono, Uganda: Second Term 2017 Solar Mobile Computer Training Meetups for Kibiribiri Primary School and Saint John Kaama Primary
Saturday, July 22, 2017 - Bunda, Tanzania: Microsoft Cloud Computing
Friday, July 28, 2017 - Port Harcourt, Nigeria: Creating Apps and Other Tech; Mukono, Uganda: Second Term 2017 Solar Mobile Computer Training Meetup for Saint John Kaama Primary
Saturday, July 29, 2017 - Bunda, Tanzania: Microsoft Cloud Computing; Pangani, Tanzania: Social Media Surgery: WhatsApp for Farmers and Livestock Keepers
Sunday, July 30, 2017 - Ouagadougou, Burkina Faso: Monthly Meeting of Local Members
Friday, August 4, 2017 - Mukono, Uganda: Second Term 2017 Solar Mobile Computer Training Meetup for Kibiribiri Primary School
Saturday, August 5, 2017 - Bunda, Tanzania: Microsoft Cloud Computing
Wednesday, August 9, 2017 - Bamenda, Cameroon: How to Create Digital Stories
Friday, August 11, 2017 - Mukono, Uganda: Second Term 2017 Solar Mobile Computer Training Meetups for Kibiribiri Primary School and Saint John Kaama Primary
Saturday, August 12, 2017 - Bunda, Tanzania: Microsoft Cloud Computing
Friday, August 18, 2017 - Mukono, Uganda: Second Term 2017 Solar Mobile Computer Training Meetups for Kibiribiri Primary School and Saint John Kaama Primary
Saturday, August 19, 2017 - Bunda, Tanzania: Microsoft Cloud Computing
Friday, August 25, 2017 - Mukono, Uganda: Second Term 2017 Solar Mobile Computer Training Meetups for Kibiribiri Primary School and Saint John Kaama Primary
Saturday, August 26, 2017 - Ouagadougou, Burkina Faso: Monthly Meeting of Local Members
Sunday, August 27, 2017 - Pangani, Tanzania: Social Media Surgery: Instagram for Farmers and Livestock Keepers

Europe and United Kingdom

Saturday, July 1, 2017 - Saint-Étienne, France: Rencontres Mondiales du Logiciel Libre 2017
Monday, July 3, 2017 - Edinburgh, United Kingdom: One Digital Meetup Leith
Tuesday, July 4, 2017 - Saint-Étienne, France: Rencontres Professionnelles du Logiciel Libre
Monday, July 10, 2017 - Birmingham, United Kingdom: Data Analysis for Nonprofits
Tuesday, July 11, 2017 - West Bridgford, United Kingdom: User Research and Service Design — Lunch and Learn for Nottinghamshire County Council Staff
Wednesday, July 12, 2017 - Cambridge, United Kingdom: Social Media Surgery — Hands-on Help with Social Media; Tech for Good — Law and Justice
Tuesday, July 18, 2017 - Bath, United Kingdom: Design for All — Technology for Everyone, Accessibility, and User Experience
Thursday, July 20, 2017 - Milngavie, United Kingdom: One Digital Meetup Milngavie
Tuesday, July 25, 2017 - Renens, Switzerland: OpenLab: Visite du Fablab de Renens; Dublin, Ireland: Design Thinking For Good: IBM Health Corps
Tuesday, August 1, 2017 - West Bridgford, United Kingdom: Organisation Design — Lunch and Learn for Nottinghamshire County Council Staff
Wednesday, August 9, 2017 - Cambridge, United Kingdom: Social Media Surgery — Hands-on Help with Social Media
Monday, August 14, 2017 - Birmingham, United Kingdom: Summer Tech for Good Social
Tuesday, August 29, 2017 - Paudex, Switzerland: RdV4-0.ch: 2. Objets Connectés — IoT; Renens, Switzerland: OpenLab: Visite du Fablab de Renens
Thursday, August 31, 2017 - Edinburgh, United Kingdom: One Digital Meetup Edinburgh

Image: Michele Mateus / CC BY. Siobhan Aspinall with Umbrella at The Digital Nonprofit 201: Elijah van der Giessen via Michele Mateus / CC BY-NC 2.0
          Nallatech, A Molex Company, Joins Dell EMC Technology Partner Program   

LISLE, Ill., June 30, 2017 – Nallatech, a Molex company, recently announced its official membership in the multi-tier Dell EMC Technology Partner Program that includes ISVs, IHVs and Solution Providers.  Nallatech provides hardware, software and design services to enable high performance computing (HPC), network processing, and real-time embedded computing in datacenters.  Dell EMC has approved […]

The post Nallatech, A Molex Company, Joins Dell EMC Technology Partner Program appeared first on HPCwire.


          AI End Game: The Automation of All Work   

Last week we reported from ISC on an emerging type of high performance system architecture that integrates HPC and HPA (High Performance Analytics) and incorporates, at its center, exabyte-scale memory capacity, surrounded by a variety of accelerated processors. Until the arrival of quantum computing or other new computing paradigm, this is the architecture that could […]

The post AI End Game: The Automation of All Work appeared first on HPCwire.


          UCAR Deploys ADVA FSP 3000 CloudConnect in Supercomputing Network   

BOULDER, Colo., June 29, 2017 — ADVA Optical Networking announced today that the University Corporation for Atmospheric Research (UCAR) has deployed its FSP 3000 CloudConnect data center interconnect (DCI) solution for ultra-high capacity connectivity to the Cheyenne supercomputer. The DCI technology is now being used to transport vital scientific data over two 200Gbit/s 16QAM connections between the […]

The post UCAR Deploys ADVA FSP 3000 CloudConnect in Supercomputing Network appeared first on HPCwire.


          MareNostrum 4 Begins Operation   

BARCELONA, June 29 — The MareNostrum supercomputer is beginning operation and will start executing applications for scientific research. MareNostrum 4, hosted by Barcelona Supercomputing Center, is entirely aimed at generating scientific knowledge and its computer architecture has been called ‘the most diverse and interesting in the world’ by international experts. The Spanish Ministry of Economy, […]

The post MareNostrum 4 Begins Operation appeared first on HPCwire.


          InfiniBand Continues to Lead TOP500 as Interconnect of Choice for HPC   

BEAVERTON, Ore., June 28, 2017 – The InfiniBand Trade Association (IBTA), a global organization dedicated to maintaining and furthering the InfiniBand specification, today shared the latest TOP500 List results, which reveal that InfiniBand remains the most used High Performance Computing (HPC) interconnect. Additionally, the majority of newly listed TOP500 supercomputers are accelerated by InfiniBand technology. […]

The post InfiniBand Continues to Lead TOP500 as Interconnect of Choice for HPC appeared first on HPCwire.


          The 5 Most Important Benefits of Cloud Monitoring   

For any company that is heavily invested in information technology, it is inevitable that cloud computing becomes a dominant feature of its IT architecture. This…

The post The 5 Most Important Benefits of Cloud Monitoring appeared first on DirJournal Blogs.


          Why the JVM is a Good Choice for Serverless Computing: John Chapin Discusses AWS Lambda at QCon NY   

At QCon New York, John Chapin presented “Fearless AWS Lambdas”, and not only argued that the JVM is a good platform on which to deploy serverless code, but also provided guidance on extracting the best performance from Java-based AWS Lambda functions.
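One of the standard performance tactics for JVM-based Lambdas (a minimal sketch of the general technique, not code from Chapin's talk; the handler, table name and client here are hypothetical) is to build expensive, reusable resources once per container rather than once per invocation, so warm invocations skip the setup cost:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import java.util.Collections;

// Hypothetical handler: looks up an order by ID. The point is *where* the
// client is created, not what it queries.
public class OrderLookupHandler implements RequestHandler<String, String> {

    // Runs once per container (during the cold start). Warm invocations reuse
    // this client instead of paying the construction cost on every request.
    private static final AmazonDynamoDB DYNAMO =
            AmazonDynamoDBClientBuilder.defaultClient();

    @Override
    public String handleRequest(String orderId, Context context) {
        // Per-invocation work stays small; heavy setup happened at class load.
        return DYNAMO.getItem("orders",
                Collections.singletonMap("id", new AttributeValue(orderId)))
                .toString();
    }
}
```

The other commonly cited lever is the function's memory setting, since Lambda allocates CPU in proportion to memory; treat the sketch above as illustrative only.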

By Daniel Bryant
          Can the Galaxy Tab S3 replace your everyday computing needs?   
The Samsung Galaxy Tab S3 looks to take on Apple's iPad head-on, but we look at the tablet to see if you can really replace your everyday computer for various tasks.
          Cavium and China Unicom trial 5G use cases on M-CORD   
Cavium, a provider of semiconductor products for enterprise, data centre, wired and wireless networking, and China Unicom announced a targeted program for the testing of 5G use cases on an M-CORD SDN/NFV platform, leveraging Cavium's silicon-based white box hardware in M-CORD racks populated with ThunderX ARM-based data centre COTS servers and XPliant programmable SDN Ethernet-based white box switches.

Under the program, China Unicom and Cavium plan to shortly commence trials in a number of locations across mainland China to explore the potential of the new service.

Cavium and China Unicom are specifically demonstrating multi-access edge computing (MEC) use cases developed through a previously announced collaboration based on the ON.Lab M-CORD (Mobile Central Office Re-architected as a data centre) SDN/NFV platform at the Mobile World Congress (MWC) Shanghai.

The demonstration involves an M-CORD SDN/NFV software platform and hardware rack integrated with virtualised and disaggregated mobile infrastructure elements, from the edge of the RAN to the distributed mobile core, along with the ONOS and XOS SDN and orchestration software.

The companies stated that this architecture is designed to enable turnkey operation in any central office or edge data centre for a full NFV C-RAN deployment. The solution is based on a Cavium-powered rack that combines ThunderX ARM-based data centre servers with the programmable XPliant Ethernet leaf and spine SDN switches to provide a full platform for M-CORD.

Regarding the latest project, Raj Singh, VP and GM of the network and communication group at Cavium, said, "Cavium is collaborating with China Unicom to explore 5G target use cases leveraging the M-CORD SDN/NFV platform and working towards field deployment… a homogenous hardware architecture optimised for NFV and 5G is a pre-requisite for field deployments".



  • Earlier this year, Radisys and China Unicom announced they had partnered to build and integrate M-CORD development PODs featuring open source software. For the project, Radisys, acting as systems integrator, used the CORD open reference implementation to enable cloud agility and improved economics in China Unicom's network. The companies also planned to develop deployment scenarios for the solution in the China Unicom network.
  • The resulting platform was intended to support future 5G services by enabling mobile edge services, virtualised RAN and virtualised EPC. The companies also planned to develop open reference implementations of a virtualised RAN and a next-generation mobile core architecture.

          Keeping an eye on Alibaba Cloud, Aliyun – Part 1   
Alibaba's Jack Ma made headlines across the world last week by laying out a plan for rapid global expansion of China's e-commerce behemoth. At an investor conference held at the company's Xixi headquarters in Hangzhou, China, Ma made the bold claim that Alibaba could reach $1 trillion in gross merchandise value by 2021 by becoming the primary online store for 2 billion people, as well as by expanding into new areas, one of which is the international public cloud services business. While Alibaba's investor event was somewhat overshadowed by the news that Amazon will spend $13.7 billion in cash to acquire Whole Foods, the premium U.S. grocery store chain, Jack Ma unveiled a strategy with clear potential to disrupt the cloud market.

Meanwhile, business at Alibaba Group (NYSE: BABA) is 'fantastic' and is only going to get better this year, according to the company's CFO. For the most recent fiscal quarter, ended March 31, 2017, the company reported revenue of RMB 38,579 million ($5,605 million), an increase of 60% year-over-year, including:

•   Revenue from core commerce of RMB 31,570 million ($4,587 million), up 47% year-over-year.

•   Revenue from cloud computing of RMB 2,163 million ($314 million), up 103% year-over-year.

•   Revenue from digital media and entertainment of RMB 3,927 million ($571 million), up 234% year-over-year.

Growth at the parent company is primarily being driven by the steady increase in active buyers on its ecommerce platforms, both in numbers and in the value of goods and services being transacted. Annual active buyers reached 454 million, an increase of 31 million from the 12-month period ended on March 31, 2016. Mobile monthly active users (MAUs) on Alibaba Group’s China retail marketplaces reached 507 million in March, up 97 million over March 2016. Gross merchandise volume (GMV) transacted on Alibaba’s China retail marketplaces in fiscal year 2017 was RMB 3,767 billion ($547 billion), up 22% compared to RMB 3,092 billion in fiscal year 2016.

Alibaba Cloud, or Aliyun as it is known in Chinese, is firmly established as the leading infrastructure-as-a-service (IaaS) cloud in mainland China and is moving rapidly to become a Platform-as-a-Service (PaaS) provider and a Software-as-a-Service (SaaS) retailer. Some important Aliyun metrics emerged from the Investor presentation, including (with additional commentary):

•   Public cloud is growing: based on Gartner's figures from March 2017, Aliyun estimates the global public cloud market will amount to $245 billion in 2017, growing to $436 billion in 2021, a 15.9% CAGR (see the arithmetic note after this list).

•   China's public cloud market is growing even faster, with Gartner figures showing China's public cloud market, valued at $14 billion this year, growing to $25 billion in 2021, a 17.2% CAGR. By 2021, China's share of the global public cloud market would still be under 6%, which seems odd given that the country's share of global GDP is much higher and that ecommerce, social media and mobile technologies are booming in China. Why so low versus the U.S. market?

•   Aliyun cited figures from the IDC Tracker 2016 H1/H2 Global Cloud Market (IaaS) indicating it is currently the No. 4 player in public cloud services worldwide, but with only a 3.2% share: No. 1 AWS, $8.4 billion, 46.1% share; No. 2 Microsoft, $1.4 billion, 7.6%; No. 3 IBM, $1.0 billion, 5.8%; No. 4 Alibaba, $0.57 billion, 3.2%; No. 5 Google, $0.519 billion, 2.9%.
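A quick sanity check on those CAGRs (my arithmetic, not Aliyun's): compounding the 2017 figure forward to the 2021 figure over four annual periods gives

$$\left(\frac{436}{245}\right)^{1/4} - 1 \approx 15.5\%$$

for the global market, close to the quoted 15.9%, and $(25/14)^{1/4} - 1 \approx 15.6\%$ for China, below the quoted 17.2%. The discrepancies suggest the quoted rates were computed from an earlier (likely 2016) base year or from unrounded figures.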

Clearly, AWS is dominating the public cloud market, especially in the U.S. The other U.S. public cloud players are investing aggressively to catch up and they too seem to have ambitions that reach to the sky. Alibaba's Jack Ma has previously been quoted in the press as saying that Alibaba would catch and surpass Amazon. When it comes to cloud services at least, this will be extremely difficult given its current 3.2% share versus AWS’ 46.1% share, and a capex budget that appears decisively smaller.

In its home market of China, Aliyun's IaaS revenue is equivalent to the next seven players combined ($123 million + $106 million + $87 million + $79 million + $72 million + $67 million + $55 million = $589 million, against Aliyun's $587 million). The numbers cited in the IDC Tracker 2016 H1/H2 Global Cloud Market are as follows:

•   No. 1 – Alibaba Group, $587 million, 40.7% market share

•   No. 2 – China Telecom, $123 million, 8.5%

•   No. 3 – Tencent, $106 million, 7.3%

•   No. 4 – Kingsoft, $87 million, 6.0%

•   No. 5 – Ucloud, $79 million, 5.5%

•   No. 6 – Microsoft, $72 million, 5.0%

•   No. 7 – China Unicom, $67 million, 4.6%

•   No. 8 – AWS, $55 million, 3.8%

In addition, as of March 31, 2017, Aliyun had 874,000 paying customers, 15 data centres worldwide and 186 cloud service offerings. It also claims a 96.7% retention rate amongst its top paying customers in Q1 2017 compared to a year earlier.

Over one-third of China's Top 500 companies are on Alibaba Cloud, including China's Public Safety Bureau (PSB), CCTV, Sinopec, Sina Weibo, Xinhua News Agency, Toutiao, Geely, Mango TV, CEA, Quanmin Live, Panda TV and DJI, while two-thirds of Chinese unicorn companies are on Alibaba Cloud. Global Software-as-a-Service (SaaS) vendors now available on Aliyun include Accenture, SAP, Docker, HERE, SUSE, Haivision, Wowza, AppScale, AppEX, Hillstone, Checkpoint Software Technologies, Hitachi Data Systems and Red Hat.


Aliyun’s Computing Conference 2016 was attended by over 40,000 developers in person, with more than 7 million viewers online. At its investor conference, Aliyun also disclosed a number of major international brands that are now using its services, including Schneider Electric, Shiseido, Philips, Nestle and Vodafone, which is a good start. Nevertheless, attracting international companies will be harder: first, because Alibaba has only recently begun building data centres outside of China, and second, because it is much less known and trusted than established brands such as IBM.


          r 3.4.1-1 x86_64   
Language and environment for statistical computing and graphics
          r 3.4.1-1 i686   
Language and environment for statistical computing and graphics
          Werner Vogels (CTO, Amazon): "We will install at least three AWS datacenters in France in 2017"   
New projects and services, rollout in France… Amazon's CTO breaks down his cloud strategy.
          Video Presentation: Blink: Alibaba's New-Generation Real-Time Computing Engine   

In the open-source big data world, the first-generation real-time computing engine was Storm, followed by Samza. Spark, which has remained popular in recent years, also introduced Spark Streaming, but we are more optimistic about Flink, a new-generation pure streaming engine. Alibaba's search technology team began improving Flink last year and created Alibaba's own Flink branch, which now serves core real-time workloads across the Alibaba Group, including search, recommendation, advertising and Ant Financial; we call it the Blink computing engine. We are now working with Data Artisans, the company behind Flink, to contribute all of Blink's improvements back to the Flink community and jointly advance its development. This talk gives a complete overview of the improvements Blink makes to Flink and shares typical application scenarios for the Blink engine inside Alibaba.

By 马国维
          Ericsson Becomes an OpenStack Foundation Platinum Member   
Ericsson increases investment in OpenStack as platform for NFV, edge computing and distributed cloud.
          'New' Kontron Emerges With $1B Sales Target   
Following a difficult year, the embedded computing system specialist has agreed a merger with S&T Deutschland, giving it an expanded portfolio and a role in hitting an annual sales target of $1 billion.
           UCAR Deploys ADVA CloudConnect for 200G in Supercomputing Network   
DCI technology being used to transport vital scientific data over two 200Gbit/s 16QAM connections.
          Comment on U.S. Military Sees Future in Neuromorphic Computing by OranjeeGeneral   
IBM can do crap, they demonstrated a 5nm manufacturing prototype, but they have nowhere near a production-ready commercial 5nm process; they don't even have a commercial fab at all anymore. They might license parts of the process to GF and that's it. But until GF actually gets to implement it and run it at production level, it is going to be at least 2021-22. Besides, the current IBM TrueNorth prototype performs absolutely abysmally on state-of-the-art DNNs if you actually start investigating its usage. Where they are right, though, is that ASICs are the future, especially for the cloud and HPC. And you don't need to be on the bleeding edge to get a good ROI; you can go to an old/mature/cheap process node and still gain. Read this interesting paper to get an idea of what I am talking about: "Moonwalk: NRE Optimization in ASIC Clouds"
          Comment on The Biggest Shift in Supercomputing Since GPU Acceleration by jimmy   
@Rob, the deep learning algorithms for object recognition by far surpass anything that people were able to do with classical deterministic models. That's why they are being used for self-driving cars; they have been proven to increase driver safety by a good margin. You can mention the one-off cases in the early days of self-driving, but that's not an interesting statistic at all. Deep learning is essentially an attempt to make sense of tons of noisy data, and many of the models today are not understood by their authors: "hey, we did this and now we got this awesome result", very ad hoc. In the end though, it's all statistical mathematics; it's just that at the moment the slightly theoretically challenged CS folks are playing with this new toy, and mathematical understanding is inevitable.
          Discover CERN by drone   
An aerial visit of CERN based on drone shots of iconic sites of the laboratory, taken by drone competition pilot Chad Nowak, CERN drone pilot Mike Struik, videographer Christoph M. Madsen and photographer Maximilien Brice. Locations include: the Globe of Science and Innovation, the ATLAS site at LHC Point 1, the CERN Computing Centre, experimental halls 180 and SMI2, the PS and PS Booster area, LINAC 4, the CMS site at LHC P5, and the ALICE site at LHC P2.
          New photos offer first glimpse of Microsoft’s canceled Surface Mini   

Microsoft's Surface Mini never saw release, having been canceled shortly before it was set to be officially announced, but images of a near-final prototype have now surfaced on the web.

The post New photos offer first glimpse of Microsoft’s canceled Surface Mini appeared first on Digital Trends.


          Microsoft manages to cram artificial intelligence on the Raspberry Pi 3 PC board   

Microsoft said its Machine Learning and Optimization group compressed artificial intelligence down so it could run on the Raspberry Pi 3. The team used several techniques to reduce the AI's size and speed up its thinking process.
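The article doesn't spell out which techniques the team used, so as a concrete illustration here is a minimal, self-contained sketch of 8-bit linear weight quantization, one of the standard ways to shrink a model enough to fit a device like the Pi (all names and values are hypothetical, not Microsoft's code):

```java
public class QuantizeDemo {
    public static void main(String[] args) {
        // Hypothetical trained weights; a real model has millions of 32-bit floats.
        float[] weights = {0.82f, -1.37f, 0.05f, 2.41f, -0.66f};

        // Choose a scale so the largest magnitude maps onto the int8 range [-127, 127].
        float maxAbs = 0f;
        for (float w : weights) maxAbs = Math.max(maxAbs, Math.abs(w));
        float scale = maxAbs / 127f;

        // Quantize: one byte per weight instead of four, a 4x size reduction.
        byte[] quantized = new byte[weights.length];
        for (int i = 0; i < weights.length; i++) {
            quantized[i] = (byte) Math.round(weights[i] / scale);
        }

        // Dequantize at inference time; a small rounding error is the trade-off.
        for (int i = 0; i < quantized.length; i++) {
            System.out.printf("original=%+.4f restored=%+.4f%n",
                    weights[i], quantized[i] * scale);
        }
    }
}
```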

The post Microsoft manages to cram artificial intelligence on the Raspberry Pi 3 PC board appeared first on Digital Trends.


          (IT) Software Developer - Highly Skilled   

Location: Englewood Cliffs, NJ   

Job Title: Software Developer - Highly Skilled

Qualifications: The CNBC Digital Technology team is seeking a Software Engineer to manage and build software solutions across CNBC's Digital Platform. The software engineer (primarily focusing on Backend development) will be responsible for building and managing software solutions for various projects. This role requires hands-on software development skills and deep technical expertise in web development, especially in developing with core Java, Spring and Hibernate. The software engineer will be required to provide estimates for their tasks, follow technology best practices, participate in and adhere to CNBC's Technical Design Review Process and performance/scalability metrics, and support integration and release planning activities, in addition to being available for level 3 support to triage production issues.

Required Skills:
- BS degree or higher in Computer Science with a minimum of 5 years of relevant, broad engineering experience is required.
- Experience with various Web-based Technologies, OO Modeling, Middleware, Relational Databases and distributed computing technologies.
- Experience in Digital Video workflows (Ingest, Transcode, Publish)
- Experience in Content Delivery Networks (CDN)
- Experience with Video Content Management Systems
- Expertise in cloud transcoding workflows
- Demonstrated experience running projects end-to-end
- Expert knowledge of Performance, Scalability, Security, Enterprise System Architecture, and Engineering best practices
- Experience working on large-scale, high-traffic web sites/applications
- Experience working in the financial and media domains

Responsibilities: Languages and Software:
- Languages: Java (Core Java, Multithreading) and other object-oriented languages
- Web Technologies: XML, JSON, HTML, CSS, OO JavaScript, jQuery, AJAX, SOAP and RESTful web services
- Frameworks: MVC frameworks like Spring, JPA, Hibernate, JAXB
- Databases: RDBMS like MySQL, Oracle; NoSQL databases
- Tools: Git, SVN, Eclipse, Jira
 
Type: Contract
Location: Englewood Cliffs, NJ
Country: United States of America
Contact: Hiring Manager
Advertiser: First Tek
Reference: NT17-03957

          (IT) Full Stack Developer   

Rate: £350 - £450 per Day   Location: Glasgow, Scotland   

Full Stack Developer - 12 month contract - Glasgow City Centre

One of Harvey Nash's leading FS clients is looking for an experienced full stack developer with an aptitude for general infrastructure knowledge. This will be an initial 12 month contract; however, the likelihood of extension is high. The successful candidate will be responsible for creating strategic solutions across a broad technology footprint. Experience within financial services would be advantageous, although not a prerequisite.

Skill Set:
- Previous full-stack development experience with C#/C++/Java, Visual Studio, .Net, Windows/Linux web development
- Understanding of secure code development/analysis
- In-depth knowledge of how software works
- Development using SQL and Relational Databases (eg SQL, DB2, Sybase, Oracle, MQ)
- Windows Automation and Scripting (PowerShell, WMI)
- Familiarity with common operating systems and entitlement models (Windows, Redhat Linux/Solaris)
- Understanding of network architecture within an enterprise environment (eg Firewalls, Load Balancers)
- Experience of developing in a structured Deployment Environment (DEV/QA/UAT/PROD)
- Familiarity with the Software Development Life Cycle (SDLC)
- Experience with Source Control and CI systems (eg GIT, Perforce, Jenkins)
- Experience with Unit and Load testing tools
- Experience with Code Review products (eg Crucible, FishEye)
- Excellent communication/presentation skills and experience working with distributed teams
- A strong ability to create technical, architectural and design documentation

Desired Skills:
- Any experience creating (or working with) a "developer desktop" (dedicated desktop environment for developers)
- Experience of the Linux development environment
- An interest in cyber security
- Knowledge of Defense in Depth computing principles
- Experience with security products and technologies (eg Cyberark, PKI)
- Systems management, user configuration and technology deployments across large, distributed environments (eg Chef, Zookeeper)
- Understanding of core Windows Infrastructure technologies (eg Active Directory, GPO, CIFS, DFS, NFS)
- Monitoring Tools (eg Scom, Netcool, WatchTower)
- Experience with Apache/Tomcat web servers and virtualisation
- Design patterns and best practices
- Agile development: planning, retrospectives etc.

To apply for this role or to discuss it in more detail, please call me and send a copy of your latest CV.
 
Rate: £350 - £450 per Day
Type: Contract
Location: Glasgow, Scotland
Country: UK
Contact: Cameron MacGrain
Advertiser: Harvey Nash Plc
Start Date: ASAP
Reference: JS-329601/001

          One Network Presents Cloud Computing Workshop at the Supply Chain and Logistics Summit North America 2011   

One Network Enterprises, a leading provider of demand-driven supply chain solutions in the cloud, is sponsoring the Supply Chain and Logistics Summit North America 2011. The event takes place at the Hyatt Regency, Dallas, Texas, December 5-7, 2011.

(PRWeb November 30, 2011)

Read the full story at http://www.prweb.com/releases/2011/11/prweb8999728.htm


          One Network Named an Inbound Logistics’ Top 100 Logistics IT Company 2011   

One Network Enterprises, a federated cloud computing platform for supply chain 2.0, business intelligence and sustainability, has been selected as a 2011 Top 100 Logistics IT Provider by Inbound Logistics Magazine.

(PRWeb May 03, 2011)

Read the full story at http://www.prweb.com/releases/2011/5/prweb8370122.htm


          One Network Sponsors the Aberdeen Supply Chain Management Summit 2011   

One Network Enterprises, a federated cloud computing platform for supply chain 2.0, business intelligence and sustainability, is sponsoring the Aberdeen Supply Chain Management Summit 2011. The event takes place at the Swissotel, Chicago, IL, March 29-30, 2011.

(PRWeb March 22, 2011)

Read the full story at http://www.prweb.com/releases/2011/3/prweb8223370.htm


          Microsoft planning to unveil major reorganization July 5: report   

Microsoft Corp. is planning to unveil a major reorganization on July 5, the Puget Sound Business Journal has reported. The Seattle-based publication said sources told it the changes would better align the company with its cloud-first strategy. Microsoft has been putting more focus on its Azure cloud business, which competes with Amazon.com Inc.'s Amazon Web Services cloud-computing division, the leader in the business. Microsoft shares were slightly higher in premarket trade Friday, and have gained 10% in 2017, while the Dow Jones Industrial Average and the S&P 500 have gained about 8%.

Market Pulse Stories are Rapid-fire, short news bursts on stocks and markets as they move. Visit MarketWatch.com for more information on this news.


          Threat Monitoring Analyst - Verizon - Basking Ridge, NJ   
What you’ll be doing... Responsibilities : A Threat Monitoring Analyst plays a critical role in Verizon’s enterprise computing defense. Analysts are...
From Verizon - Thu, 29 Jun 2017 10:58:51 GMT - View all Basking Ridge, NJ jobs
          C++ Developer - Distributed Computing - Morgan Stanley - Montréal, QC   
Comfortable programming in a Linux environment, familiar with ksh and bash. Morgan Stanley is a global financial services firm and a market leader in investment...
From Morgan Stanley - Wed, 28 Jun 2017 00:14:01 GMT - View all Montréal, QC jobs
          NECC 2008 Notes   
ISTE’s NETS•T Refreshed Roll-Out

We need real world, relevant assignments because we’ve already done well moving from the sage on the stage to the guide on the side. At this point we need to re-inspire teachers.

The new teacher standards include:
Facilitate and inspire student learning and creativity
Design and develop digital-age learning experiences and assessments
Model digital-age work and learning
Promote and model digital citizenship and responsibility
Engage in professional growth and leadership

There is a new tool available through ISTE’s website that will assist administrators in determining the level of technology integration occurring with their teachers.

Check out fact flippers: www.tammyworcester.com

Dan Edelson, Getting out of the Classroom with Technology

Volunteer Geography: A variant of citizen science. For example, students can make and share field observations and analyze and provide interpretations of that data. The concept is that students collect data by taking measurements, they submit the data via a web form, they visualize it using interactive maps, they analyze patterns based on the data and visualization, and they may report back to others in their classes. One problem with this is that students will only be able to see small amounts of data if they are involved during the start of the project. An example of this was students testing soil samples following the use of salt on icy roads. Students get to experience the full spectrum of the scientific process. In this case, students used probes and collected data in the classroom and submitted information via a website.

NGS FieldScope allows students to collect real world data. NGS chooses a region to study and invites teachers and students to participate. The teachers must purchase the equipment, which costs about $1,000.

Chris Dede, Ubiquitous Computing

Goal: Repurpose common items for educational purposes (e.g., using cell phones for augmented learning).

Cheryl Lemke

We need to recognize that adolescent learning includes the home, school, peers, work, distributed resources, and communities – not just school. Our goal at this point is scalability of using technology tools for 21st century teaching, not just focusing on use in our own classrooms.

She suggests we use research-based methods to develop lessons and units that serve as “sheet music.” The teachers base their instruction on the sheet music, but also improvise.

A good teacher blog including student podcasts is “Learning on the Go.” The teacher sets up her class as a fictional consulting agency and the students solve real world algebra problems. Another teacher uses authentic travel agent activities to teach about Greek history.

SimCalc: http://www.simcalc.umassd.edu/software (teaches about perspective)

Media multi-tasking: We can only do one thing at a time, but we can quickly move from one thing to another. Kids are better at multi-tasking than adults. When learning, students are distracted when multi-tasking (except for things like music without lyrics in the background).

Universe: http://universe.daylife.com (identifies what is going on online in real time using a visual perspective)

Venezuela started teaching critical thinking to their elementary and middle school students 10 years ago. Now, they are finding increased average adult IQs across the country.

See http://www.flatworld.com

Alan November, “Designing Rigorous and Globally Connected Assignments”

This presentation is available from the “Archive of Articles” on NovemberLearning.com and at Digital Farm.

Students are connected to everyone in their lives – except their teachers, because schools block everything. “Schools are the learning police.” There is more freedom in Chinese schools in terms of the Internet than here. We are so worried about their safety that we block their learning.

Vocabulary of the Web: Students need to learn information resources. This type of information is available on http://novemberlearning.com/blc

By adding site:en to Google searches, you will only get sites with an English country code. To get Turkey-based sites, type site:tr.

By adding view:timeline to a search, you can access the most recent information about a given search term.

Type link:http://Wikipedia.com to find out how many links exist to that particular site.

Hall Davidson, “It’s in Your Pocket: Teaching Spectacularly with Cell Phones”

http://www.myspace.com/sidekicknation (How kids use video on a daily basis)


Every classroom should have a designated student web researcher. The teacher should never have to answer a factual question; they should only have to respond to higher-order thinking questions.

There is a Google feature that allows you to create your own search engine. November believes teachers and students should jointly build search engines. This will give students fewer stimuli when they do searches.

It would be nice if students could develop resources that teach content and then future students review these tutorials before class. Students, then, are responsible for learning their own content and class time is replaced with problem solving. When there’s not a lot of Internet access, students could have a DVD with all the information at home (because DVDs are more common in the home than Internet connections).

The http://jingproject.com is a downloadable application that allows you to create screencasts.

Instead of teaching teachers to use technology, November jokes that we should send two of our students to the training and one of the students should be the biggest trouble-maker in the class.

Wikipedia isn’t an encyclopedia, it’s a publishing house. Third grade students were told they would visit the Pitot House and write an article to submit to the largest encyclopedia in the world. The students wrote and published their Wikipedia article, and now they follow the RSS feed for the article and critique what other people write.

http://kiva.com: Organizes donations to small business entrepreneurs. The donors get their money back and they get reports on their projects. You can also talk to the other people who have invested in the same entrepreneurial project.


http://jott.com converts voice to text. You can call this service from your cell phone. Another option is fozme.com.

http://polleverywhere.com: Allows you to do automatic polls from cell phones (like the classroom response systems)

Terry Cavanaugh, GIS, Google Maps, and More for Literacy Projects

http://books.google.com

There are interactive maps that show all the locations mentioned in a book (e.g., The Travels of Marco Polo). [Note to self – check out the Bible.]

Gutenkarte (http://gutenkarte.org) also makes a map of a text, showing which places are most frequently mentioned. Amazon’s Concordance also does this by listing the 100 most used words in a given text.

http://editgrid.com allows you to map a story using latitude and longitude in a spreadsheet.

http://www.googlelittrips.com has 23 stories you can follow on Google Earth. You download the .kmz file and use it with Google Earth. An example is Make Way for Ducklings: the entire story is mapped as locations are mentioned. Also, people have added pictures of items and informational text from specific locations in the book. Anyone can make a Google Lit Trip.

http://wetellstories.co.uk/stories/week1/: Tells a story using a map – the text is embedded in the map.

Teachers can get the Pro Version of Google Earth by writing to Google and requesting it. It is possible to make a map for each student so they can each map out a story.

A dimensional mouse allows you to move in three dimensions; these are available through Amazon.

Using virtual map pins, students can add quotes from the book, write facts about the locations mentioned, and add multimedia. This is a means of giving students greater interactivity with books.

In September, cameras will come with embedded geo-tags. Some buildings are going to start putting in geo-tagging points as well.

Tony Vincent, Audio is Great! Video is Cool! IPods Can Do More!

His iPod podcast is Learning in Hand: iPods. See http://learninginhand.com/ipods

http://spokentext.com will convert any text into audio.

You can create cover art and lyrics (or primary source text) by going to Get Info for an individual song.

See http://NotontheTest.org

iPrep Press has comic books you can download to your iPod. BrainQuest also has quizzes for the iPod.

iPod-notes.com allows you to combine Notes files.

iPrepPress allows you to download a dictionary and many primary sources. Get "100 Words Every High School Student Should Know."

ManyBooks.com allows you to download books in the public domain.

iWriter allows you to link stories together into a single story.

iQuizMaker allows you to make quizzes for your iPod. You can also share iQuizzes by going to iQuizShare (http://iquizshare.com/)

Use monitor mode to keep your iSight camera from mirroring the image.




Check out doc imaging and doc scanning on the PC.

Get book making ideas from web.mac.com/lindaoaks and check out her handouts on the NECC site

Download handouts from the NECC site for Sharon Hirschy's session about making class books using PPT.
          Apple's iPhone turns 10, bumpy start forgotten   

Apple's iPhone turns 10 this week, evoking memories of a rocky start for the device that ended up doing the most to start the smartphone revolution, and stirring interest in where it will go from here.

Apple has sold more than 1 billion iPhones since June 29, 2007, but the first iPhone, which launched without an App Store and was restricted to the AT&T network, was limited compared to today's version.

After sluggish initial sales, Apple slashed the price to spur holiday sales that year.

"The business model for year one of the iPhone was a disaster," Tony Fadell, one of the Apple developers of the device, told Reuters in an interview on Wednesday. "We pivoted and figured it out in year two."

The very concept of the iPhone came as a surprise to some of Apple's suppliers a decade ago, even though Apple, led by CEO Steve Jobs, had already expanded beyond computers with the iPod.

"We still have the voicemail from Steve Jobs when he called the CEO and founder here," said David Bairstow at Skyhook, the company that supplied location technology to early iPhones.

"He thought he was being pranked by someone in the office and it took him two days to call Steve Jobs back."

The iPhone hit its stride in 2008 when Apple introduced the App Store, which allowed developers to make and distribute their mobile applications with Apple taking a cut of any revenue.

Ten years later, services revenue is a crucial area of growth for Apple, bringing in $24.3 billion (S$33.5 billion) in revenue last year.

New model

Fans and investors are now looking forward to the 10th anniversary iPhone 8, expected this fall, and asking whether it will deliver enough new features to spark a new generation to turn to Apple.

That new phone may have 3-D mapping sensors, support for "augmented reality" apps that would merge virtual and real worlds, and a new display with organic LEDs, which are light and flexible, according to analysts at Bernstein Research.

A decade after launching into a market largely occupied by BlackBerry and Microsoft devices, the iPhone now competes chiefly with phones running Google's Android software, which is distributed to Samsung Electronics and other manufacturers around the world.

Even though most of the world's smartphones now run on Android, Apple still garners most of the profit in the industry with its generally higher-priced devices.

More than 2 billion people now have smartphones, according to data from eMarketer, and Fadell, who has worked for both Apple and Alphabet, sees that as the hallmark of success.

"Being able to democratize computing and communication across the entire world is absolutely astounding to me," Fadell said.

"It warms my heart because that's something Steve tried to do with the Apple II and the Mac, which was the computer for the rest of us. It's finally here, 30 years later."


          Rainbow Photons Pack More Computing Power   
Quantum bits, aka qubits, can simultaneously encode 0 and 1. But multicolored photons could enable even more states to exist at the same time, ramping up computing power. Christopher Intagliata...

-- Read more on ScientificAmerican.com
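For context (my gloss of the standard qudit picture, not from the Scientific American teaser): a qubit superposes two basis states, while a photon with $d$ distinguishable colors is a qudit superposing $d$ of them,

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle \qquad \text{vs.} \qquad |\psi\rangle = \sum_{i=0}^{d-1} c_i\,|i\rangle,$$

so each $d$-level photon carries $\log_2 d$ bits' worth of state space.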

          Zones CustomerConnect Conference Examines Software and Its Role in the Digital Enterprise   

Customer Event Focuses on Software Enabling Security, Mobility, Cloud Computing, and the Internet of Things

(PRWeb October 14, 2015)

Read the full story at http://www.prweb.com/releases/2015/10/prweb13020062.htm


          Zones Helps Customers Realize the Benefits of Cloud Computing through the Microsoft Cloud Solution Provider Program   

The Microsoft Cloud Solution Provider Program Allows Zones to Provide Direct Billing, Sell Combined Offers and Services, and Directly Provision, Manage and Support Microsoft Cloud Offerings

(PRWeb January 28, 2015)

Read the full story at http://www.prweb.com/releases/2015/01/prweb12472008.htm


          Multi-Computer   
I watched the new iMac launch yesterday and it left me thinking, "why not?" I am working with a university that has just agreed to outfit some of their labs with iMacs. The reason: an iMac can run Windows seemingly as well as a Dell. This has got me thinking, "Are the iMacs pointing us to the future?"

My father went to work for GE back in the late 60's/early 70's after getting out of the Air Force (I promise this is relevant). One of the perks of the job was that he always got to bring home the latest appliances because that was the section he worked in. I remember when the side-by-side refrigerators came into fashion. There were a lot of factors that pushed it. 1) More and more food had preservatives in it and didn't need to be refrigerated, so there was less space needed to store cold food. 2) Families grew smaller and didn't need as much storage. 3) The average house wasn't even close to the square footage of houses today, so space mattered. People saw that they could get rid of their old freezer and refrigerator and get a combination that fit in the same amount of space as just one of their old appliances.

So how the heck is this relevant? Well, I think it shows a trend that people often apply only to computing technology: individuals want fewer physical things that can do more functions. Look at cell phones: Phone + PDA + Camera + GPS device. Part of the problem, in my opinion, though, is that different OSes are better for different types of activities. I think Vista has proven that for me, because I bought Home Premium and it is still just Windows. I know there are a lot of cool things underneath Vista, but at the end of the day, even with my $600 video card for my online gaming fix, it still isn't better than the Wii for gaming. Some of that is hardware, but a lot of that is simply OS. The new iMac, though, showed me that hardware is no longer the limiting factor: with a single chipset, multiple OSes can be run on one device. Wouldn't it be cool to have a single computing device that was good for business applications and great for on-demand media and great for music composition and great for gaming? Wouldn't it be cool to have a cool new smart phone that also was a great mobile gaming device (I am talking DS or DS lite cool, not Live Search club cool)? Well, I think the iMac is showing us that this is and should be possible.
          Social Networking   
It has been interesting to see how people have reacted this last year to social computing technologies as they become part of the mainstream packages (Lotus Connections & Microsoft MOSS) and as other technologies (SocialText) become real opportunities for enterprise-sized organizations. We have been approached by a whole lot more companies/organizations than I ever would have thought (of course my expectations were low, <30) wanting to know more about how they could use it and what they could use it for.

The software industry still suffers from marketing self-gratification in this space. What I mean by this is that we come up with words like Wiki, Blog or RSS and toss them around like everyone knows what the heck we are talking about. When you ask a non-tech outsider, they seem to think you are talking about a fruity drink with an umbrella (Wiki), some new form of the creeping crud or something you go 4-wheelin' thru if you are down in KY (Blog), or the latest rave party drug (RSS). I remember the most hilarious meeting I had been to in a while was one where we brought in a vendor that had blog technology. They pitched it to the customer (someone that has about $600M in capital improvement programs a year) as a tool that "your organization can begin to build communities of interest from". The customer then turned around and said, in all seriousness, "I pay these folks to work, not talk about knitting". LOL!

Let me help, then, a bit. We have been playing around with a lot of social computing tools and solutions lately. My company is about to roll out social computing to about 20,000 employees in the next 4-6 months. We have 3 schools, 4 municipalities, 2 defense organizations and about 8 enterprise organizations that are currently going through the evaluation stages, and a couple of those are now exiting to solution deployment. So what are all of these folks using it for?

1) Talent Retention. If you want to keep good talent, make them feel like they are part of the company and that their words are heard. When we establish these types of technologies for this solution, we establish a methodology that engages top management to either participate directly or to ensure their participation in commenting in the community. We all know that tools such as e-mail, the telephone and stop-by meetings are horrible ways for people to stay in touch with what others are thinking. However, if you have 10-15% of your staff participating, it gives you a good vibe on the community, and smart management can make adjustments.

2) Mentoring. We have several industries where the talent pool is retiring at a much faster pace than the new crop is coming through school, or than technology can keep up with. That being said, these organizations must find a way to capture the knowledge of those users and pass it along to the masses. Building classroom curriculum isn't always the most effective approach. We use these technologies to build a "pod of knowledge": to team people up on tasks and to ensure that the work is a cooperative effort.

3) One-Way Communication. I know this goes against the concept of "social" computing a bit, but I tell you, these are some nice tools if you want to publish updates on processes such as RFI/RFP. There are many types of black-hole processes where people are afraid to open the floodgates of communication, but with this they can just push out updates, which is much easier than managing a web site.

I could go on, but that is it for now...
          Going to take another stab   
Well, I got nailed the first time I tried to do this. I think I had some things set wrong on the blog, and thus I created way more traffic for myself than I ever intended. I am ready to give this another go and have switched things around so that users can just post without my scanning the posts first (which, by the way, caused too many people to send me e-mails instead of just posting on the blog... a dynamic in social computing to blog about later).

My goal in this next go-around is to post once a week. I have learned a ton in this area in the last 6 months or so as the consulting group I am with rolls out more and more social networking solutions. One of the big lessons is to be consistent. I think once a week is a schedule that I can keep... hopefully.
          Mobile Computing done right   
http://www.mojopac.com/portal/content/hellomojo.jsp

Check these guys out!! This is a great take on the hoteling issues that have plagued us for a while. I have been a big user of VMware for some time, but the downside to that is that it requires an install on the host and the image is large because it contains the entire OS. This helps alleviate all of that by a) running directly off of the storage device and b) taking just the apps you want. Pretty slick.
          UOL HOST Introduces Its New Website Builder   

SÃO PAULO, June 30, 2017 /PRNewswire/ -- UOL HOST, the hosting, web services and cloud computing unit of UOL, introduces its new Website Builder (Criador de Sites), the most complete platform on the Brazilian market for creating websites, blogs and online stores simply, easily and quickly....



          Can You Host a Drupal Website on an Amazon EC2 Micro Instance?   
David Csonka, Wed, 02/01/2017 - 05:15

If you are at the decision point of wondering which Amazon Web Services EC2 server type should host your Drupal website, you have hopefully already gone through a checklist to ensure that using AWS for your Drupal hosting makes sense. Starting at the lowest price point to access Amazon's EC2 web server platform, the "micro" instance is bound to be one of the most popular levels of service.

The AWS EC2 server types range from micro, to standard small, medium, large and a wide variety of other specially set up instance types for edge case needs. When selecting an EC2 micro as your Drupal web host, you'll want to take care to determine if the micro instance is suitable for your Drupal website's needs.

Burstable Instances

Micro instances are classified by AWS as "burstable": they provide a small amount of consistent CPU resources and allow for an increase in CPU capacity in short bursts when additional cycles are available. They are well suited for lower-throughput applications and web sites that consume significant compute cycles periodically but very little CPU at other times, for background processes, daemons, etc.

Amazon provides a series of charts highlighting CPU demand profiles which it deems are inappropriate for micro instances.

Considering Computer Cycles

Basically, if your application has consistently intensive computing needs you're going to want to use a standard class instance type: small, medium, or larger depending on your requirements. But even though at a steady state micro instances receive a fraction of the compute resources that standard small instances do, micros can intermittently and briefly burst up to 2 ECUs (EC2 Compute Units). This is double the number of ECUs available to a standard small instance. 

Therefore, if you have a website with relatively low computing needs, like a blog or a development site that doesn't receive much (or any) public traffic, a micro instance could be perfectly suited to your project. A Drupal site that has the vast majority of its traffic coming from anonymous visitors (who can be served completely cached pages) can work quite well on a micro instance. Full page caching, augmented with APC or Varnish, can help even a little micro instance serve hundreds of requests.

However, if your Drupal site handles a lot of authenticated traffic (logged-in users are considered authenticated), with users accessing dynamic content and viewing uncached pages, you're definitely going to want to consider a larger-capacity standard class instance, or set up a robust auto-scaling system that rolls out extra instances on demand to handle the load. Trying to do any work as an admin when your Drupal site has many modules installed, especially memory hogs like Rules or Views, can potentially bog down your micro server. You can use AWS CloudWatch alarms to monitor the CPU utilization of your micro instance to help determine whether it is being overwhelmed.
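
For instance, here is a minimal sketch (not from the original post) of such an alarm, assuming a configured AWS CLI and a placeholder instance ID; it fires when average CPU stays above 80% for three consecutive 5-minute periods, a reasonable sign that a micro instance has run out of headroom:

aws cloudwatch put-metric-alarm \
  --alarm-name drupal-micro-cpu-high \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 3 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold

Add --alarm-actions with an SNS topic ARN if you want a notification when it triggers.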

Confused about hosting Drupal with Amazon AWS?

If you're considering using Amazon AWS to host your Drupal website, but haven't worked with it before, you might want to check out our other tutorials and blog posts on working with Amazon AWS. Specifically, review our tutorial on running a Drupal website on Amazon Web Services.

If you're not sure if you can set up Amazon AWS hosting on your own, don't worry, we might be able to help you! Contact Us

          Senior Solution Architect (iOS, Android, Windows, Mobile Technologies) / 33-6 Consultancy Limited / Knutsford, Cheshire, United Kingdom   
33-6 Consultancy Limited/Knutsford, Cheshire, United Kingdom

My client, a Large UK Retail Banking group is seeking a strong VP Senior Solution Architect to come in and join their team on a permanent basis. This is a very exciting opportunity to work in one of the largest Technology hubs in the North West.

Requirements

Good understanding of mobile technologies.

Good knowledge of smart phones, emerging handsets, capabilities, different platforms (RIM, IOS, Android)

Lead problem solving and analytical sessions to find innovative technical solutions to address business problems.

Good understanding of Client Server Interaction, Application and Infrastructure.

Good Understanding of the Design phase within the application development life cycle.

Has written a document/cookbook for a specific subject matter.

Able to maintain high levels of productivity for prolonged periods, with "quiet periods" built in to allow rest and recuperation. Able to refine and adapt techniques to meet new needs. Able to teach these skills to practitioners and experts.

Has a wider interest in computing science, e.g. mathematical methods or machine learning

Teaches and influences others to use different versioning strategies and systems.

Uses all Microsoft Tools.

Can use and advise on other tools that facilitate collaboration (e.g. SharePoint, Rational, etc.). Has awareness and keeps abreast of other industry tools. Uses MS Project as the planning tool of choice.

Financial services experience is desirable

Employment Type: Permanent

Apply To Job
          Real-time Twitter trending on a budget with riemann   

I recently stumbled upon this article http://www.michael-noll.com/blog/2013/01/18/implementing-real-time-trending-topics-in-storm/ by Michael Noll which explains a strategy for computing twitter trends with Storm.

I love Storm, but not everyone has a cluster already, and I think computing tops is a problem that lends itself well to single-node computing, since the datasets often are very simple (in the twitter trend case we store a tuple of hashtag and time of insertion) and thus can fit in a single box’s memory while still servicing many events per second.

It turns out, riemann is a great tool for tackling this type of problem and is able to handle a huge number of events per second while keeping the configuration small and concise.

It goes without saying that Storm will be a better performer when you are trying to compute a vast amount of data (for instance, the real twitter firehose).

Accumulating tweets

In this example we will compute twitter trends from a sample of the firehose, as provided by twitter. The tweetstream ruby library provides a very easy way to process the “sample hose” and here is a small script which extracts hash tags from tweets and publishes them to a local riemann instance:

require 'tweetstream'
require 'riemann/client'

TweetStream.configure do |config|
  config.consumer_key       = 'xxx'
  config.consumer_secret    = 'xxx'
  config.oauth_token        = 'xxx'
  config.oauth_token_secret = 'xxx'
  config.auth_method        = :oauth
end

riemann = Riemann::Client.new

TweetStream::Client.new.sample do |status|
  status.text.scan(/\s#([[:alnum:]]+)/).map{|x| x.first.downcase}.each do |tag|
    riemann << {service: tag, metric: 1.0, tags: ["twitter"], ttl: 3600}
  end
end

For each tweet in the firehose we emit a riemann event tagged with twitter and a metric of 1.0, the service is the tag which was found.

The rationale for computing trends is as follows:

  • Keep a moving time window of an hour
  • Compute per-tag counts
  • Sort by computed count, then by time
  • Keep the top N events

Riemann provides several facilities out of the box which can be used to implement this, most notably:

  • The top stream which separates events in two streams: top & bottom
  • The moving-time-window stream

With recent changes in riemann’s top function we can use this simple configuration to compute trends:

(let [store    (index)
      trending (top 10 (juxt :metric :time) (tag "top" store) store)]
  (streams
    (by :service (moving-time-window 3600 (smap folds/sum trending)))))

Let’s break down what happens in this configuration.

  • We create an index and a trending stream which keeps the top 10 trending hashtags; we’ll get back to this one later.
  • For each incoming event, we split on service (the hashtag), and then sum all occurrences in the last hour
  • This generates an event whose metric is the number of occurrences in an hour, which gets sent to trending

Now let’s look a bit more in-depth at what is provided by the trending stream. We are using the 4-arity version of top, so in this case:

  • We want to compute the top 10 (first argument)
  • We compare and sort events using the (juxt :metric :time) function. juxt yields a vector, which is the result of applying its arguments to its input. For an input event {:metric 1.0 :time 2} our function will yield [1.0 2]; we leverage the fact that vectors implement the Comparable interface and thus will correctly sort events by metric, then time (see the snippet just after this list)
  • We send events belonging to the top 10 to the stream (tag "top" store)
  • We send events not belonging to the top 10 or bumped from the top 10 to the stream store
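
To make the comparison function concrete, here is a quick REPL illustration (mine, not part of the original configuration) of how (juxt :metric :time) turns events into vectors that sort by metric, then time:

;; applying the key function to a single event
((juxt :metric :time) {:metric 1.0 :time 2})
;; => [1.0 2]

;; vectors compare element-wise, so events order by metric, then time
(sort-by (juxt :metric :time)
         [{:metric 3.0 :time 1} {:metric 1.0 :time 5} {:metric 1.0 :time 2}])
;; => ({:metric 1.0, :time 2} {:metric 1.0, :time 5} {:metric 3.0, :time 1})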

Fetching results

Running twitter-hose.rb against such a configuration, we can now query the index to retrieve results. With the ruby riemann-client gem we just retrieve the indexed events tagged with top:

require 'riemann/client'
require 'pp'

client = Riemann::Client.new
pp client['tagged "top"']

Going further

It might be interesting to play with a better comparison function than (juxt :metric :time): we could compute a decay factor from the time, apply it to the metric, and let comparisons be done on that output.

The skeleton of such a function could be:

(def decay-factor xxx)

(defn decaying [{:keys [metric time] :as event}]
  (let [now (unix-time)]
    (- metric (* (- now time) decay-factor))))

This would allow expiring old trends quicker.
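
Assuming riemann’s 4-arity top accepts any function of the event as its sort key, just as it does (juxt :metric :time) above, plugging the decay in would be a matter of swapping the comparison function. A hypothetical sketch:

(let [store    (index)
      trending (top 10 decaying (tag "top" store) store)]
  (streams
    (by :service (moving-time-window 3600 (smap folds/sum trending)))))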

The full code for this example is available at:

/files/2014-01-14-twitter-trending.html

Other applications

When transferring that problem domain to the typical datasets riemann handles, the top stream can be a great way to find outliers in a production environment, in terms of CPU consumption or bursts of log types.

Toy scaling strategies

I’d like to advise implementers to look beyond riemann for scaling top extraction from streams, as tools like Storm are great for these use cases.

But in jest, I’ll mention that since the riemann-kafka plugin - by yours truly - allows producing and consuming to and from kafka queues, intermediate riemann cores could compute local tops and send the aggregated results over to a central riemann instance which would then determine the overall top.

I hope this gives you a good glimpse of what riemann can provide beyond simple threshold alerts.


          The death of the configuration file   

Taking on a new platform design recently, I thought it was interesting to see how things have evolved over the past few years in how we design and think about platform architecture.

So what do we do ?

As system developers, system administrators and system engineers, what do we do ?

  • We develop software
  • We design architectures
  • We configure systems

But that isn’t the purpose of our jobs; for most of us, our purpose is to generate business value. From a non-technical perspective we generate business value by creating a system which renders one or many functions and provides insight into its operation.

And we do this by developing, configuring, logging and maintaining software across many machines.

When I started doing this - back when knowing how to write a sendmail configuration file could get you a paycheck - it all came down to setting up a few machines: a database server, a web server, a mail server, each logging locally and providing its own way of reporting metrics.

When designing custom software, you would provide reports over a local AF_UNIX socket, and configure your software by writing elegant parsers with yacc (or its GNU equivalent, bison).

When I joined the OpenBSD team, I did a lot of work on configuration files. Ask any member of the team: configuration files are a big concern, and careful attention is put into clean, human-readable and writable syntax; additionally, all configuration files are expected to look and feel the same, for consistency.

It seems as though the current state of large applications now demands another way to interact with operating systems, and some tools are now leading the way.

So what has changed ?

While our mission is still the same from a non-technical perspective, the technical landscape has evolved and gone through several phases.

  1. The first era of repeatable architecture

    We first realized that as soon as several machines performed the same task the need for repeatable, coherent environments became essential. Typical environments used a combination of cfengine, NFS and mostly perl scripts to achieve these goals.

    Insight and reporting were then provided either by horrible proprietary kludges that I shall not name here, or by emergent tools such as netsaint (now nagios), mrtg and the like.

  2. The XML mistake

    Around that time, we started hearing more and more about XML, then touted as the solution to almost every problem. The rationale was that XML was - somewhat - easy to parse, and would allow developers to develop configuration interfaces separately from the core functionality.

    While this was a noble goal, it was mostly a huge failure. Above all, it was a victory of developers over people using their software, since they didn’t bother writing syntax parsers and let users cope with the complicated syntax.

    Another example was the difference between Linux’s iptables and OpenBSD’s pf. While the former was supposed to be the backend for a firewall handling tool that never saw the light of day, the latter provided a clean syntax.

  3. Infrastructure as code

    Fast forward a couple of years: most users of cfengine were fed up with its limitations, and architectures, while following the same logic as before, became bigger and bigger. The need for repeatable and sane environments was as important as it ever was.

    At that point of time, PXE installations were added to the mix of big infrastructures and many people started looking at puppet as a viable alternative to cfengine.

    puppet provided a cleaner environment, and allowed easier formalization of technology, platform and configuration. Philosophically though, puppet stays very close to cfengine by providing a way to configure large amounts of system through a central repository.

    At that point, large architectures also needed command and control interfaces. As noted before, most of these were implemented as perl or shell scripts in SSH loops.

    On the monitoring and graphing front, not much was happening, nagios and cacti were almost ubiquitous, while some tools such as ganglia and collectd were making a bit of progress.

Where are we now ?

At some point recently, our applications started doing more. While for a long time the canonical dynamic web application was a busy forum, more complex sites started appearing everywhere. We were not building and operating sites anymore but applications. And while, with the help of haproxy, varnish and the like, the frontend was mostly a settled affair, complex backends demanded more work.

At the same time the advent of social enabled applications demanded much more insight into the habits of users in applications and thorough analytics.

New tools emerged to help us along the way:

  • In memory key value caches such as memcached and redis
  • Fast elastic key value stores such as cassandra
  • Distributed computing frameworks such as hadoop
  • And of course on demand virtualized instances, aka: The Cloud
  1. Some daemons only provide small functionality

    The main difference with the new stack found in backend systems is that the software components it runs are not useful on their own anymore.

    Software such as zookeeper, kafka and rabbitmq serves no purpose other than to provide supporting services in applications, and its functionality is almost only available as libraries to be used in distributed application code.

  2. Infrastructure as code is not infrastructure in code !

    What we missed along the way it seems is that even though our applications now span multiple machines and daemons provide a subset of functionality, most tools still reason with the machine as the top level abstraction.

    puppet for instance is meant to configure nodes, not clusters, and makes dependencies very hard to manage. A perfect example is the complications involved in setting up configurations that depend on other machines.

    Monitoring and graphing, except for ganglia, have long suffered from the same problem.

The new tools we need

We need to kill local configurations, plain and simple. With a simple enough library to interact with distant nodes, starting and stopping services, configuration can happen in a single place; instead of relying on a repository-based configuration manager, configuration should happen from inside applications and not be an external process.

If this happens in a library, command & control must also be added to the mix, with centralized and tagged logging, reporting and metrics.

This is going to take some time, because it is a huge shift in the way we write software and design applications. Today, configuration management is a very complex stack of workarounds for non standardized interactions with local package management, service control and software configuration.

Today, dynamically configuring bind, haproxy and nginx, installing a package on Debian or OpenBSD, restarting a service: all these very simple tasks, which we automate and operate from a central repository, force us to build complex abstractions. When using puppet, chef or pallet, we write complex templates because software was meant to be configured by humans.

The same goes for checking the output of running arbitrary scripts on machines.

  1. Where we’ll be tomorrow

    With the ease PaaS solutions bring to developers, and offers such as the ones from VMWare and open initiatives such as OpenStack, it seems as though virtualized environments will very soon be found everywhere, even in private companies which will deploy such environments on their own hardware.

    I would not bet on it happening but a terse input and output format for system tools and daemons would go a long way in ensuring easy and fast interaction with configuration management and command and control software.

    While it was a mistake to try to push XML as a terse format replacing configuration files to interact with single machines, a terse format is needed to interact with many machines providing the same service, or to run many tasks in parallel - even though, admittedly, tools such as capistrano or mcollective do a good job at running things and providing sensible output.

  2. The future is now !

    Some projects are leading the way in this new orientation; 2011, as I’ve seen it called, will be the year of the time series boom. For package management and logging, Jordan Sissel released such great tools as logstash and fpm. For easy graphing and deployment, etsy released great tools, amongst which statsd.

    As for bridging the gap between provisioning, configuration management, command and control and deploys, I think two tools, both based on jclouds, are going in the right direction:

    • Whirr: lets you start a cluster through code, providing recipes for standard deploys (zookeeper, hadoop)

    • pallet: lets you describe your infrastructure as code and interact with it in your own code. pallet’s phase approach to cluster configuration provides a smooth dependency framework which allows easy description of dependencies between configuration across different clusters of machines.

  3. Who’s getting left out ?

    One area where things seem to move much slower is network device configuration. For people running open source based load-balancers and firewalls, things are looking a bit nicer, but the switch landscape is a mess. As tools mostly geared towards public cloud services make their way into private corporate environments, hopefully they’ll also get some of the programmable


          The history of the World Wide Web   

History of the World Wide Web. This NeXTcube, used by Berners-Lee at CERN, became the first web server. The idea underlying the Web goes back to Vannevar Bush's proposal in the 1940s for a similar system: broadly speaking, a mesh of distributed information with an operational interface that allowed access both to it and to other relevant articles determined by keys. That project was never built, remaining on the theoretical plane under the name MEMEX. It was in the 1950s that Ted Nelson made the first reference to a hypertext system, in which information is linked freely. But it was not until 1980, with the technological means in place for distributing information over computer networks, that Tim Berners-Lee proposed ENQUIRE to CERN (a reference to Enquire Within Upon Everything), in which the first practical notions of the Web took shape.
In March 1989 Tim Berners-Lee, by then on the staff of CERN's DD division, wrote the proposal,[2] which referenced ENQUIRE and described a more elaborate information management system. There was no official christening or coining of the term web in those initial references; the term mesh was used instead. Nevertheless, the World Wide Web had been born. With the help of Robert Cailliau, a more formal proposal for the World Wide Web[3] was published on November 12, 1990.
Berners-Lee used a NeXTcube as the world's first web server and also wrote the first web browser, WorldWideWeb, in 1990. By Christmas of that year, Berners-Lee had created all the tools needed for a working web:[4] the first web browser (which was also a web editor), the first web server and the first web pages,[5] which at the same time described the project itself.
On August 6, 1991, he posted a short summary of the World Wide Web project to the newsgroup[6] alt.hypertext. This date also marks the debut of the web as a publicly available service on the Internet.
The underlying, crucial concept of hypertext has its origins in old projects from the 1960s, such as Ted Nelson's Project Xanadu and Douglas Engelbart's oN-Line System (NLS). Both Nelson and Engelbart were in turn inspired by Vannevar Bush's aforementioned microfilm-based system, the "memex".
Berners-Lee's great breakthrough was to join hypertext and the Internet. In his book Weaving the Web, he explains that he had repeatedly suggested to members of both technical communities that a union between the two technologies was possible, but when nobody took up his invitation, he finally decided to tackle the project himself. In the process, he developed a system of globally unique identifiers for web resources: the Uniform Resource Identifier.
The World Wide Web differed in several ways from the other hypertext systems available at the time:
The WWW required only unidirectional links rather than bidirectional ones. This made it possible for a person to link to another resource without any action by the owner of that resource. It also significantly reduced the difficulty of implementing web servers and browsers (compared with earlier systems), but in exchange introduced the chronic problem of broken links.
Unlike predecessors such as HyperCard, the World Wide Web was non-proprietary, making it possible to develop servers and clients independently and to add extensions without licensing restrictions.
On April 30, 1993, CERN announced[7] that the web would be free for everyone, with no fees of any kind. ViolaWWW, a rather popular browser in the early days of the web, was based on the concept of HyperCard, the hypertext software tool for the Mac. However, researchers generally agree that the turning point for the World Wide Web came with the introduction[8] of the Mosaic web browser[9] in 1993, a graphical browser developed by a team at the NCSA at the University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen. Funding for Mosaic came from the High-Performance Computing and Communications Initiative, a funding program created by the High Performance Computing and Communication Act of 1991, also known as the Gore Bill after its sponsor, then-Senator Al Gore.[10] Before the launch of Mosaic, web pages did not integrate a rich graphical environment and the web was less popular than other, older protocols already in use on the Internet, such as Gopher and WAIS. Mosaic's graphical user interface allowed the WWW to become the most popular Internet protocol at a dazzling pace.

          CDAC Recruitment 2017–2018 cdac.in Project Engineers/Doctor/Nursing Jobs   

CDAC Recruitment: the Centre for Development of Advanced Computing has released its latest employment notice, referred to as CDAC Recruitment. The organization is planning to appoint talented and self-motivated aspirants to fill Project Engineer/Doctor/Nursing posts. Contenders who are looking for jobs in the government sector may apply for CDAC Recruitment ...



          SON-TWIN10G-TB2 Twin 10G Thunderbolt 2 to Dual Port Copper 10Gb Ethernet Adaptor   

Sonnet SON-TWIN10G-TB2 Twin 10G Thunderbolt 2 to Dual-Port 10 Gigabit Ethernet Adapter

Lightning Fast 10GBase-T Connectivity Over Thunderbolt

With increasing demands for greater data transfer speeds and more bandwidth over shared networks, and with specialized applications such as HD video editing using high-performance shared storage systems, the deployment of 10 Gigabit Ethernet (10GbE) networking has skyrocketed. This high-speed wired networking standard offers ten times the performance of Gigabit Ethernet, the most common wired network connection included with most computers today, and yet most computers aren't equipped to get you connected. If you have a computer with a Thunderbolt port (Thunderbolt 2 port required for Windows compatibility), Sonnet has a simple solution: the Twin 10G. This Thunderbolt-certified for Mac® and Windows 10GBase-T adapter is a powerful and cost-effective way to add 10GbE connectivity to your setup.

The Twin 10G is equipped with two RJ45 and two Thunderbolt 2 ports. Using RJ45 connectors, the Twin 10G is able to connect to 10 Gigabit infrastructure via inexpensive CAT-6 or CAT-6A copper cabling at distances up to 55 or 100 meters, respectively. Through one of its two Thunderbolt 2 ports, the Twin 10G connects to your computer through an included Thunderbolt cable; the second Thunderbolt port supports daisy chaining up to six devices to a single port on the host computer.

Sleek, Quiet, and Compact

The Twin 10G is a striking example of form following function: this adapter's attractive and rugged aluminium housing is engineered to effectively dissipate heat. Together with a temperature-tuned, ultra-quiet fan to quietly cool the components inside, this design enables the Twin 10G to be used comfortably in noise-sensitive environments. Measuring just 4.9" wide x 6.3" long x 2.7" high, the Twin 10G takes up little space.

Simple Configuration, Advanced Features

The Twin 10G is easy to set up. To keep things simple, you configure its settings through the OS X® Network control panel or Windows Device Manager. When you configure the device, you can also activate its advanced features: link aggregation (teaming) of its two 10GbE ports increases throughput beyond what a single connection can sustain; transparent failover between 10GbE ports keeps your computer connected in case a single cable is disconnected or one of the ports fails; jumbo packet (jumbo frame) support can increase throughput and reduce CPU utilization; and Wake-on-LAN support enables remotely waking your computer when you need it, and not wasting energy when you don't. This Sonnet solution is perfect for high-performance computing where low latency, high bandwidth, and low CPU overhead are required. Its increased throughput performance and low host-CPU utilization are achieved with stateless offloads, allowing your computer to perform better while large file transfers or high I/O operations take place.

New Mac Pro Mate

The Twin 10G incorporates mounting points to support firm attachment to our xMac™ Pro Server and RackMac™ Pro rackmount enclosures for the new Mac Pro. The adapter is secured to the inside of the enclosure, thereby enabling 10GbE connectivity for the Mac Pro via a separate Thunderbolt 2 port for maximum bandwidth without taking up a PCIe slot or requiring extra rack space. 
IN THE BOX:
  • Twin 10G Thunderbolt 2 to Dual-Port 10 Gigabit Ethernet Adapter
  • 0.5m Thunderbolt cable
  • Universal power adapter
  • Power cord
  • Documentation


          Finisar, Applied Opto Join Davidson’s Fiber-Optic Pantheon   
D.A. Davidson's Mark Kelleher returns with three more optical picks, Finisar, Applied Optoelectronics and NeoPhotonics, benefitting from things such as the build-out of cloud computing and "3-D sensing" in Apple's iPhone.
          Jonny Sun and Jonathan Zittrain on Joke Tweets, Memes, and Being an Alien Online   
Join Jonny Sun, the author of the popular Twitter account @jonnysun, for a conversation in celebration of his new book “everyone’s a aliebn when ur a aliebn too” by jomny sun (the aliebn). This debut illustrated book is the unforgettable story of a lost, lonely, and confused alien finding friendship, acceptance, and love among the creatures of Earth. Constructed from many of Jonny’s re-contextualized tweets, the book is also a creative thesis on the narrative formats of social media, and a defense of the humanity-fulfilling aspects of social media born out of his experiences on Twitter.

About Jonny

Jonathan Sun is the author behind @jonnysun. When he isn’t tweeting, he is an architect, designer, engineer, artist, playwright and comedy writer. His work across multiple disciplines broadly addresses narratives of human experience. As a playwright, Jonathan’s work has been performed at the Yale School of Drama, and in Toronto at Hart House Theater and Factory Theater. As an artist and illustrator, his work has been exhibited at MIT, Yale, New Haven ArtSpace, and the University of Toronto. His work has appeared on NPR, Buzzfeed, Playboy, GQ, and McSweeney’s. In his other life, he is a doctoral student at MIT and a Berkman Klein fellow at Harvard.

About Jonathan

Jonathan Zittrain is the George Bemis Professor of International Law at Harvard Law School and the Harvard Kennedy School of Government, Professor of Computer Science at the Harvard School of Engineering and Applied Sciences, Vice Dean for Library and Information Resources at the Harvard Law School Library, and co-founder of the Berkman Klein Center for Internet & Society. His research interests include battles for control of digital property and content, cryptography, electronic privacy, the roles of intermediaries within Internet architecture, human computing, and the useful and unobtrusive deployment of technology in education.

For more on this discussion visit: https://cyber.harvard.edu/events/2017/06/Sun

          Beautiful Code   

Beautiful Code Edited By Andy Oram & Greg Wilson

2.5/5

Beautiful Code is a collection of essays from programmers working in a number of different areas, from language design, to operating systems, to enterprise application development. If you are a programmer, chances are good that at least a couple of essays in here will appeal to you.

First, the good. Some essays are great. Yukihiro Matsumoto, the creator of Ruby, has arguably the best (and shortest) essay in the collection, concentrating on what makes code beautiful and how those factors influenced his design of Ruby. Elliotte Rusty Harold’s contribution on lessons learned creating XML verifiers is also a standout. He goes through several implementations, learning from each to improve the performance of the next, all while maintaining correctness across all cases. Charles Petzold’s description of on-the-fly code generation for image processing is dense, but interesting. As a sometimes Python programmer, Andrew Kuchling’s discussion of the trade-offs in the design of Python’s dictionary implementation was much appreciated and gives insights into performance optimizations you can make with your application if needed.

Unfortunately there is also a fair amount of bad. One issue is that the book is simply too long. The editors mention they got a more enthusiastic response from contributors than they expected. They may have felt compelled to include all or most of the responses. But beyond the length, some of the essays are just bad. For example, Douglas Crockford’s “Top Down Operator Precedence” essay dives right in without actually explaining the algorithm. It is explained piecemeal throughout the code, but you never get a good feel for exactly what is going on. Other contributors have the view that whatever skills they need to do their work are essential to being a true software engineer. For example, Bryan Cantrill writes that postmortem debugging with core dumps “is an essential part of our craft - a skill that every serious software engineer should develop.” Quite honestly, only a very narrow niche of software engineers is serious then. Other authors take similarly narrow views at times, whether it is the author of math libraries feeling that everybody needs to care about high performance computing or C programmers feeling that every real programmer should implement their own dynamic dispatch code in C at some point in their careers.

Beautiful Code is worth the read, but don’t hesitate to skim over the essays that don’t interest you. I probably would have enjoyed it more if I didn’t force myself through most of them. (Also see Jeff Atwood’s review for a good explanation of why the title of the book is misleading.)


          Java/Microservices - (Ipswich)   
Hello, Principal Java/Microservices Software Engineers

Duration: 6+ months contract to hire
Location: Ipswich, MA

Requirements:
  • Minimum 10 years of experience in specification, design, development and maintenance of enterprise-scale mission-critical distributed systems with demanding non-functional requirements
  • Bachelor's Degree in Computer Science, Computer Information Systems or related field of study; Master's Degree preferred
  • 8+ years of experience with SOA concepts, including data services and canonical models
  • 8+ years of experience working with relational databases
  • 8+ years of experience building complex server-side solutions in Java and/or C#
  • 8+ years of experience in the software development lifecycle
  • 3+ years of experience building complex solutions utilizing integration frameworks and ESB
  • Demonstrated strong knowledge and experience applying enterprise patterns to solving business problems

Preferred Qualifications:
  • Leadership experience
  • Strong abilities troubleshooting and tuning distributed environments processing high volumes of transactions
  • Familiarity with model-driven architecture
  • Familiarity with BPM technologies
  • Experience with any of the following technologies: Oracle, MySQL, SQL Server, Linux, Windows, NFS, Netapp, Rest/SOAP, ETL, XML technologies
  • In-depth technical understanding of systems, databases, networking, and computing environments
  • Familiarity with NLP and search technologies, AWS cloud-based technologies, Content Management systems, the publishing domain, and EA frameworks such as TOGAF and Zachman
  • 2+ years of experience building complex Big Data solutions
  • Excellent verbal, written and presentation skills with ability to communicate complex technical concepts to technical and non-technical professionals

Regards,
Pallavi
781-791-3115 (468)

Java, Microservices, cloud, AWS, architect
          Software Development Engineer in Test - Folio - (Ipswich)   
Skills/Requirements:
  • 5+ yrs Java & Object Oriented Design/Programming
  • Implementation of 1 or more production RESTful interfaces in a microservices model
  • 2+ yrs product implementation experience with databases, both SQL and NoSQL (PostgreSQL specifically is a plus)
  • 2+ yrs product implementation experience in a cloud computing environment (AWS specifically is a plus)
  • 3+ yrs experience using Agile and/or SAFe

Preferred Qualifications:
  • CI/CD using (e.g.) Jenkins, Maven, Gradle
  • SCM - Git/GitHub
  • Test Driven Development (TDD) and Automated Unit Testing
  • Developing automated integration and acceptance tests
  • Automating UI testing (e.g. Selenium, Sauce Labs)
  • Developing performance and load tests at high scale (e.g. JMeter)
  • General HTTP knowledge including familiarity with cURL or similar tools
  • Linux: general knowledge, shell scripting (RedHat/Amazon Linux specifically is a plus)
  • Virtualization: Docker, Vagrant, etc.
  • Open Source Software: general knowledge of the development model, experience contributing to it
  • RAML, JSON, XML
  • JavaScript and related tools/frameworks, both client-side and server-side: React, Node.js, webpack, npm/yarn, etc.
  • Security-related experience: SSO, OAuth, SAML, LDAP, etc.
  • Logging/Monitoring/Alerting/Analytics: SumoLogic, Datadog, collectd, SNMP, JMX, etc.

Why the North Shore of Boston and EBSCO are great places to live and work!

Here at EBSCO we will provide relocation assistance to the best and brightest people. We are 45 minutes outside of Boston, just minutes from the beach in Ipswich, MA. Ipswich is part of the North Shore and contains a wide variety of locally owned shops, restaurants, and farms.
          Mathematics in Computing: An Accessible Guide to Historical, Foundational and Application Contexts   

          Byte Into IT - 28 June 2017   

This week on Byte Into IT we have Vanessa and Jo in the studio to talk about all things technology, computing and gaming.

Our interview for this week is with Sarah Moran from Girl Geek Academy, talking about SheHacks, an upcoming all female 'hackathon'.


          Amazon Web Services: My EBS is stuck!   
Many of us were affected by the Amazon EBS issues at the end of October 2012. If you had EC2 instances in us-east-1, you were likely among those affected.
          Mountain Lion Roars….and Leaves Some Blood   
One issue I ran into when installing Virtualbox revolved around Apple’s software installation security in Mountain Lion
          TNR Global Launches Search Application for Museum Collections   
We use open source search technology that works with most museum software systems and databases including the popular museum software product PastPerfect.
          The Future of Search Doesn’t Come in a Box: The Google Mini Says Goodbye   
The future of search doesn’t come in a box. Last week, while many were on vacation, Google abandoned the smallest member of its Search Appliance family, the Google Mini. The small blue piece of external hardware was used for smaller, stable (some might say stagnant) data sets with slow and steady query rates. [...]
          Bishop: Makes Your Web Service Shiny   
The idea is to provide a relatively small library that will make your life easier and hopefully more pleasant by making it straightforward to provide a consistent web service API that obeys HTTP semantics.
          New Systems and DevOps Blog   
has lots of new approaches to discuss in terms of systems, cloud computing, DevOps, System Architecture, and how developers and systems staff need to communicate well and work together
          Fast to Lucene Solr: Choosing a Document Processing Pipeline for Solr   
If we want to leverage the power that Solr offers, but we need support for a more robust document processing framework, what are our options?
          Elasticsearch Evaluation White Paper Released: Promising for Big Data   
We believe that Elasticsearch is a product that everyone working in the field of big data will want to take a look at.
          For Many Companies, Migration to a New Search Engine is Inevitable   
"It's basically a road map for companies looking at options for migration, and we outline Solr as a very good option"
          Mobile Search: Good UX means fewer touches, simple design   
Mobile search must be designed for a minimum number of touches before users arrive at the end result. If it takes more than 2-3 touches, the user will look elsewhere for answers.
          On the Depth-Robustness and Cumulative Pebbling Cost of Argon2i, by Jeremiah Blocki and Samson Zhou   
Argon2i is a data-independent memory-hard function that won the password hashing competition. The password hashing algorithm has already been incorporated into several open source crypto libraries such as libsodium. In this paper we analyze the cumulative memory cost of computing Argon2i. On the positive side we provide a lower bound for Argon2i. On the negative side we exhibit an improved attack against Argon2i which demonstrates that our lower bound is nearly tight. In particular, we show that (1) An Argon2i DAG is $\left(e,O\left(n^3/e^3\right)\right)$-reducible. (2) The cumulative pebbling cost for Argon2i is at most $O\left(n^{1.768}\right)$. This improves upon the previous best upper bound of $O\left(n^{1.8}\right)$ [Alwen and Blocki, EURO S&P 2017]. (3) An Argon2i DAG is $\left(e,\tilde{\Omega}\left(n^3/e^3\right)\right)$-depth robust. By contrast, the analysis of [Alwen et al., EUROCRYPT 2017] only established that Argon2i was $\left(e,\tilde{\Omega}\left(n^2/e^2\right)\right)$-depth robust. (4) The cumulative pebbling complexity of Argon2i is at least $\tilde{\Omega}\left( n^{1.75}\right)$. This improves on the previous best bound of $\Omega\left( n^{1.66}\right)$ [Alwen et al., EUROCRYPT 2017] and demonstrates that Argon2i has higher cumulative memory cost than competing proposals such as Catena or Balloon Hashing. We also show that Argon2i has high {\em fractional} depth-robustness which strongly suggests that data-dependent modes of Argon2 are resistant to space-time tradeoff attacks.
          Privacy-Preserving Distributed Linear Regression on High-Dimensional Data, by Adrià Gascón and Phillipp Schoppmann and Borja Balle and Mariana Raykova and Jack Doerner and Samee Zahur and David Evans   
We propose privacy-preserving protocols for computing linear regression models, in the setting where the training dataset is vertically distributed among several parties. Our main contribution is a hybrid multi-party computation protocol that combines Yao's garbled circuits with tailored protocols for computing inner products. Like many machine learning tasks, building a linear regression model involves solving a system of linear equations. We conduct a comprehensive evaluation and comparison of different techniques for securely performing this task, including a new Conjugate Gradient Descent (CGD) algorithm. This algorithm is suitable for secure computation because it uses an efficient fixed-point representation of real numbers while maintaining accuracy and convergence rates comparable to what can be obtained with a classical solution using floating point numbers. Our technique improves on Nikolaenko et al.'s method for privacy-preserving ridge regression (S&P 2013), and can be used as a building block in other analyses. We implement a complete system and demonstrate that our approach is highly scalable, solving data analysis problems with one million records and one hundred features in less than one hour of total running time.
          Moving to the Cloud?   

Cloud computing is a fundamentally new way of providing technology infrastructure and other related components and services for your business. It has been the driving force in transforming companies and industries around the world. Here are the top business drivers…



          Discover CERN by drone   
An aerial visit of CERN based on drone shots of iconic sites of the laboratory, taken by drone competition pilot Chad Nowak, CERN drone pilot Mike Struik, videographer Christoph M. Madsen and photographer Maximilien Brice. Locations include: the Globe of Science and Innovation, the ATLAS site at LHC Point 1, the CERN Computing Centre, experimental halls 180 and SMI2, the PS and PS Booster area, LINAC 4, the CMS site at LHC P5, and the ALICE site at LHC P2.
          Envisioning Middlebury: Talk by Gardner Campbell on Friday   
We are pleased to announce the second in our series of speakers for Envisioning Middlebury, our yearlong conversation. Dr. Gardner Campbell serves as associate professor of English and special assistant to the provost at Virginia Commonwealth University. In his talk, he will discuss how the paradigm of “romantic computing—the experience of wonders [and] uncanny encounters” […]
          Enabling more stable and scalable quantum computing   
Researchers have discovered a new topological material which may enable fault-tolerant quantum computing.
          2017 Arkansas Times Academic All-Star Team   
Meet the best and brightest high school students in the state.

The class of 2017, our 23rd, is made up of athletes, coders, budding politicians and brain experts. There's rarely a B on the transcripts of these students — in not just this, their senior year, but in any year of their high school careers.

Back in 1995, we created the Academic All-Star Team to honor what we then called "the silent majority — the kids who go to school, do their homework (most of it, anyway), graduate and go on to be contributing members of society." Too often, we argued then, all Arkansans heard about young people was how poorly they were faring. Or, when students did get positive attention, it came for athletic achievement.

As you read profiles of this year's All-Stars, it should be abundantly clear that good things are happening in Arkansas schools and there are many academic achievers who deserve to be celebrated. You should get a good idea, as well, of how these stellar students are busy outside school, with extracurricular activities, volunteer work, mission activities and more.

They'll be honored this week at a ceremony at the University of Arkansas at Little Rock with plaques and $250 cash awards.

Many college plans listed here are not set in stone, as students await information on scholarships and acceptances.

CAROLINE COPLIN-CHUDY
Age: 17
Hometown: North Little Rock
High School: Mount St. Mary Academy
Parents: (guardian) Dennis Chudy
College plans: Duke University

Caroline Coplin-Chudy has a 4.4 grade point average — high enough to rank second in her class at Mount St. Mary Academy — and lost her mother to leukemia during her sophomore year, something she told us came to be a source of inspiration and drive during her academic development. "It was a big adjustment. After my mom passed away, it was just my stepdad. It's a weird realization coming to the idea that both of your parents are gone, and it's just you. ... I still think of her every single day. She motivates me to do well in everything, because my whole life I wanted to make her proud." Caroline is president of Mount St. Mary's Investment Club and of SADD (Students Against Destructive Decisions). She's also been a regular volunteer for several years at the Little Rock Compassion Center, whose recovery branch provides meals and health resources to people suffering from addiction. Caroline said she found healing from her own grief in the friendships she forged there. As the recipient of a Questbridge scholarship, described by Caroline's guidance counselor and nominator Amy Perkins as a program where lower-income students qualify for tuition to schools with which they "match" via an early decision process, Caroline will attend Duke University on a full scholarship. "I'm going to study biology and psych, with a minor in Spanish. My plan is to work at the Duke Center for Addiction [Science and Technology] helping people with drug addictions overcome that sort of thing. It's something that I've had experience with, watching my family go through things like that."

AXEL NTAMATUNGIRO
Age: 17
Hometown: Pine Bluff
High School: Subiaco Academy
Parents: Sixte Ntamatungiro and Sylvana Niciteretse
College plans: Rice University, neuroscience

Axel Ntamatungiro grew up among books and maps dispersed throughout his home that "paint[ed] the walls with nuanced shades of knowledge." It shows. Not often can a high school senior explain, as Axel does, his love for studying the brain so easily. "Neuroscience is basically a neuron turning on and off," he said. "The fact that you have billions of these combinations that lead to consciousness, that's unbelievable." To continue learning about the mind, Axel is headed to Rice University on a full ride as a QuestBridge scholar. Maybe medical school or graduate school after that. Axel said his parents taught him a "humble intellectualism" that helped him understand "the irrationality of life." They always told him: "Work hard, but you need to realize you don't always get what you deserve." And life has been, at times, irrational and difficult for his family. Axel was the only member of his family born in the United States — in Little Rock in 1999. The rest migrated from Burundi in the early 1990s. They stayed here as the Rwandan genocide inflicted incredible damage in the area. That past was never hidden from Axel. "Instead of avoiding my questions, my parents level-headedly answered [them], telling me about Belgian colonialism, Hutu-Tutsi tension and the systematic poverty afflicting Burundi," he said. Maybe that is why Axel has never been afraid to ask big questions. He said it also helped to have a diverse group of friends who taught him new things. At his cafeteria table for lunch are kids from all over: Nigeria, Fort Smith, Japan, Bentonville and Russia. Everyone's small stories add to a global perspective, something bigger from something small, kind of like those neurons.

JADE DESPAIN
Age: 18
Hometown: Springdale
High School: Haas Hall Academy (Fayetteville)
Parents: Brenan and Tiffany DeSpain
College plans: U.S. Naval Academy, nuclear engineering

For Jade DeSpain, the question, "Where's your hometown?" isn't necessarily as straightforward as it seems. The National Merit semifinalist, swimming star and Quiz Bowler spent much of her childhood in Beijing, where her parents — both fluent in Mandarin — taught her Chinese concurrently with English (and where, she notes, she acquired an "incredible prowess with chopsticks.") "We've moved around so much that I don't really have a 'hometown,' but Springdale is the closest I've ever gotten," she said. She's made her impact there, too, tutoring students free of charge through her volunteer work with the M&N Augustine Foundation and putting in time at the Arkansas Council for the Blind and the Springdale Animal Shelter. Jade is ranked second in her class, and her high school transcript is full of aced courses in trigonometry, physics and calculus. She's also the co-founder of Haas Hall Academy's coding club, so a career in nuclear energy development — Jade's field of choice — isn't just an aspiration; it's the plan. "I have a deep appreciation for nature," she told us, citing Devil's Den State Park as a spot to which she feels closely connected, and stressing the importance of preserving natural spaces and developing more long-term options for sustainable energy. On Christmas Day 2016, Jade checked her email to find that she'd attained something she'd wanted as early as age 12: acceptance to the U.S. Naval Academy. There, she'll major in nuclear engineering and complete her five mandatory post-Academy years in the Navy, after which she hopes to acquire a Ph.D. in the field.

AVERY ELLIOTT
Age: 18
Hometown: Cabot
High school: Cabot High School
Parents: Dan and Melissa Elliott
College plans: University of Arkansas, medicine

Though many of our All-Stars seem destined from birth for academic greatness, there is the occasional inspiring All-Star who has had to overcome seemingly insurmountable odds. One of those is Cabot High School's Avery Elliott, who was born with nystagmus, a condition that causes involuntary eye movements that can make it hard for sufferers to concentrate and learn. Though it's hard to imagine it now, when she was in elementary school Avery found herself falling further and further behind her classmates in reading because of her condition. "That was difficult," she said. "I was behind schedule until about third or fourth grade. I would have to go home and really work with my parents to keep up with the rest of the class." Even though she struggled early on, Avery said that, in a way, the nystagmus contributed to her success and gave her a direction to follow. "I had to learn to really study even outside of school," she said. "I learned some very good study habits. But I think it also really affected where I wanted to go as far as my career. ... I really learned that a medical team can not only dispense medicine, but can really affect someone's life." A National Merit finalist who has volunteered extensively with Special Olympics and already completed 43 hours of college-level coursework, Avery has been awarded the University of Arkansas Fellowship. She plans to study medicine at UAMS after completing her undergrad degree, then practice in Arkansas. That goal has always pushed her to succeed academically. "I wanted to go into the medical field from an early age," she said, "so I knew starting out in high school that I needed to make very good grades in order to get where I needed to. I had to really learn the material, rather than just trying to ace a test."

JARED GILLIAM
Age: 17
Hometown: Cabot
High school: Cabot High School
Parents: Dan and LeAnne Gilliam
College plans: University of Arkansas, engineering

When most young people say they want to change the world, it's easy to believe that's just pie-in-the-sky thinking by someone who hasn't yet been through the Academy of Hard Knocks. When Jared Gilliam says he wants to change the world, however, there's a good chance he might actually pull it off. Jared even has a plan: He'll change the world through engineering. A National Merit finalist and AP scholar with a GPA of 4.18 and a perfect score of 36 on the ACT, Jared is well placed to do just that. A musician who plays percussion with the Cabot High Marching Band, Jared said his favorite subject in school is math. "I think I'm mostly interested in engineering because I've always been sort of a problem-solver," he said. "I've enjoyed math and science, working through things and finding solutions to everyday problems. This year, I've been in robotics, so we've spent time working on a robot to perform various tasks. I've enjoyed that a lot. I think engineering is where my ability would best be used." He'll attend the University of Arkansas, which has offered him the Honors College Fellowship. He said the drive to excel academically has always been a part of his life. "I've grown up being encouraged to do well, and I guess I have my parents to thank for that and all my teachers," he said. "I think knowing that I have the ability to do all of this, I feel compelled to do what I can to make a difference. I think life would be pretty boring if I didn't go out there and do all the things I do. I don't think I could settle for not being successful."

BENJAMIN KEATING
Age: 18
Hometown: Fort Smith
High School: Southside High School
Parents: Drs. Bill and Janice Keating
College plans: Undecided

If you were looking for a ringing endorsement of Ben Keating's character, you'd need to look no further than Amy Slater, the guidance counselor who nominated him for our Academic All-Stars roster and who said of Ben, "He is all the things I hope my son turns out to be. ... He really thinks about things, and he practices the trumpet and piano for hours a day. It's crazy, his dedication." Ben probably had something to prove here; he admits to some skepticism on the part of his mother when he announced he'd be pursuing a career in music. He's certainly proved his mettle; Ben is band president at Southside, was a principal trumpet for the 2017 National Youth Honor Orchestra, first chair for Southside's Wind Symphony and for the All-State Jazz Band and was ranked in the top-tier bands for All-State Band and All-State Orchestra each year from 2014-16. The accolades go on and on: Ben has received a Young Artist Award from the International Trumpet Guild, a Gold Medal from the National Piano Guild and superior ratings from the National Federation of Music Clubs competitions for over a decade. He plays for the Arkansas Symphony Youth Orchestra and as a volunteer musician for the Fort Smith Community Band. Ben is still deciding where to attend college, but wherever he goes, he hopes to continue playing with an orchestra. Eventually, he wants to teach at the university level. "Ultimately," he wrote, "I want to use my passion to unite people of all different races, backgrounds and cultures. In today's society that is politically and culturally divided, it is more important than ever to share the universal language of music."

KATHERINE HAHN
Age: 17
Hometown: Hindsville
High School: Huntsville High School
Parents: Shannon Hahn
College plans: Massachusetts Institute of Technology, biochemical engineering

Katherine Hahn is ranked first in her class at Huntsville High School, which she attends because her hometown of Hindsville is too small to support its own school system. The population of Hindsville is "about 75 people," she told us. At Huntsville High, Katherine plays bass drum in the marching band and marimba/xylophone in the concert band and runs with the Huntsville cross-country team. Her real passion, though, is science. "I think I've always wanted to go to a college that was science-based and research-based," she told us. Her high school principal, Roxanne Enix, noted her own surprise when Katherine announced that she'd take 10 credits her senior year, instead of the recommended eight. "I thought she had lost her mind," Enix stated. Those credits, over half of which are in AP classes, are what Katherine hopes have prepared her for the rigorous workload at MIT. Aiming for a career in pharmaceutical development, Katherine plans to study biochemical engineering, something she said resonated personally with her as a result of her mother's struggle with skin cancer. "Biology helps me understand why medicine does the things it does," Katherine told us. "Whenever I first started out, I wanted to do environmental stuff," she said, but turned her attention to drug delivery systems after observing so many friends and loved ones battling cancer. "I want to help stop people from being scared of losing people," she explained. Katherine, a native of Tahlequah, Okla., who moved to Arkansas around fifth grade, has served on the Madison County Health Coalition as Youth Leader and was named Student of the Year in 2017 by the Huntsville Chamber of Commerce and Huntsville High School.

GEORGIANA BURNSIDE
Age: 18
Hometown: Little Rock
High school: Little Rock Christian Academy
Parents: Bob and Ann Burnside
College plans: Stanford University, biology and public policy

When this reporter mentioned to friends at UAMS that she'd just spoken to an amazingly poised, optimistic and intelligent young woman with a spinal cord injury, they said in unison, "You mean Georgiana Burnside." Her reputation as a teenager who at 16 was paralyzed from the waist down in a snow skiing accident but who considers the event a "blessing" no doubt goes further than UAMS, all the way to Denver's Craig Hospital, where she spent "the most memorable two months in my life," she said, and where she returns to continue her rehabilitation. What is a spinal cord injury? She answers that it is a) a life changed in a split second, b) finding out that a bad attitude is the true disability, c) a time to show off wheelchair tricks, and d) spontaneous moments of unfortunate incontinence. In her essay for the Arkansas Times, Georgiana writes, "my physical brokenness has developed wholeness in my heart about the capacity life holds for individuals regardless of their disabilities." In a phone interview, Georgiana, once a figure skater, talked about her work with Easter Seals, fundraisers for Craig Hospital, and giving talks and testimony about her faith. Georgiana has regained the ability to walk with hiking sticks and leg braces, thanks to the strength in her quads. And, thanks to support from the High Fives Foundation in Truckee, Calif., which sponsors athletes with injuries and which has paid for some of her rehabilitation, Georgiana returned to the slopes over spring break, skiing upright with the aid of long forearm equipment. At Stanford, she'll study to be a doctor, with a goal to return to Craig Hospital as a physician who'll treat other injured youths who, though they may have, like Georgiana, at first believed their life was over, will learn they have "a unique role ... enabling the advancement of society."

MITCHELL HARVEY
Age: 17
Hometown: North Little Rock
High School: North Little Rock High School
Parents: David and Susan Harvey
College plans: Likely Mississippi State University, chemical engineering

Mitchell Harvey is a big fan of the periodic table. "The elements are amazing little things," he wrote in his Academic All-Star essay. "They make up everything, yet we hardly see them in their pure form in everyday life." Mitchell decided they needed more exposure, so he started collecting examples of the elements and taking them to school for his peers and teachers to see. He extracted helium from an abandoned tank on the side of the road. He found zinc in wheel weights, grew crystals of copper with electrolysis and made bromine, which he describes as "a blood-red liquid that fumes profusely," from a "crude" homemade distillation setup and pool chemicals. Though you can buy sodium readily, Mitchell made his by melting drain cleaner (sodium hydroxide) with a blowtorch and then passing a current through it, separating the mixture into sodium metal, oxygen and water. His parents were OK with the procedure, he says, because he wore a Tyvek suit, three pairs of gloves, safety goggles and a face shield. While on a college visit in California last summer, Mitchell toured Griffith Observatory in Los Angeles and was impressed by the large periodic table display exhibit there. So he decided to build one for North Little Rock High. He got money from the school's alumni group, the Wildcat Foundation, to pay for the supplies necessary to construct the 9-foot-by-6-foot display. He hopes to have it completed in the next two weeks and fill it with examples of elements he has collected, though he may need additional funding to pay for other elements. No. 1 in a class of 687, Mitchell scored a perfect 36 on the ACT. He's also an Eagle Scout, and led a project to plant 800 native hardwood seedlings at Toltec Mounds Archeological State Park. After college, Mitchell said, he might start his own waste remediation business. "The business model I would be going for would be taking some byproduct that's hazardous and turning it into something useful."

CARSON MOLDER
Age: 18
Hometown: Mabelvale
High school: Bryant High School
Parents: Kevin and Ruby Molder
College plans: University of Arkansas

Not everybody plays the mellophone and likes to draw up better interstate interchanges, but Carson Molder does both. The University of Arkansas Honors College-bound student, No. 1 in his class, likes to create three-dimensional schemes in his head, and has been creating road designs since he was young. But as a musician who plays the French horn in his school's orchestra and the mellophone in the Legacy of Bryant marching band, and who has won a band scholarship in addition to his Honors College award to the UA, he said that one day he may be an audio engineer. "I'm going to put things together and see what sticks," he said of his future. Meanwhile, Carson said the internet has been his Nina, Pinta and Santa Maria, taking him to new places that he otherwise could not get to. "I can count on my hands the number of times I have set foot outside Arkansas," Carson wrote in an essay for the Arkansas Times. But with the internet, "I can gaze into the redwood forests of California and the skyscrapers of New York City without leaving my desk." Without the internet, he said in a phone interview, "I would not be at the top of my class." Carson added, "It's not going to replace going out and visiting these things, but if you're a kid and don't have the money to go out, you can visit Yellowstone." Carson, who describes himself as "really ambitious," is looking forward to studying with Dr. Alan Mantooth, the director of the UA National Center for Reliable Electric Power Transmission. The UA, he said, "will provide me the tools" he'll need to succeed in graduate school, which he hopes will be Stanford University.

OLIVIA LANGER
Age: 18
Hometown: Jonesboro
High school: Brookland High School
Parents: Kelly Webb and Jonathan Langer
College plans: University of California, Santa Barbara, chemistry

You might think that a student who is No. 1 in her class and a National Merit finalist with nary a B on her high school transcript might not consider one of her greatest achievements her selection as her high school's drum major three years in a row. But here's the thing: Schoolwork comes easy to Olivia Langer. "I never had to work hard," she told us. In fact, her style of learning is "conversation-based," she said; she enjoys "debate without argument." But music was different: "I struggled at points, and had to put in extra work to be good." Her selection as drum major was "something I know I've worked for," she said, and she has enjoyed the responsibilities that come with it. "I like to take care of people. The band calls me band mom," she added. Besides numerous academic awards, Olivia also earned a 2017 state Horatio Alger scholarship for students who have overcome great obstacles. Hers, Olivia said, was financial: She's always had a place to stay and food to eat, but she hasn't been able to afford academic programs. "Honestly, I wasn't able to visit any of the colleges I applied to," she said. So she will see the UC Santa Barbara campus for the first time when she arrives this fall. She's considering a double major in chemistry and anthropology; she's interested in the evolutionary side of anthropology, and plans to seek graduate and post-graduate degrees.

REBECCA PARHAM
Age: 18
Hometown: Alma
High School: Arkansas School for Mathematics, Sciences and the Arts
Parents: Eileen and Rick Parham
College plans: University of Arkansas or Hendrix College

On a visit to Hanamaki, Japan, with her school, Rebecca Parham noticed that once a month all the citizens would clean the front of their homes and shops. Folks would give each other gifts, too. "It was clear people tended to think for the whole," she said. "I thought that was really nice." An avid chemist, Rebecca did not just improve her Japanese on the trip, she brought those lessons of helping the community back to Arkansas. Her work has been at the intersection of heady science and community impact. In her robotics club, she noticed that girls were less likely to participate. "I decided that was not OK," she said. So, she designed a day with LEGO kits to encourage women to pursue STEM education. That desire to make an impact goes beyond school, too. For her senior project, Rebecca designed a test for homebuyers to see if meth had been cooked on their property (yes, meth). Her parents, on hearing of this project choice, asked her to "please explain a little bit further ... ." Here's the gist: The method of meth production in rural areas has shifted to something called the Birch reduction; older testing kits would no longer work. But Rebecca thought she could produce one that could. She designed a flame test. It finds lithium compounds left behind. The process of invention was "definitely frustrating," Rebecca said, but you "learn things you never thought of before." Rebecca did not plan to spend senior year in her dorm late at night "searching online" how to identify meth production, but she has a driving curiosity toward science and how it "connects to the world." She hopes to work in renewable energy — to be part of the global community, from Japan to Arkansas — making the world a nice place in which to live.

GRANT ROBINSON
Age: 18
Hometown: Searcy
High school: Searcy High School
Parents: Eric and Lisa Robinson
College plans: University of Arkansas

Grant Robinson's father is a cardiologist, and Grant long figured he would follow in his dad's footsteps. But now he's not so sure. Last summer, he was selected, among thousands of applicants from around the world, to participate in a Stanford University summer engineering program. He got to experience a taste of college life, to take advantage of Stanford's decked-out labs and to tour the area to see results of civil engineering. The most memorable part of the program? Grant's small group built a Rube Goldberg machine — a complicated gadget that performs a simple task in a convoluted way — that, by Grant's estimation, was "the most complex and aesthetically pleasing" in the program. It included an electromagnet the group handmade and chemical reactions triggered by the machine. Grant's academic achievements are the byproduct of a natural curiosity. He said he spends what little free time he has exploring YouTube, trying to figure out the way the world works. Another influence: His father, who pulled himself out of poverty to become a doctor, has always instilled in him the importance of hard work. The message clearly stuck. Grant is second in his class of 263, with a 4.27 GPA. He scored a 35 on the ACT. He's a Presidential Scholar. His classmates voted him most likely to receive the Nobel Prize. He also participated in Project Unify (now known as Unified Champion Schools), an effort by the Special Olympics to get young people with and without special needs to come together for activities. Grant helped plan a basketball tournament as part of the project. In the fall, he'll be rooting on the Razorbacks at the University of Arkansas.  

JOHN SNYDER
Age: 18
Hometown: Little Rock
High school: Little Rock Christian Academy
Parents: Jill and Steve Snyder
College plans: Cornell University, industrial and labor relations

Whatever you were doing by your senior year in high school, chances are you probably hadn't already authored a book, much less a book on the complicated intersection of taxation and politics. John Snyder has, though. His book, "The Politics of Fiscal Policy," explores the political aspects of economics, including the pros and cons of various governmental tax schemes and their effect on government spending. It's for sale on Amazon right now. "It's pretty concise," John said, "but I wanted a way to express all my ideas in economic terms. That was a great way to do that." A history buff who serves as vice president of his class, John has a stunning 4.49 GPA and is ranked first in his class of 129. Though he wanted to be a lawyer when he was younger, his plan now is to go into investment banking. "Ultimately I want to have my own hedge fund — this thing called an activist hedge fund — and eventually I want to be actively involved in politics, whether that's in the midst of my business career or after ... . I'd love to run for public office one day." At Cornell University, John will be studying industrial and labor relations, a field that marries his love of multiple subjects. "Basically it ties in business, law, economics and history all into sort of one degree," he said. "You can do limitless things [with the degree]. Some people go into law school, some go into banking, some go to politics. That's why I chose that degree." John said his philosophy is that we have only a limited amount of time on earth, and so we should try to make the most of our lives. "I think there are a lot of things I can do to change the way things currently are in society, whether it's related to business or in academia or public policy," he said. "If I don't play a role in that and I'm not striving to do my best, I would feel like I'm wasting my potential."

PRESTON STONE
Age: 18
Hometown: Benton
High school: Benton High School
Parents: Haley Hicks and Brec Stone
College plans: University of Arkansas, pre-med

Benton High School's Big Man on Campus — No. 1 in his class, captain of the football team, an AP Scholar, straight As — can add to his resume the fact that he helped build his home. Preston, his two brothers and his mother bounced around a bit after her divorce, from Texas to Arkansas, living with grandparents and friends, Preston said. Then the family was selected by Habitat for Humanity, and he and his brothers pitched in to build their house. "It was the first place I could truly call home and it allowed me the stability I needed to grow into the kind of student I am today," he wrote in his essay for the Arkansas Times. Preston, who also helped build a school outreach group called SERVE to help new or struggling students, also credits sports for giving him purpose. He recently volunteered to trade in the pigskin for a basketball, joining a team that played boys at the Alexander Juvenile Detention Center. "It was an awesome experience," Preston said in a phone interview. "We were a little bit nervous at first" at the detention center, he said, but the team enjoyed the game — even though they lost to the Alexander team, formed to reward inmates with good behavior. "They practice every day," Preston said. Preston has received a $70,000 Honors College scholarship at Fayetteville. He won't be playing football with the Razorbacks. Instead he is thinking of following a pre-med track that will lead him to sports medicine. He plans to go Greek, as well.

KARINA BAO
Age: 18
Hometown: Little Rock
High school: Central High School
Parents: Amy Yu and Shawn Bao
College plans: Undecided

Karina Bao embraces complexity. The Central High School valedictorian (in a class of 636) is a member of the school's back-to-back state champion Ethics Bowl Team, for which she said she spent hours "researching, discussing and sometimes even arguing" case studies. Unlike debate, she said Ethics Bowl is "really about the back-and-forth and considering different caveats and nuances and considerations" in issues ranging from local food to gender identity. As president of the school's Brain Club, she leads discussions on brain diseases, disorders and anatomy. It's a role for which she's more than qualified: She placed first in the U.S. Brain Bee, a youth neuroscience competition in which contestants answer questions about anatomy and make diagnoses based on patient actors. Placing No. 1 in the U.S. competition landed Karina a trip to Copenhagen, Denmark, to the International Brain Bee, which happened to coincide with a Federation of International Neuroscientists conference, where Karina got to talk to scientists from all over the world about their groundbreaking research. She placed fifth in the international competition. A perennial outstanding delegate winner at Model United Nations competitions, Karina said Model U.N. has helped her to "not be scared of the complexity and interconnectedness of pressing issues we face today." In her spare time, Karina volunteers on the oncology wing of Baptist Hospital. "You don't get to do much," she said. "But at least we get to talk to people and help them with whatever they need and be there to listen." In her Academic All-Stars essay, Karina echoed the same drive for understanding: "The stories other people share with me become not my own when I retell them, but a part of humanity's collective spirit to understand each other. We grow from hours of listening and crying, to empathize, to have the strength and openness to pop each successive layer of the protective bubble that keeps us from seeing the very world in which we reside."

BRYCE COHEA
Age: 19
Hometown: Greenwood
High school: Greenwood High School
Parents: Mike and Robin Cohea
College plans: University of Tulsa or Vanderbilt University, biology

Though he grew up landlocked, far from the deep blue sea, Greenwood High School standout Bryce Cohea knew from an early age that he wanted to be a marine biologist. To reach that goal, Bryce had to start early. "In the ninth grade," he wrote in his Academic All-Stars essay, "I began planning out all my classes for the next four years. I wanted to graduate top of my class, and in order to do that I would need to take every advanced placement class and get an A in every class." That's exactly what he did, too, making nothing less than a perfect grade in every class for his entire high school career. With a 4.25 GPA and a rank of No. 1 in his class of 275, Bryce has volunteered extensively with the Salvation Army and collected shoes for the homeless; he helps unload trucks and stock shelves at the food bank at his church. A National Merit semifinalist, he also has the distinction of having scored the first perfect ACT score of 36 in Greenwood High School history. "I've honestly been a good test-taker," he said. "The first time I took it, I got a 34. After that, I got the test back and I worked on whatever I missed. After a few more tries, I got a 36." Bryce was still deciding on which university to attend when we spoke to him, but he definitely plans to study science. The subject has always interested him, he said. "I'm planning on majoring in biology and then specializing after that," he said.

IMANI GOSSERAND
Age: 16
Hometown: Rogers
High school: Rogers High School
Parents: James and Hyesun Gosserand
College plans: University of Southern California, Harvey Mudd College or Columbia University, computer science or environmental science

Imani Gosserand has a journal in which she organizes the many moving parts of her life — competitive gymnastics, AP classes, computer science, Young Democrats, volunteering — into lists. Personal stuff is in there, too: bucket lists, remembrances. The journal combines the creative and the organized; it is problem-solving with an artful flair, which is how Imani operates. "I really like being able to create something of my own," she said of computer science. At a camp at Stanford University, in California, her team won the competition to program a car. Imani, not surprisingly, is good at math: She learned multiplication at age 4 and went on to skip two grades. Imani thinks schoolwork is fun. "We had a huge packet of homework problems we had to do over one of our breaks," she said. "And no one else was excited about it except for me. I was like 'Oh, I'm so excited to do all these problems!' " She brings that enthusiasm for problem-solving to bigger issues, as well. "I feel like there are so many opportunities for me because our world relies on technology, so I think I could go into any field," she said. She's excited to explore and see where she can help. "I want to meet people from around the world and hear different perspectives."

C.J. FOWLER
Age: 18
Hometown: Little Rock
High School: Central High School
Parents: Bobbi and Dustin McDaniel and Chris and Kim Fowler
College plans: Yale University

C.J. Fowler has long been around Democratic politics. His stepfather is former Arkansas Attorney General Dustin McDaniel. But C.J. said he decided to become more politically involved himself after he came out as gay. "The situation that I'm in is not great," he said. "People are not always accepting. But it's on me if I want to try to change that and make it better for the people who come after me. I have to make sure that my community and all marginalized communities have a seat at the table, because far too often a bunch of old gray white guys are making policies that hurt everyone else." The student body president of Central High, C.J. said he's tried to move the student council, a glorified dance committee, toward advocacy and activism for students throughout the district, whose future is being decided by those "people sitting in dark rooms." He said students too often get left out of the conversation about the district "because we're too young to have opinions. But we're not; we're living it every day." C.J. has been a fixture at Little Rock School District public comment periods. Though he can't point to any policy victories, he said at least LRSD Superintendent Mike Poore knows who he is and that he disagrees with him. C.J., who is also the executive director of Young Democrats of Arkansas, sees the backlash against President Trump as encouraging. "We're realizing that, if we're going to go all in for progressive values, we need to go all in." Rather than join the chorus of progressives in the Northeast after he finishes at Yale, C.J. says he wants to come back to Arkansas and possibly continue in politics. He admires state Sen. Joyce Elliott (D-Little Rock) and says he hopes if he ever holds office that he can follow her example.

SOPHIE PRICE
Age: 18
Hometown: Fort Smith
High School: Southside High
Parents: Claire Price and Scott Price
College plans: Vanderbilt University, political science

"Growing up, I would always argue with everybody," Sophie Price said. Sometimes it was just to play devil's advocate, but mostly, it was because Sophie wants to find the capital-t Truth. Some of this digging for truth is class: seven AP course just this year and 12 during her time in high school. But, some of it is also talking with people, discussing issues. "The best way to improve your argument is to hear the counters, to hear the other side," Sophie said, and often she is willing to be convinced. She wants to do the right thing; she believes in justice. Which is why after college at Vanderbilt on a full scholarship, she wants to field arguments as a judge. "My whole life I've followed this ideal that you have to do what's right," Sophie said. "I want to be a judge so I can kind of decide that." Vanderbilt was the only school to which Sophie applied. She knew it was the right one for her. She arrived in Nashville on a rainy day in January, but through the gloom, she knew. "Something about the beautiful campus and the intelligent people and these varying perspectives just sold me immediately," she said. In a few months she was back at Vanderbilt for a camp where she studied law, and it cemented the deal. "There was something so exhilarating about being able to have this case and have the facts and kind of create your own narrative and really advocate for someone that drew me in," she said. Watch out, because "everything I do, I want to give it a 120 percent," Sophie said.

MEAGAN OLSEN
Age: 17
Hometown: Fayetteville
High School: Fayetteville High School
Parents: Anjanette Olsen
College plans: University of Arkansas Honors College, chemical engineering

Fayetteville High School’s top student, who has a perfect ACT score of 36 and a 4.2 grade-point average and is co-author of a paper on fractal self-assembly, is not just a bookworm. She’s a leader, her counselor Cindy Alley says, who shows “grit, motivation to succeed and a desire to help others.” She is also, Alley says, “a pure joy to be around.” In her essay for the Arkansas Times, Meagan talked about how she came to understand “ternary counters,” a base-3 method of counting in which only the digits 0, 1 and 2 are used. (Binary counting with 0s and 1s underlies our computers’ “thinking”; as people with 10 fingers, we use base 10 to count.) Meagan, trying to make a “self-assembling ternary counter,” said she banged her head against “endless walls” for weeks. Then, just after 1 a.m. one night, she woke up with the answer. It’s a wise child who gives credit where credit is due: “I understood,” she wrote, “my mother’s advice about taking a break whenever I was upset.” Meagan’s paper on fractal self-assembly was published in the proceedings of the 22nd International Conference on DNA Computing and Molecular Programming. She no longer lets frustration prevent her from solving a problem; sometimes, she’ll just sleep on it. Meagan told the Times she plans to attend a small conference this summer and then take some needed downtime. She plans to use her degree from Fayetteville to pursue biomedical research.
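For readers unfamiliar with base 3, here is a minimal sketch of ternary counting, the kind of sequence Meagan's counters produce; it illustrates only the numbering system, not the paper's self-assembly construction.

    # Count in base 3: only the digits 0, 1 and 2 appear,
    # so the sequence runs 0, 1, 2, 10, 11, 12, 20, ...
    def to_ternary(n):
        if n == 0:
            return "0"
        digits = []
        while n:
            n, r = divmod(n, 3)
            digits.append(str(r))
        return "".join(reversed(digits))

    print([to_ternary(i) for i in range(10)])
    # -> ['0', '1', '2', '10', '11', '12', '20', '21', '22', '100']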
          New Kernels and Linux Foundation Efforts   
  • Four new stable kernels
  • Linux Foundation Launches Open Security Controller Project

    The Linux Foundation launched a new open source project focused on security for orchestration of multi-cloud environments.

    The Open Security Controller Project software will automate the deployment of virtualized network security functions — such as firewalls, intrusion prevention systems, and application data controllers — to protect east-west traffic inside the data center.

  • Open Security Controller: Security service orchestration for multi-cloud environments

    The Linux Foundation launched the Open Security Controller project, an open source project focused on centralizing security services orchestration for multi-cloud environments.

  • The Linux Foundation explains the importance of open source in autonomous, connected cars

    Open source computing has always been a major boon to developers and to technology as a whole. Take Google's pioneering Android OS, for example: built on open source code, it can safely be credited with changing everyday technology in an unprecedented way when it was introduced. It is hence no surprise that a large part of the automobile industry is looking to open source platforms on which to build advanced automobile dynamics.


          These works of art were created by a dual neural network, and people like them more than ones painted by humans   


They say there is a link between creativity and intelligence, and that may be one more thing to test in the artificial kind. The truth is that it is quite curious to watch how this sort of "artificial creativity" develops and holds its own, when we are talking about an artificial intelligence capable of creating art by mixing pictorial styles.

Setting aside brushes and oils and reaching for algorithms and neural networks, researchers at Rutgers University (New Jersey) and Facebook's AI lab in California have built a dual system for creating works of art. The goal was not something random or grounded in the abstract, but a system able to create something that qualifies as art without corresponding to any existing artistic movement (Baroque, Cubism, etc.).

Two brains together think better than one, and so do two neural networks

As the published paper explains, the system starts from a modification of an algorithm called a Generative Adversarial Network (GAN), a name that reveals how it works: it pits two neural networks against each other so that the contest yields an ever better result. The two sides of this curious confrontation are a network that produces a solution and another that evaluates it, with the algorithm acting as the catalyst that drives both networks toward the best solution.

Applied to the creation of art, the team speaks of a Creative Adversarial Network (CAN): one network (the generator) continuously generates images, while the other (the discriminator) decides whether or not to classify them as art, sending two contradictory signals back to the generator so that it arrives at a creation that is new, not too innovative, and belonging to no known style. The discriminator can judge because it was trained on 81,449 paintings by 1,119 different artists (from the 15th century to the 20th), which lets it tell a work of art from something else (such as a photograph or a diagram) and identify the style a work belongs to.
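As a rough illustration of those two contradictory signals, here is a minimal sketch in PyTorch; the toy sizes, network shapes and loss weighting are assumptions for illustration, not the researchers' actual model, and it runs on random stand-in data rather than paintings.

    # Toy CAN-style training loop: D judges "art vs. not art" and style;
    # G must look like art while leaving the style unidentifiable.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    LATENT, DIM, N_STYLES = 16, 64, 5  # toy sizes (hypothetical)

    G = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, DIM))

    class Discriminator(nn.Module):
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(DIM, 128), nn.ReLU())
            self.art_head = nn.Linear(128, 1)           # art vs. not art
            self.style_head = nn.Linear(128, N_STYLES)  # which known style
        def forward(self, x):
            h = self.body(x)
            return self.art_head(h), self.style_head(h)

    D = Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

    real = torch.randn(32, DIM)                 # stand-in for real paintings
    styles = torch.randint(0, N_STYLES, (32,))  # their style labels

    for step in range(200):
        # Train D: accept real art, learn its style, reject fakes.
        fake = G(torch.randn(32, LATENT)).detach()
        art_real, sty_real = D(real)
        art_fake, _ = D(fake)
        loss_d = (F.binary_cross_entropy_with_logits(art_real, torch.ones_like(art_real))
                  + F.binary_cross_entropy_with_logits(art_fake, torch.zeros_like(art_fake))
                  + F.cross_entropy(sty_real, styles))
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()

        # Train G on the two contradictory signals: "be art" (fool the art
        # head) and "belong to no style" (push the style head toward a
        # uniform, maximally ambiguous prediction).
        fake = G(torch.randn(32, LATENT))
        art_fake, sty_fake = D(fake)
        uniform = torch.full_like(sty_fake, 1.0 / N_STYLES)
        loss_g = (F.binary_cross_entropy_with_logits(art_fake, torch.ones_like(art_fake))
                  + F.kl_div(F.log_softmax(sty_fake, dim=1), uniform, reduction="batchmean"))
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()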

This selection is the one the human judges liked most.

Hence, as we said at the outset, the goal is for the system to produce a creation that cannot be filed under any of these styles, making it in practice a new style per se. Marian Mazzone, an art historian at the College of Charleston in South Carolina who worked on the project, explains in New Scientist that the idea is to create art that is innovative, but not excessively so.

What these researchers are after, then, is art generation that is 100 percent artificial, with no human intervening in the process. As they explain, just as human creativity in art draws on prior experience and knowledge, the motivation for the study comes from the hypothesis of Colin Martindale (a former psychology professor at the University of Maine) that new works of art arise from an attempt to break with what came before and improve on it, which they set out to computerize using this modified GAN.

Unpredictable artists, no brushes

As for the results, the researchers note a marked absence of figures and a tendency toward a more abstract style, probably a consequence of the requirement not to lean on any existing genre. The interesting part is that they showed the works to the public, letting viewers decide whether to classify them as art and try to tell a human creation from an artificial one.

For this (rigged) face-off they chose Abstract Expressionist works (created between 1945 and 2017) and another set of works shown at Art Basel 2016, a reference event in contemporary art, selected for standout creativity, to set against the works created by the artificial systems. Subjects were asked whether they thought each work had been created by a human or by a machine, what score they would give it (by personal taste), and whether it struck them as the work of a novice artist or an experienced one.

What they found is that the CAN-generated images scored higher than those from Art Basel (though it was close, 53 percent versus 42 percent), although when the CAN images were set against the full pool of human works they came in nine points lower (53 percent versus 62 percent).

The selection of AI-created works that was liked least.
So is what an AI creates actually art? According to their experiments, these researchers think so

So is what an AI creates actually art? To settle this they ran another test, asking subjects about intentionality in the creation, whether they saw a structure or an inspiration in it, and whether they felt the work communicated with them. According to the results, subjects found the CAN-generated images "intentional, visually structured, communicative and inspiring," which leads the researchers to believe they can be considered art.

To eyes untrained in this field, the truth is that the samples shown could pass perfectly well for the creation of a human being. At least this time they lack the creepy factor of the artificial portraits that followed our cursor, which we covered last week.

Information and images | Arxiv, Freepik
On Xataka | This algorithm imitates the style of the most famous painters and applies it to any photo

The article These works of art were created by a dual neural network, and people like them more than ones painted by humans was originally published on Xataka by Anna Martí.


          Essential Computer Security: Everyone's Guide to Email, Internet, and Wireless Security   

Essential Computer Security provides the vast home user and small office computer market with the information they must know in order to understand the risks of computing on the Internet and what they can do to protect themselves. Tony Bradley is the Guide for the About.com site for Internet Network Security. In his role managing the content for a site that has over 600,000 page views per month and a weekly newsletter with 25,000 subscribers, Tony has learned how to talk to people, everyday people, about computer security. Intended for the security illiterate, Essential Computer Security is a source of jargon-less advice everyone needs to operate their computer securely.

* Written in easy-to-understand, non-technical language that novices can comprehend
* Provides detailed coverage of the essential security subjects that everyone needs to know
* Covers just enough information to educate without being overwhelming


          Is this Microsoft's cancelled Surface Mini? - CNET   

Was Microsoft planning to release a smaller Surface called "Surface Mini" in 2014? According to Windows Central, it supposedly was. Back in 2014, a third party revealed a case for a "Surface Mini" product ahead of Microsoft's planned event in New York ...
Microsoft's canceled Surface Mini tablet emerges in leaked images (The Verge)
I'm Glad Microsoft Cancelled This Surface Mini (Gizmodo)
New photos offer first glimpse of Microsoft's canceled Surface Mini (Digital Trends)

          The most expensive Galaxy Note 8 could cost way more than $1000 - BGR   

Samsung will unveil the Galaxy Note 8 in less than two months, which means we'll be bombarded by more leaks and rumors in the coming weeks. The phone will be very similar to the Galaxy S8+, so we shouldn't expect any surprises in terms of design or ...
Deal: Unlocked Samsung Galaxy S8 Dual-SIM for just $599 – 6/30/17 (Android Headlines)
Can the Galaxy Tab S3 replace your everyday computing needs? (Phandroid.com)
A bigger Galaxy Note 8 may answer power-users' big complaint (SlashGear)

          Team accelerates rendering with AI   
Modern films and TV shows are filled with spectacular, computer-generated sequences computed by rendering systems that simulate the flow of light in a 3D scene. However, computing many light rays is an immensely compute-intensive and time-consuming process. The alternative is to render the images using only a few light rays, but this shortcut produces inaccuracies that show up as objectionable noise in the final image.
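To see why fewer rays means more noise, here is a minimal Monte Carlo sketch (a toy stand-in, not the team's renderer): each pixel is an average of random ray samples, and the spread of repeated estimates shrinks roughly as one over the square root of the sample count.

    # Estimate a pixel's brightness by averaging random ray samples.
    import random, statistics

    def sample_ray():
        # Stand-in for tracing one light path; the true mean is 0.5.
        return random.random()

    def render_pixel(n_rays):
        return sum(sample_ray() for _ in range(n_rays)) / n_rays

    for n in (4, 64, 1024):
        estimates = [render_pixel(n) for _ in range(200)]
        print(n, "rays -> std dev", round(statistics.stdev(estimates), 4))
    # More rays -> lower std dev, i.e. less visible noise in the image.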
          Visit to PS1 in Queens   
I finished my midterm projects for Physical Computing and Intro to Computational Media on Friday, so I decided to get out and get some “culture” by attending the ArtOut with Marina Zurkow at PS1 in Queens. Elizabeth, who I worked with to make the Herbivores animation, has an in-depth post about the visit.
          Nanyang Technological University   
Nanyang Technological University, one of Singapore's best-known universities, is recruiting for the following positions:
1. Corporate Communication Office
. Assistant Manager/Manager
(Marketing Communication)

. Assistant Manager/Manager
(Publicity, Media and Account Relations)

2. High Performance Computing Initiative
. IT Specialist/Senior IT Specialist
(System Support Team)
. Research Engineer/Scientist
(Application Support Team)

For full details, please check the website below to avoid any errors:
http://www.ntu.edu.sg/ohr/Career/Pages/default.aspx
          Unleashing the Power of Google Apps   
Google Apps is one of the latest challenges to Microsoft's dominance of productivity software on the desktop. A clear example of cloud computing, as all data created is stored on the Internet, Google Apps includes applications for communications: Gmail for electronic mail, Gtalk for instant messaging and Google Calendar for organizing individual schedules and sharing events, meetings and calendars with others. Google Docs can be used for creating documents, spreadsheets and presentations; the Start Page is a repository where users can preview their electronic mail, calendars and other content; and Google Sites keeps related documents and other information in one place. These make up the suite's applications for collaboration and publishing. Google Apps is available in Standard, Premier and Education editions, the first and last being free and the Premier Edition costing $50 per user per year.

According to Google, 500,000 businesses have signed up to use Google Apps, although the breakdown between paid and free subscriptions is unavailable. Many organizations are apparently drawn by the lower costs compared with a traditional desktop application (no purchase of the software itself, plus reduced support and storage costs), as well as by claims of a faster learning curve.

In addition, evidence is mounting that Google may be shaking up the software productivity market sooner than had been anticipated. Gartner's research has revealed many top corporations using Google Apps, with at least 6 percent of large-enterprise information technology personnel using a Google web-based application on a daily basis. Google's presence in the productivity market is quickly becoming pervasive.

In general, however, Google Apps is not as powerful in terms of capabilities as Microsoft's Office suite. But whereas Microsoft's products are installed on desktops, Google's applications facilitate mobile collaboration, since they are accessible from the Internet at any time and from any place. This capability for mobile users is well ahead of Microsoft's plans for deploying its own applications over the web. Still, the heart of the controversy lies in the number of paid subscriptions to the service, which Google is hesitant to reveal, leaving its income from these applications dependent on advertising in the free edition of Gmail.

The implications for Microsoft and the industry are apparent. Microsoft will most likely need to adjust the pricing of its Office suite to acknowledge the presence of Google Apps in the marketplace. Meanwhile, users of office productivity software have much to gain from this competition: both suites will continue to add features and improvements in order to win customers from each other.

Related to the applications arena, Google is now trying to be the premier search engine for image searching. Google has developed an image-recognition technology, called VisualRank, which reduces the number of irrelevant images returned in a search by 83 percent. Better search relevancy means faster clicking through web pages, which in turn means increased revenue for Google. Other start-ups are making attempts in this arena, yet they lack the ability to make broad breakthroughs that improve on Google's technology; their offerings may tackle image searches specific to a single industry rather than compete against Google's, which is geared to the computer industry in general. The release date of this new capability has yet to be disclosed.

Article Source: http://EzineArticles.com/?expert=Mack_Harris
          Migrate database for mobile application from Parse.com to similar service by caius9090   
My Android app's back end used parse.com to hold its data. That service has now shut down, but I want to see if there is still some way to migrate the data out and host it on a similar service. Migration... (Budget: $30 - $250 USD, Jobs: Android, Cloud Computing, Database Programming, Mobile Phone, PHP)
          Big Data Developer   
NJ-Jersey City, Job Summary & Responsibilities Compliance Technology provides technology solutions to help Compliance manage the firm's regulatory and reputational risks, and enables them to advise and assist the firm's businesses. The Data Analytics team, part of Compliance Technology, is seeking Java/Scala developers with deep knowledge of distributed computing principles. The candidate is responsible for design, dev
             


Amateur radio is present all over the world. Map of the call-sign prefixes of the world's countries




=====    ==========    =====

A radio dream
                                          Pedro Martínez  EA3GFP

It sounds wonderful to be able to travel the radio waves, with a finger on the equipment's dial across the different radio frequencies. Today our radio lives in the hearts of many people, because it was and remains a great means of communication. I am always with the radio, as a shortwave listener and as a radio amateur with my call sign EA3GFP; sometimes I dream about it and believe I am a traveler of the airwaves.

What is a radio amateur?

Amateur radio, the very origin of radio communications, is one of the most widely practiced activities in the entire world, both in its geographic spread and in the number of people who pursue it, all the more so given the enormous development of electronics. From radio's pioneers, Hertz, Marconi and the rest, radio developed in an amateur spirit, through pure research into a branch of science until then unknown. Just read the life of Marconi:
In his work and effort we see the spirit of a true radio amateur: the constant building of transmitters, receivers and antennas. If we stick to the definition of a radio amateur established by law, we can say it is "a duly authorized person who takes an interest in radio engineering on a strictly individual, non-profit basis and who uses his or her station for activities of instruction and intercommunication."
It goes without saying how many times radio amateurs, whether called upon or not, have put their stations and equipment to work in emergencies, catastrophes and other situations of need in which conventional communications stop or are unavailable. We stress, then, that the activity is not simply a hobby but a service, in which radio amateurs place their equipment and knowledge at the disposal of the community; it is for this reason, besides its interrelation with other kinds of telecommunications, that it is governed by national law.
From its beginnings, amateur radio has helped in many fields, above all in disasters such as hurricanes, storms, floods, earthquakes, air and rail accidents and calamities of every kind in which regular communications have been cut off by the disaster itself. Many people owe their lives to amateur radio, through which medicines and medical help have reached remote places thanks to swift communication between radio amateurs.
Another point matters too: a fellow radio amateur may be anywhere in the world, and while you are awake it may be night in his country, so going a few hours without sleep is normal in the pursuit of a DX contact (meaning long distance) with the desired country. Call-sign prefixes are allocated worldwide by the International Telecommunication Union.
The "where" is wherever the amateur transmits from, generally home; but on occasion groups of radio amateurs mount expeditions to a specific island or country that counts no radio amateur among its inhabitants. Such expeditions are announced in magazines and on websites so the whole world knows.
Interest in amateur radio is universal and has no limits of age, sex, race, religion, political belief, occupation or social standing. Among radio amateurs you will find people of every profession and social class speaking every language, none of which is an obstacle to establishing a contact, and a friendship, that can last a lifetime. Radio amateurs are one great family, a universal fraternity, bound by general principles common to all, which can be summed up as follows.
The "why" of these contacts is the satisfaction of having made contact, if only for 40 seconds, with the desired station: hearing that station pronounce our call sign and give us the go-ahead, then speak our first name and thank us for the contact; the same goes for the operator at the other end. The contact is crowned months later when, through each country's amateur radio union (URE in Spain), the confirmation card for the contact arrives, called a QSL. Radio signals travel to the ends of the earth at the speed of light. If the language is different, it is basically a bare-bones contact; but if you command English, or the contact is in Spanish, you can talk about anything, except that the radio amateur's code of ethics forbids talking about sex, politics and religion. Why? Obviously, whether you are talking with a fellow amateur in Israel, Iraq, Morocco, India or anywhere else, we all observe every telecommunications law; were it otherwise, any radio amateur could report the breach.
April 15, 2012, marked exactly 100 years since the Titanic sank after striking an iceberg. Radio amateurs took part with a special call sign from April 1 to 30, 2012, to pay a special tribute to the radio operator Jack Phillips, who carried out his duties to his last breath, transmitting the distress signal for as long as he could. With the Titanic mortally wounded, it was thanks to radio that other ships could come to save many lives.
It is worth mentioning that the first unofficial satellites launched into space were designed and built by radio amateurs: the OSCAR series (Orbiting Satellite Carrying Amateur Radio), orbital satellites for amateur use.
To close: a radio amateur is a respectful, courteous person, who will talk with anyone, whatever their status or social class.
                                ___________    ___   ___________


SERÓN IS OUR TOWN; WE WERE BORN HERE.





How to get there.
 
Location.
Located in the northwest of the province of Almería, Serón stands on the northern slope of the Sierra de los Filabres, dominating the Almanzora valley from its height. The municipality covers 168.80 km² and has a population of roughly 3,000, split between the main town and its many outlying hamlets dotted along the banks of the river Almanzora. The present village, of Muslim origin, is laid out in narrow, winding streets that climb to the highest point, which is crowned by the castle. Its whitewashed houses spill down the hillside, making the town a beautiful, picturesque sight.



THE HAMLETS OF SERÓN
CHURCH OF OUR LADY OF THE ANNUNCIATION
The church of Serón was built as the 16th century advanced: with the lands newly conquered from the Arabs, thought soon turned to building churches for the Christians. That is perhaps why Serón was counted among the crown territories that the monarchy helped with maravedíes (the currency in the time of the Catholic Monarchs) to endow it with a cathedral. Like almost all the churches built in this area after the Reconquest it is Mudéjar in style; its plan is rectangular, with three naves separated by great pillars. The main chapel is slightly raised and set behind the transverse arch; it is rectangular in plan and covered by a hip-rafter timber roof (limabordón), commonly called wooden artesonado work. The two doorways share an identical design: Baroque in style and built of dressed stone, with the main one, at the foot of the church, flanked by the coat of arms of Bishop Portocarrero, a descendant of the Marquis of Villena and a key figure in the building of the monument.


CASTLE

The castle of Serón, from the Nasrid period, dates from the 13th century and stands on the highest part of the town. From it one can see the whole Almanzora valley, the Sierra de las Estancias and part of the province of Granada. It played an important role in the Muslim era thanks to its defensive character, and served as a refuge during the Morisco uprising.
Of the original fortress only one wall remains, along with stretches of curtain wall scattered along the construction. Its plan is rectangular, formed by bastions from which towers rise, built of rubble masonry bound with mortar, with brick used for the corners, doors and windows; the elements later added during restoration can be picked out precisely.
The castle's setting is majestic, and its proportions are quite large, above all at the base; they narrow as one climbs, leaving at the top a small esplanade on which a clock tower has been built.

 
CLOCK TOWER

On the highest part of the castle, a tower was built at the end of the 19th century to house the mechanism of a clock.
The tower, Neo-Mudéjar in style, is square in plan and raised on a masonry plinth; the shaft is of stone, with brick used to frame the door and windows and to reinforce the corners. It is divided into two storeys with paired windows on each face, each topped by a semicircular brick arch. Of the bells that crown the building, one is meant for signals in case of fire or other catastrophe, and the other for the clock, which strikes once at the quarter hour, twice at the half, three times at three-quarters and four times on the hour, repeating each strike twice.

ERMITA NTRA.SRA. DE LOS REMEDIOS

                                               CHAPEL OF NTRA. SRA. DE LOS REMEDIOS

A Neoclassical building of the nineteenth century, commissioned under the bishopric of Don José María Orberá y Carrión as part of his policy of providing the whole province with churches worthy of worship.
The chapel presents a very sober exterior; its interior, however, stands out for its simplicity and beauty. Its ornamentation is organized around the Ionic order, framing semicircular arches and supporting entablatures, above which springs a barrel vault with lunettes ending in oculi. Beyond the transverse arch opens the main chapel, which houses the image of the patroness, the "Virgen de los Remedios," covered by a dome beautifully decorated with the dove of the Holy Spirit and angels.
                                           

                                                     CASA DE LA CULTURA
                                                       
                                                   HOUSE OF CULTURE

Together with the Town Hall, this building frames the Plaza Nueva. Its interior houses the Municipal Library, the Exhibition Hall, the Youth Information Office, the Senior Center, and the Guadalinfo Computing Center.
 















          Report: Mobility-Oriented Technologies: Direction and Trends   
The Ministry of Industry, Energy and Tourism, through the National Observatory of Telecommunications and the Information Society (ONTSI), together with the Fundación Vodafone España, has produced this report, which analyzes the degree of adoption of mobility-oriented ICT in households and businesses. The study is based on a survey of 600 companies representative of the Spanish business landscape, designed to assess the degree of adoption of mobile ICT, the perceived advantages, and the main barriers. In addition, a group of experts involved with mobility-oriented technologies from different perspectives was consulted to analyze the economic and social impact of these technologies. The most relevant prior studies were also reviewed, as a starting point and for greater depth. The concept of mobility studied was broad, referring not only to "ubiquitous, portable and sufficiently autonomous" devices such as laptops, smartphones, tablets, phablets or wearables, but also to technologies that facilitate mobility, such as cloud computing, social networks, QR codes, M2M communication or BYOD.

          Multi-tenancy for the cloud enables economies of scale   
Multi-tenancy is an architecture where a single instance of a software application runs on a server and services multiple customers – referred to, in this case, as tenants. "Multi-tenancy enables separation between tenants running applications in a shared environment," explains Dennis Naidoo, senior systems engineer: Middle East, Africa and Turkey at Tintri, Inc., a leading […]
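To make the idea concrete, here is a minimal sketch (in Python) of the tenant-isolation rule at the heart of multi-tenancy; the table and function names are hypothetical, not Tintri's or any vendor's actual implementation:

    # A single application instance serves many tenants; every data access
    # is scoped by a tenant_id so one tenant can never see another's rows.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE tickets (tenant_id TEXT, title TEXT)")

    def create_ticket(tenant_id, title):
        # All writes carry the tenant key.
        conn.execute("INSERT INTO tickets VALUES (?, ?)", (tenant_id, title))

    def list_tickets(tenant_id):
        # All reads are filtered by the tenant key -- the core isolation rule.
        rows = conn.execute(
            "SELECT title FROM tickets WHERE tenant_id = ?", (tenant_id,))
        return [title for (title,) in rows]

    create_ticket("acme", "Printer jam")
    create_ticket("globex", "VPN down")
    print(list_tickets("acme"))   # ['Printer jam'] -- globex data is invisible

Because all tenants share one instance and one schema, upgrades and operating costs are amortized across customers, which is where the economies of scale come from.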
          Custom Web Development in a Crunch   

(Left to right) Senior UX Architect, Drew Malcolm; Director, User Experience, Nick Sabadosh; Visual Designer, Jeremy DeJiacomo; User Experience Designer, Xi Bi; and User Experience Designer, Mithila Tople at Connect 2016. Connecting at Connect: Our annual end-user conference (this year, part of VMworld 2017) brings together thousands of enterprise mobility enthusiasts, partners and customers. It's […]

The post Custom Web Development in a Crunch appeared first on VMware End-User Computing Blog.


          New @ VMworld: Industry Workshops & Showcase for Financial Services, Healthcare, Government & Retail   

Last year, VMworld attendees gave our industry-specific sessions such high marks that we're taking it up a notch this year. Join us for two VMworld firsts: complimentary half-day, pre-conference industry workshops on August 27, and a full-featured industry showcase August 27–31. Our increased commitment to industries at VMworld this year mirrors our dedication to solving unique […]

The post New @ VMworld: Industry Workshops & Showcase for Financial Services, Healthcare, Government & Retail appeared first on VMware End-User Computing Blog.


          Small Biz Cybersecurity Tips for the Summer   
Cybersecurity guru and Udemy course instructor Kevin Cardwell shares some advice for keeping safe in an unusually active summer for attackers.
          Microsoft said to be planning sales force overhaul; layoffs likely   

The Redmond company is said to be planning more changes to its massive sales organization as it emphasizes sales of cloud-computing products.
          New method could enable more stable and scalable quantum computing, Penn physicists report   
(University of Pennsylvania) Researchers from the University of Pennsylvania, in collaboration with Johns Hopkins University and Goucher College, have discovered a new topological material which may enable fault-tolerant quantum computing.
          India Smart Cities Mission shows IoT potential for improving quality of life at vast scale   
The next BriefingsDirect Voice of the Customer Internet-of-Things (IoT) transformation discussion examines the potential impact and improvement of low-power edge computing benefits on rapidly modernizing cities.

These so-called smart city initiatives are exploiting open, wide area networking (WAN) technologies to make urban life richer in services, safer, and far more responsive to residents' needs. We will now learn how such pervasively connected and data-driven IoT architectures are helping cities in India vastly improve the quality of life there.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to share how communication service providers have become agents of digital urban transformation are VS Shridhar, Senior Vice President and Head of the Internet-of-Things Business Unit at Tata Communications, in the Chennai area of India, and Nigel Upton, General Manager of the Universal IoT Platform and Global Connectivity Platform and Communications Solutions Business at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about India’s Smart Cities mission. What are you up to and how are these new technologies coming to bear on improving urban quality of life?

Shridhar: The government is clearly focusing on Smart Cities as part of its urbanization plan, as it believes Smart Cities will not only improve the quality of living, but also generate employment and take the whole country forward in embracing technology and improving the quality of life.

So with that in mind, the Government of India has launched the 100 Smart Cities initiative. It's quite interesting because each of the cities aspiring to take part had to make a plan and its own strategy for how it is going to evolve, and how it is going to execute that plan, present it, and get selected. There was a proper selection process.

Many of the cities made it, and of course some of them didn’t make it. Interestingly, some of the cities that didn’t make it are developing their own plans.
There is a lot of excitement and curiosity, as well as action, in the Smart Cities project. Admittedly, it's a slow process; it's not something that you can do in the blink of an eye, and Rome wasn't built overnight, but I definitely see a lot of progress.

Gardner: Nigel, it seems that the timing for this is auspicious, given that there are some foundational technologies that are now available at very low cost compared to the past, and that have much more of a pervasive opportunity to gather information and make a two-way street, if you will, between the edge and central administration. How is the technology evolution syncing up with these Smart Cities initiatives in India?

Upton: I am not sure whether it's timing or luck, or whatever it happens to be, but adoption of the digitization of city infrastructure and services is to some extent driven by economics. While I like to tease my colleagues in India about their sensitivity to price, the truth of the matter is that the economics of digitization -- and therefore IoT in smart cities -- needs to be at the right price, depending on where it is in the world, and India has some very specific price points to hit. That will drive the rate of adoption.

And so, we're very encouraged that innovation is continuing to drive price points down to the point that mass adoption can take hold and the benefits can be realized by a much broader spectrum of the population. Working with Tata Communications has really helped HPE understand this, continue to evolve the technology, and be part of the partner ecosystem, because it does take a village to raise an IoT smart city. You need a lot of partners to make this happen, and that combination of partnership, willingness to work together, and driving the economic price points to the point of adoption has been absolutely critical in getting us to where we are today.

Balanced Bandwidth

Gardner: Shridhar, we have some very important optimization opportunities around things like street lighting, waste removal, public safety, and water quality; and, of course, the pervasive need for traffic and parking monitoring and improvement.

How do things like low-power Internet specifications, network gateways, and low-power WANs (LPWANs) create a new technical foundation to improve these services? How do we connect the services and the technology for an improved outcome?

Shridhar: If you look at human interaction with the Internet, we have a lot of technology coming our way. We used to have 2G; that has moved to 3G and 4G, and that is a lot of bandwidth coming our way. We would like to have a tremendous amount of access and bandwidth speed, and so on, right?

So the human interaction and experience is improving vastly, given the networks that are growing. On the machine-to-machine (M2M) side, it's going to be different. Machines don't need oodles of bandwidth. About 80 to 90 percent of all machine interactions are going to be very, very low bandwidth – and, of course, low power. I will come to the low power in a moment, but it's going to be a very low bandwidth requirement.

In order to switch off a streetlight, how much bandwidth do you actually require? Or, in order to sense temperature or air quality or water and water quality, how much bandwidth do you actually require?

When you ask these questions, you get an answer that the machines don’t require that much bandwidth. More importantly, when there are millions -- or possibly billions -- of devices to be deployed in the years to come, how are you going to service a piece of equipment that is telling a streetlight to switch on and switch off if the battery runs out?

Machines are different from humans in terms of interactions. When we deploy machines that require low bandwidth and low power consumption, a battery can enable such a machine to communicate for years.

Aside from heavy video streaming applications or constant security monitoring, where low-bandwidth, low-power technology doesn’t work, the majority of the cases are all about low bandwidth and low power. And these machines can communicate with the quality of service that is required.

When it communicates, the network has to be available. You then need to establish a network that is highly available, which consumes very little power and provides the right amount of bandwidth. Studies show that less than 50 kbps of connectivity should suffice for the majority of these requirements.

Now the machine interaction also means that you collect all of this into a platform and act on it. It's not about just sensing it; it's measuring it, analyzing it, and acting on it.
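To illustrate why such small bandwidth budgets are plausible, here is a rough sketch of a streetlight status report packed into eight bytes; the field layout is an assumption for illustration, not any particular LPWAN profile:

    import struct

    def encode_status(device_id, lamp_on, dim_percent, battery_mv):
        # >IBBH = big-endian: 4-byte device id, 1-byte on/off flag,
        # 1-byte dim level (0-100), 2-byte battery voltage in millivolts.
        return struct.pack(">IBBH", device_id, int(lamp_on),
                           dim_percent, battery_mv)

    payload = encode_status(1042, True, 70, 3600)
    print(len(payload), payload.hex())   # 8 bytes on the wire

    # Even one report per minute is roughly a bit per second of sustained
    # traffic -- orders of magnitude below the ~50 kbps budget cited above.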

Low-power to the people

So the whole stack consists of more than connectivity alone. LPWAN technology is emerging now and is becoming a de facto standard as more and more countries start embracing it.

At Tata Communications we have embraced the LPWAN technology from the LoRa Alliance, a consortium of more than 400 partners who have gotten together and are driving standards. We are creating this network over the next 18 to 24 months across India. We have made these networks available right now in four cities. By the end of the year, it will be many more cities -- almost 60 cities across India by March 2018.

Gardner: Nigel, how do you see the opportunity, the market, for a standard architecture around this sort of low-power, low-bandwidth network? This is a proof of concept in India, but what's the potential here for taking this even further? Is this something that has global potential?
Upton: The global potential is undoubtedly there, and there is an additional element that we didn't talk about, which is that not all devices require the same amount of bandwidth. We have talked about video surveillance requiring higher bandwidth, and we have talked about low-bandwidth, low-power devices that will essentially be deployed once and forgotten, expected to last 5 or 10 years.

We also need to add in the aspect of security, and that really gave HPE and Tata the common ground of understanding that the world is made up of a variety of network requirements, some of which will be met by LPWAN, some of which will require more bandwidth, maybe as high as 5G.

The real advantage of being able to use a common architecture to be able to take the data from these devices is the idea of having things like a common management, common security, and a common data model so that you really have the power of being able to take information, take data from all of these different types of devices and pull it into a common platform that is based on a standard.

In our case, we selected the oneM2M standard; it's the best standard available to build that common data model, and that's the reason why we deployed the oneM2M model within the Universal IoT Platform -- to get that consistency no matter what type of device, over no matter what type of network.
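As a rough illustration of the common-data-model idea (the record shape below is illustrative, not the actual oneM2M resource schema), readings arriving over different network types can be normalized into one shape before they reach the platform:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Reading:
        device_id: str
        network: str       # e.g. "lorawan", "cellular"
        metric: str        # e.g. "air_quality", "lamp_state"
        value: float
        observed_at: datetime

    def from_lorawan(frame):
        return Reading(frame["dev_eui"], "lorawan", frame["sensor"],
                       frame["val"],
                       datetime.fromtimestamp(frame["ts"], timezone.utc))

    def from_cellular(msg):
        return Reading(msg["imei"], "cellular", msg["type"], msg["reading"],
                       datetime.fromisoformat(msg["time"]))

    # Both feeds now share one shape, so management, security, and analytics
    # can be written once, regardless of the underlying network.
    print(from_lorawan({"dev_eui": "A1B2", "sensor": "air_quality",
                        "val": 41.0, "ts": 1500000000}))
    print(from_cellular({"imei": "356938", "type": "lamp_state",
                         "reading": 1.0, "time": "2017-07-01T18:30:00+00:00"}))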

Gardner: It certainly sounds like this is an unprecedented opportunity to gather insight and analysis into areas that you just really couldn't have measured before. So going back to the economics of this, Shridhar, have you had any opportunity through these pilot projects in such cities as Jamshedpur to demonstrate a return on investment, perhaps on street lighting, perhaps on quality of utilization and efficiency? Is there a strong financial incentive to do this once the initial hurdle of upfront costs is met?

Data-driven cost reduction lights up India

Shridhar: Unless the customer sees that there is scope for either reducing cost or improving the customer experience, they are not going to buy these kinds of solutions. So if you look at how things have been progressing, I will give you a few examples of how the costs have started playing out. One, of course, is having devices that meet a certain price point. Nigel was remarking how cost-conscious the Indian market is, but that's important: once we deliver to a certain cost here, we believe we can deliver globally at scale. If we build something in India, it can be delivered to the global market as well.

Take the streetlight example specifically and see what kind of benefits it would give. When a streetlight operates for about 12 hours a day, it costs about Rs. 12, which is about $0.15. But when you start optimizing it -- say this is a streetlight that currently runs on halogen and you move it to LED -- that brings some cost saving, in some cases a significant one. India is going through an LED revolution, as you may have read in the newspapers; those streetlights are being converted, and that's one distinct cost advantage.

Now they are looking at driving the usage and the electricity bills even lower by optimizing further. Let's say you sync it with the astronomical clock, so that it comes on at 6:30 in the evening and shuts down at 6:30 in the morning, because now you are connecting this controller to the Internet.

The second thing you would do is keep it at its brightest during busy hours, let's say between 7:00 and 10:00, and after that start dimming it. You can step it down in 10 percent increments.

The point I am making is that you deliver light intensity to match the requirement you have. If it is busy, if there is nobody on the street, or if there is a safety requirement, a sensor can trigger a series of lights, and so on.

So your ability to tailor the streetlight's output to the actual requirement is so great that it brings down total cost. The $0.15 per streetlight that I mentioned could be brought down to $0.05. That's the kind of advantage you get by better controlling the streetlights. The business case builds up, and a customer can save 60 to 70 percent just by doing this. Obviously, then, the business case stands out.

The question you are asking is an interesting one because each of the applications has its own way of returning the investment while resources are being optimized. There is also a collateral benefit in protecting the environment. So not only do I gain business savings and business optimization, but I also pass on a bigger message of a green environment. Environment and safety are the two biggest benefits of implementing this, and they really appeal to our customers.
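As a back-of-the-envelope check of the dimming arithmetic alone, here is a short sketch; the schedule is an assumption for illustration, and only the $0.15-per-night full-power figure comes from the discussion above. Combined with the halogen-to-LED conversion, the quoted 60 to 70 percent saving looks plausible:

    FULL_POWER_COST = 0.15        # USD per 12-hour night at 100% brightness

    schedule = [                  # (hours, brightness fraction) -- assumed
        (3.5, 1.0),               # 18:30-22:00 busy hours, full brightness
        (8.5, 0.3),               # 22:00-06:30 dimmed to 30%
    ]

    night_hours = sum(h for h, _ in schedule)            # 12 hours
    cost = sum(h / night_hours * level * FULL_POWER_COST
               for h, level in schedule)
    print(f"cost per night: ${cost:.3f}")                      # ~$0.076
    print(f"dimming saves: {1 - cost / FULL_POWER_COST:.0%}")  # ~50%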

Gardner: It's always great to put hard economic metrics on these things, but Shridhar just mentioned safety. Even when you can't measure it in direct economics, it's invaluable when you can bring a higher degree of safety to an urban environment.

It opens up more foot traffic, which can lead to greater economic development, which can then provide more tax revenue. It seems to me that there is a multiplier effect when you have this sort of intelligent urban landscape; it creates a cascading set of benefits: the more data, the more efficiency; the more efficiency, the more economic development; the more revenue, the more data, and so on. So tell us a little bit about this ongoing multiplier and virtuous adoption benefit when you go to intelligent urban environments.

Quality of life, under control

Upton: Yes, and it's important to note that this differs from country to country, and from region to region within countries. The interesting challenge with smart cities is that you are often dealing with elected officials rather than hard-nosed businesspeople who are only interested in the financial return. Because you are dealing with politicians, who represent the citizens in their area -- their city, their town, or their region -- their priorities are not always the same.

There is quite a variation in the particular challenges -- social challenges as well as quality-of-life challenges -- in each of the areas they work in. Things like personal safety are a very big deal in some regions. I am currently in Tokyo, and here there is much more concern around quality of life and mobility with a rapidly aging population; their challenges are somewhat different.
But in India, the set of opportunities and challenges is that combination of economic as well as social. If you solve them, and you essentially give citizens more peace of mind and more ability to move freely and take part in the economic interaction within that area, then undoubtedly that leads to greater growth. But it is worth bearing in mind that it does vary almost city by city and region by region.

Gardner: Shridhar, do you have any other input on the cascading, ongoing set of benefits when you get more data and more network opportunity? I am trying to understand, as a longer-term objective, what the ongoing benefits of being intelligent and data-driven might be. How can this be a long-term data and analytics treasure trove when you think about how to provide better urban experiences?

Home/work help

Shridhar: From our perspective, when we looked at the customer benefits, there is a huge amount of focus around smart cities and how smart cities are benefiting from a network. If you look at the enterprise customers, they are also looking at safety, which is an overlapping application that a smart city would have.

So the enterprise wants to provide safety to its workers, for example, in mines or in difficult terrain. Or take women's safety, which, as you know, is a big issue in India as well -- how do you provide a device that is not very obvious and that gives women all the safety they need?

All of this, in some form, is producing data. One of the things that comes to my mind when you ask how data-driven these services can be, and what kind of quality they would give, concerns customer-service devices. For example, a housewife could have a multiple-button device with which she can order a service.

Depending on which service she presses, aggregated across households in India, you would know the trends and direction of a certain service. Mind you, it could be as simple as a three-button device that says Service A, Service B, Service C, and it could be a consumer service extended to a particular household that we sell as a service.

So you could see lots of trends and patterns emerging from that, and we believe the customer experience is going to change, because the customer no longer has to retain phone numbers or apps to place an order; you give them the convenience of a button-press service. That immediately comes to my mind.

Feedback fosters change

The second one is feedback. You can use the same three-button device to rate the quality of the various utilities you are using. There is a toilet revolution in India, for example; you put these buttons out there, and they will tell you at any given point in time what the user satisfaction is, and so on.

So all this data is being gathered, and while it is early days for us to put out analytics and point to distinct benefits, some of the things customers are already looking at are which geographies, which segments, and what profile of customers are using this, and so on. That kind of information is going to come out very, very distinctly.

Smart Cities are all about experience. Enterprises are now looking at the data that is coming out and seeing how they can use it to better segment and provide a better customer experience, which would obviously mean both adding to their top line and helping them manage their bottom line. So it's beyond safety; it's getting into the realm of managing the customer experience.

Gardner: From a go-to-market perspective, or a go-to-cities perspective, these are very complex undertakings: lots of moving parts, lots of different technologies and standards. How are Tata and HPE coming together -- along with other service providers, Pointnext, for example? How do you put this into a package that can actually be managed and put in place? How do we make this appealing not only in terms of its potential but in being actionable as well when it comes to different cities and regions?

Upton: The concept of Smart Cities has been around for a while, and various governments around the world have pumped money into their cities over an extended period of time.

As usual, these things always take more time than you think, and I do not believe that we have a technology challenge on our hands today. We have much more of a business-model challenge. Deploying technology to bring benefits to citizens is finally much better understood, with innovation at the device level -- whether it's streetlights or the ability to measure water quality, sound quality, or humidity -- giving us all of these metrics now. There has been very rapid innovation at that device level, and in the economics of producing these devices at a price that will enable widespread deployment.

All that has been happening rapidly over the last few years, getting us to the point where we now have the infrastructure in place, we have the price points in place, and we have IoT becoming mainstream enough that it is entering into the manufacturing process of all sorts of different devices -- as I said, ranging from streetlights to personal security devices through to track-and-trace devices that are built into the manufacturing process of goods.
That is now reaching the mainstream, and we are able to take advantage of the massive amount of data now being produced to build even more efficient and smarter cities, and make them safer places for our citizens.

Gardner: Last word to you, Shridhar. If people want to learn more about the pilot proofs of concept (PoCs) that you are doing at Jamshedpur and other cities through the Smart Cities Mission, where might they go? Are there any resources? How would you provide more information to those interested in pursuing these technologies?

Pilot projects take flight

Shridhar: I would be very happy to help them look at the PoCs that we are doing. I would classify them as follows: safety; energy management, which is one big bucket; the customer service I spoke about; and the fourth, which is more on the utility side. Gas and water are two big applications where customers are looking at these PoCs very seriously.

And there is one very interesting application: one customer wanted it for pest control. He wanted his mousetraps to have sensors so that at any point in time he would know whether a trap had been sprung, which I thought was very interesting.
There are multiple streams that we have, and we have done multiple PoCs. We, as the Tata Communications team, will be very happy [to provide more information], and the HPE folks are in touch with us.

You could write to us -- to me in particular -- for some period of time. We are also putting information on our website, and we have marketing collateral that describes this. We will do some joint workshops with HPE as well.

So there are multiple ways to reach us, and one of the best ways, obviously, is through our website. We are always there to provide help, and we believe that we can't do it all alone; it's about the ecosystem getting to know the technology and getting to work on it.

While we have partners like HPE at the platform level, we also have partners such as Semtech, who established a Center of Excellence in Mumbai along with us. So access to the ecosystem, from HPE as well as our other partners, is available, and we are happy to work together and co-create the solutions going forward.


          How confluence of cloud, UC and data-driven insights newly empowers contact center agents   
The next BriefingsDirect customer experience insights discussion explores how contact center-as-a-service (CCaaS) capabilities are becoming more powerful as a result of leveraging cloud computing, multi-mode communications channels, and the ability to provide optimized and contextual user experiences.

More than ever, businesses have to make difficult and complex decisions about how to best source their customer-facing services. Which apps and services, what data and resources should be in the cloud or on-premises -- or in some combination -- are among the most consequential choices business leaders now face. As the confluence of cloud and unified communications (UC) -- along with data-driven analytics -- gain traction, the contact center function stands out.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. 

We’ll now hear why traditional contact center technology has become outdated, inflexible and cumbersome, and why CCaaS is becoming more popular in meeting the heightened user experience requirements of today.
Here to share more on the next chapter of contact center and customer service enhancements is Vasili Triant, CEO of Serenova in Austin, Texas. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the new trends reshaping the contact center function?

Triant: What's changed in the world of contact center and customer service is that we're seeing a generational spread -- everything from baby boomers all the way to Gen Z.

With the proliferation of smartphones through the early 2000s, and new technologies and new channels -- things like WeChat and Viber -- all these customers are now potential inbound discussions with brands. And they all have different mediums that they want to communicate on. It’s no longer just phone or e-mail: It’s phone, e-mail, web chat, SMS, WeChat, Facebook, Twitter, LinkedIn, and there are other channels coming around the corner that we don't even know about yet.

When you take all of these folks -- customers or brands -- and you take all of these technologies that consumers want to engage with across all of these different channels -- it's simple: they want to be heard. It's now the responsibility of brands to determine the best way to respond, and it's not always one-to-one.

So it’s not a phone call for a phone call, it’s maybe an SMS to a phone call, or a phone call to a web chat -- whatever those [multi-channels] may be. The complexity of how we communicate with customers has increased. The needs have changed dramatically. And the legacy types of technologies out there, they can't keep up -- that's what's really driven the shift, the paradigm shift, within the contact center space.

Gardner: It's interesting that the new business channels for marketing and capturing business are growing more complex. They still have to match, on the back end, how they support those users, interact with them, and carry them through any sort of process -- whether it's onboarding and engaging, or supporting and servicing them.

What we’re requiring then is a different architecture to support all of that. It seems very auspicious that we have architectural improvements right along with these new requirements.

Triant: We have two things that have collided at the same time – cloud technologies and the growth of truly global companies.

Most of the new channels that have rolled out are in the cloud. I mean, think about it -- Facebook is a cloud technology, Twitter is a cloud technology. WeChat, Viber, all these things, they are all cloud technologies. It’s becoming a Software-as-a-Service (SaaS)-based world. The easiest and best way to integrate with these other cloud technologies is via the cloud -- versus on-premises. So what began as the shift of on-premises technology to cloud contact center -- and that really began in 2011-2012 – has rapidly picked up speed with the adoption of multi-channels as a primary method of communication.

The only way to keep up with the pace of development of all these channels is through cloud technologies because you need to develop an agile world, you need to be able to get the upgrades out to customers in a quick fashion, in an easy fashion, and in an inexpensive fashion. That's the core difference between the on-premises world and the cloud world.

At the same time, we are no longer talking about a United States company, an Australia company, or a UK company -- we are talking about everything as global brands, or global businesses. Customer service is global now, and no one cares about borders or countries when it comes to communication with a brand.
Customer service is global now, and no one cares about borders or countries when it comes to communications with a brand.


Gardner: We have been speaking about this in the context of the end-user, the consumer. But this architecture and its ability to leverage cloud also benefit the agent, the person who is responsible for keeping that end-user happy and providing them with the utmost in intelligent services. So how does the new architecture also aid and abet the agent?

Triant: The agent is frankly one of the most important pieces to this entire puzzle. We talk a lot about channels and how to engage with the customer, but that's really what we call listening. But even in just simple day-to-day human interactions, one of the most important things is how you communicate back. There has been a series of time-and-motion studies done within contact centers, within brands -- and you can even look at your personal experiences. You don’t have to read reports to understand this.
The baseline for how an interaction will begin and end, and whether it will be a happy or a poor interaction with the brand, is going to depend on the agent's state of mind. If I call up and I speak to "Joe," and he starts the conversation in a great mood, having a great day, then my conversation will most likely end in a positive interaction because it started that way.

But if someone is frustrated, they had a rough day, they can’t find their information, their computers have been crashing or rebooting, then the interaction is guaranteed to end up poor. You hear this all the time, “Oh, can you wait a moment, my systems are loading. Oh, I can’t get you an answer, that screen is not coming up. I can't see your account information.” The agents are frustrated because they can’t do their job, and that frustration then blends into your conversation.

So using the technology to make it easy for the agent to do their job is essential. If they have to go from one screen to another screen to conduct one interaction with the customer -- they are going to be frustrated, and that will lead to a poor experience with the customer.

Cloud technologies like Serenova's, which are web-based, are able to bring all those tools into one screen. The agent can have all the information brought to them easily, all in one click, and then be able to answer all the customer's needs. The agent is happy, and that adds to customer satisfaction. The conclusion of the call is a happy customer, which is what we all want. That's a great scenario, and you need cloud technology to do that, because the on-premises world does not deliver a great agent experience.

One-stop service

Gardner: Another thing that the older technologies don't provide is a flexible spectrum to move across these channels. Many times when I engage with an organization I might start with an SMS or a text chat, but if that can't satisfy my needs, I want to get a deeper level of satisfaction. So it might end up going to a phone call or an interaction on the web, or even a shared desktop if I'm in IT support, for example.

The newer cloud technology allows you to interact via different types of channels, but you can also escalate and move between and among them seamlessly. Why is that flexibility of benefit both to the end-user and to the agent?

Triant: I always tell companies and customers of ours that you don't have to over-think this; all you have to do is look to your personal life. The most common things that we as users deal with -- cell phone companies, cable companies, airlines -- you can get onto any of these websites and begin chatting, but you can find that your interaction isn't going well. Before I started at Serenova, I had these experiences where I was dealing with the cable company and -- chat, chat, chat -- trying to solve my problem. But we couldn't get there, and so then we needed to get on the phone. But they said, "Here is our 800 number, call in." I'd call in, but I'd have to start a whole new interaction.

Basically, I’d have to re-explain my entire situation. Then, I am talking with one person, and they have to turn around and send me an email, but I am not going to get that email for 30 to 45 minutes because they have to get off the phone, and get into another system and send it off. In the meantime, I am frustrated, I am ticked off -- and guess what I have done now? I have left that brand. This happens across the board. I can even have two totally different types of interactions with the company.

You can use a major airline brand as an example. One of our employees called on the phone trying to resolve an issue that was caused by the airline. They basically said, "No, no, no." That made her very frustrated. She decided she is going to fly with a different airline now. She then sent a social post [to that effect], and the airline's VP of Customer Service answered it, and within minutes they had resolved her issue. But they had already spent three hours on the phone trying to push her off to yet another channel, because it was a totally different group, a totally different experience.

By leveraging technologies where you can pivot from one channel to another, everyone gets answers quicker. I can be chatting with you, Dana, and realize that we need to escalate to a voice conversation, for example, and I, as the agent, can then turn that conversation into a voice call. You don't have to re-explain yourself, and you are like, "Wow, that's cool! Now I'm on the phone with a facility," and we are able to handle our business.

As the agent, I can also pivot simultaneously to an email channel to send you something as simple as a user guide or a series of knowledge-base articles that I may have at my fingertips. But you and I are still on the phone call. Even better, after the fact, as a business, I have all the analytics and the business intelligence to say that I had one interaction with Dana that started as a web chat and pivoted to a phone call, and that I simultaneously sent a knowledge-base article of "X" around this issue -- and I can report on it all at once. Not three separate interactions, not three separate events -- and I have made you a happy customer.
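A minimal sketch of that one-interaction, many-channels idea might look like the following; the names are hypothetical, not Serenova's actual API. Each pivot appends to the same interaction record, so context survives and reporting sees a single event:

    from dataclasses import dataclass, field

    @dataclass
    class Interaction:
        customer: str
        channels: list = field(default_factory=list)
        transcript: list = field(default_factory=list)

        def pivot(self, channel):
            # Switching channels keeps the same record and history.
            self.channels.append(channel)

        def log(self, note):
            self.transcript.append(note)

    ix = Interaction(customer="Dana")
    ix.pivot("web_chat"); ix.log("customer describes billing issue")
    ix.pivot("voice");    ix.log("agent escalates chat to a call")
    ix.pivot("email");    ix.log("agent sends user guide while still on call")
    print(ix.channels)    # ['web_chat', 'voice', 'email'] -- one interaction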

Gardner: We are clearly talking about enabling the agent to be a super-agent, and they can, of course, be anywhere. I think this is really important now because the function of an agent -- and we are already seeing the beginnings of this -- is certainly going to include more artificial intelligence (AI) and machine learning, and the associated data analytics benefits. The agent then might be a combination of human and AI functions and services.

So we need to be able to integrate at a core communications basis. Without going too far down this futuristic route, isn't it important for that agent to be an assimilation of more assets and more services over time?

Artificial Intelligence plus human support

Triant: I'm glad you brought up AI and these other technologies. The reality is that we've been through a number of cycles around what this technology is going to do and how it is going to interact with an agent. In my view, and I have been in this world for a while, the agent is the most important piece of customer service and brand engagement. But you have to be able to bring information to them, and you have to be able to give information to your customers so that if there is something simple, you get it to them as quickly as possible -- but also bring all the relevant information to the agent.

AI has had multiple forms; it has existed for a long time. Sometimes people get confused because of marketing schemes and sales tactics [and view AI] as a way for cost avoidance, to reduce agents and eliminate staff by implementing these technologies. Really the focus is how to create a better customer experience, how to create a better agent experience.

We have had AI in our product for the last three years, and we are re-releasing some components that will bring business intelligence to the forefront around the end of the year. What it essentially does is allow you to see what a user is doing out on the Internet and within these technologies. I can see that you have been looking for knowledge-base articles around, for example, "why my refrigerator keeps freezing up and how can I defrost it." You can see such things on Twitter and you can see these things on Facebook. The amount of information that exists out there is phenomenal, and it's in real time. I can now gather that information … and I can proactively, as a business, make decisions about what I want to do with you as a potential consumer.

I can even identify you as a consumer within my business, know how many products you have acquired from me, and whether you're a “platinum” customer or even a basic customer, and then make a decision.

For example, I have TVs, refrigerators, washer-dryers and other appliances all from the same manufacturer. So I am a large consumer to that one manufacturer because all of my components are there. But I may be searching a knowledge-based article on why the refrigerator continues to freeze up.

Now I may call in about just the refrigerator, but wouldn't it be great for that agent to know that I own 22 other products from that same company? I'm not just calling about the refrigerator; I am technically calling about the entire brand. My experience around the refrigerator freezing up may change my entire brand decision going forward. That information may prompt me to decide that I want to route that customer to a different pool of agents, based on their total lifetime value as a brand-level consumer.

Through AI, by leveraging all this information, I can be a better steward to my customer and to the agent, because I will tell you, an agent will act differently if they understand the importance of that customer, or if they know that I, Vasili, have spent the last two hours searching online for information and have posted about it on Facebook and on Twitter.

At that point, the level of my frustration already has reached a certain height on a scale. As an agent, if you knew that, you might treat me differently because you already know that I am frustrated. The agent may be able to realize that you have been looking for some information on this, realize you have been on Facebook and Twitter. They can then say: “I am really sorry, I'm not able to get you answers. Let me see how I can help you, it seems that you are looking online about how to keep the refrigerator from freezing up.”

If I start the conversation that way, I've now defused a lot of the frustration of the customer. The agent has already started that interaction better. Bringing that information to that person -- that's powerful, that's business intelligence, and that's creating action from all that information.
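A highly simplified sketch of that routing idea follows; the thresholds, the frustration score, and the queue names are all assumptions for illustration, not Serenova's implementation:

    def route(lifetime_products, frustration):
        # frustration is a 0-1 score inferred from recent searches and
        # social posts; lifetime_products approximates brand-level value.
        if frustration > 0.7 or lifetime_products >= 20:
            return "senior_agents"     # high value or already upset
        if lifetime_products >= 5:
            return "priority_agents"
        return "general_agents"

    # The refrigerator caller who owns 22 products and has been venting on
    # social media is routed to the most capable pool.
    print(route(lifetime_products=22, frustration=0.8))   # senior_agents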

Keep your cool

Gardner: It's fascinating that that level of sentiment analysis brings together the best of what AI and machine learning can do, which is to analyze all of these threads of data and information and determine the temperature, if you will, of a person's mood, and pass that on to a human agent who then has the emotional capacity to be ready to help that person get to a lower temperature and be more able to help them overall.

It's becoming clear to me, Vasili, that this contact center function and the CCaaS architectural benefits are far more strategic to an organization than we may have thought; it is about more than just customer service. This really is the best interface between a company and its customers, drawing on all the resources and assets it has across customer service, marketing, and sales interactions. Do you agree that this has become far more strategic because of these new capabilities?

Triant: Absolutely, and as brands begin to realize the power of what the technology can do for their overall business, it will continue to evolve and gain pace around global adoption.

We have only scratched the surface on adoption of these cloud technologies within organizations. A majority of brands out there look at these interactions as a cost of doing business. They still seek to reduce that cost rather than weigh the lifetime value of the consumer and the agent experience. This will shift, it is shifting, and there are companies that are thriving by recognizing that entire equation and how to leverage the technologies.

Technology is nothing without action and result. There have been some really cool things that have existed for a while, but they don’t ever produce any result that’s meaningful to the customer so they never get adopted and deployed and ultimately reach some type of a mass proliferation of results.

Gardner: You mentioned cost. Let's dig into that. For organizations that are attracted to the capabilities and the strategic implications of CCaaS, how do we evaluate it in terms of cost? The old CapEx approach often had a high upfront cost, and then high operating costs if you have an inefficient call center. Other costs involve losing your customers, losing brand affinity, losing your perception in the market. So when you talk to a prospect or customer, how do you help them tease out the understanding of a pay-as-you-go service as highly efficient? Does the highly empowered agent approach save money, or even make money, so that CCaaS becomes not a cost center but a revenue generator?

Cost consciousness

Triant: Interesting point, Dana. When I started at Serenova about five years ago, customers would say all the time, "What's the cost of owning the technology?" And, "Oh, my on-premises stuff has already depreciated and I already own it, so it's cheaper for me to keep it." That was the conversation pretty much every day. Beginning in 2013, it rapidly started shifting. This shift was mainly driven by the fact that organizations started realizing that consumers want to engage on different channels, and the on-premises guys couldn't keep up with this demand.

The cost of ownership no longer matters. What matters is that the on-premises guys literally could not deliver the functionality. And so, whether that's Cisco, Avaya, or Shoretel, they quickly started falling out of consideration for companies that were looking to deploy applications for their business to meet these needs.

The cost of ownership quickly disappeared as the main discussion point. Instead it came around to, "What is the solution that you're going to deliver?" Customers that are looking for contact center technologies are beginning to take a cloud-first approach. And once they see the power of CCaaS through demonstration and through some trials of what an agent can do -- and it's all browser-based, there is no client install, there is no equipment on-premises -- then it takes on a life of its own. It's about, "What is the experience going to be? Are these channels all integrated? Can I get it all from one manufacturer?"

Following that, organizations focus on other intricacies: Can it scale? Can it be redundant? Is it global? But those become architectural concerns for the brands themselves. There is a chunk of the industry that is not looking at these technologies; they are stuck in brand euphoria, or have to stay with on-premises infrastructure or with a certain vendor because of the name, or believe they are going to get there someday.

As we have seen, Avaya has declared bankruptcy. Avaya does not have cloud technologies despite their marketing message. So the customers that are in those technologies now realize they have to find a path to keep up with the basic customer service at a global scale. Unfortunately, those customers have to find a path forward and they don’t have one right now.
It's less about cost of ownership and more about the high cost of not doing anything. If I don't do anything, what's going to be the cost? That cost ultimately becomes: I'm not going to be able to engage with my customers, because the consumers are changing.

Gardner: What about this idea of considering your contact center function not just as a cost center, but also as a business development function? Am I being too optimistic?

It seems to me that as AI and the best of what human interactions can do combine across multiple channels, this becomes no longer just a cost center for support, a check-off box, but a strategic must-do for any business.

Multi-channel customer interaction

Triant: When an organization reaches the pinnacle of happiness with what these technologies can do, they will realize that you no longer need delineation between a marketing department that answers social media posts, an inside sales department that only takes calls for upgrades and renewals, and a customer service department that deals with complaints or inbound questions. They will see that you can leverage all the applications across a pool of agents with different skills.

I may have a higher skill around social media than voice, or a higher skill level around sales or renewal activity than customer service problems. I should be able to do any interaction. Potentially, one day it will just be a customer interaction department, and the channels will just be a medium of inbound and outbound choice for a brand.

But you can now take information from whatever you see the customer doing. Each of their actions has a leading indicator; everything has a predictive action prior to the inbound touch, everything does. Now that a brand can see that, it will be able to have "consumer interaction departments," and each interaction will be properly routed to the right person based on that information. You'll be able to bring information to that agent that will allow them to answer the customer's questions.

Gardner: I can see how that agent's job would be very satisfying and fulfilling when you are that important, when you have that sort of key role in your organization that empowers people. That's good news for people who are trying to find those skills and fill those positions.

Vasili, we only have a few minutes left, but I’d love to hear about a couple of examples. It’s one thing to tell, it’s another thing to show. Do we have some examples of organizations that have embraced this concept of a strategic contact center, taken advantage of those multi-channels, added perhaps some intelligence and improved the status and capability of the agents -- all to some business benefit? Walk us through a couple of actual use cases where this has all come together.

Cloud communication culture shift

Triant: No one has reached that level of euphoria per se, but there are definitely companies that are moving in that direction.

It is a culture change, so it takes time. I know as well as anybody what it takes to shift a culture, and it doesn't happen overnight. As an example, there is a ride-hailing company that engages with its consumers in a different way, and their consumer might be different from what you think from the way I am describing it. They use voice systems and SMS and often want to pivot between the two. Our technology allows the agent to make that decision even if they aren't physically in the same country. Agents are dynamically spread across multiple countries to answer any question they may need to, based on time and day.

But they can pivot from what’s predominantly an SMS inbound and outbound communication into a voice interaction, and then they can also follow up with an e-mail, and that’s already happened. Now, it initially started with some SMS inbound and outbound, then they added voice – an interesting move as most people think adding voice is what people are getting away from. What everyone has begun to realize is that live communication ultimately is what everybody looks for in the end to solve the more complex problems.

That's one example. Another company provides the latest technology in food ordering and delivery, and initially started with voice only to order and deliver food. Now they've added automatic SMS confirmations, and e-mail as well, for confirmation or for more information from the inbound voice call. And now, once someone is an existing customer, they can even start an order from an SMS and pivot back to a voice call for confirmation -- all within one interaction. They are literally one of the fastest-growing alternative food delivery companies, growing at a global scale.

They are deploying agents globally across one technology. They would not be able to do this with legacy technologies because of the expense. When you get into these kinds of high-volume, low-margin businesses, cost matters. When you can have an OpEx model that will scale, you are adding better customer service to the applications, and you are able to allow them to build a profitable model because you are not burdening them with high CapEx processes.

Gardner: Before we sign off, you mentioned your pipeline of products and services, such as engaging more with AI capabilities toward the end of the year. Could you give us a level-set on your roadmap? Where are your products and services now? Where do you go next?

A customer journey begins with insight

Triant: We have been building cloud technologies for 16 years in the contact center space. We released our latest CCaaS platform, called CxEngage, in March 2016. We then had a major upgrade to the platform in March of this year, where we took the agent experience to the next level. It's really our leapfrog in the agent interface, making it easier and bringing more information to agents.

Where we are going next is around the customer journey -- predictive interactions. Some people call it AI, but I will call it "customer journey mapping with predictive action insights." That's going to be a big cornerstone of our product, including business analytics. It's focused on looking at a combination of speech, data, and text, all simultaneously creating predictive actions. This is another core area we are going in as we continue to expand the reach of our platform on a global scale.

At this point, we are a global company. We have the only global cloud platform built on a single software stack with one data pipeline. We now have more users on a pure cloud platform than any of our competitors globally. I know that's a big statement, but when you look at a pure cloud infrastructure, you're talking in a whole different realm of what services you are able to offer to customers. In our ability to provide broad reach -- including to Europe, South Africa, Australia, India, and Singapore -- and still deliver good cloud quality at a reasonable cost and in a redundant fashion, we are second to none in that space.

Gardner: I'm afraid we will have to leave it there. We have been listening to a sponsored BriefingsDirect discussion on how CCaaS capabilities are becoming more powerful as a result of cloud computing, multimode communications channels, and the ability to provide optimized and contextual user experiences.

And we’ve learned how new levels of insight and intelligence are now making CCaaS approaches able to meet the highest user experience requirements of today and tomorrow. So please join me now in thanking our guest, Vasili Triant, CEO of Serenova in Austin, Texas.

Triant: Thank you very much, Dana. I appreciate you having me today.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing series of BriefingsDirect discussions. A big thank you to our sponsor, Serenova, as well as to you, our audience. Do come back next time, and thanks for listening.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Serenova.

Transcript of a discussion on how contact center-as-a-service capabilities are becoming more powerful to provide optimized and contextual user experiences for agents and customers. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.

You may also be interested in:


          Jobs - ks1/2 Teacher - Part-time   
United Kingdom
...soon as possible Our school is going through exciting times as we are expanding to three form entry and have a brand new building with superb additional facilities. We were delighted that Ofsted recognised our strengths in all areas (March 2014). We require an MPS outstanding practitioner who can offer a specialist subject such as computing/science/arts to enhance ...
meega.eu

          The forgotten cyberspace of the Neuromancer computer game   

William Gibson's 1984 novel Neuromancer is far from forgotten; the times seem almost uncannily like an interregnum between the world he wrote in and the world he wrote. But the 1988 video game adaptation is another matter. [via]

Mark Hill:

The game’s developers were challenged with portraying this futuristic nonspace while still creating an accessible and interesting game, and all with computers that were barely a step up from a calculator and a potent imagination. The end result is surreal, abstract, and lonely. It’s a virtual world that’s simultaneously leagues beyond our internet, yet stunted and impractical, a world where you can bank online before doing battle with an artificial intelligence yet won’t let you run a simple search query and forces you to “physically” move between one virtual location and the next. It’s cyberspace as envisioned by a world that didn’t yet have the computing power to experience it for real, a virtual 2058 that would look archaic before the turn of the millennium.

Hill gets it, especially how the game seeks to understand cyberspace as a city. But I think he's wrong in suggesting that contemporary hardware limitations ("a step up from a calculator") were the game's undoing. If anything, I feel that the cusp of the 16-bit era was perfect for implementing Neuromancer as a solipsistic, non-networked adventure game. Indeed, much of the history of the 16-bit era can be read as increasingly successful efforts to implement the vision of Neuromancer as a narrative experience rather than a labyrinthine multidimensional bulletin board.

          AI's Emerging Role In IoT Highlighted At IBM Genius Of Things Event   

Photo: Bergman Group

IBM hosted an artificial intelligence (AI) event at its Munich Watson IoT HQ, where it underlined its claim to be a leading global AI and internet-of-things (IoT) platform provider in the enterprise context. AI and IoT are both very important topics for enterprise users. However, there remains some uncertainty among enterprises regarding the exact benefits that AI and IoT can generate and how businesses should prepare for deploying AI and IoT in their organizations.

One year into the launch of its Munich-based Watson IoT headquarters, IBM invited about one thousand customers to share an update of its AI and IoT activities to date. The IBM "Genius of Things" Summit presented interesting insights for both AI and IoT deployments. It underlined that IBM is clearly one of the leading global AI and IoT platform providers in the enterprise context. Some of the most important insights for me were that:

Read more
          Let’s see your personal computing workstations, here’s ours   
Show us how you set up your PCs (Macs, Chromebooks, iPhones and Android devices are all personal computers to someone)
          happy 60th, cern!   
It wasn't hard to find a topic today: CERN turns 60! To the day! Happy birthday!! [via] For those readers who can't place CERN at all, here is what it looked like in March 1989: Imagos Creditos Something like that, anyway. The web, by the way, was a "by-product" of the scientists there. The original […]
          Storage Field Day 13 – Wrap-up and Link-o-rama   
Disclaimer: I recently attended Storage Field Day 13. My flights, accommodation and other expenses were paid for by Tech Field Day and Pure Storage. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don't form part of my blog posts, but could influence future discussions.

This is a quick post to say thanks once again to Stephen, Tom and Claire, and the presenters at Storage Field Day 13 and Pure//Accelerate. I had a super fun and educational time. For easy reference, here's a list of the posts I did covering the events (they may not match the order of the presentations).

Storage Field Day – I’ll Be At Storage Field Day 13

Storage Field Day Exclusive at Pure//Accelerate 2017 – General Session Notes

Storage Field Day Exclusive at Pure//Accelerate 2017 – FlashBlade 2.0

Storage Field Day Exclusive at Pure//Accelerate 2017 – Purity Update

Storage Field Day 13 – Day 0

NetApp Doesn't Want You To Be Special (This Is A Good Thing)

Dell EMC's in the Midst of a Midrange Resurrection

X-IO Technologies Are Living On The Edge

SNIA's Swordfish Is Better Than The Film

ScaleIO Is Not Your Father's SDS

Dell EMC’s Isilon All-Flash Is Starting To Make Sense

Primary Data Attacks Application Ignorance

StorageCraft Are In Your Data Centre And In The Cloud

Storage Field Day 13 – (Fairly) Full Disclosure

 

Also, here's a number of links to posts by my fellow delegates (in no particular order). They're all very smart people, and you should check out their stuff, particularly if you haven't before. I'll attempt to keep this updated as more posts are published. But if it gets stale, the Storage Field Day 13 and Storage Field Day Exclusive at Pure Accelerate 2017 landing pages will have updated links.

 

Alex Galbraith (@AlexGalbraith)

Storage Field Day 13 (SFD13) – Preview

 

Brandon Graves (@BrandonGraves08)

Delegate For Storage Field Day 13

Storage Field Day Is Almost Here

 

Chris Evans (@ChrisMEvans)

Pure Accelerate: FlashArray Gets Synchronous Replication

 

Erik Ableson (@EAbleson)

 

Matthew Leib (@MBLeib)

Pure Storage Accelerate/Storage Field Day 13 – PreFlight

 

Jason Nash (@TheJasonNash)

 

Justin Warren (@JPWarren)

Pure Storage Charts A Course To The Future Of Big Data

 

Max Mortillaro (@DarkkAvenger)

See you at Storage Field Day 13 and Pure Accelerate!

Storage Field Day 13 Primer – Exablox

SFD13 Primer – X-IO Axellio Edge Computing Platform

Real-time Storage Analytics: one step further towards AI-enabled storage arrays?

 

Mike Preston (@MWPreston)

A field day of Storage lies ahead!

Primary Data set to make their 5th appearance at Storage Field Day

Hear more from Exablox at Storage Field Day 13

X-IO Technology A #SFD13 preview

SNIA comes back for another Storage Field Day

 

Ray Lucchesi (@RayLucchesi)

Axellio, next gen, IO intensive server for RT analytics by X-IO Technologies

 

Scott D. Lowe (@OtherScottLowe)

Backup and Recovery in the Cloud: Simplification is Actually Really Hard

The Purity of Hyperconverged Infrastructure: What's in a Name?

 

Stephen Foskett (@SFoskett)

The Year of Cloud Extension

 

Finally, thanks again to Stephen and the team at Gestalt IT for making it all happen. It was an educational and enjoyable week and I really valued the opportunity I was given to attend. Here's a photo of the Storage Field Day 13 delegates.

[image courtesy of Tech Field Day]


          C++ Developer - Distributed Computing - Morgan Stanley - Montréal, QC   
Comfortable programming in a Linux environment, familiar with ksh and bash. Morgan Stanley is a global financial services firm and a market leader in investment...
From Morgan Stanley - Wed, 28 Jun 2017 00:14:01 GMT - View all Montréal, QC jobs
          John Davis II: Who is actually responsible for cyber attacks?   

Who is actually responsible for that cyber attack that hit your organization? Often it comes down to guess work. Few people have much faith in the accuracy of the attribution. So what to do? John Davis II, senior information scientist at the Rand Corporation and co-director for scalable computing and analysis, joined Federal Drive with Tom Temin with recommendations.

The post John Davis II: Who is actually responsible for cyber attacks? appeared first on FederalNewsRadio.com.


          MareNostrum 4 enters production and begins running applications for scientific research   
June 29, 2017 MareNostrum 4, owned by the Barcelona Supercomputing Center - Centro Nacional de Supercomputación (BSC-CNS), is entirely dedicated to the generation of scientific knowledge ... - Source: mineco.gob.es
          Quality Matters Reviews   
          Quality Matters Reviews 
some suggestions for visual and hearing components
By Grace Windsheimer      gwindsheimer@cgcc.cc.or.us

One of the ongoing issues in passing the Quality Matters rubric is #8.2, access for visually and hearing impaired students. There are actually court cases against colleges right now from students who feel they don't have access to online courses because of their disability. Here is the information on the court cases: http://chronicle.com/article/Blind-Students-Demand-Access/125695/

Visually Impaired
I found a ppt reader that students can download. Tried it and it reads both ppt (2003) and pptx (2007) text, so I'll be adding that to my Moodle pages for students to download if they choose.
Here is the link to the PowerTalk download -- easy to download and use.
http://fullmeasure.co.uk/powertalk/#requirements

Here are some links to screen readers: from Linda Hughitt
Free screen readers:
Listing of programs:

Hearing impaired
I also try to find videos with CC (closed captions) to add to my Moodle shells for the hearing impaired -- they are getting easier to find. Here is a web page with closed-caption videos: http://www.google.com/search?q=is:free&tbs=vid:1,cc:1 You can also search YouTube for CC videos. Here are some videos on YouTube for CGCC http://www.youtube.com/cgcclive that have automatic captions.

Try them out; they are actually done quite nicely, and for videos that are hard for any of us to hear they are a great alternative.
          Quantum computing initiative advances drug discovery   
Quantum computing holds the answer for treating several serious neurological diseases, as well as offering solutions to the personalized medicine initiative. New research from Accenture, 1QBit and Biogen has delivered promising results.
          Malicious software (malware), ransomware, and other attacks   

Malicious software (malware), ransomware, and other attacks… Article written by Alexandre Laquerre. In recent weeks, international news has been full of reports about the […]

The post Malicious software (malware), ransomware, and other attacks appeared first on LINKBYNET Blog - IT, Cloud computing, outsourcing and security Montreal.


          Software Engineer   
AL-Montgomery, The Software Engineer is responsible for providing software engineering expertise for the design, development and delivery of software to support computer products, systems, and subsystems. This hands-on position requires a demonstrated history of computing software design, debug, validation, qualification, and production readiness. The successful candidate will possess the ability to work in a hi
          Linux-friendly COM Express duo taps PowerPC based QorIQs   
Artesyn's rugged line of COMX-T modules debuts with COMs using NXP's quad-core QorIQ T2081 and QorIQ T1042 SoCs, clocked to 1.5GHz and 1.4GHz, respectively. Artesyn Embedded Computing has launched a line of 125 x 95mm COM Express Basic Type 6 COMX-T Series computer-on-modules that run Linux on NXP's Power Architecture based QorIQ T processors: The […]
          4 cool facts you should know about FreeDOS   
In the early 1990s, I was a DOS "power user." I used DOS for everything and even wrote my own tools to extend the DOS command line. Sure, we had Microsoft Windows, but if you remember what computing looked like at the time, Windows 3.1 was not that great. I preferred working in DOS. Read more
          Veeam to deliver support for Nutanix AHV Hypervisor   

Veeam Software has announced an expanded partnership with Nutanix, a leader in enterprise cloud computing and hyper-converged infrastructure, in which Veeam becomes the Premier Availability solution provider for Nutanix virtualized environments. Veeam adds support for Nutanix AHV in its flagship Veeam Availability Suite, allowing joint Nutanix and Veeam customers to benefit from an enterprise-class Availability […]

The post Veeam to deliver support for Nutanix AHV Hypervisor appeared first on Ervik.as - EUC, HCI, Cloud and Virtualization Blog.


          Microsoft Office 365 Personal - 1 User - 1 Year Subscription   
Microsoft Office 365 Personal - 1 User - 1 Year Subscription

Microsoft Office 365 Personal - 1 User - 1 Year Subscription

Think of it as your familiar Office, only better Business-class email and calendaring put you in sync Online conferencing puts everyone on the same page Extend your reach with simple, more secure file sharing Build your online presence, minus the hosting fees


          New system enables 88x speedup on common parallel-computing algorithms   
New system enables 88x speedup on common parallel-computing algorithms

The chips in most modern desktop computers have four "cores," or processing units, which can run different computational tasks in parallel. But the chips of the future could have dozens or even hundreds of cores, ... Read more

          Camera Day Shortbread   

PC hardware and computing

Intel SSD 545S Series 512GB review @ PC Perspective
Western Digital My Passport SSD mini-review @ AnandTech
ECS Z270H4-I Mini-ITX motherboard review @ Tom's Hardware
Toshiba XG5 NVMe SSD review – 3D BiCS 64-layer flash shines @ The SSD Review
Overclocking the Core i9-7900X @ TechSpot
Adesso AKB-636UB typewriter keyboard review @ TechPowerUp
Intel SSD 545s 512GB SATA SSD review—64-layer TLC NAND @ Legit Reviews
QNAP TS-453B (8GB) 4-Bay NAS review @ KitGuru
Review: Acer Aspire VX 15 laptop @ Hexus

Read more...


          BT Mini Wi-Fi 600 Home Hotspot Powerline Adapter Kit, White, Pack of 3   
BT Mini Wi-Fi 600 Home Hotspot Powerline Adapter Kit, White, Pack of 3

BT Mini Wi-Fi 600 Home Hotspot Powerline Adapter Kit, White, Pack of 3

Get the best Wi-Fi anywhere in your house Works with any broadband New, easy to use, smaller set-up, works straight out of the box Connect any wired or wireless device Ideal for multiple wireless HD / 3D video streaming


          Combining Symantec Windows Protection with Microsoft Vista   
Fulton County is the most populous county in Georgia, encompassing 11 different cities, including Atlanta. The Information Technology department supports the computing needs for more than 7,000 government employees within 42 departments spread across 225 buildings. The IT department protects hundreds of data center servers, 7,000 desktops, 1,500 laptops, and 750 Blackberry devices with solution [...]
          InfiniBand And Proprietary Networks Still Rule Real HPC   

With the network comprising as much as a quarter of the cost of a high performance computing system and being absolutely central to the performance of applications running on parallel systems, it is fair to say that the choice of network is at least as important as the choice of compute engine and storage hierarchy. That’s why we like to take a deep dive into the networking trends present in each iteration of the Top 500 supercomputer rankings as they come out.

It has been a long time since the Top 500 gave a snapshot of pure HPC centers that

InfiniBand And Proprietary Networks Still Rule Real HPC was written by Timothy Prickett Morgan at The Next Platform.


          [Interview] Net Neutrality - Chat with Kelly Gotlieb   
This is the next interview in the continuing conversation series with Kelly Gotlieb. Computing pioneer and CIPS Fellow FCIPS, Kelly ...
Canadian IT Managers
          Big Data Is All Around You | Real-Life Big Data Analysis Cases and the Technology Behind Them   

Big Data Is All Around You | Real-Life Big Data Analysis Cases and the Technology Behind Them

Author: Stephen Cui

I. Business applications of big data analysis

1. Sports event prediction

During the World Cup, companies including Google, Baidu, Microsoft, and Goldman Sachs all launched match-result prediction platforms. Baidu's results were the most impressive: it predicted all 64 matches of the tournament with 67% accuracy, rising to 94% in the knockout stage. Internet companies taking over from Paul the Octopus in event prediction suggests that future sports events will be shaped by big data forecasting.

"In Baidu's World Cup predictions, we considered five factors in total: team strength, home advantage, recent form, overall World Cup record, and bookmakers' odds. These data essentially all come from the internet. We then used a machine learning model designed by search experts to aggregate and analyze the data and produce the predictions." -- Zhang Tong, head of Baidu's Beijing Big Data Lab

2. Stock market prediction

Last year, researchers at Warwick Business School in the UK and the physics department at Boston University found that the financial keywords users search for on Google may anticipate the direction of financial markets, with a corresponding investment strategy returning as much as 326%. Earlier, others had tried to predict stock market swings from the sentiment of Twitter posts.

In theory, stock market prediction is better suited to the US. China's stock market does not allow two-way profit: you can only make money when stocks rise, which attracts speculative capital that exploits information asymmetry to distort market patterns. Without relatively stable patterns the Chinese market is hard to predict, and some variables that decisively affect outcomes simply cannot be monitored.

In the US, many hedge funds already invest using big data techniques, with handsome returns. In China, the CSI GF Baidu Baifa 100 index fund ("Baifa 100") has risen 68% in the four-plus months since launch.

Like traditional quantitative investing, big data investing relies on models, but the number of data variables in the model grows geometrically: on top of the existing structured financial data it adds unstructured data such as social media commentary, geographic information, and satellite monitoring, quantifying that unstructured data so the model can absorb it.

Because big data models are extremely costly, industry insiders believe big data will become a shared, platform-based service: data and technology are the ingredients and the kitchen, and fund managers and analysts can build their own strategies on the platform.

http://v.youku.com/v_show/id_XMzU0ODIxNjg0.html

3. Market price prediction

CPI describes price movements that have already happened, and official statistics are not always authoritative. Big data, however, may help people see where prices are heading, giving early warning of inflation or economic crisis. The classic case is Jack Ma learning of the Asian financial crisis in advance through Alibaba's B2B big data -- credit, of course, to Alibaba's data team.

4. User behavior prediction

Based on data such as users' search behavior, browsing behavior, review history, and profile information, internet businesses can discern overall consumer demand and target production, product improvement, and marketing accordingly. House of Cards choosing its cast and plot, Baidu's precision ad targeting based on user preferences, Alibaba booking production lines for custom products based on Tmall user characteristics, and Amazon predicting user clicks to ship goods in advance all benefit from predicting internet user behavior.

Pre-purchase behavior reveals a great deal about a potential customer's buying psychology and intent. For example, customer A viewed five televisions in a row: four from domestic brand S and one from foreign brand T; four LED models and one LCD; priced at 4,599, 5,199, 5,499, 5,999, and 7,999 yuan. To some extent this reflects customer A's brand affinity and leanings: toward domestic brands and mid-priced LED sets. Customer B viewed six televisions in a row: two from foreign brand T, two from foreign brand V, and two from domestic brand S; four LED and two LCD; priced at 5,999, 7,999, 8,300, 9,200, 9,999, and 11,050 yuan. Similarly, this reflects customer B's leanings: toward imported brands and high-priced LED sets.

http://36kr.com/p/205901.html

5. Personal health prediction

Traditional Chinese medicine can detect hidden chronic illnesses through observation, listening, questioning, and pulse-taking, and can sometimes tell from a person's constitution what symptoms may appear later. Vital signs change according to certain patterns, and before a chronic disease develops the body shows persistent anomalies. In theory, if big data captured such anomalies, chronic disease could be predicted.

6. Disease and epidemic prediction

Large-scale outbreaks can be predicted from people's searches and shopping behavior; the classic "flu prediction" belongs to this category. If searches for "flu" and "banlangen" (a popular cold remedy) keep rising in a region, a flu trend there can be inferred.

Google successfully predicted winter flu:
In 2009, Google analyzed the 50 million search terms Americans used most frequently, compared them with CDC data on seasonal flu spread between 2003 and 2008, and built a specific mathematical model. Google ultimately predicted the spread of flu in winter 2009, down to specific regions and states.

7. Disaster prediction

Weather forecasting is the classic form of disaster prediction. If natural disasters such as earthquakes, floods, heat waves, and rainstorms could be predicted further in advance using big data, it would help with disaster reduction, preparedness, relief, and recovery. Unlike in the past, when data collection had blind spots and high costs, the internet-of-things era can use cheap sensors, cameras, and wireless networks to collect data in real time, then apply big data predictive analysis for more precise forecasts of natural disasters.

8. Environmental change prediction

Beyond short-term, micro-level weather and disaster prediction, big data also supports longer-term, macro-level prediction of environmental and ecological change. Shrinking forests and farmland, endangered wildlife and plants, rising coastlines, and the greenhouse effect are the Earth's "chronic diseases." The more data humanity has about the Earth's ecosystems and weather patterns, the easier it is to model future environmental change and head off harmful shifts. Big data helps us collect, store, and mine more of the Earth's data, and provides the tools for prediction.

9. Traffic behavior prediction

Based on LBS positioning data from users and vehicles, individual and group travel patterns can be analyzed to predict traffic behavior. Transport authorities can predict traffic volumes on different roads at different times to dispatch vehicles intelligently or open reversible lanes; users can pick routes with a lower probability of congestion.

Baidu's LBS-based predictions on top of its maps application cover an even wider range: predicting migration trends during the Spring Festival travel rush to guide train routes and airline planning, predicting visitor numbers at tourist sites during holidays to guide people's choices, and everyday heat maps showing crowd levels in business districts, zoos, and other places to guide users' travel and merchants' site selection.

Dolgov's team uses machine learning algorithms to model pedestrians on the road. Every mile a self-driving car drives is recorded; the car's computer keeps this data and analyzes how different objects behave in different environments. Some driver behaviors could be set as fixed rules (such as "green light, go"), but the computer does not apply such logic mechanically; instead it learns from actual driver behavior.

Thus a car following a garbage truck that stops may choose to change lanes and pass rather than stop behind it. Google has built up 700,000 miles of driving data, which helps its cars adjust their behavior based on learned experience.

http://www.5lian.cn/html/2014/chelianwang_0522/42125_4.html

10. Energy consumption prediction

The California grid system operations center manages more than 80% of California's grid, transmitting 289 million megawatts of electricity to 35 million customers each year over more than 25,000 miles of power lines. The center uses Space-Time Insight's software for intelligent management, analyzing massive data from sources including weather, sensors, and metering devices to predict regional changes in energy demand, schedule power intelligently, balance supply and demand across the grid, and respond quickly to potential crises. China's smart grid sector is already trialing similar big data prediction applications.

II. Types of big data analysis

By timeliness, data analysis divides into real-time analysis and offline analysis.

Real-time analysis is generally used in finance, mobile, and internet B2C products, and often has to return analysis over hundreds of millions of rows within seconds so that the user experience is not affected. To meet such demands one can use carefully designed parallel clusters of traditional relational databases, in-memory computing platforms, or HDD-based architectures, all of which carry high hardware and software costs. Newer tools for real-time analysis of massive data include EMC's Greenplum and SAP's HANA.

For the many applications whose response-time requirements are less strict -- offline statistical analysis, machine learning, inverted-index computation for search engines, recommendation engine computation, and so on -- offline analysis is the way to go: log data is imported into a dedicated analysis platform through collection tools. Facing massive data, traditional ETL tools often fail completely, mainly because the overhead of data format conversion is too high to meet collection throughput requirements. Internet companies' tools for massive data collection, such as Facebook's open-source Scribe, LinkedIn's open-source Kafka, Taobao's open-source Timetunnel, and Hadoop's Chukwa, can all sustain log collection and transmission at hundreds of MB per second, uploading the data into a central Hadoop system.

By data volume, big data divides into three levels: memory-level, BI-level, and massive-level.

Memory-level means the data does not exceed the cluster's total memory. Don't underestimate today's memory capacity: Facebook keeps up to 320 TB of hot data in Memcached, and a single PC server can now exceed 100 GB of memory. In-memory databases can therefore keep hot data resident in memory for very fast analysis, which suits real-time workloads well. Figure 1 shows a practical MongoDB architecture of this kind.

Figure 1: A MongoDB architecture for real-time analysis

Large MongoDB clusters currently have some stability problems, with periodic write blocking and primary-secondary synchronization failures, but MongoDB remains a promising NoSQL option for high-speed data analysis.

In addition, most vendors now offer solutions with 4 GB-plus SSDs; combining memory with SSDs can easily reach in-memory analysis performance. As SSDs develop, in-memory data analysis is bound to see wider adoption.

BI-level means data volumes too large for memory but which can generally be loaded into traditional BI products and purpose-built BI databases for analysis. Mainstream BI products all offer analysis solutions supporting TB-scale and larger data, in many varieties.

Massive-level means data volumes at which databases and BI products fail completely or cost too much. There are excellent enterprise products at this level too, but for hardware and software cost reasons most internet companies store data on Hadoop's HDFS distributed file system and analyze it with MapReduce. Later, this article introduces a multidimensional data analysis platform built on MapReduce over Hadoop.

III. The general process of big data analysis

3.1 Collection

Big data collection means using multiple databases to receive data from clients (web, app, sensors, and so on), against which users can run simple queries and processing. E-commerce companies, for example, use traditional relational databases such as MySQL and Oracle to store every transaction; NoSQL databases such as Redis and MongoDB are also commonly used for collection.

The main characteristic and challenge of collection is high concurrency, since thousands of users may access and operate at once. Train-ticket sites and Taobao see peak concurrent access in the millions, so many databases must be deployed at the collection tier, and load balancing and sharding across them take deep thought and careful design.

3.2 Import/preprocessing

Although the collection tier itself has many databases, analyzing this massive data effectively requires importing it into a centralized large-scale distributed database or distributed storage cluster, with some simple cleansing and preprocessing done along the way. Some users also run Twitter's Storm over the incoming data for stream computation, to meet real-time computation needs for parts of the business.
The characteristic and challenge of import and preprocessing is volume: imports often reach hundreds of megabytes, even gigabytes, per second.

3.3 Statistics/analysis

Statistics and analysis mainly use distributed databases or distributed compute clusters to run ordinary analysis, classification, and aggregation over the massive data stored in them, covering most common analysis needs. Real-time needs are served by EMC's Greenplum, Oracle's Exadata, and Infobright's MySQL-based columnar storage, while batch processing and semi-structured data needs can use Hadoop.
The characteristic and challenge of this stage is the sheer volume of data involved, which places enormous demands on system resources, especially I/O.

3.4 Mining

Unlike statistics and analysis, data mining generally has no preset topic; it runs various algorithms over the existing data to achieve prediction and satisfy higher-level analysis needs. Typical algorithms include Kmeans for clustering, SVM for statistical learning, and NaiveBayes for classification; a commonly used tool is Hadoop's Mahout. The characteristic and challenge of this stage is that mining algorithms are complex, the data and compute volumes are large, and common data mining algorithms are mostly single-threaded.
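As a small illustration of this mining step, here is a k-means clustering sketch with scikit-learn; the synthetic features stand in for whatever behavioral attributes are actually being mined.

# K-means clustering sketch on synthetic data (pip install scikit-learn).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two fake "behavior segments": 150 points around (0, 0), 150 around (5, 5).
X = np.vstack([rng.normal(0, 1, (150, 2)), rng.normal(5, 1, (150, 2))])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(model.cluster_centers_)  # one centroid per discovered segment
print(model.labels_[:10])      # cluster assignments of the first rows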


IV. Big data analysis tools

4.1 Hadoop

Hadoop is a software framework for the distributed processing of large amounts of data, and it processes them in a reliable, efficient, scalable way. Hadoop is reliable because it assumes compute and storage elements will fail, so it maintains multiple working copies of the data and redistributes processing around failed nodes. Hadoop is efficient because it works in parallel, speeding up processing. Hadoop is also scalable, handling petabytes of data. And because Hadoop runs on commodity community servers, its cost is low and anyone can use it.

Hadoop is a distributed computing platform that users can easily build and use, readily developing and running applications that process massive data. Its main advantages:

High reliability: Hadoop's bit-by-bit storage and processing of data has earned people's trust.
High scalability: Hadoop distributes data and computation across available computer clusters, which can easily be extended to thousands of nodes.
High efficiency: Hadoop moves data dynamically between nodes and keeps them dynamically balanced, so processing is very fast.
High fault tolerance: Hadoop automatically keeps multiple copies of data and automatically redistributes failed tasks.

Hadoop ships with a framework written in Java, so it is ideal to run on Linux production platforms. Applications on Hadoop can also be written in other languages, such as C++.
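As an example of that multi-language support, the canonical word count can be written as a Python mapper and reducer and run under Hadoop Streaming; this is only a minimal sketch, and the file names are chosen for illustration.

# mapper.py -- emit "word<TAB>1" for every word arriving on stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")

# reducer.py -- sum the counts per word; Hadoop Streaming delivers the
# mapper output sorted by key, so equal words arrive grouped together.
import sys
from itertools import groupby

pairs = (line.rstrip("\n").split("\t") for line in sys.stdin)
for word, group in groupby(pairs, key=lambda kv: kv[0]):
    print(f"{word}\t{sum(int(count) for _, count in group)}")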

4.2 HPCC

HPCC is short for High Performance Computing and Communications. In 1993, the US Federal Coordinating Council for Science, Engineering, and Technology submitted to Congress the report "Grand Challenges: High Performance Computing and Communications," known as the HPCC program -- a US presidential science strategy project intended to solve a set of important scientific and technological challenges by strengthening research and development. HPCC was America's program for building the information superhighway, planned to cost tens of billions of dollars. Its main goals: develop scalable computing systems and related software to support terabit-level network transmission performance, develop gigabit network technology, and expand the connectivity of research and education institutions and networks.

The program has five main components:

High Performance Computing Systems (HPCS): research on coming generations of computer systems, system design tools, advanced prototype systems, and evaluation of existing systems.
Advanced Software Technology and Algorithms (ASTA): software support for grand-challenge problems, new algorithm design, software branches and tools, computational science, and high-performance computing research centers.
National Research and Education Network (NREN): research and development of relay stations and gigabit-level transmission.
Basic Research and Human Resources (BRHR): basic research, training, education, and course materials, designed to increase the flow of innovation in scalable high-performance computing through investigator-initiated, long-term research; to grow the pool of skilled, trained personnel through better education and training in high-performance computing and communications; and to provide the infrastructure needed to support these research activities.
Information Infrastructure Technology and Applications (IITA): aimed at ensuring US leadership in developing advanced information technology.

4.3 Storm

Storm is free, open-source software: a distributed, fault-tolerant real-time computation system. Storm can process huge data streams very reliably and is used where Hadoop handles batch data. Storm is simple, supports many programming languages, and is fun to use. Storm was open-sourced by Twitter; other well-known adopters include Groupon, Taobao, Alipay, Alibaba, Happy Elements, and Admaster.

Storm has many application areas: real-time analytics, online machine learning, continuous computation, distributed RPC (remote procedure call, a way to request services from remote programs over the network), ETL (Extraction-Transformation-Loading), and more. Storm's processing speed is striking: in tests, each node processed one million data tuples per second. Storm is scalable and fault-tolerant, and easy to set up and operate.
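Real Storm topologies are written against the Storm APIs (in Java, or other languages through its multi-language protocol); purely as a toy sketch of the tuple-at-a-time model described above:

# Toy model of a stream topology: a "spout" emits tuples, a "bolt" keeps
# running word counts. In Storm the stream would be endless and distributed.
from collections import Counter

def spout(lines):
    for line in lines:
        for word in line.split():
            yield (word,)  # emit one tuple per word

counts = Counter()  # bolt state
for (word,) in spout(["storm is fast", "storm is fault tolerant"]):
    counts[word] += 1  # process each tuple as it arrives
print(counts.most_common(3))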

4.4 Apache Drill

To help enterprise users find more effective ways to speed up Hadoop data queries, the Apache Software Foundation launched an open-source project named "Drill." Apache Drill implements Google's Dremel.

According to Tomer Shiran, product manager at Hadoop vendor MapR Technologies, "Drill" already operates as an Apache incubator project and will be promoted to software engineers worldwide.

The project creates an open-source version of Google's Dremel Hadoop tool (which Google uses to speed up internet applications for Hadoop data analysis). "Drill" will help Hadoop users query massive data sets much faster.

"Drill" takes its inspiration from Google's Dremel, which helps Google analyze and process massive data sets, including analyzing crawled web documents, tracking application data installed on Android Market, analyzing spam, and analyzing test results on Google's distributed build system.

By developing the "Drill" Apache open-source project, organizations can shape Drill's APIs and a flexible, powerful architecture that supports a broad range of data sources, data formats, and query languages.

4.5 RapidMiner

RapidMiner is a world-leading data mining solution with a very high degree of advanced technology. Its data mining tasks cover a wide range, including a variety of data arts, and it simplifies the design and evaluation of data mining processes.

Features:

Free data mining technologies and libraries.
100% Java code (runs on any operating system).
Simple, powerful, and intuitive data mining processes.
Internal XML guarantees a standardized format for representing and exchanging data mining processes.
Large-scale processes can be automated with a simple scripting language.
Multi-level data views ensure valid and transparent data.
Interactive prototyping in a graphical user interface.
Command line (batch mode) for automated large-scale application.
Java API (application programming interface).
Simple plugin and extension mechanisms.
A powerful visualization engine with many cutting-edge visualizations of high-dimensional data.
More than 400 data mining operators supported.

Yale University has applied it successfully in many different domains, including text mining, multimedia mining, feature design, data stream mining, integrated development methods, and distributed data mining.

4.6 Pentaho BI

The Pentaho BI platform differs from traditional BI products: it is a process-centric, solution-oriented framework. Its purpose is to integrate a series of enterprise BI products, open-source software, APIs, and other components, making it easier to develop business intelligence applications. Its emergence allows a series of independent BI-oriented products, such as JFree and Quartz, to be integrated into complex, complete business intelligence solutions.

The Pentaho BI platform -- the core architecture and foundation of the Pentaho Open BI suite -- is process-centric because its central controller is a workflow engine. The workflow engine uses process definitions to define the business intelligence processes executed on the platform. Processes can easily be customized, and new processes added. The platform includes components and reports for analyzing the performance of these processes. Pentaho's main elements currently include report generation, analysis, data mining, and workflow management, integrated into the platform through J2EE, WebService, SOAP, HTTP, Java, JavaScript, Portals, and other technologies. Pentaho is distributed mainly in the form of the Pentaho SDK.

The Pentaho SDK comprises five parts: the Pentaho platform, the Pentaho sample database, a standalone Pentaho platform, Pentaho solution examples, and a preconfigured Pentaho web server. The Pentaho platform is the most important part, encompassing the main body of the platform's source code. The Pentaho database provides the data services for normal platform operation, including configuration information and solution-related information; it is not mandatory for the platform and can be replaced by other database services through configuration. The standalone Pentaho platform is an example of the platform's independent running mode, demonstrating how to run the platform without application server support.

The Pentaho solution example is an Eclipse project that demonstrates how to develop business intelligence solutions for the Pentaho platform.

The Pentaho BI platform is built on servers, engines, and components. These provide the system's J2EE server, security, portal, workflow, rules engine, charting, collaboration, content management, data integration, analysis, and modeling capabilities. Most of these components are standards-based and can be replaced by other products.

4.7 SAS Enterprise Miner

A complete toolset supporting the entire data mining process.
An easy-to-use graphical interface suited to rapid modeling by different types of users.
Powerful model management and evaluation capabilities.
A fast, convenient model publishing mechanism that helps close the business loop.

V. Data analysis algorithms

Big data analysis relies mainly on machine learning and large-scale computing. Machine learning includes supervised, unsupervised, and reinforcement learning, among others; supervised learning further includes classification, regression, ranking, and matching (see Figure 1). Classification is the most common machine learning problem: spam filtering, face detection, user profiling, text sentiment analysis, and web page categorization are all essentially classification problems. Classification is also the most thoroughly studied and most widely used branch of machine learning.

Recently, Fernández-Delgado et al. published an interesting paper in JMLR (Journal of Machine Learning Research, a top machine learning journal). They pitted 179 different classification methods against one another on 121 UCI data sets (UCI is a public machine learning data repository, and none of the sets is large). Random Forest and SVM came in first and second with little difference between them, and on 84.3% of the data sets Random Forest beat 90% of the other methods. In other words, in most cases Random Forest or SVM alone gets the job done.

https://github.com/linyiqun/DataMiningAlgorithm

KNN

The k-nearest-neighbors algorithm. Given some trained data and a new test point, find the points nearest to the test point and assign the test point the class held by the majority of them. Different neighbors can also be given different weights: nearer points count for more, farther points for less. Detailed introduction link
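A toy version of this, including the inverse-distance weighting just mentioned, might look like the following; the training points are made up.

# Toy k-nearest-neighbors with inverse-distance weighted voting.
from collections import defaultdict
from math import dist

train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((4.0, 4.2), "B"), ((4.1, 3.9), "B")]

def knn_predict(x, k=3):
    nearest = sorted(train, key=lambda p: dist(x, p[0]))[:k]
    votes = defaultdict(float)
    for point, label in nearest:
        votes[label] += 1.0 / (dist(x, point) + 1e-9)  # nearer points weigh more
    return max(votes, key=votes.get)

print(knn_predict((1.1, 0.9)))  # -> "A"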

Naive Bayes

The naive Bayes algorithm is one of the simpler classification algorithms in the Bayesian family. It rests on the important Bayes theorem; summed up in one sentence, it is the back-and-forth derivation of conditional probabilities. Detailed introduction link

Naive Bayes classification is a very simple classification algorithm. It is called naive because its underlying idea really is naive: for a given item to classify, compute the probability of each class appearing given that item, and assign the item to the class with the highest probability. Put colloquially: if you see a Black man on the street and are asked to guess where he is from, you will most likely guess Africa, because Africans make up the highest proportion of Black people; he could of course be American or Asian, but with no other information available we choose the class with the largest conditional probability. That is the basis of naive Bayes thinking.
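The "largest conditional probability wins" rule can be written out directly. A toy spam-filter sketch with add-one smoothing, on made-up documents:

# Toy naive Bayes: score(c) = P(c) * product of P(word | c), assuming the
# words are independent given the class; pick the class with the top score.
from collections import Counter

docs = [("buy cheap pills now", "spam"), ("meeting at noon", "ham"),
        ("cheap watches buy now", "spam"), ("lunch meeting today", "ham")]

classes = {c for _, c in docs}
prior = Counter(c for _, c in docs)
word_counts = {c: Counter() for c in classes}
for text, c in docs:
    word_counts[c].update(text.split())
vocab = {w for text, _ in docs for w in text.split()}

def predict(text):
    def score(c):
        s = prior[c] / len(docs)
        total = sum(word_counts[c].values())
        for w in text.split():
            s *= (word_counts[c][w] + 1) / (total + len(vocab))  # smoothing
        return s
    return max(classes, key=score)

print(predict("buy pills"))  # -> "spam"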

SVM

The support vector machine algorithm classifies both linear and nonlinear data; nonlinear data can be classified by converting it to a linear case through a kernel function. A key step is searching for the maximum-margin hyperplane. Detailed introduction link
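With scikit-learn, the kernel function mentioned here is a single parameter; a sketch on synthetic data that is not linearly separable in the original space:

# SVM sketch: an RBF kernel separates a circular class boundary that no
# straight line could. Data is synthetic (pip install scikit-learn).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (np.hypot(X[:, 0], X[:, 1]) < 1.0).astype(int)  # 1 inside the unit circle

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.0, 0.0], [2.0, 2.0]]))  # -> [1 0]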

Apriori

Apriori is an association rule mining algorithm. It mines frequent itemsets through join and prune operations, then derives association rules from the frequent itemsets; the derived rules must satisfy a minimum confidence requirement. Detailed introduction link
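A single count-and-prune pass of the kind described can be sketched in a few lines; the transactions and minimum support are toy values.

# One Apriori-style pass: count itemsets of size k, keep those that meet
# minimum support (the prune step). Toy transactions.
from itertools import combinations
from collections import Counter

transactions = [{"beer", "diapers", "chips"}, {"beer", "diapers"},
                {"diapers", "milk"}, {"beer", "chips"}]
min_support = 2

def frequent_itemsets(k):
    counts = Counter()
    for t in transactions:
        for itemset in combinations(sorted(t), k):
            counts[itemset] += 1
    return {s: n for s, n in counts.items() if n >= min_support}

print(frequent_itemsets(2))  # {('beer', 'chips'): 2, ('beer', 'diapers'): 2}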

PageRank

A web page importance/ranking algorithm. PageRank originated at Google. Its core idea is to treat a page's inbound links as the measure of its quality; if a page contains several links pointing outward, its PR value is divided equally among them. PageRank is also vulnerable to LinkSpam attacks. Detailed introduction link
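The rule that a page's PR value is split equally over its outbound links translates directly into the classic power iteration; a toy sketch with a 0.85 damping factor and a three-page web:

# Toy PageRank power iteration.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}

for _ in range(50):
    new = {p: (1 - 0.85) / len(pages) for p in pages}
    for page, outs in links.items():
        share = rank[page] / len(outs)  # PR split equally over outbound links
        for out in outs:
            new[out] += 0.85 * share
    rank = new

print({p: round(r, 3) for p, r in rank.items()})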

RandomForest

The random forest algorithm. The idea is decision trees plus ensembling: the trees are CART classification-and-regression trees, and combining the individual trees' weak classifiers yields a final strong classifier. When building each sub-tree, a random sample of the rows and a random subset of the attributes are used, which prevents overfitting. Detailed introduction link
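In scikit-learn, the row sampling and per-split feature sampling described here are the default behavior of RandomForestClassifier; a short sketch on the bundled iris data:

# Random forest sketch: each tree sees a bootstrap sample of the rows and a
# random subset of features at each split, which curbs overfitting.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(forest.predict(X[:3]))        # predictions from the ensemble vote
print(forest.feature_importances_)  # which features the trees leaned on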

Artificial Neural Network

The term "neural network" actually comes from biology; what we refer to should properly be called "artificial neural networks" (ANNs).
Artificial neural networks have rudimentary adaptive and self-organizing abilities: they change synaptic weight values during learning or training to adapt to the demands of their environment. The same network can serve different functions depending on how and on what it is trained. An ANN is a system capable of learning, able to develop knowledge to the point of exceeding the designer's original level. Its training generally takes one of two forms. One is supervised learning (learning with a teacher), which uses given sample standards for classification or imitation. The other is unsupervised learning (learning without a teacher), in which only the learning method or certain rules are specified; what is actually learned varies with the system's environment (that is, its input signals), and the system can discover environmental features and regularities on its own, in a way closer to the human brain.
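The smallest example of "learning by adjusting synaptic weights" is the perceptron rule; a toy supervised sketch that learns the AND function:

# Toy perceptron: nudge the weights toward every example it misclassifies.
def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out  # zero when the prediction is already right
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(and_gate)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
       for (x1, x2), _ in and_gate])  # -> [0, 0, 0, 1]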

VI. Cases

6.1 Beer and diapers

The "beer and diapers" story arose in America's Walmart supermarkets in the 1990s. Analyzing sales data, Walmart managers noticed a puzzling phenomenon: in certain circumstances, "beer" and "diapers," two apparently unrelated items, frequently showed up in the same shopping basket. This unusual pattern caught managers' attention, and follow-up investigation traced it to young fathers.

In American families with infants, the mother usually stays home with the baby while the young father goes to the supermarket to buy diapers. While buying diapers, fathers often picked up beer for themselves, so beer and diapers, two seemingly unconnected items, frequently landed in the same basket. If a young father could find only one of the two items in a store, he was likely to abandon the purchase and go to another store until he could buy both beer and diapers in one trip. Having discovered this, Walmart began placing beer and diapers in the same area of its stores so that young fathers could find both items at once and finish their shopping quickly; Walmart in turn got these customers to buy two items rather than one, earning solid sales revenue. That is the origin of the "beer and diapers" story.

The "beer and diapers" story of course needed technical support. In 1993, the American scholar Agrawal proposed association algorithms for discovering relationships between items by analyzing the sets of goods in shopping baskets, and inferring customers' buying behavior from those relationships. Agrawal formulated the computation of item associations mathematically and algorithmically: the Apriori algorithm. Walmart began applying the Apriori algorithm to point-of-sale data analysis in the 1990s and succeeded, and thus the "beer and diapers" story was born.

6.2 Data analysis helps the Cincinnati Zoo improve customer satisfaction

Founded in 1873, the Cincinnati Zoo & Botanical Garden is one of the most famous in the world, enjoying a distinguished reputation for species conservation and preservation and for a breeding program with high survival rates. It covers 71 acres, houses 500 animal species and more than 3,000 plant species, and is one of the most-visited zoos in the US; it has been named one of Zagat's ten best zoos, was rated by Parents magazine as the zoo children like best, and receives more than 1.3 million visitors a year.

The zoo is a nonprofit and receives the lowest public subsidy of any zoo in Ohio -- indeed in the US. Excluding government subsidies, self-raised funds account for more than two-thirds of its $26 million annual budget, so it must constantly find ways to increase revenue. The best way to do that is to provide better service to staff and visitors and raise attendance, a win for the zoo, its customers, and taxpayers alike.

With the solution's powerful collection and processing capabilities, connectivity, analysis, and the insight these bring, the organization saw benefits in each of the following areas after deployment:
Understanding each customer's browsing, usage, and spending patterns, and acting on their distribution in time and geography to improve the visitor experience while maximizing revenue.
Segmenting visitors by spending and visiting behavior, and running marketing and promotions aimed at each segment, significantly improving loyalty and retention.
Identifying low-spending visitors and targeting them with strategic direct mail, while rewarding loyal customers through creative marketing and incentive programs.
Gaining a 360-degree view of customer behavior to optimize marketing decisions, saving more than $40,000 in marketing costs in the first year after deployment while strengthening measurable results.
Using geographic analysis to reveal the many promotions and discount programs that were not delivering expected results, and redeploying resources to higher-yield activities, saving the zoo more than $100,000 a year.
Raising overall attendance through stronger marketing, adding at least 50,000 visits in 2011.
Using insights to strengthen operations: for example, ice cream sales surge just before closing, so the zoo extended ice cream stand hours until closing time, adding $2,000 a day in summer revenue.
Food and beverage sales up 30.7% year over year, and retail sales up 5.9%.
Letting the senior management team make better decisions without IT involvement or support.
Bringing analytics into the boardroom, with intuitive tools that help business staff master the data.

6.3 Sentiment analysis of the Zhaotong, Yunnan incident of police beating a middle school student

Background:

On May 20, a netizen posted on Weibo that Kong Dezheng, a second-year student at Ludian No. 2 Middle School in Zhaotong, Yunnan, had said "you on the phone, come down here" to three police officers who had responded to a call at the school and were about to drive off. Two officers in the vehicle heard him, got out, chased the student down, and subjected him to a beating of punches and kicks.

On May 26, the news office of the Ludian County public security bureau responded: the bureau had suspended the officer involved from duty, dismissed the two auxiliary officers who beat the student, and would take further action in accordance with law and regulation based on the investigation. The bureau also pledged to strengthen the education and management of its force and resolutely prevent such incidents from happening again.

Development:

On May 26, public attention to the incident rose sharply. Media coverage dwelt on angles such as "the homeroom teacher says this student often caused trouble and had poor grades," "the beaten student's classmates went to the police station to demand an explanation," and "the school asked students to delete photos." The exposure of the school's demand to delete photos threatened to widen the controversy.

On the evening of May 26, Xinhua published "Police respond to 'Yunnan student beaten by two officers': officer suspended, auxiliary officers dismissed," placing the official handling of the case on central mainstream online media; portals such as NetEase, Sina, and Tencent reposted it, giving the official response wide distribution.

Trend of public attention to the Zhaotong police beating incident (sample: 290 posts)

Summary:

"Police beat a student, and there were pictures to prove it: five days after the incident, the Ludian County police found themselves in the eye of the public opinion storm. After the incident, local officials responded actively and disciplined those involved on May 26; this decisive assignment of responsibility effectively calmed public sentiment and defused the crisis.

Looking at how the story spread: the incident occurred on May 20, but public debate only erupted on the 25th. Four quiet days led the Ludian police to assume the matter was closed and perhaps already forgotten by those involved. Had the active local netizen "Zhibo Yunnan" not posted about it on May 25, and had the local traditional paper Life New Daily not picked it up, the affair might truly have ended there; but the course of public opinion admits no hypotheticals. The lesson, at minimum, is that negative information on Weibo and other self-media platforms must be monitored in real time -- ordinary grassroots users must be monitored, and locally verified, active netizens even more so. In a sense, locally verified netizens are the more powerful "public opinion engine": once negative news is posted or reposted by them, its reach and the pressure it generates are greater.

The school also played a critical role in this incident. Neither the homeroom teacher's response nor the school's was appropriate. The school-level instruction to "delete photos" was bound to provoke resentment among netizens and students, and that resentment only intensified students' urge to spread the story. The teacher's remarks that the student "had poor grades and liked to stir things up" were read as "the student deserved the beating"; against the backdrop of teachers' already battered public image, such remarks reflect a lapse of responsibility. The school's and the teacher's missteps clearly increased the difficulty of handling the incident and guiding public opinion, and that should not have happened." -- Zhu Minggang, chief sentiment analyst, People's Daily Online Public Opinion Monitoring Office

VII. Big data cloud-image showcase

[cloud-image graphics from the original post]

End.

When reposting, please credit 36 Big Data (36dsj.com): 36 Big Data -- Big Data Is All Around You | Real-Life Big Data Analysis Cases and the Technology Behind Them


          Six Benefits of Cloud Computing with Amazon Web Services   

The post Six Benefits of Cloud Computing with Amazon Web Services appeared first on The Learning Catalyst.


          And it gives a red    
Auto insurance in Nevada is mandated by the state. Drivers who are caught operating a vehicle that is not insured are subject to penalties, fines and suspended licenses. There is little chance that drivers can get away with driving without insurance, because the state keeps tabs on all registered vehicles. The technology involved is impressive, including the systems used to track which vehicles may or may not have coverage. The state of Nevada is well aware of which cars and trucks have current policies and which ones have policies that have lapsed. The method of keeping track used to be cumbersome, but today it is quite easy. The state uses a computer program to keep track of vehicle insurance coverage. The system alerts the state when a policy has ended or been canceled, and it gives a red flag to ones that have not been reinstated or replaced by another policy. The Department of Motor Vehicles works tirelessly to follow up on each red flag. Drivers will get a time-sensitive letter that requires a response within twenty days. The letter requests proof of insurance or proof that the vehicle has been sold and relocated. Consumers can either send a response via regular mail or they can visit the DMV website to make a response.
          Account Manager - North - Captec Ltd - Field Based   
Expect this to be 60-75% of your working week. An exciting opportunity has arisen for a field based Account Manager position for specialised computing equipment... £27,000 - £33,000 a year
From Indeed - Fri, 30 Jun 2017 17:23:00 GMT - View all Field Based jobs
          Interview of Google CEO   

An interview with the Google CEO, from faz-community, is excerpted below.


- Sometimes video and display advertising works, sometimes it doesn't. Mobile works all the time.

- Mobile will be a larger business than the PC-Web. But it will take a few years.

- The web 2.0 architecture is not necessarily a revenue opportunity. This is not where the money is.

- It is obvious that people spend a lot of time in social networks. I believe there will be some new advertising products, that will work, but I don't think they are invented yet.

- Cloud computing is a business model for Google.

- Google may be the leader in text ads, but not in the online ad market in general or in display advertising. Yahoo is the leader in display advertising.

- Four years ago Google agreed that Larry and Sergey and I would work together for at least 20 years. So we have at least 16 years to go.



(Source: faz-community)

          Re: WHY is everybody picking on Wii?   

As mentioned in another post, I was a bit of a late comer to the Wii, having purchased mine in 2011. At the time, I narrowed it down to the Xbox or the Wii. I knew the Wii had less power and fewer games (that would appeal to adults), but I went with the Wii for 2 reasons: first, it had enough games that appealed to me that I knew it would get used. Second, the motion sensing games required less space to be effective (installing into a smallish room) than it appeared the Xbox would require, so I went with the Wii. If I want to play a game that requires lots of computing horsepower, I'll play on the pc!
Craig


          15-AY082NA 4GB RAM, 500GB HDD 15.6" Notebook   
15-AY082NA  4GB RAM,  500GB HDD 15.6" Notebook

15-AY082NA 4GB RAM, 500GB HDD 15.6" Notebook

Tackle all your daily tasks with an affordable laptop that comes packed with all the features you need. Get all the power you want with confidence in a trusted name like HP. Windows 10 39.6 cm (15.6") display 4GB RAM and 500GB HDD storage


          870-185NA OMEN 16GB RAM, 3TB HDD, 256GB SSD, NVIDIA® GeForce® GTX 1080 Gaming Desktop PC   
870-185NA OMEN 16GB RAM, 3TB HDD, 256GB SSD, NVIDIA® GeForce® GTX 1080 Gaming Desktop PC

870-185NA OMEN 16GB RAM, 3TB HDD, 256GB SSD, NVIDIA® GeForce® GTX 1080 Gaming Desktop PC

Rank up or be forgotten Go from gamer to gaming legend with the thrilling level of power competition demands. This OMEN desktop combines cutting-edge design and the industry's latest hardware to deliver a performance monster, ready to handle intensive AAA games with ease, and look good doing it. Windows 10 Home 64 Intel® Core™ i7-6700 processor 16GB RAM with 256GB SSD and 3TB storage NVIDIA® GeForce® GTX 1080 (8 GB GDDR5X dedicated) Customisable LED lighting; Air cooling; Landing pad for easy port access; Tool-less access


          Discover and learn the breadth of the ecosystem supported by Click&Cloud, a tutorial by Laura Valeye   
Dear club members, I am pleased to present this tutorial for discovering and learning the breadth of the ecosystem supported by Click&Cloud.

Click&Cloud is a PaaS cloud platform that integrates a wide range of application layers. In this tutorial, you will learn how this solution can cover many of the needs of professional developers.
Happy reading, and don't hesitate to share your comments.

Find the best courses and tutorials to learn...
          IBM Watson To Predict Best Wimbledon Matches   
AI is probably the most interesting field of tech that is emerging. Now that computing power is almost limitless (subject to budget) with access to massive amounts of compute through cloud services, there are all sorts of interesting applications possible. So, naturally, AI and whatever flavour of sports-ball you are into is a candidate for the AI treatment. IBM is putting Watson on the court at Wimbledon, to find the best matches. More »
   
 
 

          When Good Companies Are Scary Stock Picks   
Beware big companies trying to navigate even bigger transitions. Case in point: Oracle and the cloud-computing revolution.
          Press release: New method could enable more stable and scalable quantum computing, Penn physicists report   

          The Ultimate Data Infrastructure Architect Bundle for $36   
From MongoDB to Apache Flume, This Comprehensive Bundle Will Have You Managing Data Like a Pro In No Time
Expires June 01, 2022 23:59 PST
Buy now and get 94% off

Learning ElasticSearch 5.0


KEY FEATURES

Learn how to use ElasticSearch in combination with the rest of the Elastic Stack to ship, parse, store, and analyze logs! You'll start by getting an understanding of what ElasticSearch is, what it's used for, and why it's important before being introduced to the new features of Elastic Search 5.0.

  • Access 35 lectures & 3 hours of content 24/7
  • Go through each of the fundamental concepts of ElasticSearch such as queries, indices, & aggregation
  • Add more power to your searches using filters, ranges, & more
  • See how ElasticSearch can be used w/ other components like LogStash, Kibana, & Beats
  • Build, test, & run your first LogStash pipeline to analyze Apache web logs

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Ethan Anthony is a San Francisco based Data Scientist who specializes in distributed data centric technologies. He is also the Founder of XResults, where the vision is to harness the power of data to innovate and deliver intuitive customer facing solutions, largely to non-technical professionals. Ethan has over 10 combined years of experience in cloud based technologies such as Amazon webservices and OpenStack, as well as the data centric technologies of Hadoop, Mahout, Spark and ElasticSearch. He began using ElasticSearch in 2011 and has since delivered solutions based on the Elastic Stack to a broad range of clientele. Ethan has also consulted worldwide, speaks fluent Mandarin Chinese and is insanely curious about human cognition, as related to cognitive dissonance.

Apache Spark 2 for Beginners


KEY FEATURES

Apache Spark is one of the most widely-used large-scale data processing engines and runs at extremely high speeds. It's a framework that has tools that are equally useful for app developers and data scientists. This book starts with the fundamentals of Spark 2 and covers the core data processing framework and API, installation, and application development setup.

  • Access 45 lectures & 5.5 hours of content 24/7
  • Learn the Spark programming model through real-world examples
  • Explore Spark SQL programming w/ DataFrames
  • Cover the charting & plotting features of Python in conjunction w/ Spark data processing
  • Discuss Spark's stream processing, machine learning, & graph processing libraries
  • Develop a real-world Spark application

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Rajanarayanan Thottuvaikkatumana, Raj, is a seasoned technologist with more than 23 years of software development experience at various multinational companies. He has lived and worked in India, Singapore, and the USA, and is presently based out of the UK. His experience includes architecting, designing, and developing software applications. He has worked on various technologies including major databases, application development platforms, web technologies, and big data technologies. Since 2000, he has been working mainly in Java related technologies, and does heavy-duty server-side programming in Java and Scala. He has worked on very highly concurrent, highly distributed, and high transaction volume systems. Currently he is building a next generation Hadoop YARN-based data processing platform and an application suite built with Spark using Scala.

Raj holds one master's degree in Mathematics, one master's degree in Computer Information Systems and has many certifications in ITIL and cloud computing to his credit. Raj is the author of Cassandra Design Patterns - Second Edition, published by Packt.

When not working on the assignments his day job demands, Raj is an avid listener to classical music and watches a lot of tennis.

Designing AWS Environments


KEY FEATURES

Amazon Web Services (AWS) provides trusted, cloud-based solutions to help businesses meet all of their needs. Running solutions in the AWS Cloud can help you (or your company) get applications up and running faster while providing the security needed to meet your compliance requirements. This course leaves no stone unturned in getting you up to speed with administering AWS.

  • Access 19 lectures & 2 hours of content 24/7
  • Familiarize yourself w/ the key capabilities to architect & host apps, websites, & services on AWS
  • Explore the available options for virtual instances & demonstrate launching & connecting to them
  • Design & deploy networking & hosting solutions for large deployments
  • Focus on security & important elements of scalability & high availability

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Wayde Gilchrist started moving customers of his IT consulting business into the cloud and away from traditional hosting environments in 2010. In addition to consulting, he delivers AWS training for Fortune 500 companies, government agencies, and international consulting firms. When he is not out visiting customers, he is delivering training virtually from his home in Florida.

Learning MongoDB


KEY FEATURES

Businesses today have access to more data than ever before, and a key challenge is ensuring that data can be easily accessed and used efficiently. MongoDB makes it possible to store and process large sets of data in ways that drive up business value. Learning MongoDB will give you the flexibility of unstructured storage, combined with robust querying and post-processing functionality, making you an asset to enterprise Big Data needs.

  • Access 64 lectures & 40 hours of content 24/7
  • Master data management, queries, post processing, & essential enterprise redundancy requirements
  • Explore advanced data analysis using both MapReduce & the MongoDB aggregation framework
  • Delve into SSL security & programmatic access using various languages
  • Learn about MongoDB's built-in redundancy & scale features, replica sets, & sharding

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Daniel Watrous is a 15-year veteran of designing web-enabled software. His focus on data store technologies spans relational databases, caching systems, and contemporary NoSQL stores. For the last six years, he has designed and deployed enterprise-scale MongoDB solutions in semiconductor manufacturing and information technology companies. He holds a degree in electrical engineering from the University of Utah, focusing on semiconductor physics and optoelectronics. He also completed an MBA from the Northwest Nazarene University. In his current position as senior cloud architect with Hewlett Packard, he focuses on highly scalable cloud-native software systems.

Learning Hadoop 2


KEY FEATURES

Hadoop emerged in response to the proliferation of masses and masses of data collected by organizations, offering a strong solution to store, process, and analyze what has commonly become known as Big Data. It comprises a comprehensive stack of components designed to enable these tasks on a distributed scale, across multiple servers and thousands of machines. In this course, you'll learn Hadoop 2, introducing yourself to the powerful system synonymous with Big Data.

  • Access 19 lectures & 1.5 hours of content 24/7
  • Get an overview of the Hadoop component ecosystem, including HDFS, Sqoop, Flume, YARN, MapReduce, Pig, & Hive
  • Install & configure a Hadoop environment
  • Explore Hue, the graphical user interface of Hadoop
  • Discover HDFS to import & export data, both manually & automatically
  • Run computations using MapReduce & get to grips working w/ Hadoop's scripting language, Pig
  • Siphon data from HDFS into Hive & demonstrate how it can be used to structure & query data sets

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Randal Scott King is the Managing Partner of Brilliant Data, a consulting firm specialized in data analytics. In his 16 years of consulting, Scott has amassed an impressive list of clientele from mid-market leaders to Fortune 500 household names. Scott lives just outside Atlanta, GA, with his children.

ElasticSearch 5.x Cookbook eBook


KEY FEATURES

ElasticSearch is a Lucene-based distributed search server that allows users to index and search unstructured content with petabytes of data. Through this ebook, you'll be guided through comprehensive recipes covering what's new in ElasticSearch 5.x as you create complex queries and analytics. By the end, you'll have an in-depth knowledge of how to implement the ElasticSearch architecture and be able to manage data efficiently and effectively.

  • Access 696 pages of content 24/7
  • Perform index mapping, aggregation, & scripting
  • Explore the modules of Cluster & Node monitoring
  • Understand how to install Kibana to monitor a cluster & extend Kibana for plugins
  • Integrate your Java, Scala, Python, & Big Data apps w/ ElasticSearch

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Alberto Paro is an engineer, project manager, and software developer. He currently works as freelance trainer/consultant on big data technologies and NoSQL solutions. He loves to study emerging solutions and applications mainly related to big data processing, NoSQL, natural language processing, and neural networks. He began programming in BASIC on a Sinclair Spectrum when he was eight years old, and to date, has collected a lot of experience using different operating systems, applications, and programming languages.

In 2000, he graduated in computer science engineering from Politecnico di Milano with a thesis on designing multiuser and multidevice web applications. He assisted professors at the university for about a year. He then came in contact with The Net Planet Company and loved their innovative ideas; he started working on knowledge management solutions and advanced data mining products. In summer 2014, his company was acquired by a big data technologies company, where he worked until the end of 2015 mainly using Scala and Python on state-of-the-art big data software (Spark, Akka, Cassandra, and YARN). In 2013, he started freelancing as a consultant for big data, machine learning, Elasticsearch and other NoSQL products. He has created or helped to develop big data solutions for business intelligence, financial, and banking companies all over the world. A lot of his time is spent teaching how to efficiently use big data solutions (mainly Apache Spark), NoSql datastores (Elasticsearch, HBase, and Accumulo) and related technologies (Scala, Akka, and Playframework). He is often called to present at big data or Scala events. He is an evangelist on Scala and Scala.js (the transcompiler from Scala to JavaScript).

In his spare time, when he is not playing with his children, he likes to work on open source projects. When he was in high school, he started contributing to projects related to the GNOME environment (gtkmm). One of his preferred programming languages is Python, and he wrote one of the first NoSQL backends on Django for MongoDB (Django-MongoDBengine). In 2010, he began using Elasticsearch to provide search capabilities to some Django e-commerce sites and developed PyES (a Pythonic client for Elasticsearch), as well as the initial part of the Elasticsearch MongoDB river. He is the author of Elasticsearch Cookbook as well as a technical reviewer of Elasticsearch Server-Second Edition, Learning Scala Web Development, and the video course, Building a Search Server with Elasticsearch, all of which are published by Packt Publishing.

Fast Data Processing with Spark 2 eBook


KEY FEATURES

Compared to Hadoop, Spark is a significantly simpler way to process Big Data at speed. It is increasing in popularity with data analysts and engineers everywhere, and in this course you'll learn how to use Spark with minimum fuss. Starting with the fundamentals, this ebook will help you take your Big Data analytical skills to the next level.

  • Access 274 pages of content 24/7
  • Get to grips w/ some simple APIs before investigating machine learning & graph processing
  • Learn how to use the Spark shell
  • Load data & build & run your own Spark applications
  • Discover how to manipulate RDD
  • Understand useful machine learning algorithms w/ the help of Spark MLlib & R

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Krishna Sankar is a Senior Specialist—AI Data Scientist with Volvo Cars focusing on Autonomous Vehicles. His earlier stints include Chief Data Scientist at http://cadenttech.tv/, Principal Architect/Data Scientist at Tata America Intl. Corp., Director of Data Science at a bioinformatics startup, and as a Distinguished Engineer at Cisco. He has been speaking at various conferences including ML tutorials at Strata SJC and London 2016, Spark Summit, Strata-Spark Camp, OSCON, PyCon, and PyData, writes about Robots Rules of Order, Big Data Analytics—Best of the Worst, predicting NFL, Spark, Data Science, Machine Learning, Social Media Analysis as well as has been a guest lecturer at the Naval Postgraduate School. His occasional blogs can be found at https://doubleclix.wordpress.com/. His other passion is flying drones (working towards Drone Pilot License (FAA UAS Pilot) and Lego Robotics—you will find him at the St.Louis FLL World Competition as Robots Design Judge.

MongoDB Cookbook: Second Edition eBook


KEY FEATURES

MongoDB is a high-performance, feature-rich, NoSQL database that forms the backbone of the systems that power many organizations. Packed with easy-to-use features that have become essential for a variety of software professionals, MongoDB is a vital technology to learn for any aspiring data scientist or systems engineer. This cookbook contains many solutions to the everyday challenges of MongoDB, as well as guidance on effective techniques to extend your skills and capabilities.

  • Access 274 pages of content 24/7
  • Initialize the server in three different modes w/ various configurations
  • Get introduced to programming language drivers in Java & Python
  • Learn advanced query operations, monitoring, & backup using MMS
  • Find recipes on cloud deployment, including how to work w/ Docker containers along MongoDB

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Amol Nayak is a MongoDB certified developer and has been working as a developer for over 8 years. He is currently employed with a leading financial data provider, working on cutting-edge technologies. He has used MongoDB as a database for various systems at his current and previous workplaces to support enormous data volumes. He is an open source enthusiast and supports it by contributing to open source frameworks and promoting them. He has made contributions to the Spring Integration project, and his contributions are the adapters for JPA, XQuery, MongoDB, Push notifications to mobile devices, and Amazon Web Services (AWS). He has also made some contributions to the Spring Data MongoDB project. Apart from technology, he is passionate about motor sports and is a race official at Buddh International Circuit, India, for various motor sports events. Earlier, he was the author of Instant MongoDB, Packt Publishing.

Cyrus Dasadia always liked tinkering with open source projects since 1996. He has been working as a Linux system administrator and part-time programmer for over a decade. He works at InMobi, where he loves designing tools and platforms. His love for MongoDB started in 2013, when he was amazed by its ease of use and stability. Since then, almost all of his projects are written with MongoDB as the primary backend. Cyrus is also the creator of an open source alert management system called CitoEngine. He likes spending his spare time trying to reverse engineer software, playing computer games, or increasing his silliness quotient by watching reruns of Monty Python.

Learning Apache Kafka: Second Edition eBook


KEY FEATURES

Apache Kafka is simple to describe at a high level but has an immense amount of technical detail when you dig deeper. This step-by-step, practical guide will help you take advantage of the power of Kafka to handle hundreds of megabytes of messages per second from multiple clients.

  • Access 120 pages of content 24/7
  • Set up Kafka clusters
  • Understand basic blocks like producer, broker, & consumer blocks
  • Explore additional settings & configuration changes to achieve more complex goals
  • Learn how Kafka is designed internally & what configurations make it most effective
  • Discover how Kafka works w/ other tools like Hadoop, Storm, & more

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Nishant Garg has over 14 years of software architecture and development experience in various technologies, such as Java Enterprise Edition, SOA, Spring, Hadoop, Hive, Flume, Sqoop, Oozie, Spark, Shark, YARN, Impala, Kafka, Storm, Solr/Lucene, NoSQL databases (such as HBase, Cassandra, and MongoDB), and MPP databases (such as GreenPlum).

He received his MS in software systems from the Birla Institute of Technology and Science, Pilani, India, and is currently working as a technical architect for the Big Data R&D Group with Impetus Infotech Pvt. Ltd. Previously, Nishant has enjoyed working with some of the most recognizable names in IT services and financial industries, employing full software life cycle methodologies such as Agile and SCRUM.

Nishant has also undertaken many speaking engagements on big data technologies and is the author of HBase Essentials, Packt Publishing.

Apache Flume: Distributed Log Collection for Hadoop: Second Edition eBook


KEY FEATURES

Apache Flume is a distributed, reliable, and available service used to efficiently collect, aggregate, and move large amounts of log data. It's used to stream logs from application servers to HDFS for ad hoc analysis. This ebook starts with an architectural overview of Flume and its logical components, then pulls everything together into a real-world, end-to-end use case encompassing simple and advanced features.

  • Access 178 pages of content 24/7
  • Explore channels, sinks, & sink processors
  • Learn about sources & channels
  • Construct a series of Flume agents to dynamically transport your stream data & logs from your systems into Hadoop

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Steve Hoffman has 32 years of experience in software development, ranging from embedded software development to the design and implementation of large-scale, service-oriented, object-oriented systems. For the last 5 years, he has focused on infrastructure as code, including automated Hadoop and HBase implementations and data ingestion using Apache Flume. Steve holds a BS in computer engineering from the University of Illinois at Urbana-Champaign and an MS in computer science from DePaul University. He is currently a senior principal engineer at Orbitz Worldwide (http://orbitz.com/).

          Cyber Security Volume II: Network Security for $15   
Discuss Network Security, Firewalls, & Learn the Best Password Managers On the Market
Expires May 20, 2022 23:59 PST
Buy now and get 87% off

KEY FEATURES

Over this course, you'll learn network hacking techniques and vulnerability scanning to discover security risks across an entire network, building skills for which companies are willing to pay top dollar. Whether you want to protect your own network or protect corporate networks professionally, this course will get you up to speed.

  • Access 106 lectures & 12.5 hours of content 24/7
  • Architect your network for maximum security & prevent local & remote attacks
  • Understand the various types of firewalls available, including layer 4 firewalls like Iptables & PF
  • Discuss firewalls on all platforms, including Windows, Mac OS, & Linux
  • Explore wireless security & learn how WiFi is hacked
  • Use tools like Wireshark, Tcpdump, & Syslog to monitor your network
  • Dive into search engine privacy & tracking, learning how to mitigate tracking & privacy issues

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Nathan House has over 24 years of experience in cyber security, during which he has advised some of the largest companies in the world, assuring security on multi-million and multi-billion pound projects. He is CEO of Station X, a cyber security consultancy. More recently, Nathan acted as the lead security consultant on a number of the UK's mobile banking and payment solutions, helping secure over £71Bn in transactions to date.

His clients have included: BP, ExxonMobil, Shell, Vodafone, VISA, T-mobile, GSK, COOP Banking Group, Royal Bank of Scotland, Natwest, Yorkshire Bank, BG Group, BT, and London 2012.

Over the years he has spoken at a number of security conferences, developed free security tools, and discovered serious security vulnerabilities in leading applications. Nathan's qualifications and education include:

  • BSc. (Hons) Computing 'Networks & Communication' 1st Class Honors
  • SCF : SABSA Chartered Architect Foundation
  • CISSP : Certified Information Systems Security Professional
  • CISA : Certified Information Systems Auditor
  • CISM : Certified Information Security Manager
  • ISO 27001 Certified ISMS Lead Auditor
  • CEH : Certified Ethical Hacker
  • OSCP : Offensive Security Certified Professional

          Cyber Security Volume I: Hackers Exposed for $15   
Learn How to Stop Hackers, Prevent Tracking, & Counter Government Surveillance
Expires May 20, 2022 23:59 PST
Buy now and get 87% off

KEY FEATURES

Internet security has never been as important as it is today, with more information than ever being handled digitally around the globe. In the first course of this four-volume bundle, you'll get an introduction to hacking and how to protect yourself and others. You'll develop an understanding of the threat and vulnerability landscape through threat modeling and risk assessments, and build a foundation from which to expand your security knowledge.

  • Access 117 lectures & 11 hours of content 24/7
  • Explore the Darknet, malware, exploit kits, phishing, zero day vulnerabilities, & more
  • Learn about global tracking & hacking infrastructures that nation states run
  • Understand the foundations of operating system security & privacy functionality
  • Get a crash course on encryption, how it can be bypassed, & what you can do to mitigate risks
  • Discover defenses against phishing, SMShing, vishing, identity theft, & other cons

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Nathan House has over 24 years of experience in cyber security, during which he has advised some of the largest companies in the world, assuring security on multi-million and multi-billion pound projects. He is CEO of Station X, a cyber security consultancy. More recently, Nathan acted as the lead security consultant on a number of the UK's mobile banking and payment solutions, helping secure over £71Bn in transactions to date.

His clients have included: BP, ExxonMobil, Shell, Vodafone, VISA, T-mobile, GSK, COOP Banking Group, Royal Bank of Scotland, Natwest, Yorkshire Bank, BG Group, BT, and London 2012.

Over the years he has spoken at a number of security conferences, developed free security tools, and discovered serious security vulnerabilities in leading applications. Nathan's qualifications and education include:

  • BSc. (Hons) Computing 'Networks & Communication' 1st Class Honors
  • SCF : SABSA Chartered Architect Foundation
  • CISSP : Certified Information Systems Security Professional
  • CISA : Certified Information Systems Auditor
  • CISM : Certified Information Security Manager
  • ISO 27001 Certified ISMS Lead Auditor
  • CEH : Certified Ethical Hacker
  • OSCP : Offensive Security Certified Professional

          Podcast411: Going 1:1 – Planning for Success by Jake Heister   
This audio podcast is a recording of Jake Heister’s presentation on December 4, 2013, at the Interactive Learning Institute in Norman, Oklahoma, titled, “Going 1:1 – Planning for Success.” The ILI Conference is sponsored each year by the K-20 Center at the University of Oklahoma. The official session description was “Many well-intended 1:1 initiatives fall short of their potential due to lack of planning in a few key areas. This session will detail the planning process for our current implementation of a 1:1 Chromebook environment, tips for success, and lessons learned.” Although this description mentions Chromebooks specifically, Jake addresses planning issues for 1:1 projects applicable to all platforms and devices. This was an outstanding presentation, and one of the best the publisher (Wesley Fryer) has heard in the past five years about 1:1 computing initiatives. Please refer to the podcast shownotes for links to Jake’s slides as well as selective, republished tweets shared by @wfryer during Jake’s presentation. Follow Jake on Twitter: @jakeheister.
          Podcast391: Voices from Mobile Learning 2012   
This podcast is a combined series of recordings created by participants in Wesley Fryer's "Simple Ideas for Powerful Sharing" media scavenger hunt session at the Mobile Learning 2012 conference in Phoenix, Arizona, on April 12, 2012. These reflections were submitted as audio and video reflections in response to several questions provided in the media scavenger hunt instructions. Access these instructions from the podcast shownotes on a mobile-friendly website, as an ePUB ebook, a Kindle .MOBI eBook, or a PDF file. Please refer to the podcast shownotes for additional links, including the Posterous blog where all these responses were posted by participants. Many thanks to all participants, and to Peggy George whose encouragement led to this "voices from the conference" podcast. Lots of great ideas and important perceptions were shared here which school leaders should consider as we continue to explore what it can mean to be "21st century learners" in mobile computing environments.
          Podcast371: Cartooning Around in Language Arts by Malia Triggs   
This is a podcast recording of Malia Triggs' presentation "Cartooning Around in Language Arts" at the 2011 Mississippi Educational Computing Association annual conference in Jackson on February 8th. Malia is a 5th and 6th grade teacher in Forrest County School District. She discussed how elementary language arts teachers can use the website "Go Animate" with students to help them create short videos in conjunction with writing assignments meeting language arts standards. The official session description was: Learn how incorporating free online animation programs into your language arts lessons can guarantee engagement and excitement, while covering all your objectives in one project. You will explore such reading objectives as, setting, character, author's purpose, etc. You will explore language skills such as, vivid language, figurative language, steps in the writing process, etc. Student work samples will be available, as well as, a timeline of assignments and MS framework objectives met.
          Podcast 370: Cool Tools for the Classroom by Dr Carl Owens   
This is a podcast recording of Dr. Carl Owens' presentation, "Cool Tools for the Classroom," at the 2011 Mississippi Educational Computing Association annual conference in Jackson on February 8th. Carl is a Professor and the Director of Technology at the College of Education, Tennessee Technological University. The official description of Carl's session was: The session will explore many new and exciting tools for teachers who are using digital technology in the classroom. Participants will then select tools and explore their uses in the educational environment. These will include digital cameras, digital microscopes, podcasting tools, music creation tools, and a number of applications to enhance teaching and learning.
          Podcast365: Leadership Lessons for 1:1 Learning Projects from Leslie Wilson (The One-to-One Institute)   
This podcast is a recording of a presentation on November 9, 2009, by Leslie Wilson, chief executive officer of the One-to-One Institute and former co-leader of Michigan's Freedom to Learn Project. Leslie shared this session at the Great Lakes 1:1 Computing Conference on November 9, 2009. It was titled, "Leadership: The Critical Factor (for 1:1 success.)" As Leslie describes, the leadership factor, vision, and "the why" of one to one learning is one of the most critical pieces of a transformational learning initiative. A compelling, articulated vision for 1:1 learning is pivotal, along with the ability for leaders to connect that vision to the outside world. Refer to the podcast shownotes for links and more information about the One to One Institute and their outstanding annual conference.
          Podcast344: Technology Trends in Higher Education (April 2010)   
What are the technology trends in April 2010 which college faculty need to understand and leverage to extend opportunities for student learning? This podcast is a recording of a presentation shared by Wesley Fryer at Northeastern State University in Broken Arrow, Oklahoma, on 9 April 2010. This presentation addresses the following topics: Tablets, Cloud Computing, Social Media, Laptops / Mobile Devices, Online Publishing, Multimedia Texts, Online Video, Digital Footprints, Open Licensing / OER, and Visual Communication. Audio from shared videos has been edited out of this podcast recording. Original videos are available in the Google Presentation, linked in the podcast shownotes.
          Podcast339: Communicating in the Digital Age (Presentation for Pioneer Library System Librarians)   
This podcast is a recording of a presentation shared by Wesley Fryer with librarians and staff of the Pioneer Library System of Oklahoma on February 15, 2010, in Moore. Our digital communications landscape today includes far more than email. By working in the "cloud" using collaborative environments like Google Docs and Google Reader, we flexibly access as well as share information on a variety of computing platforms. This session is a practical overview of the communications landscape of the early 21st century, as well as tips for library media specialists about ways to constructively and powerfully utilize these capabilities.
          Podcast335: Classroom Basics for 1:1 Computing by Shawn Massey and Wynn Draper-Bryant   
This podcast is a recording of a presentation by Shawn Massey and Wynn Draper-Bryant of Flint Community Schools, Michigan, at the One to One Institute's annual conference in Chicago, Illinois, on November 10, 2009. The title of their session was Classroom Basics for 1:1 Computing. Shawn was the project director for the Flint Community Schools “Freedom to Learn Project,” and Wynn has been a classroom teacher for 36 years. If they take her laptops away from her students, Wynn says she'd have to retire! In Flint Community Schools, select campuses have been implementing one to one laptop learning projects for almost eight years. Shawn and Wynn shared a wide variety of perspectives and ideas in this presentation, including many practical tips for other educators currently implementing 1:1 or considering the implementation of 1:1 learning projects. I particularly enjoyed and appreciated the way Shawn and Wynn integrated student comments and quotations into their presentation. I will include a link to my own textual notes from this presentation in the podcast shownotes, along with additional resource links referenced by Shawn and Wynn. Shawn and Wynn's messages about how important it is to keep moving forward, support the people who solve the problems, and celebrate the victories of everyone involved as you walk down this road of one to one computing together are 100% on target. We can learn a great deal from these passionate Michigan educators about ways to most effectively solicit community buy-in for one to one learning and support one to one projects for the long term. The PD model, the "dine and dialog" events, and the constant dialog, showcasing, and celebrations which were a part of their Freedom to Learn Project implementation plan are exemplary and can be used as models for other 1:1 programs. As Shawn says, however, remember "one size does NOT fit all." It's critical to be flexible, adaptable, and LISTEN to all the stakeholders as you move forward with 1:1 project implementation.
          Podcast334: One to One Learning with Open Source Netbooks is Practical, Affordable and Powerful - Learn Why   
One to one learning with wireless, digital devices in the hands of every learner in the classroom is the future. With netbooks running over 100 free educational applications on Ubuntu Linux, that dream can be a reality in your classroom and school district today, not tomorrow. As I explain in the introduction to this podcast featuring two interviews, I have lost NONE of my enthusiasm for Apple and Macintosh computers, but I think it would be foolish to ignore the powerful and affordable computing and learning opportunities now offered by netbooks as well as open source software. After sharing a plug for the upcoming FREE K-12 Online Conference in December and an introduction to these interviews, this podcast includes an interview with Warren Luebkeman. Warren is a co-founder of the Open 1:1 Nonprofit organization, which is based in Maine and provides a FREE Ubuntu image for netbooks loaded with over 100 educational and productivity applications. That recording was made at the ACTEM 2009 conference in Augusta, Maine, in October. The second interview is with Alex Inman, who has been implementing and supporting 1:1 initiatives for over 8 years in Milwaukee and St Louis. Alex shared a presentation at the One to One Institute's November 2009 conference called "Saving Money on Your One-to-One Program." In this interview Alex specifically addresses the viability and power of Ubuntu as a platform on netbook computers for student learning. He discusses powerful open source solutions like iTalc (for desktop monitoring) and iFolder (for cross-platform remote file sharing). Additionally, he addresses the importance of support for "cultural change" in schools for 1:1 laptop learning initiatives. That buy-in from top leadership all the way down to the classroom is even more important for laptop initiative success than the platform / hardware.
          Podcast323: R U In My Space? Y Have A Social Media Policy Guideline? (NECC09 Preso by Karen Montgomery and Wesley Fryer)   
This podcast is a recording of the presentation "R U In My Space? Y Have A Social Media Policy Guideline?" at the NECC 2009 conference in Washington D.C. on July 1, 2009. Karen Montgomery and Wesley Fryer shared this presentation, along with Gina Hartman who joined us via Skype. Gina and Karen collaborated with others to create social media guidelines in spring 2009 for the Francis Howell School District in Saint Louis, Missouri. The official session description at NECC was: As school districts explore the use of social computing throughout the school day and as an approach to extend instruction, many educators are making the decision to create a wiki, publish video online, or to participate in blogging, social networking or virtual worlds. Social media guidelines encourage educators to participate in social computing and strive to create an atmosphere of trust and individual accountability. Teachers who must hide their online activity because of nonexistent social media guidelines risk losing their jobs and reputations. A better approach is to collaboratively develop a policy that is acceptable to administrators, school board members, teachers and parents allowing for involvement in the global conversation in which many are contributing. (end of description) Please join our Facebook group, linked in the podcast shownotes. This is an important conversation which needs to take place with students, teachers, and parents in all our schools.
          Podcast312: Reinventing Education for the 21st Century (Designing School 2.0)   
This podcast is a recording of my opening keynote session at the 2009 eTechOhio conference, held in Columbus, Ohio, on February 2, 2009. This is the audio-only mp3 version, a video podcast version is available on the eTechOhio09 portal in iTunesU Ohio. Check the podcast shownotes for a direct link to iTunes. The official conference program description for this session was: As Thomas Friedman persuasively argued in this book "The World is Flat," we live in a very different and rapidly changing economic and cultural environment. Schools need to change to prepare students for the dynamic opportunities of the 21st century workforce. Collaboration in most of our schools today is still called "cheating." Our factory model of transmission-based education must be transformed into one where learners regularly collaborate, access and "remix" digital information, and extend their learning beyond the traditional bell schedule. One to one laptop initiatives, where every student and teacher have wireless computing devices; schools and libraries becoming community learning hubs offering public wireless and wired connectivity to the Internet; and the deregulation of education which frees learners to spend time in real-world, problem-based and project-based learning need to become hallmarks of education in the 21st century. This presentation shares this vision for reinventing education: Designing School 2.0, and offers suggestions for how civic leaders can move toward this vision at local levels.
          Introduction to Hadoop for $49   
Get Familiar with One of the Top Big Data Frameworks In the World
Expires January 02, 2022 23:59 PST
Buy now and get 28% off

KEY FEATURES

Hadoop is one of the most commonly used Big Data frameworks, supporting the processing of large data sets in a distributed computing environment. This tool is becoming more and more essential to big business as the world becomes more data-driven. In this introduction, you'll cover the individual components of Hadoop in detail and get a higher level picture of how they interact with one another. It's an excellent first step towards mastering Big Data processes.

  • Access 30 lectures & 5 hours of content 24/7
  • Install Hadoop in Standalone, Pseudo-Distributed, & Fully Distributed mode
  • Set up a Hadoop cluster using Linux VMs
  • Build a cloud Hadoop cluster on AWS w/ Cloudera Manager
  • Understand HDFS, MapReduce, & YARN & their interactions

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: beginner
  • IDE like IntelliJ or Eclipse required (free to download)

Compatibility

  • Internet required

THE EXPERT

Loonycorn comprises four individuals--Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi, and Navdeep Singh--who honed their tech expertise at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students.

          Podcast306: Voices of COSN 2009 (Grantwrangler, a handheld data projector, and cloud-based computing)   
This podcast from the 2009 COSN conference in Austin, Texas, features interviews with three individuals focusing on the website Grantwrangler, the 3M Micro Professional Projector MPro110, and the cloud-based computing model embraced by the company Stoneware Inc.
          Wi-Fi Hacking with Kali for $15   
Come to Grips with One of the Most Popular Ethical Hacking Tools Around
Expires August 12, 2021 23:00 PST
Buy now and get 92% off

KEY FEATURES

Network security is essential to any home or corporate internet connection, which is why ethical hackers are paid big bucks to identify gaps and threats that can take a network down. In this course, you'll learn how to protect WEP, WPA, and WPA2 networks by using Kali Linux, one of the most popular tools for ethical hackers. By course's end, you'll have the know-how to protect network environments like a pro.

  • Access 22 lectures & 1.5 hours of content 24/7
  • Set up a penetration testing environment
  • Learn 4 different ways to install & use Kali Linux
  • Understand how to hack WEP-protected WiFi & learn countermeasures
  • Discover how to hack WiFi using Hydra, a keylogger, or by removing devices

PRODUCT SPECS

Details & Requirements:

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: beginner

Compatibility:

  • Internet required

THE EXPERT

Amit Huddar is an Internet Entrepreneur and Software Engineer. He runs his own software company "Softdust," which develops products for new technologies like wearables and other gadgets. He opted for computer science engineering in 2013 at SSIT and started his software company in his first year of engineering.

His skills include: Android app development, HTML, CSS, PHP, C, C++, Java, Linux, building custom Linux OS, cloud computing, penetration testing, Kali Linux, and hacking.

          Packer Junior Earns Computing Award   
National Center for Women & Information Technology honors students for aspirations in computing
          Barcelona unveils the third-fastest supercomputer in Europe   
The Barcelona Supercomputing Center-Centro Nacional de Supercomputación (BSC-CNS) today unveils MareNostrum 4, the third-fastest supercomputer in Europe and the thirteenth-fastest in the world, multiplying its computing power tenfold compared to MareNostrum 3.

View of the new MareNostrum 4. EFE/Andreu Dalmau

According to BSC director Mateo Valero, speaking at the presentation ceremony, MareNostrum 4 has a capacity of 11.1 petaflops, meaning it can perform 11,100 trillion operations per second. As the director illustrated, MareNostrum 4 will do in one day what the first MareNostrum, from 2004, did in a year.

Supercomputers serve scientists and engineers in basic and applied research thanks to their capacity to perform large calculations, run large and complex simulations, and analyze huge volumes of data, which is why they are used in practically every discipline, from astrophysics through biomedicine to industry.

The supercomputer's innovations

Carmen Vela, Secretary of State for Research, Development and Innovation, explained that this renewal of the supercomputer involved a state investment of 34 million euros, and affirmed that it is money well spent, since it generates returns and builds international relationships.

The new facility will consist of four machines, although only one, built by Lenovo, is currently operational; according to Valero, the other three companies awarded supercomputer contracts, IBM, Intel, and Fujitsu, are finishing construction of the rest.

An external committee evaluates researchers' projects every four months, and if selected, the researchers can use the facilities at no cost, Mateo Valero explained.

Researchers with access to the supercomputer come from the BSC, the Red Española de Supercomputación (RES), or the European institution Partnership for Advanced Computing in Europe (PRACE).

"One of the projects we are working on right now involves extracting and analyzing DNA information to cure diseases in a personalized way," said Mateo Valero, who stressed that they work with top-level people such as the oncologist Josep Tabernero.

In addition, researchers from companies also have access to the supercomputer, but they must go through the evaluation committee and pay for the service. One of them is Repsol, which is analyzing the seabed of the Gulf of Mexico to create a 3D representation and improve the effectiveness of its oil exploration, Valero explained.

With this, and thanks to investment from European institutions, Mateo Valero expressed his hope that MareNostrum 5 will not have to be financed entirely by the Spanish government, though he noted that the Generalitat and the Universitat Politècnica de Catalunya (UPC) have also participated in funding the BSC.

In fact, the Generalitat's Secretary of Universities and Research, Arcadi Navarro, announced that next year the Catalan government intends to expand its research budget through a support program from which the BSC will benefit.

The BSC has 517 employees, of whom around 400 are researchers, and an annual budget of 34.3 million euros from various institutions and the private sector; from now on, 1.6 million of that will have to go toward paying for the energy consumed by the supercomputer, according to Mateo Valero.

See the original information at the source:

http://www.efefuturo.com/noticia/barcelona-supercomputador-europa/
          The top 50 dream companies for engineering and IT students around the world   

Business students can't wait to work at Google, and their engineering and IT classmates couldn't agree more.

Universum, a global research and advisory firm, surveyed 149,226 undergraduates studying engineering or IT in the 12 countries with the world's largest GDPs — Germany, France, UK, Italy, Russia, US, China, Japan, Brazil, India, Canada, and South Korea.

The engineering and IT students were asked to choose the companies and organizations they'd most like to work for.

Universum then put together a ranking of the most desirable employers, based on the number of undergraduate engineering and IT students who chose a company as one of their dream employers.

Google, which specializes in online advertising technologies, cloud computing, software, and, of course, search, landed at the top of the list for a second consecutive year.

Microsoft, Apple, GE, and BMW rounded out the top five.

Here are the top 50:

50. Novartis

Based in Basel, Switzerland, Novartis is a pharmaceutical company.



49. EY (Ernst & Young)

EY is one of the "Big Four" audit firms and is based in London.



48. BP

Headquartered in London, oil and gas giant BP was founded in 1908.



See the rest of the story at Business Insider
          Is Water-Free Cooling the Real Future of HPC?   

In this video from ISC 2017, Olivier de Laet from Calyos describes the company’s innovative cooling technology for high performance computing. "The HPC industry is facing the challenge of ever-increasing cooling requirements. While liquid cooling looks to be the best solution, what if you could achieve the same efficiencies without using water and pumps?"

The post Is Water-Free Cooling the Real Future of HPC? appeared first on insideHPC.


          Podcast: Why HPC is Vital at NASA   

"At NASA, what we're doing is we're really looking at ways of making aircraft more efficient. We're trying to make them quieter. We're trying to make them get to cruise altitude faster, which saves the taxpayer or the people who are using airplanes a lot of money, and we look at really complex problems. We look at things like rotorcraft. If you think about how that model looks, it's a very complex model. What do we do with supercomputing? Pretty much everything across the board."

The post Podcast: Why HPC is Vital at NASA appeared first on insideHPC.


          Thinking out loud about coding bootcamps, nanodegrees, & alternative credentials   

“The CanCode program will invest $50 million over two years, starting in 2017-18, to support initiatives providing educational opportunities for coding and digital skills development to Canadian youth from kindergarten to grade 12 (K-12).

The program aims to equip youth, including traditionally underrepresented groups, with the skills and study incentives they need to be prepared for the jobs of today and the future. Canada’s success in the digital economy depends on leveraging our diverse talent and providing opportunity for all to participate—investing in digital skills development will help to achieve this.”

The CanCode program is a new funding opportunity in Canada. Similar initiatives have occurred globally. The investment in coding to prepare youth and adults for the jobs of the future is an interesting phenomenon. In a past project, for example, we worked with over fifty high schools and developed a dual enrolment course focused on computational thinking and the presence of computing in daily life. The ability to read, write, and tinker with code is one aspect of this course. Our course was about introducing students to computer science – and though coding is an aspect of it, computer science is not coding.

But, coding is a central feature of an ever-expanding market of emerging credentials. Badges. Nanodegrees. Certs. And so on. Providers offer these in many different ways, both in terms of modality (e.g., online courses vs. face-to-face coding bootcamps) and pacing (e.g., self-paced vs. cohort-based). Some highlight experiential components (e.g., industry partnerships) while others highlight the flexibility of adjusting to learners' life circumstances.

In short, providers make a case that their credentials promise employment opportunities in a rapidly changing global economy where coding is in demand. This space seems to be an example of what certain aspects of unbundling may look like. The space configures alternative credentials, digital learning, for-profit education, skills training, and re-training in unique ways. I have a lot of questions about this space:

  • What are learners’ experiences with coding bootcamps and nanodegrees?
  • Who enrols? Who succeeds?
    • To what extent do these programs broaden participation in computing?
    • To what degree and in what ways do these programs democratize learning and participation? Do they?
  • What do learners expect from these offerings and how do they judge the quality of their experience and credential?
  • What are the dominant pedagogical practices (within and across providers) in teaching people how to code?
  • What is the role of technology in these programs?
  • What do outcomes look like, and how do those align with providers’ promises? For instance, what proportion of participants find gainful employment and what does that employment look like?
  • What are instructors’ roles in these offerings? Who are they? What is their pedagogical background? Is this their main employment? Are there connections to the gig economy and precarious employment here?
  • How diverse are these offerings in terms of gender and race with respect to students (who enrols?), instructors (who teaches?) and content (are minorities represented in curricular materials? in what ways?)

I’ve been looking for some answers to my questions, but I’m not finding much.

Additional reading

http://hackeducation.com/2015/11/23/bootcamps-the-new-for-profit-higher-ed

https://www.wired.com/2017/02/programming-is-the-new-blue-collar-job

https://www.nytimes.com/2017/04/04/education/edlife/where-non-techies-computer-programming-coding.html

https://www.geekwire.com/2015/dear-geekwire-a-coding-bootcamp-is-not-a-replacement-for-a-computer-science-degree/

https://news.slashdot.org/story/16/08/22/0521230/four-code-bootcamps-are-now-eligible-for-government-financial-aid

http://www.chronicle.com/article/Coding-Boot-Camps-Come-Into/239673?cid=cp21

http://hackeducation.com/2011/10/28/codecademy-and-the-future-of-not-learning-to-code

Industry report: https://www.coursereport.com/reports/2016-coding-bootcamp-job-placement-demographics-report

Read the rest


          Microsoft must launch a Surface phone — and get it right the first time   
Smartphones are the gateways to many tech companies' broader ecosystems. Sadly, history proves any attempt by Microsoft to fill that void in its ecosystem is a huge gamble. Still, the absence of the most personal of computing devices from Microsoft's lineup is detrimental to its present and future relevance. Microsoft must launch a Surface phone. ...
          The iPhone, Xerox PARC, and the IBM PC compatible   
Ars has started a series on the advent of the IBM PC, and today they published part one.

The machine that would become known as the real IBM PC begins, of all places, at Atari. Apparently feeling their oats in the wake of the Atari VCS' sudden Space Invaders-driven explosion in popularity and the release of its own first PCs, the Atari 400 and 800, they made a proposal to IBM's chairman Frank Cary in July of 1980: if IBM wished to have a PC of its own, Atari would deign to build it for them.

Fascinating history of the most influential computing platform in history, a statement that will surely ruffle a lot of feathers. The IBM PC compatible put a computer on every desk and in every home, and managed to convince hundreds of millions of people of the need of a computer - no small feat in a world where a computer was anything but a normal household item. In turn, this widespread adoption of the IBM PC compatible platform paved the way for the internet to become a success.

With yesterday's ten year anniversary of the original iPhone going on sale, a number of people understandably went for the hyperbole, such as proclaiming the iPhone the most important computer in history, or, and I wish I was making this up, claiming the development of the iPhone was more important to the world than the work at Xerox PARC - and since this was apparently a competition, John Gruber decided to exaggerate the claim even more. There's no denying the iPhone has had a huge impact on the world, and that the engineers at Apple deserve all the credit and praise they're getting for delivering an amazing product that created a whole new category overnight.

However, there is a distinct difference between what the iPhone achieved, and what the people at Xerox PARC did, or what IBM and Microsoft did. The men and women at PARC literally invented and implemented the graphical user interface, bitmap graphics, Ethernet, laser printing, object-oriented programming, the concept of MVC, the personal computer (networked together!), and so much more - and all this in an era when computers were gigantic mainframes and home computing didn't exist.

As for the IBM PC compatible and Wintel - while nowhere near the level of PARC, it did have a profound and huge impact on the world that in my view is far greater than that of the iPhone. People always scoff at IBM and Microsoft when it comes to PCs and DOS/Windows, but they did put a computer on every desk and in every home, at affordable prices, on a relatively open and compatible platform (especially compared to what came before). From the most overpaid CEO down to the most underpaid dock worker - everybody could eventually afford a PC, paving the way for the internet to become as popular and ubiquitous as it is.

The iPhone is a hugely important milestone and did indeed have a huge impact on the world - but developing and marketing an amazing and one-of-a-kind smartphone in a world where computing was ubiquitous, where everybody had a mobile phone, and where PDAs existed, is nowhere near the level of extraordinary vision and starting-with-literally-nothing that the people at PARC had, and certainly not as impactful as the rise of the IBM PC compatible and Wintel. It's fine to be celebratory on the iPhone's birthday - Apple and its engineers deserve it - but let's keep at least one foot planted in reality.
          Grab this MATLAB mastery bundle for just $27!   
As far as numerical computing environments and programming languages go, Matrix Laboratory (MATLAB) is a favorite amongst engineering and science students. Learning to use MATLAB is no easy task, however, and there are multiple angles that need to be covered in order to become proficient. Go from beginner to pro in MATLAB with this mastery bundle! Instead of tracking down the courses required to become a MATLAB pro, Windows Central Digital Offers right now has a deal on a MATLAB mastery bundle. Instead of paying the regular price of $200, you'll instead pay just $27. That's 86% off the regular price. ...
          Telecomputing is on the rise   

More Americans are telecommuting and working longer hours from home and on the road.

The BLS found that 24% of all U.S. workers did some or all of their work from home last year, compared with 19% who worked from home in 2003.

Management, business and financial occupations led all categories of remote workers, at 38%, with those working in the professions right behind at 35%.

Not only are there more Americans working from home, but the average time they do so has increased, up 40 minutes to 3.2 hours a week in 2015.



          Coming Microsoft Reorg to Support Cloud-First Strategy: Report   
Microsoft is expected to announce a business reorganization plan on July 5


          How to unbox your dynamic operations   

Today’s smartphones frequently offer larger screens with smaller bezels. Their manufacturers claim an uninterrupted view of the world. Conceptually, this “unboxing” is exactly what’s needed to manage dynamic infrastructure and services. Looking back for a moment, service delivery used to happen at a slower pace. IT teams would spend a lot of time designing services, […]

The post How to unbox your dynamic operations appeared first on Cloud computing news.


          3 tricks to better manage your public cloud services   

Some people call them “cloud hacks,” which is perhaps more accurate than “cloud tricks,” but the enterprises I work with don’t like the term “hack.”

Whatever you prefer to call them, here are three shortcuts you can create to achieve specific end states.

Cloud trick No. 1: Customize your console

Both Amazon Web Services and Microsoft have consoles that provide a master control view of resources on their clouds. With them, you can see what’s available and what you have already provisioned.


          Microsoft Acquires Cloudyn for Cloud Analytics and Usage Optimization on Azure   
According to Cloudyn's website, it currently supports cloud services not only on Microsoft Azure but also on Amazon, Google, and OpenStack.


          Announcing IBM Cloud private for Microservice Builder   

Last week IBM announced the launch of Microservice Builder, a powerful new technology stack squarely aimed at simplifying development of microservices. Microservice Builder enables developers to easily learn about the intricacies of microservice app development and quickly compose and build innovative services. Then developers can rapidly deploy microservices across environments by using a pre-integrated DevOps […]

The post Announcing IBM Cloud private for Microservice Builder appeared first on Cloud computing news.


          Microservice Builder: Software delivery goes from days to minutes   

As the world becomes more connected than ever, your business must be ready to face rising challenges. In a study, Gartner predicts that by 2020 there will be more than 20 billion connected “things,” and the total will grow at an astonishing rate of 5.5 million new devices coming online each day. So how does […]

The post Microservice Builder: Software delivery goes from days to minutes appeared first on Cloud computing news.


          C++ Developer - Distributed Computing - Morgan Stanley - Montréal, QC   
Comfortable programming in a Linux environment, familiar with ksh and bash. Morgan Stanley is a global financial services firm and a market leader in investment...
From Morgan Stanley - Wed, 28 Jun 2017 00:14:01 GMT - View all Montréal, QC jobs
          Sven Vermeulen: Structuring infrastructural deployments   

Many organizations struggle with the ever-increasing number of IP address allocations and the accompanying need for segmentation. In the past, governing the segments within the organization meant keeping close control over service deployments, firewall rules, etc.

Lately, the idea of micro-segmentation, supported through software-defined networking solutions, seems to do away with the need for segmentation governance. However, I think that is a very short-sighted sales proposition. Even with micro-segmentation, or even pure point-to-point / peer2peer communication flow control, you'll still need a high-level overview of the services within your scope.

In this blog post, I'll give some insight into how we are approaching this in the company I work for. In short, it starts with requirements gathering, creating labels to assign to deployments, creating groups based on one or two labels in a layered approach, and finally fixing the resulting schema and starting to map guidance documents (policies) onto the presented architecture.

As always, start with the requirements

From an infrastructure architect's point of view, creating structure is one way of dealing with the onslaught of complexity that is prevalent within the wider organizational architecture. By creating a framework in which infrastructural services can be positioned, architects and other stakeholders (such as information security officers, process managers, service delivery owners, project and team leaders ...) can support the wider organization in its endeavor of becoming or remaining competitive.

Structure can be provided through various viewpoints. As such, while creating such framework, the initial intention is not to start drawing borders or creating a complex graph. Instead, look at attributes that one would assign to an infrastructural service, and treat those as labels. Create a nice portfolio of attributes which will help guide the development of such framework.

The following list gives some ideas in labels or attributes that one can use. But be creative, and use experienced people in devising the "true" list of attributes that fits the needs of your organization. Be sure to describe them properly and unambiguously - the list here is just an example, as are the descriptions.

  • tenant identifies the organizational aggregation of business units which are sufficiently similar in areas such as policies (same policies in use), governance (decision bodies or approval structure), charging, etc. It could be a hierarchical aspect (such as organization) as well.
  • location provides insight in the physical (if applicable) location of the service. This could be an actual building name, but can also be structured depending on the size of the environment. If it is structured, make sure to devise a structure up front. Consider things such as regions, countries, cities, data centers, etc. A special case location value could be the jurisdiction, if that is something that concerns the organization.
  • service type tells you what kind of service an asset is. It can be a workstation, a server/host, server/guest, network device, virtual or physical appliance, sensor, tablet, etc.
  • trust level provides information on how controlled and trusted the service is. Consider the differences between unmanaged (no patching, no users doing any maintenance), open (one or more admins, but no active controlled maintenance), controlled (basic maintenance and monitoring, but still with administrative access by others), managed (actively maintained, no privileged access without strict control), hardened (actively maintained, additional security measures taken) and kiosk (actively maintained, additional security measures taken and limited, well-known interfacing).
  • compliance set identifies specific compliance-related attributes, such as the PCI-DSS compliancy level that a system has to adhere to.
  • consumer group informs about the main consumer group active on the service. This could be an identification of the relationship that consumer group has with the organization (anonymous, customer, provider, partner, employee, ...) or the actual name of the consumer group.
  • architectural purpose gives insight into the purpose of the service in infrastructural terms. Is it a client system, a gateway, a mid-tier system, a processing system, a data management system, a batch server, a reporting system, etc.
  • domain could be interpreted as the company purpose of the system. Is it for commercial purposes (such as customer-facing software), corporate functions (company management), development, infrastructure/operations ...
  • production status provides information about the production state of a service. Is it a production service, or a pre-production (final testing before going to production), staging (aggregation of multiple changes) or development environment?
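
To make the label portfolio concrete, here is a minimal sketch (in Python; every attribute value shown is a hypothetical example, not a prescribed taxonomy) of how such labels could be recorded per service:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ServiceLabels:
    """One label record per infrastructural service. The controlled
    vocabulary per attribute is up to the organization; the comments
    below merely echo the example values from the list above."""
    tenant: str                 # e.g. "acme-group"
    location: str               # e.g. "eu/be/brussels/dc1"
    service_type: str           # e.g. "server/guest", "workstation"
    trust_level: str            # unmanaged|open|controlled|managed|hardened|kiosk
    compliance_set: frozenset   # e.g. frozenset({"pci-dss"})
    consumer_group: str         # e.g. "anonymous", "customer", "employee"
    architectural_purpose: str  # e.g. "client", "mid-tier", "processing", "data"
    domain: str                 # e.g. "commercial", "corporate", "development"
    production_status: str      # production|pre-production|staging|development


# Example: a customer-facing mid-tier service in the commercial domain.
api_service = ServiceLabels(
    tenant="acme-group",
    location="eu/be/brussels/dc1",
    service_type="server/guest",
    trust_level="managed",
    compliance_set=frozenset({"pci-dss"}),
    consumer_group="customer",
    architectural_purpose="mid-tier",
    domain="commercial",
    production_status="production",
)
```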

Given the final set of labels, the next step is to aggregate results to create a high-level view of the environment.

Creating a layered structure

Chances are high that you'll end up with several attributes, and many of these will have multiple possible values. What we don't want is to end up with an N-dimensional infrastructure architecture overview. Sure, it sounds sexy to do so, but you want to show the infrastructure architecture to several stakeholders in your organization. And projecting an N-dimensional structure on a 2-dimensional slide is not only challenging - you'll possibly create a projection which leaves out important details or makes it hard to interpret.

Instead, we looked at a layered approach, with each layer handling one or two requirements. The top layer represents the requirement which the organization seems to see as the most defining attribute. It is the attribute where, if its value changes, most of the architecture changes (and thus the impact of a service relocation is the largest).

Suppose for instance that the domain attribute is seen as the most defining one: the organization has strict rules about placing corporate services and commercial services in separate environments, or the security officers want to see the commercial services, which are well exposed to many end users, be in a separate environment from corporate services. Or perhaps the company offers commercial services for multiple tenants, and as such wants several separate "commercial services" environments while having a single corporate service domain.

In this case, part of the infrastructure architecture overview on the top level could look like so (hypothetical example):

Top level view

This also shows that, next to the corporate and commercial interests of the organization, a strong development support focus is prevalent as well. This of course depends on the type of organization or company and how significant in-house development is, but in this example it is seen as a major decisive factor for service positioning.

These top-level blocks (depicted as locations, for those of you using Archimate) are what we call "zones". These are not networks, but clearly bounded areas in which multiple services are positioned, and for which particular handling rules exist. These rules are generally written down in policies and standards - more about that later.

Inside each of these zones, a substructure is made available as well, based on another attribute. For instance, let's assume that this is the architectural purpose. This could be because the company has a requirement on segregating workstations and other client-oriented zones from the application-hosting ones. Security-wise, the company might have a principle where mid-tier services (API and presentation layer exposures) are separate from processing services, and where data is located in a separate zone to ensure specific data access or more optimal infrastructure services.

This zoning result could then be depicted as follows:

Detailed top-level view

From this viewpoint, we can also deduce that this company provides separate workstation services: corporate workstation services (most likely managed workstations with focus on application disclosure, end user computing, etc.) and development workstations (most likely controlled workstations but with more open privileged access for the developer).

By making this separation explicit, the organization makes it clear that the development workstations will have a different position, and even a different access profile toward other services within the company.

We're not done yet. For instance, on the mid-tier level, we could look at the consumer group of the services:

Mid-tier explained

This separation can be established for security reasons (isolating services that are exposed to anonymous users from customer services or even partner services), but one can also envision this from a management point of view (availability requirements can differ, capacity management is more uncertain for anonymous-facing services than for authenticated ones, etc.)

Going one layer down, we use a production status attribute as the defining requirement:

Anonymous user detail

At this point, our company decided that the defined layers are sufficiently established and make for a good overview. We used different defining properties than the ones displayed above (again, find a good balance that fits the company or organization that you're focusing on), but found that the ones we used were mostly involved in existing policies and principles, while the other ones are not that decisive for infrastructure architectural purposes.

For instance, the tenant might not be selected as a deciding attribute, because there will be larger tenants and smaller tenants (which could make the resulting zone set very convoluted) or because some commercial services are offered toward multiple tenants and the organization's strategy would be to move toward multi-tenant services rather than multiple deployments.
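
To show how such a layered choice could be mechanized, here is a minimal sketch assuming the example layer order used above (domain, then architectural purpose, then consumer group, then production status); the function name and label values are hypothetical:

```python
# The order of the layers encodes which attribute the organization
# considers the most defining (here: domain first).
ZONE_LAYERS = ("domain", "architectural_purpose", "consumer_group", "production_status")


def zone_path(labels: dict) -> str:
    """Derive the zone a service belongs to from its defining labels."""
    return " > ".join(labels[layer] for layer in ZONE_LAYERS)


print(zone_path({
    "domain": "Commercial Services",
    "architectural_purpose": "Mid-tier",
    "consumer_group": "Anonymous Users",
    "production_status": "Production",
}))
# Commercial Services > Mid-tier > Anonymous Users > Production
```

The appeal of keeping this a pure function of the labels is that a zone is then fully determined by the defining attributes: relocating a service is, by definition, a label change.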

Now, in the zoning structure there is still another layer, which from an infrastructure architecture point of view is less about rules and guidelines and more about manageability from an organizational point of view. For instance, in the above example, a SAP deployment for HR purposes (which is obviously a corporate service) might have its Enterprise Portal service in the Corporate Services > Mid-tier > Group Employees > Production zone. However, another service such as an on-premise SharePoint deployment for group collaboration might be in the Corporate Services > Mid-tier > Group Employees > Production zone as well. Yet both services are supported by different teams.

This "final" layer thus enables grouping of services based on the supporting team (again, this is an example), which is organizationally aligned with the business units of the company, and potentially further isolation of services based on other attributes which are not defining for all services. For instance, the company might have a policy that services with a certain business impact assessment score must be in isolated segments with no other deployments within the same segment.

What about management services

Now, the above picture is missing some of the (in my opinion) most important services: infrastructure support and management services. These services do not shine in functional offerings (which many non-IT people generally look at) but are needed for non-functional requirements: manageability, cost control, security (if security can be defined as a non-functional - let's not discuss that right now).

Let's first consider interfaces - gateways and other services which are positioned between zones or the "outside world". In the past, we would speak of a demilitarized zone (DMZ). In more recent publications, one can find this as an interface zone, or a set of Zone Interface Points (ZIPs) for accessing and interacting with the services within a zone.

In many cases, several of these interface points and gateways are used in the organization to support a number of non-functional requirements. They can be used for intelligent load balancing, providing virtual patching capabilities, validating content against malware before passing it on to the actual services, etc.

Depending on the top level zone, different gateways might be needed (i.e. different requirements). Interfaces for commercial services will have a strong focus on security and manageability. Those for the corporate services might be more integration-oriented, and have different data leakage requirements than those for commercial services.

Also, inside such an interface zone, one can imagine a substructure to take place as well: egress interfaces (for communication that is exiting the zone), ingress interfaces (for communication that is entering the zone) and internal interfaces (used for routing between the subzones within the zone).

Yet, there will also be requirements which are company-wide. Hence, one could envision a structure where there is a company-wide interface zone (with mandatory requirements regardless of the zone that they support) as well as a zone-specific interface zone (with the mandatory requirements specific to that zone).

Before I show a picture of this, let's consider management services. Unlike interfaces, these services are more oriented toward the operational management of the infrastructure. Software deployment, configuration management, identity & access management and similar services are the ones to put under management services.

And like with interfaces, one can envision the need for both company-wide management services, as well as zone-specific management services.

This information brings us to a final picture, one that assists the organization in providing a more manageable view on its deployment landscape. It does not show the 3rd layer (i.e. production versus non-production deployments) and only displays the second layer through specialization information, which I've quickly made a few examples for (you don't want to make such decisions in a few hours, like I did for this post).

General overview

If the organization took an alternative approach for structuring (different requirements and grouping) the resulting diagram could look quite different:

Alternative general overview

Flows, flows and more flows

With the high-level picture ready, it is not a bad idea to look at how flows are handled in such an architecture. As the interface layer is available on both the company-wide level and the next one down, flows will cross multiple zones.

Consider the case of a corporate workstation connecting to a reporting server (like a Cognos or PowerBI or whatever fancy tool is used), and this reporting server is pulling data from a database system. Now, this database system is positioned in the Commercial zone, while the reporting server is in the Corporate zone. The flows could then look like so:

Flow example

Note for the Archimate people: I'm sorry that I'm abusing the flow relation here. I didn't want to create abstract services in the locations, use the "serves" or "used by" relation, and then have to explain to readers that the arrows are inverse from what they imagine.

In this picture, the corporate workstation does not connect to the reporting server directly. It goes through the internal interface layer for the corporate zone. This internal interface layer can offer services such as reverse proxies or intelligent load balancers. The idea here is that, if the organization wants, it can introduce additional controls or supporting services in this internal interface layer without impacting the system deployments themselves much.

But the true flow challenge is in the next one, where a processing system connects to a data layer. Here, the processing server will first connect to the egress interface for corporate, then through the company-wide internal interface, toward the ingress interface of the commercial zone, and then to the data layer.

Now, why three different interfaces, and what would be inside them?

On the corporate level, the egress interface could be focusing on privacy controls or data leakage controls. On the company-wide internal interface more functional routing capabilities could be provided, while on the commercial level the ingress could be a database activity monitoring (DAM) system such as a database firewall to provide enhanced auditing and access controls.

Does that mean that all flows need to have at least three gateways? No, this is a functional picture. If the organization agrees, one or more of these interface levels can have a simple pass-through setup. It is quite possible that database connections connect directly to a DAM service and that such flows are allowed to pass immediately through the other interfaces.

The point thus is not to make flows more difficult to provide, but to offer several areas where the organization can introduce controls.
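
To make this concrete, here is a minimal Python sketch (my own illustration, not from the original article or any real tooling; the zone pairs and interface names are invented) that resolves the ordered chain of interfaces a flow must traverse between two top-level zones:

# Hypothetical illustration: resolve the gateway chain for a cross-zone flow.
# The zone pairs and interface names below are invented for this sketch.
INTERFACES = {
    ("corporate", "corporate"): ["corporate internal interface"],
    ("corporate", "commercial"): [
        "corporate egress interface",       # e.g. data leakage controls
        "company-wide internal interface",  # e.g. functional routing
        "commercial ingress interface",     # e.g. database activity monitoring (DAM)
    ],
}

def gateway_chain(src_zone, dst_zone):
    # An empty list would model a pass-through setup where the
    # organization decided that no control point is needed.
    return INTERFACES.get((src_zone, dst_zone), [])

# The two flows from the example above:
print(gateway_chain("corporate", "corporate"))   # workstation -> reporting server
print(gateway_chain("corporate", "commercial"))  # reporting server -> database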

Making policies and standards more visible

One of the effects of having a better structure of the company-wide deployments (i.e. a good zoning solution) is that one can start making policies more clear, and potentially even simple to implement with supporting tools (such as software defined network solutions).

For instance, a company might want to protect its production data and establish that it cannot be used for non-production use, but that there are no restrictions for the other data environments. Another rule could be that web-based access toward the mid-tier is only allowed through an interface.

These are simple statements which, if a company has a good IP plan, are easy to implement - one doesn't need zoning, although it helps. But it goes further than access controls.

For instance, the company might require corporate workstations to be under heavy data leakage prevention and protection measures, while developer workstations are more open (but don't have any production data access). This not only reveals an access control, but also implies particular minimal requirements (for the Corporate > Workstation zone) and services (for the Corporate interfaces).

This zoning structure does not necessarily make any statements about the location (assuming it isn't picked as one of the requirements in the beginning). One can easily extend this to include cloud-based services or services offered by third parties.

Finally, it also supports making policies and standards more realistic. I often see policies that make bold statements such as "all software deployments must be done through the company software distribution tool", but the policies don't consider development environments (production status) or unmanaged, open or controlled deployments (trust level). When challenged, the policy owner might shrug off the comment with "it's obvious that this policy does not apply to our sandbox environment" or the like.

With a proper zoning structure, policies can establish the rules for the right set of zones, and actually pinpoint which zones are affected by a statement. This is also important if a company has many, many policies. With a good zoning structure, the policies can be assigned meta-data so that affected roles (such as project leaders, architects, solution engineers, etc.) can easily get an overview of the policies that influence a given zone.

For instance, if I want to position a new management service, I am less concerned about workstation-specific policies. And if the management service is specific for the development environment (such as a new version control system) many corporate or commercially oriented policies don't apply either.
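
As a toy illustration of this meta-data idea (entirely hypothetical; the attribute names and policies below are invented for this post), one could tag each policy with the zone attributes it applies to and query the applicable set per zone:

# Hypothetical illustration: policies tagged with zone attributes.
POLICIES = [
    {"name": "Software distribution via central tool",
     "applies_to": {"trust_level": "managed", "status": "production"}},
    {"name": "Data leakage prevention on workstations",
     "applies_to": {"zone_type": "workstation", "group": "corporate"}},
]

def policies_for(zone):
    # A policy applies when all of its attribute constraints match the zone.
    return [p["name"] for p in POLICIES
            if all(zone.get(k) == v for k, v in p["applies_to"].items())]

# A development workstation escapes both policies above, which makes
# the "sandbox" exception explicit instead of implied:
dev_ws = {"zone_type": "workstation", "group": "development",
          "trust_level": "managed", "status": "non-production"}
print(policies_for(dev_ws))  # -> []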

Conclusion

The above approach for structuring an organization is documented here in a high-level manner. It makes many assumptions or hypothetical decisions which are to be tailored toward the company itself. In my company, a different zoning structure was selected, taking into account that it is a financial service provider with entities in multiple countries, handling several thousand systems, and with an ongoing effort to include cloud providers within its infrastructure architecture.

Yet the approach itself was followed in the same fashion. We looked at requirements, created a layered structure, and finished the zoning schema. Once the schema was established, the requirements for all the zones were written out further, and a mapping of existing deployments (as-is) toward the new zoning picture is ongoing. For those thinking that it is just slideware right now - it isn't. Some of the structures that come out of the zoning exercise are already prevalent in the organization, and new environments (due to mergers and acquisitions) are directed toward this new situation.

Still, we know we have a large exercise ahead before it is finished, but I believe that it will benefit us greatly, not only from a security point of view, but also clarity and manageability of the environment.


          A look at the 8 most revolutionary features the iPhone introduced to the world   

Ten years ago today, Apple released the original iPhone and forever changed the way the world uses and interacts with technology. Without question, the iPhone is one of the most revolutionary products ever created, and the impact it's had on our day-to-day lives cannot be overstated. Today, millions upon millions of people all over the world use the iPhone -- and smartphones in general -- in ways that would have seemed impossible just ten short years ago. With the iPhone's 10th anniversary now upon us, we thought it would be a good time to take a look back at how Apple's iconic device has impacted the smartphone industry at large. From the mainstream adoption of multitouch displays to the introduction of the App Store, even loyal Android users would have to concede that the modern smartphone era we enjoy today wouldn't have been possible had it not been for the iPhone. That said, listed below are 8 of the biggest, most game-changing innovations Apple's iPhone introduced to the smartphone market over the last 10 years.

Multitouch

While Steve Jobs may have boasted that Apple invented multitouch, it's no secret that multitouch technology existed long before the original iPhone hit store shelves. Nonetheless, there's no escaping the fact that the iPhone was the first device to bring the technology into the mainstream. Say what you will about the iPhone vs Android debate, the reality is that the multitouch UI introduced by the original iPhone immediately became the blueprint upon which all other smartphones were based.

Touch ID

Much like multitouch, Apple didn't invent fingerprint recognition technology on mobile devices, but its implementation of Touch ID on the iPhone 5s brought it into the mainstream. What was once a feature that only seemed possible in the distant future instantly became accessible thanks to Touch ID. Additionally, Apple's implementation of Touch ID was incredibly intuitive and ultimately spurred other phone manufacturers to incorporate similar functionality into their own devices.

The App Store

Anyone old enough to remember what mobile apps were like before Apple introduced the App Store can certainly appreciate how revolutionary the App Store was when it went live in July of 2008. In one fell swoop, the App Store made it easy and affordable for users to download apps. What's more, the iPhone SDK gave developers the necessary tools to create what were previously unthinkable and downright magical apps. To date, Apple has doled out more than $70 billion to developers over the last 9 years. Doing some quick math, that means iOS users have spent more than $100 billion on apps alone.

Retina Display

The Retina Display that Apple introduced on the iPhone 4 was an instant game-changer. Sporting an impressive 960x640 resolution, Apple's Retina Display effectively doubled the resolution of existing iOS devices, making them appear practically ancient in comparison.

64-bit Processor

When Apple rolled out a 64-bit A7 processor with the iPhone 5s, it caught the entire tech world off-guard. 64-bit processors, at the time, were certainly on the horizon, but Apple managed to beat everyone to the punch. Of course, it was only a matter of time before Android handset manufacturers followed suit.

Apple Pay

Building off of the success of Touch ID, Apple with the iPhone 6 introduced the world to Apple Pay, a new technology that made it incredibly easy for users to authorize transactions straight from their device. Again, what made Apple Pay particularly useful is that it was incredibly easy to set up and use. What's more, Apple Pay is beyond secure and therefore provides users with peace of mind that they can engage in secure transactions and not have their sensitive financial information compromised.

No more bloatware

When developing the original iPhone, Apple demanded complete control over the user experience, AT&T's objections notwithstanding. This, of course, wasn't something carriers like AT&T were accustomed to dealing with. On the contrary, carriers at the time were prone to littering phones with horrible third-party apps that most people had little to no interest in, from NASCAR apps to unwanted streaming TV apps. While some Android devices still ship with their fair share of bloatware, the original iPhone demonstrated that it's never a good idea to let carriers dictate the user experience.

Siri

The introduction of Siri with the iPhone 4s gave us a glimpse into what the future of computing was going to look like. Even though Siri's functionality was a bit stunted upon release, and even though Siri has arguably been lapped by competing services from the likes of Google, the impact Siri had on the smartphone market cannot be overlooked.



          Getting started with Amazon Web Services (AWS)–Create an EC2 Instance   
As you may already be aware, Amazon Web Services (AWS) offers on-demand cloud computing solutions with a pay-per-use approach. Since Amazon offers a 12-month free trial to check out their services, that is exactly what we are going to do, and we also recommend you sign up for the AWS Free Tier. If you already did, then let's get started and learn together how to create an EC2 instance in AWS.
sign-in aws

First, take a quick look at Amazon Free Tier benefits:

Amazon EC2 (Compute)
  • 750 hours per month of Linux, RHEL, or SLES t2.micro instance usage
  • 750 hours per month of Windows t2.micro instance usage
Amazon S3
(Storage)
  • 5GB of standard storage (20,000 Get Requests, 2,000 Put Requests)
Amazon RDS
(Database)
  • 750 Hours per month of database usage
  • 20 GB of DB Storage: any combination of General Purpose (SSD) or Magnetic
  • 20 GB for Backups (with RDS Magnetic storage; I/Os on General Purpose [SSD] are not separately billed)
  • 10,000,000 I/Os
Amazon QuickSight
(Analytics)
  • 1 GB of SPICE capacity 1 user perpetual free tier
  • 10GB of SPICE capacity the first 2 months for free for a total of 4 users
Offers that won’t expire after 12 months
  • AWS Lambda, DynamoDB, AWS Database Migration Service etc..

A. Log in to your AWS-console;
aws console

B. Go to Services on top left corner and select EC2, to enter in EC2 dashboard;


ec2_launch instance

C. Now click on “Launch Instance”


To successfully launch an EC2 instance you have to follow 7 steps:


1. Choose Amazon Machine Image (AMI)

aws choose ami
  • You can choose any AMI from the Amazon Marketplace as per your requirement, but in this tutorial I will focus only on free tier eligible products and services. Therefore, I am selecting the Amazon Linux AMI (64-bit).
2. Choose an Instance type
 aws instance type
  • Select the General purpose t2.micro (free tier eligible) instance and click on "Next: Configure Instance Details".
3. Configure Instance Details
 configure instance aws
  • You may leave the settings at their defaults, but I want you to look into a few of them:
    • Subnet: you may choose any one of the default subnets available or may create a new subnet;
    • Auto-assign Public IP: keep it at the default (Use subnet setting (Enable)). If needed, you may later attach an Elastic IP to connect it to your domain;
    • Enable termination protection: check this to protect against accidental termination;
  • Click on the "Add Storage" button.
4. Add Storage
 add storage aws
  • You may go ahead with the default setting by clicking on the "Add Tags" button, or you may increase the storage size up to 30GB for the General Purpose SSD or Magnetic volume types. (Increase storage as per your requirement; just make sure not to decrease it below 8GB.)
5. Add Tags 
 add tags ec2 aws
  • Skip this step and continue with clicking on “Configure Security Group”
6. Configure Security Group
 security group ec2
configure security group ec2
  • In Assign a security group: Select “create a new security group” (if you don’t have any) or “choose from existing one”;
  • Provide a security group name or leave it as the default (launch-wizard-1) – I recommend adding a name which may later help you in identifying the security group;
  • Add description – enter description which makes this security group explanatory for you;
  • In the SSH rule, select source "My IP" (recommended option) so it will allow only your IP to connect over SSH, or assign a custom IP address, or just select "Anywhere" (the least secure option);
  • Add HTTP to connect to server port to make your instance accessible;
  • For the last step click on “Review Instance Launch”.
7. Review Instance Launch
 review ec2 instance
  • You may go through the summary of your instance. In case you want to make any changes, just do it.
  • Once you have gone through all the details available on Review Instance Launch, click on the "Launch" button; a new window pops up: "Select an existing key pair or create a new key pair".
  • Now create a new key pair (never proceed without a key pair, otherwise you won't be able to connect to your instance) and enter a key pair name;
ec2 key pair
  • Download the key pair and keep it in a secure location as you will NOT be able to download the file again after it's created.
  • Finally, click Launch Instances.
ec2 instance launched 

You have successfully launched an EC2 instance on AWS; now go to your EC2 dashboard and view the running instance. This guide may also be used for creating a different EC2 instance with a different machine image.
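
If you prefer to script the same launch, here is a minimal sketch using the boto3 Python SDK. The AMI ID, key pair name and security group ID below are placeholders - substitute the values from the steps above.

import boto3

# Minimal sketch of the console steps above, using the boto3 SDK.
# All IDs below are placeholders -- replace them with your own values.
ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",            # your chosen AMI (step 1)
    InstanceType="t2.micro",                    # free tier eligible (step 2)
    KeyName="my-key-pair",                      # an existing key pair (step 7)
    SecurityGroupIds=["sg-0123456789abcdef0"],  # your security group (step 6)
    MinCount=1,
    MaxCount=1,
)
print("Launched:", instances[0].id)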
In the next post, we will move further and see how we can connect to our instance and install WordPress.


          Acer updating AcerCloud with support for Android, iOS   
acercloud

Acer announced cross-platform support for AcerCloud, the company's file sharing and media management solution, free to Acer customers. Consumers can now share, retrieve and enjoy their multimedia and data files using a variety of computing devices, regardless of which operating system they are running – Windows, Android or iOS.

Additionally, a new "Remote Files" feature lets you access any of your Windows PCs from Android and iOS devices. Other enhanced features of AcerCloud include AcerCloud Docs, which lets you push your Microsoft Office documents across different devices, while the PicStream feature does the same thing with your photos. The new version of AcerCloud will be available as an online update starting in January 2013, and will be bundled on all Acer consumer PCs starting in Q2 2013.


          Lenovo Unleashes New Windows 8 Touch Devices   
lenovo IdeapadU310

Lenovo announced the new additions to the ultra portable Ideapad U Series Ultrabook and the multimedia intensive IdeaPad Z Series laptops as well as new models of IdeaCentre all-in-one desktops, from the sleek and stylish A Series to the affordable and space-saving C Series. Lenovo also introduced an extreme performance gaming desktop PC, the Lenovo Erazer X700 and for small businesses, the ThinkPad Edge E431 and E531 laptops.

Premier Thin and Light IdeaPad U and Mainstream Z Series

Lenovo’s signature IdeaPad U310 and U410 mainstream Ultrabook devices take thin and light to the next level with new 10-finger touch support. These stylishly thin and light consumer Ultrabooks are incredibly mobile at just 18mm thin and are ultra-responsive, waking up from sleep in just one second with Instant Resume. They come with up to a 3rd gen Intel Core i7 processor, the latest NVIDIA GeForce graphics with DirectX 11 and an extended battery life for all day computing. 

The IdeaPad Z400 and Z500 laptops are the latest generation of the Z Series and optimized for the Windows 8 touch-based user interface. They support 10-point touch, feature specialized stereo speakers for extra bass with Dolby Home Theatre v4 for an immersive sound experience, come with up to 3rd gen standard voltage Intel Core i7 processors and include the latest NVIDIA GeForce graphics technology. These laptops give consumers large 14-15.6-inch touch screen real estate to experience the full power of Windows 8. The backlit AccuType keyboard even lets users comfortably see what they're typing when they're in the dark.

Erazer X700 - Extreme Gaming with One Click

The Lenovo Erazer X700 with its eye-catching diamond-cut and cold blue lighting design caters to enthusiast-level gamers and those who run intensive multimedia applications. The desktop comes with Lenovo's exclusive OneKey Overclocking feature that allows users to unleash extreme processor performance, i.e., "overclock," at the click of a button while simultaneously preventing the PC from overheating thanks to its liquid cooling system. Many similar gaming PCs require users to adjust the setting in the BIOS. Because the Erazer X700 leverages AMD Eyefinity technology, users can connect up to six monitors to enjoy a panoramic screen display and add up to 4 TB of storage without tools while the PC is still running. The desktop also comes with 32GB of memory, dual graphics support (NVIDIA GeForce or ATI CrossFireX), up to dual AMD Radeon HD graphics and the latest Intel Core family of processors.

IdeaCentre A730 Slimmest 27-inch AIO Plus Affordable C Series AIO Desktops

The IdeaCentre A730 all-in-one desktop (AIO) combines an optional 27-inch Quad HD (2560x1440) or 27-inch Full HD (1920x1080) frameless display with 10-point multi-touch into a less-than-an-inch-thin frame with a widely adjustable screen angle from -5° to 90° so people can use it comfortably in any position. The A730 supports up to Windows 8 Pro, includes choices from the 3rd gen Intel Core family of processors and features large storage options up to 1TB.

The Lenovo C540 AIO with an optional touch screen is one of the most affordable and space saving 23- inch touch AIOs for family entertainment. It bundles powerful technologies including a 3rd gen Intel Core i3 processor and NVIDIA GeForce discrete graphics for an affordable computing solution.

ThinkPad Edge Laptops Pioneer OneLink Technology

The ThinkPad Edge E431 and E531 are the first ThinkPad laptops to include Lenovo’s all new OneLink technology. Designed to offer simplicity through a single cable connection, the new unique interface eliminates cable clutter without compromising performance. The first device to support the new technology is the new ThinkPad OneLink Dock. Offering superior lag-free graphics and audio performance through native video with dedicated HDMI and audio ports, it can also play host to a number of accessories through four USB ports. This industry unique technology will also charge laptop and mobile devices. Lenovo plans to offer additional OneLink devices in 2013. In addition to touch functionality, the slim ThinkPad Edge E431 and E531 laptops feature improved graphics options for more vivid content on their displays up to full HD and a five-button ClickPad lets users control Windows 8 features from the keyboard.

The Wired/Wireless Mobile Touch Screen Companion

The slim and sleek ThinkVision LT1423p Mobile Monitor Touch is Lenovo's next generation mobile monitor that offers extra screen real estate and touch functionality for users who demand more productivity on the go. Improving on its award-winning predecessor's innovative design, the LT1423p will be available in wired or wireless editions and boasts a thin design with a 13.3-inch 1600 x 900 AH-IPS panel protected by Gorilla Glass for an incredibly wide viewing angle and a great Windows 8 touch experience. Consumers can even experience touch control gestures on non-touch PCs, and as an added benefit, business customers can take advantage of 10-point touch capability or adopt paperless commercial transactions by using the electro-magnetic stylus.

To keep users up and running, Lenovo offers a full suite of services including In-Home Warranty upgrades for service at the owner’s home or business, and Accidental Damage Protection on select products to help insure against damage from accidents like drops, spills, electrical surges, or screen malfunctions. Additionally, Lenovo Premium Support’s expert technicians are available when needed with convenient over the phone or remote session support from the comfort of home.

Pricing and Availability

  • The IdeaPad Z500 Touch will be available starting in April. Models start at approximately $699.
  • The IdeaPad Z400 Touch will be available starting in March. Models start at approximately $699.
  • The IdeaPad U310 Touch will be available starting in March. Models start at approximately $779.
  • The IdeaPad U410 Touch will be available starting in April. Models start at approximately $850.
  • The Lenovo Erazer X700 will be available starting in June. Models start at approximately $1,499.
  • The IdeaCentre A730 will be available starting this in June. Models start at approximately $1,499.
  • The IdeaCentre C540 will be available starting in February. Non-touch models start at approximately $549. Touch models will be available in June.
  • The ThinkPad Edge E431 and E531 will be available starting in May. Models start at approximately $539.

The ThinkPad OneLink Dock will be available starting in May. Price will be approximately $99. The ThinkVision LT1423p Mobile Monitor Touch will be available during Q2 2013 and the price will be approximately $449.


          What is Your Best Approach to Decision Making?   

Thanks to computer technology, software and apps, more and more companies rely on big data to drive their business models. Leaders develop strategies using information compiled and analyzed by computers. Despite all of these advances, there still needs to be a human element behind decision-making in corporations. Experts touted by Harvard Business Review detail the best approach when figuring out how to move forward with a particular strategy.

Elements That Come Together

Three main elements come together when decision-making happens with all of these technology tools in the workplace. First, computers must compile the raw information. Sales figures, revenue, time to market, overhead, supply costs and labor expenses are all raw figures that computers work well with since those machines are great at doing math. Second, a program must analyze the numbers and organize them into usable information. This is when analytics software pays for itself.

The third aspect is that a human must employ decision-making to know what to do with the data. The program created a beautiful chart showing how price affects revenue over time, but what do company leaders do with that information? Business leaders must understand how the software compiles the information and what the numbers mean. People who can't figure out how to handle the data suffer from "big data, little brain," according to HBR.

How to Mitigate the Problem

Experts believe the best way to alleviate the problems that come from relying too heavily on data is for business leaders to go with their gut and experience every once in a while. Finding this balance doesn't mean eschewing information altogether; it's more about knowing which data to pay attention to when decision-making comes into play.

Ironically, there is a way to figure this out using a different kind of computer program. An agent-based simulation, or ABS, examines the behavior of humans, crunches the numbers and then makes a predictive model of what may happen. The U.S. Navy developed ABS in the 1970s when it examined the everyday behavior of 30,000 sailors. This type of program is gaining more widespread use due to better computing power.

There is a ton of information that computers must take into account to make these predictive models. When ABS first started, simulations ran for a year before getting results. In 2017, that timeframe is down to one day by comparison.

ABS uses information from the analytics software and applies it to customer behavior models and decision-making to predict what may happen in the future. This type of program answers what happens when a company changes prices of products, if a competitor adapts to market forces in a certain way and what customers may do when you change a product.

ABS can't predict everything, but it does take into account human expertise. ABS, like any analytics software, is only as good as the data it collects. It makes decisions more transparent because it supports the notion that if a company moves in one direction, then a certain scenario is likely to happen. You must remember to take a risk no matter what path you're on.

Decision-making shouldn't be all data or all gut instinct. However, data gathering certainly makes the process easier thanks to the technological tools available to businesses.


Photo courtesy of Stuart Miles at FreeDigitalPhotos.net


          Russell Microcap Index Marks Fusion   
Cloud computing has reached the big time, and sitting at the fore of this revolution is Fusion, a leading provider of cloud services. This cloud communications provider is pushing the envelope of innovation, and earned very positive attention with the announcement this week that Fusion (NASDAQ: FSNN) has been added to the Russell Microcap Index.
          Girl Scouts and Palo Alto Networks: Preparing Girls for the Future of Cybersecurity    
There’s no escaping it. As time goes on our personal and professional lives will be even more dependent on the skills of cybersecurity experts to avoid everything from computer viruses to identity theft. Girl Scouts and Palo Alto Networks recognize that we all must work together to prepare for these technical challenges by creating the innovative cybersecurity problem solvers of tomorrow, through education today. According to the Computing Technology Industry Association, 69 percent of U.S. women who do not have a career in information technology cited not knowing what opportunities were available to them as reasons they did not pursue one.

To encourage girls to become the experts who can meet future cybersecurity challenges, GSUSA and Palo Alto Networks are teaming up to deliver the first-ever national cybersecurity badges for girls in grades K–12. In September 2018, eighteen badges will introduce cybersecurity education to millions of girls across the United States through compelling programming designed to increase their interest and instill in them a valuable 21st century skillset. This national effort is a huge step toward eliminating traditional barriers to industry access, such as gender and geography, and will target girls as young as five years old, ensuring that even the youngest girls have a foundation primed for future life and career success.

Mark Anderson, President of Palo Alto Networks and Sylvia Acevedo, CEO of Girl Scouts of the USA

When asked about the partnership, GSUSA CEO Sylvia Acevedo said, "We recognize that in our increasingly tech-driven world, future generations must possess the skills to navigate the complexities and inherent challenges of the cyber realm. From arming our older girls with the tools to address this reality to helping younger girls protect their identities via Internet safety, the launch of our national Cybersecurity badge initiative represents our advocacy of cyber preparedness―and our partnership with Palo Alto Networks makes a natural fit for our efforts.”

Together, GSUSA and Palo Alto Networks will provide cybersecurity education to more than a million U.S. girls while helping them develop their problem-solving and leadership skills.


You can learn more about this partnership via the press release


          Vista Challenges   

So I'm writing this blog entry from Phillip, the now Vista Build 5308 computer.

The Glass interface is very addictive - it's all those little things that make the computing experience better. The quality of the typeface, the simplicity of the default window appearance - it all lends itself to a better computing experience.

You get a real sense that things are different in Vista, although the changes are subtle for the most part.

I think the most interesting experiences so far have been the failures - and really, the only place I think I've had problems is around the video driver.

I realized the problem the first morning after installing Vista (call it the morning after hang over if you must... "what did I DO?"). The screen was blank, which isn't surprising, I'd turned on the blank screen screensaver. So I hit a shift key and things started twitching.

At first I thought the machine was hung. Then the display lit up showing the "machine locked" screen. Which is reasonable, that's how I configured the screen saver.

Then I thought the keyboard was hung, but the NumLock key seemed to work. And the mouse appeared to function fine, but clicking on things did nothing.

The screen went blank again, and when it came back, the accessibility controls were up.

It took me a while to figure out what was going on - it seemed that the machine would freeze for several seconds, then replay every keypress and mouse click that I had tried.

And the repeated tapping of Shift and NumLock had triggered the accessibility stuff, which looks really cool in Vista.

Finally I clued in: what was actually happening was that the video drivers were repeatedly dying, and Vista was restarting them over and over again. Hence the constantly blank screens.

So, very slowly, one click at a time, I rebooted the machine. And everything came back to normal.

It wasn't until the next day that I figured out it wasn't the screen saver doing this, but rather Vista's default behaviour of sending the machine to sleep after an hour. Likely the ATI display driver doesn't recover properly from sleep.

So I've disabled sleep mode. Hopefully that will solve that.

Next up, DivX. For some reason, DivX just doesn't work on this machine, not in its own player or in Media Player. I've found blog entries where people said this was no problem, but it's a problem for me, and an annoying one at that. Audio works, but video doesn't.


          Intermediate Angular JS / Web Software Developer - Maplewood Computing Ltd - London, ON   
Maplewood is evolving its existing Student Information System (SIS) into a new technology stack to meet the demands of our interactive customer experience....
From Indeed - Fri, 16 Jun 2017 14:14:24 GMT - View all London, ON jobs
          Lintcode 43 - Maximum Subarray III   
Related: Lintcode 41,42 - Maximum Subarray I,II
http://www.lintcode.com/en/problem/maximum-subarray-iii/
Given an array of integers and a number k, find k non-overlapping subarrays which have the largest sum.
The number in each subarray should be contiguous.
Return the largest sum.
 Notice
The subarray should contain at least one number
Example
Given [-1,4,-2,3,-2,3] and k=2, return 8
Time: O(k * n^2)
Space: O(k * n)
Use sums[i][j] to denote the maximum total sum of choosing i subarrays from the first j numbers.
We can update by sums[i][j] = max(sums[i - 1][t] + maxSubarraySum(nums[t+1...j])), which means using the first t numbers to choose i - 1 subarrays, plus the maximum subarray sum from the remaining numbers (nums[t]...nums[j-1]). We want to try all possible split points, so t ranges from i - 1 to j - 1.
In the innermost loop, we examine the max subarray sum from nums[t] to nums[j-1], where t goes from j-1 down to i-1. We could compute that max sum for each t from scratch. However, if we scan from right to left instead of left to right, we only need to update the maximum value incrementally. For example, if t's range is [1..5], then at first the max sum is picked from [5], then from [4...5], ..., and finally from [1...5]. By scanning from right to left, we are able to include the new number in the computation on the fly.
public int maxSubArray(ArrayList<Integer> nums, int k) {
    if (nums == null || nums.size() < k) {
        return 0;
    }
    int len = nums.size();
    int[][] sums = new int[k + 1][len + 1];
    for (int i = 1; i <= k; i++) {
        for (int j = i; j <= len; j++) { // need at least one number in each subarray
            sums[i][j] = Integer.MIN_VALUE;
            int sum = 0;
            int max = Integer.MIN_VALUE;
            for (int t = j - 1; t >= i - 1; t--) {
                sum = Math.max(nums.get(t), sum + nums.get(t));
                max = Math.max(max, sum);
                sums[i][j] = Math.max(sums[i][j], sums[i - 1][t] + max);
            }
        }
    }
    return sums[k][len];
}
d[i][j] denotes the max sum of j subarrays chosen from elements 0 to i-1 (note that element i itself is not included).
d[i][j] = max(d[i][j], d[m][j-1] + max) (m = j-1 ... i-1; max must be computed separately as the maximum subarray over elements i-1 down to m, using the standard max-subarray method, scanning from back to front)
public int maxSubArray(ArrayList<Integer> nums, int k) {
    int n = nums.size();
    int[][] d = new int[n + 1][k + 1];
    for (int j = 1; j <= k; j++) {
        for (int i = j; i <= n; i++) {
            d[i][j] = Integer.MIN_VALUE;
            int max = Integer.MIN_VALUE;
            int localMax = 0;
            for (int m = i - 1; m >= j - 1; m--) {
                localMax = Math.max(nums.get(m), nums.get(m) + localMax);
                max = Math.max(localMax, max);
                d[i][j] = Math.max(d[i][j], d[m][j - 1] + max);
            }
        }
    }
    return d[n][k];
}
http://www.cnblogs.com/lishiblog/p/4183917.html
DP. d[i][j] means the maximum sum we can get by selecting j subarrays from the first i elements.
d[i][j] = max{d[p][j-1]+maxSubArray(p+1,i)}
We iterate p from i-1 down to j-1, recording the max subarray found at the current p; this value can be used to calculate the max subarray from p-1 to i when p becomes p-1.

public int maxSubArray(ArrayList<Integer> nums, int k) {
    if (nums.size() < k) return 0;
    int len = nums.size();
    // d[i][j]: select j subarrays from the first i elements, the max sum we can get.
    int[][] d = new int[len + 1][k + 1];
    for (int i = 0; i <= len; i++) d[i][0] = 0;

    for (int j = 1; j <= k; j++)
        for (int i = j; i <= len; i++) {
            d[i][j] = Integer.MIN_VALUE;
            // Initial values of endMax and max must be chosen very carefully.
            int endMax = 0;
            int max = Integer.MIN_VALUE;
            for (int p = i - 1; p >= j - 1; p--) {
                endMax = Math.max(nums.get(p), endMax + nums.get(p));
                max = Math.max(endMax, max);
                if (d[i][j] < d[p][j - 1] + max)
                    d[i][j] = d[p][j - 1] + max;
            }
        }

    return d[len][k];
}
Using a one-dimensional array:
public int maxSubArray(ArrayList<Integer> nums, int k) {
    if (nums.size() < k) return 0;
    int len = nums.size();
    // d[i]: select j subarrays from the first i elements, the max sum we can get.
    int[] d = new int[len + 1];
    for (int i = 0; i <= len; i++) d[i] = 0;

    for (int j = 1; j <= k; j++)
        for (int i = len; i >= j; i--) {
            d[i] = Integer.MIN_VALUE;
            int endMax = 0;
            int max = Integer.MIN_VALUE;
            for (int p = i - 1; p >= j - 1; p--) {
                endMax = Math.max(nums.get(p), endMax + nums.get(p));
                max = Math.max(endMax, max);
                if (d[i] < d[p] + max)
                    d[i] = d[p] + max;
            }
        }

    return d[len];
}

X.
http://hehejun.blogspot.com/2015/01/lintcodemaximum-subarray-iii.html
As before, we maintain two quantities: localMax[i][j] is the maximum sum of i subarrays over the first j numbers, with the last subarray ending at A[j - 1] (A being the input array); globalMax[i][j] is the maximum sum of i subarrays over the first j numbers (the last subarray does not need to end at j - 1). By analogy with the earlier DP equations, we derive the new ones:
  • globalMax[i][j] = max(globalMax[i][j - 1], localMax[i][j]);
  • localMax[i][j] = max(globalMax[i - 1][k] + sumFromTo(A[k + 1], A[j])) where 0 < k < j;
At first glance it seems that for the second DP equation we have to rescan all earlier positions each time we build localMax[i][j], which would bring the total complexity to O(n^2), but there is a way to optimize this step to O(n).
Consider the following example:
  • globalMax[i - 1]: 3, 5, -1, 8, 7
  • A[]:                    1, 2, 6, -2, 0
After processing A[2], max(globalMax[i - 1][k] + sumFromTo(A[k + 1], A[j])) = 11. When we reach A[3], if we ignore the newly added combination globalMax[i - 1][2] + A[3], all previous combinations (globalMax[i - 1][0] + sumFromTo(A[1], A[2]), globalMax[i - 1][1] + sumFromTo(A[2], A[2])) just need A[3] added to become the new values of globalMax[i - 1][k] + sumFromTo(A[k + 1], A[j]). So the previous maximum is still the maximum over those combinations, and we only need to compare it against the newly added one. We can therefore maintain this value in a single left-to-right scan with one variable, localMax, instead of an array. Time complexity O(k * n), space complexity O(k * n); space could be reduced to O(n) with a rolling array, but the logic of this problem is fairly involved, so for clarity we skip that optimization.

https://zhengyang2015.gitbooks.io/lintcode/maximum_subarray_iii_43.html
local[i][k] is the maximum sum of k subarrays chosen from the first i elements, where the last subarray must include the i-th element.
global[i][k] is the maximum sum of k subarrays chosen from the first i elements, not necessarily including the i-th element.
The state function for local[i][k]:
max(global[i-1][k-1], local[i-1][k]) + nums[i-1]
There are two cases: either the i-th element forms a subarray on its own, in which case we need k-1 subarrays from the first i-1 elements; or the i-th element joins the previous element's subarray, in which case we need k subarrays from the first i-1 elements (with the last one including the (i-1)-th element, so that the i-th element can be merged into it). Take the larger of the two.
The state function for global[i][k]:
max(global[i-1][k], local[i][k])
Again two cases: either the i-th element is not included, so we need k subarrays from the first i-1 elements; or it is included, which is exactly the local[i][k] case. Take the larger of the two.
public int maxSubArray(int[] nums, int k) {
    if (nums.length < k) {
        return 0;
    }

    int len = nums.length;

    // local[i][k]: max sum of k subarrays from the first i elements, the last one including element i.
    int[][] localMax = new int[len + 1][k + 1];
    // global[i][k]: max sum of k subarrays from the first i elements, not necessarily including element i.
    int[][] globalMax = new int[len + 1][k + 1];

    for (int j = 1; j <= k; j++) {
        // The first j-1 elements can never form j non-overlapping subarrays,
        // so initialize to the minimum value to keep it from being picked later.
        localMax[j - 1][j] = Integer.MIN_VALUE;
        for (int i = j; i <= len; i++) {
            localMax[i][j] = Math.max(globalMax[i - 1][j - 1], localMax[i - 1][j]) + nums[i - 1];
            if (i == j) {
                globalMax[i][j] = localMax[i][j];
            } else {
                globalMax[i][j] = Math.max(globalMax[i - 1][j], localMax[i][j]);
            }
        }
    }

    return globalMax[len][k];
}
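
For readers who want to sanity-check the localMax/globalMax recurrence quickly, here is a compact Python transcription (my own addition, not part of the original solutions); on the example [-1,4,-2,3,-2,3] with k=2 it returns 8:

def max_sub_array(nums, k):
    # Python transcription of the localMax/globalMax DP above.
    n = len(nums)
    if n < k:
        return 0
    NEG = float("-inf")
    local = [[NEG] * (k + 1) for _ in range(n + 1)]
    glob = [[NEG] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        glob[i][0] = 0  # zero subarrays sum to zero
    for j in range(1, k + 1):
        local[j - 1][j] = NEG  # j-1 elements cannot form j subarrays
        for i in range(j, n + 1):
            local[i][j] = max(glob[i - 1][j - 1], local[i - 1][j]) + nums[i - 1]
            glob[i][j] = local[i][j] if i == j else max(glob[i - 1][j], local[i][j])
    return glob[n][k]

print(max_sub_array([-1, 4, -2, 3, -2, 3], 2))  # -> 8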

https://leilater.gitbooks.io/codingpractice/content/dynamic_programming/maximum_subarray_iii.html
dp[i][j] denotes the maximum value obtained by picking j subarrays from the first i numbers (i.e., [0, i-1]); the result we want is dp[n][k]: the maximum sum of k subarrays picked from the first n numbers.
State transition: to pick j subarrays from the first i numbers, we can pick j-1 subarrays from the first p numbers (i.e., [0, p-1]) and add the maximum subarray sum within p to i-1 (which is the classic Maximum Subarray problem). From this we can also read off p's range: j-1 to i-1.
int maxSubArray(vector<int> nums, int k) {
    if (nums.empty()) return 0;
    int n = nums.size();
    if (n < k) return 0;
    // max_sum[i][j]: max sum of j subarrays generated from nums[0] ~ nums[i-1]
    // note that j is always <= i
    vector<vector<int> > max_sum(n + 1, vector<int>(k + 1, INT_MIN));
    // init
    for (int i = 0; i <= n; i++) {
        max_sum[i][0] = 0;
    }
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= min(i, k); j++) {
            max_sum[i][j] = max_sum[i - 1][j];
            // max_sum[i][j] equals 1) the max sum of j-1 subarrays generated from nums[0] ~ nums[p-1]
            // plus 2) the max sum of the subarray nums[p] ~ nums[i-1]; p can be j-1 ~ i-1
            for (int p = i - 1; p >= j - 1; p--) {
                // compute the max sum of the subarray nums[p] ~ nums[i-1]
                // (maxSubArray(nums, lo, hi) is the range max-subarray helper the original post refers to)
                int global = maxSubArray(nums, p, i - 1);
                max_sum[i][j] = max(max_sum[i][j], max_sum[p][j - 1] + global);
            }
        }
    }
    return max_sum[n][k];
}

          Essential Computer Security: Everyone's Guide to Email, Internet, and Wireless Security   

Essential Computer Security provides the vast home user and small office computer market with the information they must know in order to understand the risks of computing on the Internet and what they can do to protect themselves. Tony Bradley is the Guide for the About.com site for Internet Network Security. In his role managing the content for a site that has over 600,000 page views per month and a weekly newsletter with 25,000 subscribers, Tony has learned how to talk to people, everyday people, about computer security. Intended for the security illiterate, Essential Computer Security is a source of jargon-less advice everyone needs to operate their computer securely.

* Written in easy-to-understand, non-technical language that novices can comprehend
* Provides detailed coverage of the essential security subjects that everyone needs to know
* Covers just enough information to educate without being overwhelming


          Micron Technology reports profit on strong demand for memory chips   
June 29 - Micron Technology Inc reported a quarterly profit, compared with a loss a year earlier, helped by improved prices of memory chips used in computing systems and smartphones amid tight supply. Net income attributable to Micron was $1.65 billion, or $1.40 per share, in the third quarter ended June 1, compared with a net loss of $215 million, or 21 cents per share,...
          Actually, The iPhone May Not Look Much Different In 2027   

For many of us, Apple’s iconic device will still sit at the center of the personal computing universe.

On a day like June 29–the iPhone launch’s 10th anniversary–it’s natural to wonder where Apple’s iconic device will be in another 10 years. It’s also tempting to prognosticate that the iPhone will be radically different by 2027, or that it will be overrun by new devices and approaches to personal technology.

Read Full Story


          Designing for Performance, and the Future of Computing   
So, you might have heard that Google released a web browser. One of the features of this web browser is its JavaScript engine, called v8, which is designed for performance. Designing for performance is something that Google does often. Now, designing for performance usually leads to complexity. So, being a major supporter of software simplicity, … Continue reading
          Esker Will Pay 0.30€ per Share as Dividend for 2016   
English

Lyon, France — June 29, 2017 — Esker, a worldwide leader in document process automation solutions and pioneer in cloud computing, today announced that, during its annual meeting held on June 22, 2017, Esker shareholders approved a 0.30€ per share dividend payment for the 2016 financial year, stable compared to the previous year.

The record date was on June 28, 2017, with the payment set to be completed on July 4, 2017. Shareholders having held their investment for more than two years in nominative form will receive a 10 percent bonus in addition to the regular dividend amount. This rule will apply for the first time for this dividend payment.

“We are happy to be able to continue our policy to reward shareholders through a consistent annual dividend payment,” said Jean-Michel Bérard, founder and CEO of Esker. “This policy is designed to not only recognize their commitment to Esker but also to reaffirm our confidence in the future of Esker and its continued success in the years to come."

Strong Financial Position

Esker’s cash level reached 24.1 million euros (11.7 million euros net of financial debt) as of March 31, 2017. In addition, Esker owns 140,000 treasury shares that can be used immediately for a potential acquisition. This puts Esker in a favorable position to continue its strategy of combining organic growth and acquisitions.

2017 Outlook

Esker confirms that it expects to see double digit organic growth in its revenue in 2017. The strong recurring nature of its revenue (more than 77 percent of total revenue for Q1 2017) allows the company to be confident in its performance. In addition, Esker continues to see many customers choosing Esker for their document automation needs. These new contracts will feed Esker’s growth in the quarters to come.
 

About Esker

Esker is a worldwide leader in cloud-based document process automation software. Esker solutions, including the acquisition of the TermSync accounts receivable solution in 2015, help organizations of all sizes to improve efficiencies, accuracy, visibility and costs associated with business processes. Esker provides on-demand and on-premises software to automate accounts payable, order processing, accounts receivable, purchasing and more.

Founded in 1985, Esker operates in North America, Latin America, Europe and Asia Pacific with global headquarters in Lyon, France and U.S. headquarters in Madison, Wisconsin. In 2016, Esker generated 66 million euros in total sales revenue. For more information on Esker and its solutions, visit www.esker.com. Follow Esker on Twitter @EskerInc and join the conversation on the Esker blog at blog.esker.com.

Date: 
Thursday, June 29, 2017
Page Header: 
Header Title (H1): 
Esker Will Pay 0.30€ per Share as Dividend for 2016
Header Subheadline: 

Long-standing shareholders to receive a 10 percent bonus, in addition

Header Background Image: 
Header Text Color: 
Dark
PR Category: 

          Nexecur Chooses Esker to Automate Its Customer Invoices and Provide Compliance with E-invoicing for Public Administrations Middleton,   
English

Middleton, Wis. — June 27, 2017 — Esker, a worldwide leader in document process automation solutions and pioneer in cloud computing, today announced it is working with Nexecur, a French security subsidiary of the Crédit Agricole bank that specializes in video security and alarm systems, to automate its billing process using Esker’s Accounts Receivable automation solution. Nexecur’s goal is to increase its e-invoice volume in anticipation of legislation requiring vendors to send e-invoices to the French public administration via Chorus, the e-invoicing platform established by the French government.

With three business units and over 118,000 protected sites across France, Nexecur’s accounting team previously utilized two employees for two days each month for monthly billing of customer invoices and four employees for more than a week for annual billing. In order to decrease processing time, reduce operating costs, increase flexibility and improve customer satisfaction, Nexecur was looking for a solution to automate the processing of its customer invoices.

“We needed a scalable solution that would allow us to improve in stages — automating the processing of invoices while progressively moving to e-invoicing,” said Stéphane Poirier, project manager at Nexecur. “Esker’s solution is scalable and easy to use. Solution implementation was flexible — something we have yet to see offered by any other competitor on the market. Esker’s teams are dedicated, professional and have a perfect understanding of our needs.”

A progressive transition to e-invoicing

Nexecur implemented Esker in early 2013, starting with the outsourcing and automation of its 350,000 customer invoices, followed by collection letters and additional administrative letters (an extra 54,000 documents). As early as 2014, Nexecur transitioned some of its customers to e-invoicing, which, today, represents 70 percent of invoice volume (roughly 250,000 e-invoices).
To encourage its customers to switch to e-invoicing, Nexecur has taken multiple actions, including:

  • Sending emails and raising awareness at the time of annual invoicing for existing customers
  • Integrating a clause into new customers’ contracts, resulting in a 90 percent adoption rate

“We are very pleased with the high adoption rates: More than 40,000 of our customers have agreed to switch to e-invoicing, which is about half of our customer base,” said Poirier. “We have done a lot to educate customers on the benefits of e-invoicing and it’s really paying off, particularly with new customers who opt for e-invoicing as soon as the contract is signed.”

To meet Nexecur’s specific needs, Esker splits the initial batch of invoices, attaches the appropriate proof of services when necessary, and groups documents destined for the same customer into the same envelope. With the Jan. 1, 2018, public administration compliance deadline rapidly approaching, Nexecur must ensure it is ready to deliver e-invoices to Chorus.

Invoice processing significantly reduced

Integrated with Nexecur’s Microsoft Dynamics NAV™ ERP, Esker has delivered many benefits:

  • Ability to absorb an increase in activity thanks to faster invoice processing (a few minutes a day instead of 17 hours a month)
  • Reduced processing costs
  • Increased efficiency (e.g., fewer interruptions in the workflow thanks to the virtual printer for sending paper mail, the ability to send mail in a few clicks, etc.)
  • Increased invoicing frequency, from bi-weekly to daily
  • Reduced paper volumes with electronic archiving
  • New users quickly operational

About Nexecur

Nexecur was created in 1986 at the initiative of the Crédit Agricole Regional Banks for the security of its bank branches. Today, it is a French national group that has several business units: Nexecur Protection (residential and professional security), Nexecur Assistance (home living assistance) and Nexecur Sécurité bancaire and Telsud (security for large enterprises). In addition to alarm systems connected to its five remote monitoring centers, Nexecur offers solutions for video protection, access control, fire detection and external protection.

About Esker

Esker is a worldwide leader in cloud-based document process automation software. Esker solutions, including the acquisition of the TermSync accounts receivable solution in 2015, help organizations of all sizes to improve efficiencies, accuracy, visibility and costs associated with business processes. Esker provides on-demand and on-premises software to automate accounts payable, order processing, accounts receivable, purchasing and more.

Founded in 1985, Esker operates in North America, Latin America, Europe and Asia Pacific with global headquarters in Lyon, France and U.S. headquarters in Madison, Wisconsin. In 2016, Esker generated 66 million euros in total sales revenue. For more information on Esker and its solutions, visit www.esker.com. Follow Esker on Twitter @EskerInc and join the conversation on the Esker blog at blog.esker.com.

Date: 
Tuesday, June 27, 2017
Page Header: 
Header Title (H1): 
Nexecur Chooses Esker to Automate Its Customer Invoices and Provide Compliance with E-invoicing for Public Administrations Middleton,
Header Background Image: 
Header Text Color: 
Dark
PR Category: 

          Why are playgrounds compelling?   

Hand in hand with Swift, Apple introduced the idea of a programming playground: a text editor with an integrated evaluation engine that facilitates the visual inspection of program values and states. It isn’t a new idea. After all, Emacs has had Lisp interaction buffers since the dawn of modern computing.

However, Emacs buffers do lack the visual component, and Bret Victor discussed in great detail why visualisation and an ability for interactive manipulation of program states is so important. Nevertheless, the mere transition from a REPL to an interaction buffer (that is, text editor with integrated evaluation) is already a big deal.

Why? It makes an interactive session more functional and less stateful. Instead of being faced with the state of the REPL after a particular command, the playground is a buffer with all commands. A REPL state corresponds to a single line in the playground, as the playground reifies the REPL history. In other words, the move from REPL to playground is much like the move from mutable variable to time-varying value — a shift in perspective from an instant to the entire timeline.


          The future of array-oriented computing in Haskell — The Result!   

I recently posted a survey concerning The future of array-oriented computing in Haskell. Here is a summary of the responses.

It is not surprising that basically everybody (of the respondents — who surely suffer from grave selection bias) is interested in multicore CPUs, but I’m somewhat surprised to see about 2/3 to be interested in GPGPU. The most popular application areas are data analytics, machine learning, and scientific computing with optimisation problems and physical simulations following close up.

The most important algorithmic patterns are iterative numeric algorithms, matrix operations, and —the most popular— standard aggregate operations, such as maps, folds, and scans. (This result most surely suffers from selection bias!)

I am very happy to see that most people who tried Repa or Accelerate got at least some mileage out of them. The most requested backend feature for Repa are SIMD instructions (aka vector instructions) and the most requested feature for Accelerate is support for high-performance CPU execution. I did suspect that and we really like to provide that functionality, but it is quite a bit of work (so will take a little while). The other major request for Accelerate is OpenCL support — we really need some outside help to realise that, as it is a major undertaking.

As far as extending the expressiveness of Accelerate goes, there is strong demand for nested data parallelism and sparse data structures. This also requires quite a bit of work (and is conceptual very hard!), but the good news is that PLS has got a PhD student working on just that!

NB: In the multiple choice questions permitting multiple answers, the percentages given by the Google Docs summary are somewhat misleading.


          HTC Keynote @ Uplinq   
Peter Chou, the CEO of HTC, started the second day at Uplinq. He too says that we’re in a “new era” of computing, during which computers will be in “every pocket”. Like Paul Jacobs, he described how mobile phones are enabling social change as well. In 2010 HTC shipped 25M smartphones, and in just Q1 of 2011, HTC already shipped 9.7M units – a huge year-over-year increase. […]
          Qualcomm Keynote @ Uplinq 2011   
[Uplinq] Dr. Paul Jacobs, Qualcomm’s CEO, introduced mobile computing as a “force of social change”, referring to the recent events in the Middle East during which people “armed only with mobile phones” could document facts on the ground and share them with the world, instantly. And that’s only the beginning. Qualcomm estimates that data usage will grow by up to 12X by 2015. The future will be […]
          Essential Computer Security: Everyone's Guide to Email, Internet, and Wireless Security   

Essential Computer Security provides the vast home-user and small-office computer market with the information they must know in order to understand the risks of computing on the Internet and what they can do to protect themselves. Tony Bradley is the Guide for the About.com site for Internet Network Security. In his role managing the content for a site that has over 600,000 page views per month and a weekly newsletter with 25,000 subscribers, Tony has learned how to talk to people, everyday people, about computer security. Intended for the security illiterate, Essential Computer Security is a source of jargon-less advice everyone needs to operate their computer securely.
* Written in easy-to-understand, non-technical language that novices can comprehend
* Provides detailed coverage of the essential security subjects that everyone needs to know
* Covers just enough information to educate without being overwhelming


          Apple Pencil vs. Surface Pen: Stylus Over Substance?   

The lowly stylus has grown up. Two of the biggest tech giants — Apple and Microsoft — are currently doing battle over their respective pointing devices. Today we’ll take a look at the Apple Pencil and Microsoft Surface Pen, and what makes each stylus unique. It’s hard to disagree that the future of computing is mobile. Anyone who wants to use a stylus for sketching, drawing, note taking, and more has a difficult decision to make when it comes to purchasing a tablet. Apple and Microsoft each have unique stylus options that work on each respective platform: the iPad Pro and Surface lineup. Let’s take a look at...

Read the full article: Apple Pencil vs. Surface Pen: Stylus Over Substance?


          CHM Live │Original iPhone Software Team Leader Scott Forstall (Part Two)   
[Recorded June 20, 2017] This is part two of two from the CHM Live show “Putting Your Finger On It: Creating the iPhone.” Watch Part 1—Original iPhone Engineers Nitin Ganatra, Scott Herz & Hugo Fiennes: http://bit.ly/2tluoLN Watch the Full Show—http://bit.ly/2sVP10E During 2006, the year before the iPhone was introduced, it seemed that innovation in mobile devices was beginning to slip away from Silicon Valley. Wireless computing was advancing more quickly in Europe than it was in the United States. That all changed abruptly when Steve Jobs stepped onstage at Moscone Center in San Francisco and asserted he was introducing “three revolutionary products” in one package—the iPhone. How did the iPhone come to be? On June 20, four members of the original development team joined historian and journalist John Markoff to discuss the secret Apple project, which in the past decade has remade the computer industry, changed the business landscape, and become a tool in the hands of more than a billion people around the world. Lot number: X8247.2017 Catalog number: 102738283 © Computer History Museum Original video: https://www.youtube.com/watch?v=IiuVggWNqSA
          Adjunct Faculty – Engineering Technology - RCC Institute of Technology (RCCIT) - Canada   
Fundamental concepts of Robotics. RCC Institute of Technology is looking for adjunct faculty within the Faculty of Engineering Technology and Computing to teach...
From RCC Institute of Technology (RCCIT) - Tue, 06 Jun 2017 06:07:32 GMT - View all Canada jobs
          UCAR Deploys ADVA FSP 3000 CloudConnect™ Solution in Supercomputing Network   
NewswireToday (newswire) - ADVA Optical Networking’s DCI Technology Provides 200G Connectivity to Atmosphere Research Facility - AdvaOptical.com
          Veeam and Nutanix Accelerate Digital Transformation Through Continuous Availability   
Veeam Software, an innovative provider of availability solutions for large enterprises (Availability for the Always-On Enterprise™), today announced an expanded partnership with Nutanix, a major player in enterprise cloud computing and hyperconverged infrastructure, under which Veeam becomes the premier provider of availability solutions for Nutanix virtualized environments. In addition, Veeam is adding to its flagship solution, Veeam Availability Suite [...]
          Microsoft Programming Certification Training Bundle (98% discount)   
Microsoft services go well beyond just Windows, into the enterprise, where its tools and services are used to build applications efficiently, manage cloud computing environments, and much more. Programming with Microsoft is a consistently in-demand skill, and this 4-course bundle will introduce you to some of the most important programs in the Microsoft arsenal. You’ll discuss…
          The Complete MATLAB Mastery Bundle (86% discount)   
MATLAB (Matrix Laboratory) is a multi-paradigm numerical computing environment and programming language that is frequently used by engineering and science students. In this course, you will be introduced to MATLAB at a beginner level, and will gradually move into more advanced topics. The key benefit of MATLAB is how it makes programming accessible to everyone,…
          NJIT Ranked Third in the Nation for Graduating Hispanic Engineers   

NJIT graduate and undergrad degree programs in engineering technologies and engineering-related fields are racking up kudos when it comes to educating minority students, according to the most recent rankings by Diverse: Issues in Higher Education. 

Tagged: newark college of engineering, college of science and liberal arts, college of architecture and design, nce, coad, csla, college of computing sciences, ccs, mathematical sciences, mechanical and industrial engineering, rankings, educational opportunity program, engineering technology, angelo perna, mathematics, mcnair achievement program, diverse issues in higher education, national center for education statistics, minority students, undergraduate degree program, graduate degree, statistics



          Geek News Central Podcast: Windows 10 Combats Ransomware #1210 - Geek News Central Audio   

The latest update allows Windows 10 to combat ransomware in a pretty significant way. Microsoft is doing everything in its power to battle the ongoing ransomware threat that is taking companies out on a weekly basis. For those companies that do not update their software on a regular basis, maybe this will be the incentive they need. Be safe over the 4th; no show on Monday, back with you a week from today.

My New Personal YouTube Channel
Geek News Central Facebook Page.
Download the Audio Show File

Support my Show Sponsor:
30% off on New GoDaddy Orders cjcgnc30
$.99 for a New or Transferred .com cjcgnc99 @ GoDaddy.com
$1.00 / mo Economy Hosting with a free domain. Promo Code: cjcgnc1hs
$1.00 / mo Managed WordPress Hosting with free Domain. Promo Code: cjcgncwp1

Show Notes:

Windows 10 Combats Ransomware #1210


          Geek News Central Video: Windows 10 Combats Ransomware #1210 - Geek News Central (Video)   

The latest update allows Windows 10 to combat ransomware in a pretty significant way. Microsoft is doing everything in its power to battle the ongoing ransomware threat that is taking companies out on a weekly basis. For those companies that do not update their software on a regular basis, maybe this will be the incentive they need. Be safe over the 4th; no show on Monday, back with you a week from today.

My New Personal YouTube Channel
Geek News Central Facebook Page.
Download the Audio Show File

Support my Show Sponsor:
30% off on New GoDaddy Orders cjcgnc30
$.99 for a New or Transferred .com cjcgnc99 @ GoDaddy.com
$1.00 / mo Economy Hosting with a free domain. Promo Code: cjcgnc1hs
$1.00 / mo Managed WordPress Hosting with free Domain. Promo Code: cjcgncwp1

Show Notes:

Windows 10 Combats Ransomware #1210


          COMMERCIAL CONSTRUCTION ESTIMATOR   
This COMMERCIAL CONSTRUCTION ESTIMATOR Position Features:
• Great Benefits
• Great Pay to $80K

Job Description:

Underground Pipeline Company seeking seasoned Estimator to bid public and private works. Bids include but are not limited to: underground utilities, wet and dry utilities.
• Reviewing data to determine material and labor requirements and preparing itemized lists
• Computing cost factors and preparing estimates used for management purposes such as planning, organizing and scheduling work, preparing bids, selecting vendors or subcontractors, and determining cost
• Conducting special studies to develop and establish standard hour and related cost data or effect cost reductions
• Consulting with management, vendors and other individuals to discuss and formulate estimates and resolve issues
• Interfacing with other individuals in the organization to obtain support and commitment to the cost estimates
• Organizing and managing a centralized cost estimating database and a formal process to support cost estimating to ensure historical data is utilized
• Analyzing completed projects to compare estimated costs to actual costs to determine the reason for any discrepancies
• Providing improvement recommendations to cost estimating procedures to reduce future discrepancies between estimated and actual costs
• Identifying cost trends to assist management in cost reduction and process improvement efforts

Responsibilities:
• Check bid source files to ensure the latest information is available prior to bid day
• Generate RFIs necessary to establish a competitive baseline
• Attend job walks to establish and build relationships/acquire necessary information
• Review project bid documents
• Perform material take-offs and mathematical calculations accurately
• Determine the type of materials, equipment, labor and subcontractors required
• Interface with vendors and subcontractors for relevant information/quotations
• Identify potential risks, opportunities, and alternative solutions
• Track and record bid results for continuous improvement and history tracking

**Call or email TODAY - Need to schedule Interviews STAT!!!

We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          How will IoT change your life?   



An IoT (Internet of Things) setup has devices connected to the internet catering to multiple use-case scenarios. These include monitoring assets; executing tasks and services that support day-to-day human needs; ensuring life and safety through alerts and responses; managing city infrastructure through command-and-control centers for emergency response; enabling efficient governance through process automation; provisioning healthcare; and enabling sustainable energy management, thereby addressing environmental conservation concerns.

A platform that caters to all of the above use cases, from devices and sensors up to management functionality, qualifies as a smart city platform.

Cloud computing is a popular service that comes with many characteristics and advantages. Basically, cloud computing is like a DIY (Do It Yourself) service: a user/consumer can subscribe to computing resources based on demand/requirement, while the services are delivered entirely over the Internet.

IoT and cloud computing go hand in hand, though they are two different technologies that are already part of our lives. Both being pervasive qualifies them as the internet of the future. Cloud merged with IoT is foreseen as a new and disruptive paradigm.

Just as cloud is available as a service, whether infrastructure, platform or software, IoT is seen as every(thing) as a service for the future, since it also fulfills the smart city initiative. The first and foremost requirement of any IoT platform for a smart city is on-demand self-service, which enables usage-based subscription to the computing resources (hardware) that manage and run the automation, platform functions, software features and algorithms forming part of the city management infrastructure.

The characteristics of such an IoT-on-cloud scenario are:

·     Broad network access - to enable connectivity for any device, whether laptop, tablet, nano/micro/pico gateway, actuator or sensor.
·     Resource pooling - for on-demand access to compute resources, such as assigning an identity to a device in the pool.
·     Rapid elasticity - to quickly adapt software features to elastic computing, storage and networking demands.
·     Measured service - pay only for the resources and/or services used, based on duration/volume/quantity of usage.

The advantage of any IoT cloud setup is that it involves no upfront CAPEX from the service consumer's point of view for building the entire infrastructure from the ground up. Rather, it is based on a subscribe-operate-scale-pay model. This gives stakeholders and decision makers instant access to the actual environment, which helps them gauge prospective investment and expenditure, while technology teams are geared up to anticipate which component of the IoT setup needs to be scaled, rather than replicating the entire setup to fulfill growing demands.

Docker containers (essentially containers bundling compute, storage and a software module with the runtime environment required to run that module of the overall software) and microservices (independent services with their own data persistence and dependency software elements, which can run on their own or provide services to monolithic systems) are features that help manage the scalability of an IoT platform on the cloud for smart city use cases. Individual modules and components within the IoT platform can be preconfigured as containers and microservices. Once there is traction on the platform, the respective container or microservice gets provisioned to handle the surge in data traffic, so each individual function of the platform becomes horizontally scalable. Hence, to address such ad-hoc scalability requirements, only the individual module of the platform needs to be scaled, unlike monolithic systems where the entire platform needs replication, which saves substantial resources and OPEX for stakeholders.
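A minimal sketch of this per-module scaling decision, with invented service names and thresholds (illustrative Python, not Galaxy2021 code): each containerised module is sized from its own traffic, so only the module under load grows.

    def desired_replicas(queue_depth, per_replica_capacity=100, floor=1, ceiling=20):
        """Size one containerised module from its own backlog, independent of the rest."""
        needed = -(-queue_depth // per_replica_capacity)   # ceiling division
        return max(floor, min(ceiling, needed))

    # Only the module under load grows; the others stay small.
    traffic = {"ingest": 1450, "rules-engine": 90, "dashboard": 10}
    for service, depth in traffic.items():
        print(service, "->", desired_replicas(depth), "replicas")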

This platform architecture can be implemented on a cloud infrastructure reusing legacy hardware or over commodity computing infrastructure.

Any smart city deployment of an IoT platform demands a fail-safe, highly available setup. As a result, the computing infrastructure has to be clustered (grouping similar functional units/modules within the software system). With the surge in the number of clusters and containers for each functional module, managing such disparate clustered environments becomes a challenge. Technologies such as Kubernetes and Mesos address these challenges.

Mesos and Kubernetes enable efficient management, scaling and optimization of containers, microservices and APIs exposed as PaaS or SaaS over cloud infrastructure, thereby fulfilling on-the-fly auto-scaling demands from service consumers.

Pacific Controls' Galaxy2021 platform, built using open source technologies, has adopted most of the above-mentioned technologies and best practices. This forms a unique value proposition, enabling early adoption of the latest technology innovations in the open source world related to IoT or cloud computing. The Galaxy2021 platform is horizontally scalable and capable of managing the disparate IoT applications of various stakeholders. It can handle high-volume data communication originating at high frequency from the various sensors, devices and gateways installed across smart city assets.

The Galaxy2021 Platform has been deployed and is available on different cloud infrastructures in public, private and hybrid models, catering to customers ranging from governments and utility companies to OEMs across the Middle East, US and Oceania.




          How to create a Smart City Strategy and its Implementation?    

Smart city strategy and implementation evolve with the technological advancements achieved across a rapidly developing global spectrum of research and development in Machine-to-Machine (M2M) communication, the Internet of Things (IoT) and, moving on, the Internet of Everything (IoE).

Various online resources clearly indicate the direction a technology company needs to take if it wants to be in the league of the highly lucrative IoE domain, where economies of scale are a prime index for determining return on investment.  At the end of the day, the initiatives undertaken by pioneers in the IoE spectrum are meant to ensure that they benefit commercially at a global scale, gaining a first mover's advantage. As is the case for many internationally acclaimed corporates, IoE is the next phase they foresee for rapid growth and diversification of business with a global footprint. 

Companies must embrace the latest and evolving technologies accessible to them in order to remain competitive in the coming years. Enhanced hybrid internet connectivity, redundant and highly available big data solutions, robust platforms running on cloud computing, and integration with wide-reach social media may be identified as the major forces ensuring competitive capability.

As per statistics published by Gartner, and in comparison with 2009, the total number of IoT devices globally is estimated to reach 26 billion by the year 2020.  This directly reflects an exponential surge in smart devices and accounts for the need for highly available, reliable infrastructure and bandwidth to handle IoT growth.

The key stakeholder in smart city strategy formulation and implementation is the end user, who is the focus when designing smart city platforms: appealing user interfaces must present complex data structures, analyzed through intelligent analytic tools, on dashboards that are easy to use and interpret. Collating all sub-systems in a city-level deployment of a smart city ecosystem is an evolution of large-scale Information and Communication Technology (ICT) solutions catering to the various needs and functionalities that demand enhanced efficiency and sustainable resource utilization, while remaining readily usable by each and every resident of the city.
The happiness index is a key measure and development target for policy makers who undertake the strategic decisions of a sustainable city-level development initiative, as highlighted in the World Happiness Report 2013. 

On March 5th, 2014, Dubai formulated a strong, far-sighted strategy with tangible milestones to transform itself into a “Smart City”. Considering that more than 180 nationalities live in Dubai, the transformation to adopt the latest technologies directly carries a huge impact in enhancing the living standards of the people residing there. The role of the smart city initiative and its related developments, keeping in view Expo 2020, is vital to future developments in the region.  The introduction of ministries for happiness, tolerance and the future is a clear indication of the direction taken to ensure that Dubai's milestones and vision are met. A three-year timeline was set by the government to keep the Dubai Smart City project on track, following which Dubai would be the world's smartest city by the year 2017.

Pacific Controls has pioneered the artificial intelligence framework developed for the virtualization of managed services and for the delivery of real-time business intelligence with “Galaxy 2021”, its Internet of Things platform. This groundbreaking technology from Pacific Controls offers the world the opportunity to leverage the ubiquitous Galaxy 2021 platform for IoT infrastructure and smart city management applications. Pacific Controls' Galaxy2021 is the world's first enterprise platform delivering city-centric services for the management of an ecosystem comprising Agriculture, Airports and Aviation, Education, Healthcare, Government, Energy, Financial Services, Hospitality, Manufacturing, Ground Transport, Logistics, Marine Ports, Oil and Gas, and Residential.


For more details, visit: http://pacificcontrols.net/





           The moment you uploaded pictures on Facebook, you started Cloud Computing   



We often hear statements like “we are moving to the cloud”, “in the cloud”, “planning for the cloud”: cloud... cloud... cloud.  So what is this cloud?

Does it mean a person has to work from a mountain with a computer along with a moving cloud? And what happens if it rains? Is your data going to be stored somewhere far away, maybe on a different planet? This might seem funny, but questions like these are frequently asked about the cloud.

The cloud sounds like magic, but it's really not; it's quite simple. It is among the fastest-changing technologies shaping the economy. Put simply, the cloud is technology that lets you store and access your data from anywhere using the internet, making life simpler and data more accessible.

For example, people all over the globe are already using cloud computing, often unknowingly, by uploading pictures to Facebook, OneDrive, Picasa or Flickr, where the photos can be accessed at any given time and are secured by a trusted service provider. As soon as a photo is uploaded, it lives in the cloud and can be accessed from anywhere in the world over the internet, from a laptop, desktop, phone or any other device.
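As a minimal sketch of what a photo “living in the cloud” means in code, here is an upload to an object store over the internet; Amazon S3 via boto3 is used purely as an example, and the bucket and file names are placeholders:

    import boto3

    s3 = boto3.client("s3")
    # Push a local photo into the cloud...
    s3.upload_file("holiday.jpg", "my-photo-bucket", "albums/2017/holiday.jpg")
    # ...and fetch the same object later, from any device with credentials.
    s3.download_file("my-photo-bucket", "albums/2017/holiday.jpg", "holiday-copy.jpg")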

According to the latest research reports covering cloud computing products, technologies and services, the global cloud computing market is expected to grow at a 30% CAGR, reaching $270 billion by 2020. The cloud is cost effective and can also reduce the burden of administering hardware and software services.

Companies aim to be efficient and profitable. Business applications, which are usually very expensive, bring with them a universe of complexity. With so much complexity and data flooding in, they require a data center with maximum security, storage, cooling and power, along with staging, testing and production environments as well as failover and disaster recovery features. Instead of purchasing applications outright, companies can have them hosted in a shared environment, accessible just like any other utility service.

With the cloud, you have a gold mine of new business streams and modules waiting to be explored; you just need to know how and when. Pacific Controls Cloud Services, the Middle East's largest data center operator, provides its cloud services from a structured, reliable infrastructure: a TIER III certified green data center. A reliable cloud service provider for your business. 

No matter what data it is, personal or business, it’s yours and it’s important. The service provider has to be reliable and efficient to take care of it.

Pacific Controls Cloud Services offers security, reliability, efficiency and data integrity coupled with immediate scalability.

Contact us for Cloud consultation -
Sales
800 XaaS (9227)
Email










          Cloud computing is as important to the 21st century as the telephone was to the 20th   

Meet Alistair Graham, Sr. Solution Architect at Pacific Controls Cloud Services

“For most organisations, digital transformation is no longer a question of if, it's when. Cloud computing is as important to the 21st century as the telephone was to the 20th. It revolutionizes the way we do business, in an era when practically everyone is exposed to data security risk at some point, anytime, anywhere, from any internet-enabled device. Pacific Controls Cloud Services (PCCS) has grown and matured; its portfolio is easy to implement and use. The revenue gains, efficiency increases and satisfaction spikes that cloud and IoT make possible are too compelling for companies to ignore.

In my role here at PCCS, I use core tested principles, methodologies and team expertise to help organisations decide what to do and how to do it securely across cloud, network, physical and virtual systems, utilizing multiple security industry solutions.

PCCS provides me with innovative experience in provisioning security services that hold the prospect of improving the overall security of many organisations when they subscribe. At the same time, PCCS offers advanced facilities for supporting security, privacy and data leakage prevention, owing to its extensive economies of scale and automation capabilities across its datacenter infrastructure.

The journey to the cloud need not be arduous: PCCS helps companies rethink, reorganize and ultimately realize the full value that can be obtained from the cloud, helping them into position for sustainable, profitable growth.”

Know more about Pacific Controls Cloud Services at - http://bit.ly/1GzMkNS








          We are looking for a Developer for the R&D area, Service Management, Budapest. | Responsi...   
We are looking for a Developer for the R&D area, Service Management, Budapest. | Responsibilities: Active participation in the development process: specification, design, implementation, developer testing, debugging • Preparing development documentation • Active participation in agile development teamwork (SCRUM methodology, technical consultation). | Requirements: At least 2 years of hands-on experience with the C# programming language • Intermediate, active English skills, spoken and written • Knowledge of ASP.NET MVC • Knowledge of an ORM, e.g. Entity Framework • Cloud computing experience, e.g. Microsoft Azure •  Advantages: MS SQL database knowledge • Web design experience • Knowledge of agile development methodologies (SCRUM) • Familiarity with version control systems, e.g. TFS | More info and applications here: www.profession.hu/allas/1034292
          Centre for Development of Advanced Computing Pune Recruitment 2017   
Centre for Development of Advanced Computing (CDAC) Pune invites applications for Project Engineer vacancies. CDAC Pune Recruitment Vacancy & Salary Details 2017. CDAC Pune Recruitment Eligibility Criteria 2017: Project Engineer (Medical Informatics), Doctor (MBBS). Educational Qualification: First Class in an MBBS Degree of an Indian University recognized by the Medical Council of
          We Italians: Unwitting Consumers of Cloud Computing   
It may be that many of us, when cloud computing comes up, exclaim “ah, I know exactly what that is”. Yet many others may well reply “cloud computing? What's that, something good to eat?”. A study by Nextplora, commissioned by Microsoft Italia and called “Osservatorio Internet 2011”, came to the conclusion that the […]
          Running an alt.business: being a good cause and doing good business @XRDS_acm @adafruit   
XRDS Crossroads – The ACM Magazine for Students (ACM – Association for Computing Machinery). “The summer issue of XRDS takes a candid look at startups and entrepreneurship. Whether you’re just dipping your toe in the waters or you’re ready to dive in, we’ve assembled a selection of articles to explore the unlimited opportunities, the challenges, […]
          Desktop Support Technician - Tilton - J.Jill - Tilton, NH   
Provides J.Jill users support for technical issues related to the operation of their computing hardware (Macintosh, PC, Thin Client and mobile devices), software (Core...
From J.Jill - Mon, 05 Jun 2017 16:08:00 GMT - View all Tilton, NH jobs
          CHOReVOLUTION Platform at Open Cloud Forum Paris 2017   

Nikolaos Georgantas, INRIA Research Scientist, presents the CHOReVOLUTION platform during the Open Cloud Forum by OW2, in parallel with Cloud Computing World Expo, 22 March 2017, in Paris.
          CHOReVOLUTION at Open Cloud Forum Paris   

Sébastien Keller presents CHOReVOLUTION project at Open Cloud Forum by OW2, in parallel with Cloud Computing World Expo, 23 March 2016, Paris, Porte de Versailles.
          House Bill 638 Printer's Number 675   
An Act amending the act of June 3, 1937 (P.L.1333, No.320), known as the Pennsylvania Election Code, in district election officers, further providing for election officers to be sworn; in dates of elections and primaries and special elections, further providing for affidavits of candidates; in nomination of candidates, further providing for petition may consist of several sheets and affidavit of circulator, for affidavits of candidates, for examination of nomination petitions, certificates and papers and return of rejected nomination petitions, certificates and papers, for vacancy in party nomination by failure to pay filing fee or for failure to file loyalty oath, for affidavits of candidates, for filling of certain vacancies in public office by means of nomination certificates and nomination papers and for substituted nominations to fill certain vacancies for a November election; in ballots, further providing for form and printing of ballots; in returns of primaries and elections, further providing for manner of computing irregular ballots; and replacing references to "justice of the peace" with "magisterial district judge."...
          * The iPhone, Xerox PARC, and the IBM PC compatible *   
Ars has started a series on the advent of the IBM PC, and today they published part one. The machine that would become known as the real IBM PC begins, of all places, at Atari. Apparently feeling their oats in the wake of the Atari VCS' sudden Space Invaders-driven explosion in popularity and the release of its own first PCs, the Atari 400 and 800, they made a proposal to IBM's chairman Frank Cary in July of 1980: if IBM wished to have a PC of its own, Atari would deign to build it for them. Fascinating history of the most influential computing platform in history, a statement that will surely ruffle a lot of feathers. The IBM PC compatible put a computer on every desk and in every home, and managed to convince hundreds of millions of people of the need of a computer - no small feat in a world where a computer was anything but a normal household item. In turn, this widespread adoption of the IBM PC compatible platform paved the way for the internet to become a success. With yesterday's ten year anniversary of the original iPhone going on sale, a number of people understandably went for the hyperbole, such as proclaiming the iPhone the most important computer in history, or, and I wish I was making this up, claiming the development of the iPhone was more important to the world than the work at Xerox PARC - and since this was apparently a competition, John Gruber decided to exaggerate the claim even more. There's no denying the iPhone has had a huge impact on the world, and that the engineers at Apple deserve all the credit and praise they're getting for delivering an amazing product that created a whole new category overnight. However, there is a distinct difference between what the iPhone achieved, and what the people at Xerox PARC did, or what IBM and Microsoft did. The men and women at PARC literally invented and implemented the graphical user interface, bitmap graphics, Ethernet, laser printing, object-oriented programming, the concept of MVC, the personal computer (networked together!), and so much more - and all this in an era when computers were gigantic mainframes and home computing didn't exist. As for the IBM PC compatible and Wintel - while nowhere near the level of PARC, it did have a profound and huge impact on the world that in my view is far greater than that of the iPhone. People always scoff at IBM and Microsoft when it comes to PCs and DOS/Windows, but they did put a computer on every desk and in every home, at affordable prices, on a relatively open and compatible platform (especially compared to what came before). From the most overpaid CEO down to the most underpaid dock worker - everybody could eventually afford a PC, paving the way for the internet to become as popular and ubiquitous as it is. The iPhone is a hugely important milestone and did indeed have a huge impact on the world - but developing and marketing an amazing and one-of-a-kind smartphone in a world where computing was ubiquitous, where everybody had a mobile phone, and where PDAs existed, is nowhere near the level of extraordinary vision and starting-with-literally-nothing that the people at PARC had, and certainly not as impactful as the rise of the IBM PC compatible and Wintel. It's fine to be celebratory on the iPhone's birthday - Apple and its engineers deserve it - but let's keep at least one foot planted in reality. Read more on this exclusive OSNews article...
          Overclocker Claims Intel X299 VRM Temps are a ‘Disaster’ With Skylake-X   

According to one prominent member of the overclocking community, the VRM cooling on X299 motherboards isn't good enough to keep the boards stable -- and no overclocker should buy a board with just one 8-pin CPU power connector.

The post Overclocker Claims Intel X299 VRM Temps are a ‘Disaster’ With Skylake-X appeared first on ExtremeTech.


          Microsoft ‘Autopilot’ Management Tools to Debut in Fall Creators Update   

Microsoft is adding a new set of Autopilot deployment tools and security systems to the Fall Creators Update.

The post Microsoft ‘Autopilot’ Management Tools to Debut in Fall Creators Update appeared first on ExtremeTech.


          Nearly 25 Percent of Windows Users Will Switch to Mac Within 6 Months: Survey   

A new survey is claiming that up to 25 percent of the PC market could switch to Apple within the next 6-24 months, but data from independent analysis firms doesn't back up that conclusion.

The post Nearly 25 Percent of Windows Users Will Switch to Mac Within 6 Months: Survey appeared first on ExtremeTech.


          Windows 10 Has Halved Data Collection: Privacy Watchdog   

French data watchdog CNIL has closed its investigation of Microsoft and Windows 10 after updates to the OS cut its telemetry gathering by 50 percent.

The post Windows 10 Has Halved Data Collection: Privacy Watchdog appeared first on ExtremeTech.


          New CIA Leak Reveals Tool That Can Track Computers via Wi-Fi   

The latest CIA tool revealed online is rather straightforward -- malware that tracks a device's physical location. However, it doesn't need GPS, just Wi-Fi.

The post New CIA Leak Reveals Tool That Can Track Computers via Wi-Fi appeared first on ExtremeTech.


          AMD Unveils Ryzen Pro CPUs, Details on Ryzen 3 Chips   

AMD is refreshing its Pro lineup of desktop CPUs for business-class systems. It's also shared some data on what we can expect from the Ryzen 3 family when those chips debut later this year.

The post AMD Unveils Ryzen Pro CPUs, Details on Ryzen 3 Chips appeared first on ExtremeTech.


          FrOSCon 2013, or, why is there no MirBSD exhibit?   

FrOSCon is approaching, and all MirBSD developers will attend… but why’s there no MirBSD exhibit? The answer to that is a bit complex. First let’s state that of course we will participate in the event as well as the Open Source world. We’ll also be geocaching around the campus with other interested (mostly OSS) people (including those we won for this sport) and helping out other OSS projects we’ve become attached to.

MirOS BSD, the operating system, is a niche system. The conference on the other hand got “younger” and more mainstream. This means that almost all conference visitors do not belong to the target group of MirOS BSD which somewhat is an “ancient solution”: the most classical BSD around (NetBSD® loses because they have rc.d and PAM and lack sendmail(8), sorry guys, your attempt at being not reformable doesn’t count) and running on restricted hardware (such as my 486SLC with 12 MiB RAM) and exots (SPARCstation). It’s viable even as developer workstation (if your hardware is supported… otherwise just virtualise it) but its strength lies with SPARC support and “embedded x86”. And being run as virtual machine: we’re reportedly more stable and more performant than OpenBSD. MirBSD is not cut off from modern development and occasionally takes a questionable but justified choice (such as using 16-bit Unicode internally) or a weird-looking but beneficial one (such as OPTU encoding saving us locale(1) hassles) or even acts as technological pioneer (64-bit time_t on ILP32 platforms) or, at least, is faster than OpenBSD (newer GNU toolchain, things like that), but usually more conservatively, and yes, this is by design, not by lack of manpower, most of the time.

The MirPorts Framework, while technically superiour in enough places, is something that just cannot happen without manpower. I (tg@) am still using it exclusively, continuing to update ports I use and occasionally creating new ones (mupdf is in the works!), but it’s not something I’d recommend someone (other than an Mac OSX user) to use on a nōn-MirBSD system (Interix is not exactly thriving either, and the Interix support was only begun; other OSes are not widely tested).

The MirBSD Korn Shell is probably the one thing I will be remembered for. But I have absolutely no idea how one would present it on a booth at such an exhibition. A talk is much more likely. So no on that front too.

jupp, the editor which sucks less, is probably something that does deserve mainstream interest (especially considering Natureshadow is using it while teaching computing to kids) but probably more in a workshop setting. And booth space is precious enough in the FH so I think that’d be unfair.

All the other subprojects and side projects Benny and I have, such as mirₘᵢₙcⒺ, josef stalin, FreeWRT, Lunix Ewe, Shellsnippets, the fonts, etc. are interesting but share few, if any, common ground. Again, this does not match the vast majority of visitors. While we probably should push a number of these more, but a booth isn’t “it” here, either.

MirOS Linux (“MirLinux”) and MirOS Windows are, despite otherwise-saying rumours called W*k*p*d*a, only premature ideas that will not really be worked on (though MirLinux concepts are found in mirₘᵢₙcⒺ and stalin).

As you can see, despite all developers having full-time dayjobs, The MirOS Project is far from being obsolete. We hope that our website visitors understand our reasons to not have an exhibition booth of our own (even if the SPARCstation makes for a way cool one, it’s too heavy to lift all the time), and would like to point out that there are several other booths (commercial ones, as well as OSS ones such as AllBSD, Debian and (talking to) others) and other itineries we participate in. This year both Benny and I have been roped into helping out the conference itself, too (not exactly unvoluntarily though).

The best way to talk to us is IRC during regular European “geek” hours (i.e. until way too late into the night – which Americans should benefit from), semi-synchronously, or mailing lists. We sort of expect you to not be afraid to RTFM and look up acronyms you don’t understand; The MirOS Project is not unfriendly but definitely not suited for your proverbial Aunt Tilly, newbies, “desktop” users, and people who aren’t at least somewhat capable of using written English (this is by design).


          AetherStore, The Software-Defined Storage Solution by AetherWorks, Exceeds 10,000 Users in 100 Countries   
Since the launch just over two months ago, AetherStore has quickly been adopted throughout the world, providing reliable, onsite, cloud-like backup using only software NEW YORK, NY – June 28, 2017 — /BackupReview.info/ — AetherWorks, a software research and venture development firm based in New York, announces today that it has reached over 10,000 users [...] Related posts:
  1. Leonovus Launches Software-Defined Object Storage Solution Solving Security and Compliance Requirements for Enterprise Cloud Storage
  2. RingStor Launches New Release of Versatile Software Defined Storage Solution Incorporating Multiplatform, Multi-Repository, Traditional Backup and Recovery Capability
  3. StoneFly Incorporates HGST FlashMAX Enterprise-Class SSD Storage in its USS Hyper-Converged Appliances for Ultimate Software-Defined Virtual Computing Solution
  4. HP Makes Software-defined Storage Available to the World
  5. First Fully Integrated Software-Defined Data Center Solution Now Available From The EMC Federation Of Businesses

          Specifics of Using Google Cloud Functions   

Today we'll talk about using Google Cloud Functions. A couple of days ago I decided to try the serverless computing approach, and today I'm ready to share my experience with you. The task I wanted to solve was building my own database of the courses that this blog offers alongside each article. At the moment, whenever an individual user visits […]
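For a sense of the serverless shape the author describes, here is a minimal HTTP-triggered Cloud Function sketch; the handler name and deployment command are illustrative, and the Python runtime shown here postdates the original post (Cloud Functions was Node.js-only at the time), so treat this purely as an assumption-laden illustration.

    # main.py: a minimal HTTP-triggered Cloud Function (Python runtime).
    # Deployed with, e.g.: gcloud functions deploy collect_courses --trigger-http
    def collect_courses(request):
        """The runtime passes a Flask request; the return value becomes the HTTP response."""
        article = request.args.get("article", "unknown")
        # ...here one would look up the courses linked from this article and store them...
        return f"collected courses for article {article}\n"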

The post Specifics of Using Google Cloud Functions appeared first on Dev-Ops-Notes.RU.


          Machine Learning’s Mediocre Gains   
(Bloomberg) Hedge funds using vast amounts of data, computing power, and machine-learning techniques to make money are drawing investors’ attention. But their brief track records show they suffer the same shortcomings as their more traditional peers. The Eurekahedge AI Hedge […]
          What is Your Best Approach to Decision Making?   

Thanks to computer technology, software and apps, more and more companies rely on big data to drive their business models. Leaders develop strategies using information compiled and analyzed by computers. Despite all of these advances, there still needs to be a human element behind decision-making in corporations. Experts touted by Harvard Business Review detail the best approach when figuring out how to move forward with a particular strategy.

Elements That Come Together

Three main elements come together when decision-making happens with all of these technology tools in the workplace. First, computers must compile the raw information. Sales figures, revenue, time to market, overhead, supply costs and labor expenses are all raw figures that computers work well with since those machines are great at doing math. Second, a program must analyze the numbers and organize them into usable information. This is when analytics software pays for itself.

The third aspect is that a human must employ decision-making to know what to do with the data. The program created a beautiful chart showing how price affects revenue over time, but what do company leaders do with that information? Business leaders must understand how the software compiles the information and what the numbers mean. People who can't figure out how to handle the data suffer from "big data, little brain," according to HBR.

How to Mitigate the Problem

Experts believe the best way to alleviate the problems that come from relying too heavily on data is for business leaders to go with their gut and experience every once in a while. Finding this balance doesn't mean eschewing information altogether; it's more about knowing which data to pay attention to when decision-making comes into play.

Ironically, there is a way to figure this out using a different kind of computer program. An agent-based simulation, or ABS, examines the behavior of humans, crunches the numbers and then makes a predictive model of what may happen. The U.S. Navy developed ABS in the 1970s when it examined the everyday behavior of 30,000 sailors. This type of program is gaining more widespread use due to better computing power.

There is a ton of information that computers must take into account to make these predictive models. When ABS first started, simulations ran for a year before getting results. In 2017, that timeframe is down to one day by comparison.

ABS uses information from the analytics software and applies it to customer behavior models and decision-making to predict what may happen in the future. This type of program answers what happens when a company changes prices of products, if a competitor adapts to market forces in a certain way and what customers may do when you change a product.
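As a toy illustration of the idea (all numbers invented, and drastically simplified: a real ABS models interactions between agents over time): each simulated customer is an agent with an individual willingness to pay, and the simulation asks how demand and revenue respond to a price change.

    import random

    random.seed(42)
    # 10,000 agents, each with an individual willingness-to-pay threshold.
    customers = [random.uniform(5, 25) for _ in range(10_000)]

    def buyers(price):
        """Count the agents whose threshold meets the price."""
        return sum(1 for wtp in customers if wtp >= price)

    for price in (10, 12, 15):
        n = buyers(price)
        print(f"price {price}: {n} buyers, revenue {price * n}")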

ABS can't predict everything, but it does take into account human expertise. ABS, like any analytics software, is only as good as the data it collects. It makes decisions more transparent because it supports the notion that if a company moves in one direction, then a certain scenario is likely to happen. You must remember to take a risk no matter what path you're on.

Decision-making shouldn't be all data or all going with your guts. However, data gathering certainly makes it easier thanks to the technological tools available to businesses.


Photo courtesy of Stuart Miles at FreeDigitalPhotos.net


          Space-Filling Curves: An Introduction with Applications in Scientific Computing   

          Scientific Computing with MATLAB and Octave   

          Scientific Computing: An Introduction using Maple and MATLAB   

          Scientific Computing with MATLAB   

          Problems & Solutions in Scientific Computing with C++ and Java Simulations   

          Scientific Parallel Computing   

          Accuracy and Reliability in Scientific Computing   

          ONLINE INTERNET POLICE   
Internet police is a generic term for police and secret police departments and other organizations in charge of policing Internet in a number of countries. The major purposes of Internet police, depending on the state, are fighting cybercrime, as well as censorship, propaganda, and monitoring and manipulating the online public opinion.

It has been reported that in 2005, departments of provincial and municipal governments in mainland China began creating teams of Internet commentators from propaganda and police departments and offering them classes in Marxism, propaganda techniques, and the Internet. They are reported to guide discussion on public bulletin boards away from politically sensitive topics by posting opinions anonymously or under false names. "They are actually hiring staff to curse online", said Liu Di, a Chinese student who was arrested for posting her comments in blogs.
Chinese Internet police also erase anti-Communist comments and post pro-government messages. Chinese Communist Party leader Hu Jintao has declared the party's intent to strengthen administration of the online environment and maintain the initiative in online opinion.

The Computer Emergency Response Team of Estonia (CERT Estonia), established in 2006, is an organisation responsible for the management of security incidents in .ee computer networks. Its task is to assist Estonian Internet users in the implementation of preventive measures in order to reduce possible damage from security incidents and to help them in responding to security threats. CERT Estonia deals with security incidents that occur in Estonian networks, are started there, or have been notified of by citizens or institutions either in Estonia or abroad.
Cyber Crime Investigation Cell is a wing of Mumbai Police, India, to deal with Cyber crimes, and to enforce provisions of India's Information Technology Law, namely, Information Technology Act 2000, and various cyber crime related provisions of criminal laws, including the Indian Penal Code. Cyber Crime Investigation Cell is a part of Crime Branch, Criminal Investigation Department of the Mumbai Police.
Andhra Pradesh Cyber Crime Investigation Cell is a wing of Hyderabad Police, India, to deal with Cyber crimes.

Dutch police were reported to have set up an Internet Brigade to fight cybercrime. It will be allowed to infiltrate Internet newsgroups and discussion forums for intelligence gathering, to make pseudo-purchases and to provide services.
After the 2006 coup in Thailand, the Thai police has been active in monitoring and silencing dissidents online. Censorship of the Internet is carried out by the Ministry of Information and Communications Technology of Thailand and the Royal Thai Police, in collaboration with the Communications Authority of Thailand and the Telecommunication Authority of Thailand.

The Internet Watch Foundation (IWF) is the only recognised organisation in the United Kingdom operating an Internet ‘Hotline’ for the public and IT professionals to report their exposure to potentially illegal content online. It works in partnership with the police, Government, the public, Internet service providers and the wider online industry.

The Internet Crime Complaint Center, also known as IC3, is a multi-agency task force made up by the Federal Bureau of Investigation (FBI), the National White Collar Crime Center (NW3C), and the Bureau of Justice Assistance (BJA).

IC3's purpose is to serve as a central hub to receive, develop, and refer criminal complaints regarding the rapidly expanding occurrences of cyber-crime. The IC3 gives the victims of cybercrime a convenient and easy-to-use reporting mechanism that alerts authorities of suspected criminal or civil violations on the internet. IC3 develops leads and notifies law enforcement and regulatory agencies at the federal, state, local and international levels, and acts as a central referral mechanism for complaints involving Internet-related crimes.

Criminal threatening is the crime of intentionally or knowingly putting another person in fear of imminent bodily injury.

There is no legal definition in English law as to what constitutes criminal threatening behaviour, so it is up to the courts to decide on a case-by-case basis. However, if somebody threatens violence against somebody else, this may be a criminal offence. In most countries it is only an offence if it can be proven that the person had the intention and equipment to carry out the threat. However, if the threat involves the mention of a bomb, it is automatically a crime.
In most U.S. jurisdictions, the crime remains a misdemeanor unless a deadly weapon is involved or actual violence is committed, in which case it is usually considered a felony.

Criminal threatening can be the result of verbal threats of violence, physical conduct (such as hand gestures or raised fists), actual physical contact, or even simply the placing of an object or graffiti on the property of another person with the purpose of coercing or terrorizing.

Criminal threatening is also defined by arson, vandalism, the delivery of noxious biological or chemical substances (or any substance that appears to be a toxic substance), or any other crime against the property of another person with the purpose of coercing or terrorizing any person in reckless disregard for causing fear, terror or inconvenience.

"Terrorizing" generally means to cause alarm, fright, or dread in another person or inducing apprehension of violence from a hostile or threatening event, person or object.

Crimint is a database run by the Metropolitan Police Service of Greater London which stores information on criminals, suspected criminals and protestors. It was created in 1994 and supplied by Memex Technology Limited. It supports the recording and searching of items of intelligence by both police officers and back office staff. As of 2005 it contained seven million information reports and 250,000 intelligence records. The database makes it much easier for police officers to find information on people, as one officer who used the system stated in 1996:

"With Crimint we are in a new world. I was recently asked if I knew something about a certain car. In the old days I would have had to hunt through my cards. I would probably have said, 'Yes, I do, but . . . '. With Crimint I was able to answer the question in about fifteen seconds. And with Crimint things just don't go missing.
People are able to request their information from the database under data protection laws. Requests have shown that the database holds large amounts of information on protesters who have not committed any crimes. Information is stored for at least seven years. Holding information on people who have never committed any offence may be against people's human rights. A police officer, Amerdeep Johal, used the database to contact sex offenders and threatened to disclose information about them from the database unless they paid him thousands of pounds.

Along with the development of the Internet, state authorities in many parts of the world are moving to install mass surveillance of electronic communications, establish Internet censorship to limit the flow of information, and persecute individuals and groups who express “inconvenient” political views on the Internet. Many cyber-dissidents have found themselves persecuted for attempts to bypass state-controlled news media. Reporters Without Borders has released a Handbook For Bloggers and Cyber-Dissidents and maintains a roster of currently imprisoned cyber-dissidents.

Chinese Communist Party leader Hu Jintao ordered officials to "maintain the initiative in opinion on the Internet and raise the level of guidance online. An internet police force - reportedly numbering 30,000 - trawls websites and chat rooms, erasing anti-Communist comments and posting pro-government messages." However, the number of Internet police personnel has been disputed by Chinese authorities. Amnesty International has blamed several companies, including Google, Microsoft and Yahoo!, for collusion with the Chinese authorities to restrict access to information over the Internet and to identify cyber-dissidents, including through the hiring of "big mamas".
It was reported that departments of provincial and municipal governments in mainland China began creating "teams of internet commentators, whose job is to guide discussion on public bulletin boards away from politically sensitive topics by posting opinions anonymously or under false names" in 2005. Applicants for the job were drawn mostly from the propaganda and police departments. Successful candidates have been offered classes in Marxism, propaganda techniques, and the Internet. "They are actually hiring staff to curse online," said Liu Di, a Chinese student who was arrested for posting her comments in blogs.

Internet censorship is control or suppression of the publishing or accessing of information on the Internet. The legal issues are similar to those of offline censorship.
One difference is that national borders are more permeable online: residents of a country that bans certain information can find it on websites hosted outside the country. A government can try to prevent its citizens from viewing these sites even if it has no control over the websites themselves. Filtering can be based on a blacklist or be dynamic. In the case of a blacklist, that list is usually not published. The list may be produced manually or automatically.

Barring total control over Internet-connected computers, such as in North Korea, total censorship of information on the Internet is very difficult (or impossible) to achieve due to the underlying distributed technology of the Internet. Pseudonymity and data havens (such as Freenet) allow unconditional free speech, as the technology guarantees that material cannot be removed and the author of any information is impossible to link to a physical identity or organization.

In some cases, Internet censorship may involve deceit. In such cases the censoring authority may block content while leading the public to believe that censorship has not been applied. This may be done by having the ISP provide a fake "Not Found" error message upon the request of an Internet page that is actually found but blocked (see 404 error for details).

In November 2007, "Father of the Internet" Vint Cerf stated that he sees Government-led control of the Internet failing due to private ownership. Many internet experts use the term "splinternet" to describe some of the effects of national firewalls.

Some commonly used methods for censoring content are:

IP blocking. Access to a certain IP address is denied. If the target Web site is hosted on a shared hosting server, all websites on the same server will be blocked. This affects IP-based protocols such as HTTP, FTP and POP. A typical circumvention method is to find proxies that have access to the target websites, but proxies may be jammed or blocked, and some Web sites, such as Wikipedia (when editing), also block proxies. Some large websites like Google have allocated additional IP addresses to circumvent the block, but the block was later extended to cover the new addresses.

DNS filtering and redirection. Domain names are not resolved, or incorrect IP addresses are returned. This affects all IP-based protocols such as HTTP, FTP and POP. A typical circumvention method is to find a domain name server that resolves domain names correctly, but domain name servers are subject to blocking as well, especially IP blocking. Another workaround is to bypass DNS if the IP address is obtainable from other sources and is not itself blocked. Examples are modifying the Hosts file or typing the IP address instead of the domain name into a Web browser.
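
To make the DNS-bypass workaround concrete (a minimal sketch only; the IP address and hostname below are hypothetical placeholders, and plain HTTP on port 80 is assumed), the following Python snippet fetches a page by connecting straight to a known IP and supplying the Host header itself, so no DNS lookup ever happens:

import socket

# Hypothetical values: an address obtained out of band (e.g. from a contact
# outside the filtered network) and the blocked domain name.
ip = "203.0.113.10"
host = "example.org"

# Connect directly to the IP, bypassing DNS entirely, and send the Host
# header manually so the web server still knows which site is wanted.
sock = socket.create_connection((ip, 80), timeout=10)
request = (
    "GET / HTTP/1.1\r\n"
    "Host: " + host + "\r\n"
    "Connection: close\r\n\r\n"
)
sock.sendall(request.encode("ascii"))
print(sock.recv(4096).decode("utf-8", errors="replace"))  # first 4 KB of the reply
sock.close()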

Uniform Resource Locator (URL) filtering. The requested URL string is scanned for target keywords regardless of the domain name specified in the URL. This affects the HTTP protocol. Typical circumvention methods are to use escaped characters in the URL, or to use encrypted protocols such as VPN and TLS/SSL.
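
To make the escaping trick concrete (a sketch only, assuming a naive filter that matches raw keyword strings; modern filters usually normalize URLs before matching), percent-encoding every byte of a keyword changes the literal string that travels over the wire:

from urllib.parse import unquote

# Force-encode every character of a keyword; a filter matching the raw
# string "forbidden" will not see it in the escaped URL.
keyword = "forbidden"
escaped = "".join("%%%02X" % ord(c) for c in keyword)
print(escaped)           # %66%6F%72%62%69%64%64%65%6E
print(unquote(escaped))  # the web server decodes it back to "forbidden"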

Packet filtering. TCP packet transmissions are terminated when a certain number of controversial keywords are detected. This affects all TCP-based protocols such as HTTP, FTP and POP, but search engine results pages are more likely to be censored. Typical circumvention methods are to use encrypted connections - such as VPN and TLS/SSL - so the HTML content escapes the filter, or to reduce the TCP/IP stack's MTU/MSS so that less text is contained in any given packet.
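
On Linux, for example, the MTU reduction can be done with a single command (the interface name eth0 and the value 576 are illustrative assumptions, not recommendations):

ip link set dev eth0 mtu 576

With a smaller MTU each TCP segment carries less text, so a multi-byte keyword is more likely to be split across packet boundaries and missed by a single-packet keyword matcher.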

Connection reset. If a previous TCP connection is blocked by the filter, future connection attempts from both sides will also be blocked for up to 30 minutes. Depending on the location of the block, other users or websites may also be blocked if the communication is routed to the location of the block. A circumvention method is to ignore the reset packet sent by the firewall.
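
On Linux, ignoring forged resets amounts to dropping inbound TCP RST packets before the kernel processes them; this is a blunt instrument, sketched here only to show the idea, since it also discards legitimate resets:

iptables -A INPUT -p tcp --tcp-flags RST RST -j DROP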

Reverse surveillance. Computers accessing certain websites including Google are automatically exposed to reverse scanning from the ISP in an apparent attempt to extract further information from the "offending" system.

One of the most popular filtering software programmes is SmartFilter, owned by Secure Computing in California, which has recently been bought by McAfee. SmartFilter has been used by Tunisia, Saudi Arabia and Sudan, as well as in the US and the UK.

There are a number of resources that allow users to bypass the technical aspects of Internet censorship. Each solution has differing ease of use, speed, and security from other options. Most, however, rely on gaining access to an internet connection that is not subject to filtering, often in a different jurisdiction not subject to the same censorship laws. This is an inherent problem in internet censorship in that so long as there is one publicly accessible system in the world without censorship, it will still be possible to have access to censored material.

Proxy websites are often the simplest and fastest way to access banned websites in censored nations. Such websites work by being themselves un-banned while capable of displaying banned material within them. This is usually accomplished by entering a URL which the proxy website will fetch and display. Providers of such services recommend using the https protocol, since it is encrypted and harder to block.

Java Anon Proxy is primarily a strong, free and open-source anonymizer available for all operating systems. Since 2004 it has also included a blocking-resistance function that allows users to circumvent the blocking of the underlying anonymity service AN.ON by accessing it via other users of the software (forwarding clients).

The addresses of JAP users that provide a forwarding server can be retrieved by contacting AN.ON's InfoService network, either automatically or, if this network is blocked too, by writing an e-mail to one of these InfoServices. The JAP software automatically decrypts the answer after the user completes a CAPTCHA. The developers are currently planning to integrate additional and even stronger blocking-resistance functions.

Using Virtual Private Networks, a user who experiences internet censorship can create a secure connection to a more permissive country, and browse the internet as if they were situated in that country. Some services are offered for a monthly fee, others are ad-supported.

Psiphon software allows users in nations with censored Internet, such as China, to access banned websites like Wikipedia. The service requires that the software be installed on a computer with uncensored access to the Internet, so that the computer can act as a proxy for users in censored environments.

In 1996, the United States enacted the Communications Decency Act, which severely restricted online speech that could potentially be seen by a minor – which, it was argued, was most of online speech. Free speech advocates, however, managed to have most of the act overturned by the courts. The Digital Millennium Copyright Act criminalizes the discussion and dissemination of technology that could be used to circumvent copyright protection mechanisms, and makes it easier to act against alleged copyright infringement on the Internet. Many school districts in the United States frequently censor material deemed inappropriate for the school setting. In 2000, the U.S. Congress passed the Children's Internet Protection Act (CIPA), which requires schools and public libraries receiving federal funding to install internet filters or blocking software. Congress is also considering legislation, the Deleting Online Predators Act, that would require schools, some businesses and libraries to block access to social networking websites. Opponents of Internet censorship argue that the free speech provisions of the First Amendment bar the government from any law or regulation that censors the Internet.

A 4 January 2007 restraining order issued by U.S. District Court Judge Jack B. Weinstein forbade a large number of activists in the psychiatric survivors movement from posting links on their websites to ostensibly leaked documents which purportedly show that Eli Lilly and Company intentionally withheld information as to the lethal side-effects of Zyprexa. The Electronic Frontier Foundation appealed this as prior restraint on the right to link to and post documents, saying that citizen-journalists should have the same First Amendment rights as major media outlets. It was later held that the judgment was unenforceable, though First Amendment claims were rejected.

In January 2010, a lawsuit was filed against an online forum, Scubaboard.com, by a Maldives diving charter company (see scubaboard lawsuit). The owner of the company claimed $10 million in damages caused by users of scubaboard, scubaboard.com, and the owner of scubaboard.com. Individual forum members were named in the lawsuit as "employees" of the forum, despite their identities being anonymous, known only by IP address to the moderators and owners of scubaboard.com. This lawsuit demonstrates the vulnerability of internet websites and internet forums to local and regional lawsuits for libel and damages.

          Working with Visual Studio Web Development Server and IE6 in XP Mode on Windows 7
          Working with Visual Studio Web Development Server and IE6 in XP Mode on Windows 7   

Originally posted on: http://geekswithblogs.net/imilovanovic/archive/2010/04/26/working-with-visual-studio-web-development-server-and-ie6-in.aspx

 

(Brian Reiter from thoughtful computing has described this setup in this StackOverflow thread. The credit for the idea is entirely his; I have just extended it with some step-by-step descriptions and added some links and screenshots.)

If you are forced to still support Internet Explorer 6, you can set up the following combination on your machine to make development for it less painful. A common problem when developing on Windows 7 is that you can't install IE6 on your machine. (Not that you want to anyway.) You will probably end up working locally with IE8 and FF, and testing your IE6 compatibility on a separate machine. This can get quite annoying, because you will have to maintain two different development environments where you might not have all the needed tools available, etc. If you have Windows 7, you can help yourself by installing IE6 in Windows 7 XP Mode, which is basically just a Windows XP running in a virtual machine.

[1] Windows XP Mode installation

After you have installed and configured XP Mode (remember security settings like Windows Update and antivirus software), you should add the shortcut to IE6 in the virtual machine to the "all users" start menu. This shortcut will be replicated to your Windows 7 XP Mode start menu, and you will be able to seamlessly start your IE6 as a normal window on your Windows 7 desktop.

[2] Configure IE6 for the Windows 7 installation

If you configure your XP Mode to use Shared Networking (NAT), you can now use IE6 to browse sites on the internet (add proxy settings to IE6 if necessary).

The next problem you will confront is that you can't connect to the webdev server running on your local machine. This is because the web development server is restricted to allow only local connections for security reasons. In order to trick webdev into believing that the requests are coming from the local machine itself, you can use a lightweight proxy like Privoxy on your host (Windows 7) machine and point the IE6 instance running in the virtual machine at it.

 
The first step is to make the host machine (running Windows 7) reachable from the virtual machine (running XP). To do that, install the loopback adapter and configure it to use an IP which is routable from the virtual machine.

[3] How to install loopback adapter in Windows 7

After installation, assign a static IP which is routable from the virtual machine (for example, 192.168.1.66).
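
If you prefer the command line to the adapter's GUI properties, something like the following should work from an elevated command prompt (the connection name "Loopback Adapter" is an assumption; use whatever name the new adapter received on your machine):

netsh interface ip set address name="Loopback Adapter" static 192.168.1.66 255.255.255.0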


The next step is to configure Privoxy to listen on that IP address (using some unused port). Change the following line in config.txt:

#
#      Suppose you are running Privoxy on an IPv6-capable machine and
#      you want it to listen on the IPv6 address of the loopback device:
#
#        listen-address [::1]:8118
#
#
listen-address  192.168.1.66:8118

 

The last step is to configure IE6 to use Privoxy, running on your Windows 7 host machine, as the proxy for all addresses (including localhost).
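
If you script your VM setup, the same proxy settings can be applied without the GUI by writing IE's registry values inside the XP virtual machine (a sketch only; the address must match the one Privoxy listens on):

reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyEnable /t REG_DWORD /d 1 /f
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyServer /t REG_SZ /d "192.168.1.66:8118" /f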

 


And now you can use your Windows 7 XP Mode IE6 to connect to your Visual Studio webdev web server.

 


[4] http://stackoverflow.com/questions/683151/connect-remotely-to-webdev-webserver-exe


          More about the iPhone, and, while we're at it, about Alksnis
John Gruber (https://daringfireball.net/2017/06/perfect_ten):

The Apple I, the Apple II, the Macintosh, the iPod — yes, these were all industry-changing products. The iPhone never would have happened without each of them. But the iPhone wasn’t merely industry-changing. It wasn’t merely multi-industry-changing. It wasn’t merely many-industry-changing.

The iPhone changed the world.

... Ten years in and the full potential of the iPhone still hasn’t been fully tapped. No product in the computing age compares to the iPhone in terms of societal or financial impact. Few products in the history of the world compare. We may never see anything like it again — from Apple or from anyone else.
 
There is one amusing story that I often recall when the iPhone comes up, even though, strictly speaking, it has nothing to do with smartphones. But let's take things in order.

In May of that same year, 2007, the "Ponosov case" became widely known in Russia: the case of a previously unknown director of a rural school in Perm Oblast who was fined 5,000 rubles for "unlicensed software" (Windows and Microsoft Office) in his school.

This incident, absurd in every respect, drew general attention to the situation with "unlicensed" copies of Windows and, in particular, caught the interest of an already rather well-known ultra-nationalist politician, Viktor Imantovich Alksnis, at the time a deputy of the State Duma of the Russian Federation (for the Communists). As it happened, Alksnis was just then actively discovering the Internet and LiveJournal (where he got off to a rather poor start by complaining to the prosecutor's office about a well-known blogger who had spoken of him with insufficient respect), so the further development of the situation could be observed, so to speak, live.

Alksnis, being a convinced patriot and statist, writes a programmatic post, still available today: http://v-alksnis2.livejournal.com/22850.html , in which he issues a call: let's write our own, Russian, operating system! Let's show Microsoft! Are our programmers really dumber than theirs??? A Russian operating system is our new Stalingrad! And our Buran! Onward, friends!

The idea could hardly be called new: all sorts of cranks, idealists and con artists had been pushing the idea of a "Russian operating system" long before that. But the broad and somewhat scandalous fame of its author takes it to a qualitatively different level of public interest. Soon a rather motley crowd gathers around him (both online in his LiveJournal and offline), and the usual debates of this kind boil up: Why some "Russian operating system" when there is Linux? Linux is garbage, FreeBSD rules! And there is also ReactOS, our people are making it, let's help them! Let's all get together and finally build a user-friendly interface for Linux! Nothing can be better than Windows, your Linux sucks, and anyway Linus stole it!

A small digression. It seems to me that that pre-crisis year 2007, with oil at $120, with the Stabilization Fund growing by leaps and bounds, with many people's hopes for the then still new Putin era still quite alive, was a somewhat strange time when some number of Russians suddenly felt rich; not necessarily personally rich, but somehow sharing in the wealth of the country. Only 9 years earlier, in 1998, an IMF loan of 4 billion dollars had seemed an unattainable dream; in 2007, sums in the tens of billions of dollars began to look like small change. Like a man who used to give the bazaar a wide berth, knowing he had no money anyway, and then, having suddenly come into wealth, strolls onto that same market with the air of an owner, patting the pocket with his fat wallet and musing, "now, what shall we buy?" It sounded roughly like this: "a good programmer costs $100,000 a year. A thousand programmers for 5 years cost $500 million. Surely a thousand of our best programmers can create a worthy replacement for Windows in 5 years? And surely the state won't begrudge a mere 500 million to create a Russian Microsoft?"

As I already said, the value of this story of the "Russian OS" under Alksnis's leadership is that anyone can still study its development over time through the protagonist's LiveJournal. I won't try to retell here exactly how it all unfolded and how it ended; it is fairly predictable and, in essence, probably not all that interesting.

I will mention another point, which strikes me as the most remarkable thing in this story, banal though it essentially is. Right at the height of all these heated discussions and arguments, Apple and, later, Google did exactly what these people were striving for: they effectively deprived Microsoft of its monopoly on operating systems. In 2007 it was impossible to believe that in a mere 5-6 years Linux would become the most widespread operating system in the world, leaving Windows far behind; yet that is exactly what happened, and this remarkable story began precisely on that historic day, June 29, 2007. This is exactly one of the many industry-changing aspects that Gruber wrote about in the piece quoted at the start of this note.

And you know what? None of the participants of that crowd even noticed it. Years later, they were still arguing about Windows and Linux. Look through the numerous discussions of the subject in Alksnis's journal: you will find at best a few passing mentions of the iPhone.

It seems to me that this is the most vivid illustration of how state capitalism differs from a free society.

          RavenPack Helps Hedge Funds Deal With Their Big Data 'Hoarding Disorder'   
The default setting for financial firms today is to hold on to data, like something from Hoarders. Firms have difficulty parting with in-house data because they don't know what is OK to delete. To be on the safe side, especially from a compliance perspective, they just save everything. Given the availability of ever cheaper computing power and storage, the direct cost of following such a strategy is fairly low.
          Download at your own risk: Bitcoin mine   



Recently we have seen an emerging trend among malware distributors - Bitcoin miners being integrated into installers of game repacks.


This type of system hijacking is just one of the many ways to exploit a user by utilizing their system's computing resources to earn more cash. Malware is easily bundled with game installers that are then uploaded and shared with unsuspecting users using torrent download sites. Once a machine is infected, a downloaded Bitcoin miner silently carries out mining operations without the user's consent.


We have seen this technique used by Trojan:Win32/Maener.A. This threat is dropped by TrojanDropper:Win32/Maener.A as a bundle with some games. Some of the torrent file names we have seen bundled with this threat are:


Tom Clancy's Ghost Recon.Future Soldier.Deluxe Edition.v 1.7 + 3 DLC.(Новый Диск).(2012).Repack
Don't Starve.(2013) [Decepticon] RePack
Kings Bounty Dark Side by xatab
Sniper_Elite_III_8_DLC_RePack_MAXAGENT
TROPICO_5
Ghost_Recon_Future_Soldier_v1.8_Repack
Trials.Fusion.RePack.R.G.Freedom
King's Bounty Dark Side.(2014) [Decepticon] RePack
Watch Dogs.(2014) [Decepticon] RePack


These files can be easily acquired by anyone who downloads games from a torrent website. The games are repacked to further lure gamers to download the compressed files for free. The installer that we detect as TrojanDropper:Win32/Maener.A is available in Russian and English languages only. It seems to be primarily affecting Russian users, as shown by its infection telemetry.

Figure 1: The five countries with the highest number of detections


An example of the game installer execution is depicted in Figure 2. When the installer application (setup.exe) is run, Trojan:Win32/Maener.A also executes in the background and downloads its Bitcoin mining components.



Figure 2: An example of a game installer. When run, the installer also executes the malware payload
Trojan:Win32/Maener.A can be found running under the filename ActivateDesktop.exe. It uses this reputable file name in order to hide its true identity. When run, it connects to a remote server, eventually downloading a Bitcoin miner and installing it on the system.


The Bitcoin miner is installed as connost.exe and can be launched from the command prompt:

Connost.exe -a m7 -o stratum+tcp://<server>:<port> -u <username> -p <password>

We have also seen this miner use the file names minerd.exe, svchost.exe and winhost.exe.
Trojan:Win32/Maener.A also adds the following registry entry to enable its automatic execution at every system startup:


HKCU\Software\Microsoft\Windows\CurrentVersion\Run
GoogleUpdate_CF4A51A46014ACCDC56E3A64BAC64B76 = c:\Trash-100\ActivateDesktop.exe
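
To check whether a machine has picked up an autostart entry like this, you can list the contents of the Run key from a command prompt (this prints every autostart value configured for the current user, not just the malware's):

reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Run"
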
To help stay protected, we recommend you install an up-to-date, real-time protection security product such as Microsoft Security Essentials, or Windows Defender for Windows 8.1. And, most importantly, only download from legitimate and reputable sources.
Donna Sibangan
MMPC
SHA1s:
3e4c9449d89c29f4e10186cc1b073d45d33c5f83
ebcd1aa912dfe00cb2d41de7097dedd1f9ab9127
f6b98be4091d108c6195ac543bf949ba3cffb8bf


          Comment on Zorich’s conjecture on Zariski density of Rauzy-Veech groups (after Gutiérrez-Romo) by matheuscmss   
Hi Alex, The Rauzy-Veech group is obtained by computing the KZ cocycle matrices along the paths obtained by following certain pieces of Teichmuller flow orbits which are then connected to some "canonical" points in moduli spaces. In particular, the Rauzy-Veech group is a subgroup of the monodromy of the stratum. Best, Matheus
          Systems Engineer -   
A well-established, privately held Internet-based company is actively searching for a standout mid- to senior-level Systems Engineer to join their fast-growing team on a direct-hire basis. The successful candidate needs multiple years of experience running a large-scale website based on Microsoft technologies, including Windows Server, Internet Information Server, .NET, and other related technologies. The candidate should also be a scripting/automation guru, able to bend servers to their will, and take pride in working with a team to solve complex business and technological challenges in new and creative ways. The ideal candidate needs to have the following background:

(Sr.) Systems Engineer with the following experience:
• 3 to 7-plus years' experience in Windows system administration within a large Windows server environment. Even better if you have run a large-scale website based on IIS and .NET technologies.
• 3 to 7-plus years' experience managing the ins and outs of Active Directory.
• 2 to 5-plus years' experience writing and maintaining scripts in multiple languages with a focus on PowerShell and VBScript.
• 2 to 5-plus years' experience with security and network/distributed computing concepts.
• 2-plus years' experience with Linux; even better if it is Red Hat Linux.
• 2-plus years' experience with virtualization technologies; VMware and Red Hat CloudForms a plus.
• Current MCSE, MCITP or related certification is preferred.
• 1-2 years of client/server, Microsoft Active Directory experience.
• 1-2 years of networking experience.
• Bachelor's degree in computer science or engineering, or equivalent industry experience.

Qualifications:
• Participate in the design and implementation of new or changing systems based on customer needs and internal guidelines.
• Define and evaluate integration strategies and architecture enhancements to meet mission objectives/needs. Help develop detailed technical plans to guide development and integration activities.
• Share knowledge by effectively documenting work and processes.
• Stay current on new technology and methodologies in your area of expertise.
• Develop and maintain a thorough knowledge of assigned support areas, including technologies used and internal processes.
• Work with the team to ensure the quality of implementations aligns with established standards and best practices.
• Proactively generate solutions for managing a large system base.
• Conduct requirement analysis.
• Respond quickly and effectively to production issues and take responsibility for seeing those issues through resolution.
• Effectively collaborate and communicate across departments and other business units virtually, electronically and in person.
• Ability and confidence to represent the team to other departments such as AppDev, EOC, PMO and leadership.
• Possess excellent analytical and problem solving skills.
• Ability to work with minimal direction, yet also able to work in a team environment.
• Familiarity with basic ITIL concepts, especially around change management.
• Utilize provided tools, logic, and other appropriate resources to make decisions.
• Manage and address trouble tickets as assigned in a timely manner.
• Easily adjust to changing priorities or projects.
• Develop technology plans and road maps for migration of systems and the development/analysis of recommendations for upgrades and enhancements to the existing IT infrastructure.

We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Dubai-based logistics company automates backup through cloud   

Dubai-based CWT-SML Logistics has automated its manual backup processes through the cloud as part of a move to the latest technologies to improve its operations.


          Oracle DBA/Dev   
Job Description:

Oracle DBA-08653

Description
• 7+ years of in-depth Oracle DB and other database design and administration across a multi-platform UNIX/NT environment.
• Application implementation and installation in an online environment of Oracle R12/11i, with demonstrated competency in execution of multiple projects to meet goals.
• Support for Oracle VCP Module(s) – Demantra and ASCP – is highly desired.
• Minimum of 5 years of project management or technical responsibility through various projects in multidiscipline, high-performance work teams and development groups.
• 3+ years of experience successfully developing and implementing applications in new computing environments, using new and emerging technologies.
• Plans and manages all technical and functional tasks associated with database administration, including creating, maintaining, and monitoring databases; performance tuning analysis; capacity planning; workload modeling and prediction; systems support; long-term strategic planning; application support and optimization; problem resolution tracking; and software upgrades.
• Assesses customer requirements; analyzes the architecture of business events and databases that support the architecture; evaluates possible solutions; and presents recommendations to IT management.
• Performs software installation and configuration, including upgrades and applying patches.
• Works closely with Business Solutions teams, developers, Systems Administrators and peers in other areas of IT.
• Ensures backups are successful and restore processes are known and tested.
• Ensures that databases supporting application systems are developed in a way that complies with architectural standards and established methodologies and practices.
• Monitors and reports to IT management and users the status of multiple concurrent project efforts.
• Anticipates and identifies issues inhibiting the attainment of project goals; develops and implements corrective actions.
• Provides 24/7 on-call support as needed.
• Develops project resource strategies, allocating budget, staff, tools, and specialized support necessary for cost-effective implementation and support of project efforts.
• Has sufficient experience and expertise to perform multiple tasks/assignments of a medium to complex level, under general supervision and with little guidance.
• Works on complex issues where analysis of situations or data requires an in-depth evaluation of variable factors; exercises judgment in selecting methods, techniques and evaluation criteria for obtaining results.
• Excellent research and troubleshooting skills and experience in Oracle Applications R12 and 11i for EBS and VCP.
• Conceptual knowledge of information technologies and methodologies in a multi-platform environment is desirable.
• Experience in scheduling and performing regular database maintenance such as cloning, rebuilding indexes, tablespace reorganization, and data file additions.
• Experience in implementation, maintenance and monitoring of database backups and recovery, enforcing proper policies and procedures.
• Knowledge of High Availability Clustered Multiprocessing functionality.
• Experience in effectively managing projects in a cross-functional environment against critical deadlines.
• Excellent communications skills and strong customer focus.
• Must be able to plan, coordinate and execute multiple projects and deadlines.
• Strong technical knowledge, with hands-on experience managing systems development in client server/web architectures and environments; knowledge of MS Office, Windows, SQL Server, Oracle ERP (11i), and Oracle tools is critical.
• Adapts quickly to rapidly changing technology and applies it to business needs.
• Ability to define problems, collect data, establish facts, and draw valid conclusions.
• Able to interpret an extensive variety of technical instructions in mathematical or diagram form.
• Strong ability to develop and maintain effective working relationships with others who may have ongoing competing priorities and viewpoints.

Qualifications
• Application implementation and installation in an online environment of Oracle R12/11i, with demonstrated competency in execution of multiple projects to meet goals.
• Support for Oracle VCP Module(s) – Demantra and ASCP – is highly desired.
• Solid understanding of relational database management systems, client/server programming and distributed databases.
• Experience in database design, server and application product configuration, and patch installations.
• Full knowledge of SQL and stored procedures, including programming, tuning and optimization, and Unix shell scripts.
• Solid knowledge and experience in areas of database monitoring through scripts and tools, database optimization, database migration, DDL management, backup and recovery, scheduling packages, and related RDBMS-specific tools and utilities, e.g. Appworx, Oracle Enterprise Manager, etc.

Job Discipline: Information Technology (IT) & Support Services
Primary Location: US-California-Irvine
Schedule: Full-time
          To $57k Detail-Oriented Accountants - -   
San Diego companies are currently looking for enthusiastic and career-oriented individuals to join their accounting teams! If you're skilled at computing and looking to grow in your career, we want to talk to you!

Successful candidates will:
• Possess 3-5 years accounting experience
• Be skilled in multiple accounting software packages (Peachtree, Quickbooks, MAS, Great Plains and other SAP-based programs)
• Be able to efficiently check figures and documents for mathematical accuracy and proper codes
• Reconcile or note and report discrepancies found in records
• Be highly organized and pay great attention to detail
• A bachelor's degree is preferred

Duties will include using accounting software to record, store and analyze information. You may also be responsible for compiling statistical, financial, accounting or auditing reports regarding expenditures, accounts payable and receivable, as well as profits and losses. Don't miss out on this great opportunity to further your career; apply today for immediate consideration. We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Accounting Clerk/Admin - CONVENIENT LOCATION   
This Accounting Clerk/Admin Position Features:
• Convenient Location
• Pleasant Team Environment
• Company Values their Employees
• Great Pay to $16.00/hr DOE

Are you a numbers person? Do you have great attention to detail? Are you a self-starter who works well within a team? If you answered yes to all these questions this may be the ideal position you are seeking. An engineering and design firm is seeking an Accounting Clerk/Admin to welcome to their team. If you are seeking a company that values their employees and boasts of a pleasant environment do not pass up this opportunity.

The Accounting Clerk/Admin will be responsible for the following:
• Computing, classifying, and recording numerical data to complete financial records.
• Calculate, post and verify duties to obtain primary financial data for use in maintaining accounting records.
• Operate computers programmed with accounting software to record, store, and analyze information.
• As well as other administrative functions to support the Accounting Team.
Previous Accounting Clerk/Admin experience a MUST.

Do not let this opportunity pass you by. Apply for this stellar position TODAY!
We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Software Developer-Java   
MUST HAVE RECENT JAVA EXPERIENCE!

The Software Engineer is responsible for the design and implementation of Java-based server applications for an enterprise class Network Management System providing control for high performing Wi-Fi devices. The ideal candidate is self-motivated and comfortable working in a small, agile team. This is a team oriented environment so all engineers must work on site in Thousand Oaks.

Responsibilities:

Participate in the design and development of multi-tier, server-based wireless network monitoring and configuration applications
Collaborate with Marketing to evaluate feature requests and propose solutions
Develop and maintain both new and existing technology
Estimate work and resolve technical issues
Work with QA and Customer Support to identify and resolve issues

Requirements:

Experience developing applications in Java
Experience with designing and implementing highly available scalable application servers
Experience with designing and implementing enterprise class commercial software and/or a large scale web application
Strong understanding of object oriented methodologies and design patterns
Experience with Spring
Knowledge of relational databases such as MySQL
Knowledge of Linux operating systems
Experience with the Eclipse development environment

Preferred:

Knowledge of network switching and routing
Knowledge of wireless networking
Experience with developing Software-as-a-Service (SaaS)
Experience with Java web frameworks such as Wicket
Experience with message queuing (JMS)
Experience with ORM frameworks such as Hibernate
Experience with SNMP
Experience with NoSQL databases such as Cassandra
Experience with distributed computing (e.g. Hadoop)
Experience with HTML, CSS, JavaScript and jQuery
Experience with WebSockets
Experience with Git, Maven and Jenkins/Hudson
Experience with agile development methodologies such as Scrum
We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Billing Clerk   
This Billing Clerk Position Features:
• Great Pay to $32K

Compile data, compute fees and charges, and prepare invoices for billing purposes. Duties include computing costs and calculating rates for goods, services, and shipment of goods; posting data; and keeping other relevant records. May involve use of computers, and adding and bookkeeping machines.

Please reach out to Kariann to apply today! We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Accounts Receivable/Payable Clerk   
This Accounts Receivable/Payable Clerk Position Features:
• Great Pay to $30K

Great opportunity with growing company! Fast paced, busy environment looking for an upbeat employee.

Responsibilities:

• Prepare work to be accomplished by gathering and sorting documents and related information.
• Pay invoices by verifying transaction information; scheduling and preparing disbursements; obtaining authorization of payment.
• Obtain revenue by verifying transaction information; computing charges and refunds; preparing and mailing invoices; identifying delinquent accounts and insufficient payments.
• Collect revenue by reminding delinquent accounts; notifying customers of insufficient payments.
• Prepare financial reports by collecting, analyzing, and summarizing account information and trends.
• Maintain accounting ledgers by posting account transactions with a proprietary system.
• Verify accounts by reconciling statements and transactions.
• Resolve account discrepancies by investigating documentation.
• Maintain financial security by following internal accounting controls.
• Secure financial information by completing database backups.
• Maintain financial historical records by filing accounting documents.
• Contribute to team effort by accomplishing related results as needed.

Skills/Qualifications:

Microsoft Word/Excel, Administrative Writing Skills, Organization, Data Entry Skills, General Math Skills, Financial Software, Analyzing Information, Attention to Detail, Thoroughness, Reporting Research Results, Verbal Communication, typing skills of 40+ wpm

Great Benefits! Apply for this great position as an Accounts Receivable/Payable Clerk today!
We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Quality Assurance Lead Tester   
Adecco is currently hiring for a Quality Assurance Lead Tester job in Ottawa, ON. This is a contract role to perform work with our client, a Crown Corporation in the South end of Ottawa. The role would start February 1st 2015 with a tentative end date of June 30th 2015 with a strong possibility of extension.

Reporting to the Manager, the QA Lead will provide test scripts and execution, s/w bug detection, and planning and coordination activities, whose scope may involve several functional groups, external vendors or partners, as well as the re-engineering of business processes, all in an effort to support the project.

The successful candidate will have a minimum of 5 years' experience in Quality Assurance, project planning and coordination, and a customer-oriented mindset with business acumen. The QA Lead will also have the skills required to execute the test scripts needed to confirm that applications are stable and free of s/w bugs, in order to ensure a seamless transition to our mobile workforce.

Project Support Duties
Be the QA Lead tester on the project;
Assume overall ownership of all test scripts relating to the project;
Create, execute and maintain all test scripts as it relates to the project;
Be the lead on the following types of testing: (unit, functional, UAT, end-to-end, performance, pilot, regression & usability testing);
Coordinate & host daily or weekly status meetings as it pertains to testing (as needed);
Record and distribute meeting minutes / actions items / follow-ups from the meetings;
Identify any project risks and propose action / mitigation plans as it pertains to testing;
Be the lead on setting up any required testing environments, Labs, PDTs, peripherals, etc. (as required);
Help coordinate any project-related activities deemed necessary by the project lead;

Special Requirements:

Required
Proficient use of all MS office applications, especially MS Project, Outlook, Power Point and Excel
Ability to work within a team environment and must be able to work in fast paced environment
Excellent communication and written skills
Experience in creating and maintaining project materials, especially action logs and meeting minutes
Experience in managing multiple stakeholders to stay on plan and achieve project goals

Asset
Experience in the mobile (handheld) computing environment
Experience working with Quality Center or any other issue tracking application
Knowledge of unionized working environment.
          Google, A friend who tracks all your activities   
As all of you know, Google specializes in Internet-related services and products that include online advertising technologies, search and cloud computing. Without our noticing, Google keeps tracking our activities. Here are the links that will show you some of the data Google has about you. Find out: how does Google see me?
          Network Administrator Up to $100,000   
Are you an experienced Network Administrator with a passion for the IT world? This is the position for you! Growing company in Costa Mesa is looking for a Network Administrator to add to their team. Position features great pay up to $100K, benefits package, casual office environment and room for growth! If you are interested in this opportunity and your experience matches the following criteria, please reply with your resume attached for immediate consideration.

Job Description:
Install, configure, and support an organization?s local area network (LAN), wide area network (WAN), and Internet systems or a segment of a network system. Monitor network to ensure network availability to all system users and may perform necessary maintenance to support network availability. May monitor and test Web site performance to ensure Web sites operate correctly and without interruption. May assist in network modeling, analysis, planning, and coordination between network and data communications hardware and software. May supervise computer user support specialists and computer network support specialists. May administer network security measures.

Duties/Responsibilities:
• Maintain and administer computer networks and related computing environments including computer hardware, systems software, applications software, and all configurations.
• Perform data backups and disaster recovery operations.
• Diagnose, troubleshoot, and resolve hardware, software, or other network and system problems, and replace defective components when necessary.
• Plan, coordinate, and implement network security measures to protect data, software, and hardware.
• Configure, monitor, and maintain email applications or virus protection software.
• Operate master consoles to monitor the performance of computer systems and networks, and to coordinate computer network access and use.
• Load computer tapes and disks, and install software and printer paper or forms.
• Design, configure, and test computer hardware, networking software and operating system software.
• Monitor network performance to determine whether adjustments need to be made, and to determine where changes will need to be made in the future.
• Confer with network users about how to solve existing system problems.

A Bachelor's Degree and a minimum of 3 years relevant experience are required to be considered for this position.

If this job description fits your skill set, please apply with your resume attached today! We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Accounts Payable   
We are seeking an individual experienced in Accounts Payable functions for a long term contract assignment. This position is with a Fortune 500 company located in downtown Pittsburgh. Qualified candidates are encouraged to read the description below and apply today.

Responsibilities include:
-Receiving invoices and checking on payment status
-Paying outstanding invoices
-Setting up recurring payments when necessary
-Compile data, compute fees and charges and prepare invoices for billing purposes.
-Computing company charges, itemizing statements or invoices and computing payroll.

Requirements:
-Peoplesoft experience is preferred
-High School Diploma or equivalent required;
-Associate Degree in accounting or related financial discipline a plus.
-Two to four years financial and/or accounting experience required.
-Experienced with Microsoft Office

Adecco is the global leader in employment and HR services, connecting people to jobs and jobs to people through its network of more than 6,000 offices in 71 countries/territories around the world. Check us out: www.AdeccoUSA.com
EOE
          Sr. QA Engineer   
OFFICIAL JOB DESCRIPTION:
Sr. QA Engineer - Mobile / SDET / iOS

Our Company:

Tyco is the world's largest dedicated fire protection and security company. Our mission is to advance safety and security by finding smarter ways to save lives, improve businesses and protect where people live and work. Our 69,000 employees in over 1,000 locations around the world take a consultative approach to delivering tailored, industry-specific solutions. Our global reach allows us to anticipate changes across geographies and industries, and deploy the right solutions rapidly. In the most challenging and demanding environments, we help our customers achieve their safety, security and business goals.

Our products and solutions help protect:
• 80% of the world's top 200 retailers
• More than 1,000,000 fire fighters around the world
• 90% of the top 50 oil and gas companies
• International airports
• 100+ major stadiums around the world
• A majority of the Fortune 500
• 200+ hospitals around the world
• Nearly 3 million commercial, government and residential customers

With a tradition of customized service and a passion for technology and innovation, Tyco develops practical, integrated fire protection and security solutions for increasingly complex environments.

Summary

Act as a Senior QA Engineer in the development of automated functional, integration, regression and performance testing of mobile applications running on a multi-tenant, cloud-based enterprise software solution. The Senior QA Engineer will play an important role in automated functional, integration and regression testing of mobile applications leveraging a new cloud-based platform that will support several different market verticals. The Senior QA Engineer will participate in determining the technical direction for the functional, integration, regression and performance testing of mobile applications running on highly scalable multi-tenant cloud platform technology components, which includes developing a test framework to perform scale/load testing of mobile applications, middleware, databases, web services, and associated cloud services.

Job Responsibilities
• Responsible for the design, development, and implementation of an automated functional, integration, regression and performance testing framework for mobile applications on Android and iOS platforms, including, but not limited to, development and/or diagnostic software.
• Contributes to the development of test strategies, devices and systems.
• Contributes to the development of new techniques, models and plans within area of expertise.
• Evaluates complex situations using multiple sources of information; filters, validates and interprets dynamic material.
• Applies developed project management techniques.
• Defines the technical implementation of the system architecture and business strategy for the cloud-based platform.
• Participates in the development of automated functional, integration, regression and performance testing features, from collaboration on requirements definition, feature design, coding, testing, and deployment to Level 3 support.
• Reviews development of testing frameworks and coding standards, conducts code reviews and walkthroughs, and conducts in-depth design reviews.
• Interfaces with Product Management, Project Management, Software Development, Firmware Development, and Quality Assurance to ensure that a high-quality product is delivered which meets or exceeds all published guidelines.
• Mentors and coaches junior QA engineers to ensure that each of their deliverables and behaviors mirrors software development excellence.

Desired Skills and Experience

Education and Experience Requirements
• Bachelor's degree in Computer Science or a related engineering field.
• 5 years of experience in software testing and development.
• 3 years of hands-on experience in integration and performance testing of mobile applications leveraging cloud-based solutions or highly scalable multi-tenant enterprise solutions.

Required Job Skills
• Must be proficient in analyzing complex issues and architectures and reducing them to practice. Strong analytical skills are essential.
• Must have excellent communication and team management skills to effectively lead junior engineers and to collaborate with globally dispersed development teams.
• Works under general supervision with the ability to exercise independent judgment; must compare alternate courses of action and make a decision after considering the options. Must be self-motivated and a "self-starter".
• Enterprise Operations / Architecture - Must have spent at least 3 years developing automated functional, integration and regression tests for large-scale, enterprise-wide, complex information technology initiatives, at both an infrastructure and an application level.
• Cloud Architecture - Technical knowledge and load testing experience using either JMeter, BlazeMeter, Blitz, Gatling, LoadRunner, or loadUI. Experience using tools such as JUnit, Cucumber, Mockito, Arquillian, Selenium, Jasmine, BusterJS, SauceLabs, BrowserStack or Espresso. Familiarity with common cloud architecture, enabling components, and deployment platforms (such as JMS, Kafka, J2EE, Storm, Gearman, Infrastructure as a Service, Platform as a Service, Software as a Service).
• Cloud Platforms - Application functional, integration and regression testing experience utilizing distributed processing solutions such as Hadoop, distributed storage solutions such as Cassandra, real-time and post-analytics processing architectures, application server platforms, clustered infrastructures, and distributed queuing technologies such as JMS or Kafka. Minimum of 2 years of experience developing highly scalable data-driven applications based on structured and unstructured data sets.
• Software Development - Minimum of 3 years of experience with Enterprise Java (J2EE or Spring, Hibernate) or .NET architectures. Minimum of 3 years of experience with object-oriented programming languages (Java, C#, Objective-C). Any other relevant languages (Groovy/Grails, Python, RoR) are a plus.
• Web Services – Automated functional, integration and regression testing of applications utilizing one or more of the following web services technologies: JSON-RPC, JSON-WSP, Web Services Description Language (WSDL), REST, RPC, or XML.
• Performance Tuning - Minimum of 2 years of experience with performance and scalability tuning of high-volume websites/applications. Minimum of 2 years of relevant experience with Parallel and Grid Computing Technologies.
• Web-based designs - Minimum of 5 years of experience in automated functional, integration and regression testing and troubleshooting of complex web-based N-tier enterprise applications that run in mixed operating system environments.
          Sr System Admin-AIX Servers and Storage   
<span>Sr System Admin-AIX Servers and Storage<br>&nbsp;<br>The Sr System Admin for IBM AIX Servers and Storage will report directly to the AIX Infrastructure Services Team Lead and will provide central accountability for all AIX Infrastructure Server System Administration Services which are those activities associated with the installation, maintenance and technical support of existing and future AIX-based system configurations and supporting systems software (e.g., operating systems, utilities, databases, and middleware) necessary to deliver the required Services hosted on these platforms that support customer&rsquo;s business applications. The Sr System Admin for IBM AIX Servers and Storage position will be the senior technical resource providing systems engineering and system administration services for which the AIX Infrastructure Server team is accountable. &nbsp;Systems engineering and system administration services are those activities associated with the provisioning and management of the Data Center existing and future server environment using ITIL processes. The Sr System Admin for IBM AIX Servers and Storage is responsible for providing highly responsive, flexible BIM AIX infrastructure server services to the County of Orange.<br>&nbsp;<br>Key responsibilities:<br>&nbsp;<br>-Provides Senior level AIX Infrastructure Server design and engineering solutions to meet the needs of the client.<br>-Provides system engineering and administration (e.g. configuration and performance optimization, access controls, manage files, disk space management , apply APARs, and configure and manage LPARs) &nbsp;and systems programming support for IBM AIX V6.1 &amp; 7 based hosting systems required to deliver services and meet SLRs on a 7X24X365 basis.<br>-Review, install, configure, test, and implement new releases of AIX 6.1 TL9 and V7 operating system software with a focus on using quick deployment/recovery software such as IBM&rsquo;s Network Installation Manager (NIM) and cloning via MKSYSB images.<br>-Analyze IBM Power 8 server hardware and AIX 6L Version 6.1 and/or 7 operating system software problems.<br>-Provide storage administration support for IBM V7000 Storage system to include set-up and maintenance of replication, snapshots and other related tasks.<br>-Develop and maintain IBM AIX system utility programs and scripts in the Korn Shell (KSH) to manage the system with an emphasis on automating routine tasks.<br>-Install, test, and provide technical support, administration and security administration for hardware and Software to deliver services and meet SLRs.<br>-Install code fixes for all AIX services elements (e.g., hardware, middleware and OS Software).<br>-Provide onsite support as required and coordinate with third party provider for ticket resolution (e.g. Support third party remote diagnose, coordinate third party installation of physical parts replacement, etc.) for AIX based hosting systems.<br>-Document and maintain in the Policies, Standards and Procedures Manual Data Center operational procedures for the AIX Infrastructure Server System Administration services team.<br>-Interfaces with Data Center Tower Leadership and technical support personnel, ITSM service delivery personnel, Application Support personnel, and contract support personnel. 
Required skills/experience:
- 5+ years' experience providing infrastructure server engineering and administration services for IBM AIX server computing environments.
- Experience as a "hands-on" advanced systems administrator installing, configuring, and updating the AIX Version 6.1 and 7 operating systems.
- Experience as a senior AIX systems administrator managing servers in an environment of 20 or more LPAR servers.
- Experience with IBM Power 7/8 server deployment and support for local clustered and extended clustered environments.
- Experience with IBM PowerHA clustering and extended HA clustering planning, deployment, and support across multiple sites.
- Experience with IBM V7000 Storage management and replication.
- Experience developing and implementing Unix Korn shell (ksh) and/or Perl scripts to automate system monitoring and maintenance procedures.
- Experience with logical partitioning and virtualization on IBM Power 7/8 systems.
- Experience configuring local and SAN storage on AIX 6/7 systems using Logical Volume Manager (LVM) and Object Data Manager (ODM).
- Experience with performance tuning on AIX 6/7 using the vmo, ioo, no, nfso, and related AIX performance tuning commands.
- Experience working in commercial or local government business environments; experience working with Orange County beneficial.
- BA or BS degree or equivalent experience, preferably in a technical field.
- Excellent communication skills, written and oral.
- Excellent planning/time management and client-facing skills.

Additional desired skills/experience:
- IBM Certified Systems Administrator beneficial.
- ITIL V3 Foundations certification beneficial.
- Experience in deployment and support of IBM Tivoli monitoring/reporting and productivity software tools.
- Disaster recovery site implementation and maintenance for IBM AIX computing environments.
          Airline Pilot   
This is a job description for the Airline Pilot position.
* Research, design, develop, and test operating systems-level software, compilers, and network distribution software for medical, industrial, military, communications, aerospace, business, scientific, and general computing applications. Set operational specifications and formulate and analyze software requirements. Apply principles and techniques of computer science, engineering, and mathematical analysis.

* Consult with engineering staff to evaluate interface between hardware and software, develop specifications and performance requirements and resolve customer problems.

* Develop and direct software system testing and validation procedures.

* Advise customer about, or perform, maintenance of software system.

* Monitor functioning of equipment to ensure system operates in conformance with specifications.

* Prepare reports and correspondence concerning project specifications, activities and status.

* Operate industrial trucks or tractors equipped to move materials around a warehouse, storage yard, factory, construction site, or similar location.

* Move controls to drive gasoline- or electric-powered trucks, cars, or tractors and transport materials between loading, processing, and storage areas.

* Move levers and controls that operate lifting devices, such as forklifts, lift beams and swivel-hooks, hoists, and elevating platforms, to load, unload, transport, and stack material.
          Containers and Clusters and Swarms, Oh My!   

It may sound like hyperbole, but put simply, there is a revolution afoot: cloud resources and smarter tools have enabled systems that deliver enormous leaps in computing efficiency and have a very real impact on the bottom line. The result? New business models that exploit speed, ease, cost savings, and constant improvement. And to accompany these new models, consumer expectations have likewise rapidly evolved.

For example, driven by the software world’s continuous delivery model, cloud vendors will create and deploy multiple versions of an application per day. Constant enhancement of existing products and services and the uninterrupted flow of new product offerings are now normal. There is innovation in the market, but it comes with pressure. People have become accustomed to instantaneous improvement and expect companies to move fast. If you press a button on a web form, you expect an immediate result.


          IT Support Analyst - Great Work Environment   
This IT Support Analyst Position Features:
• Great Work Environment
• Opportunity For Growth
• Well-known Company
• Great Pay to $41K

Immediate need for an IT Support Analyst seeking a great work environment, opportunity for growth, and a well-known company. Experience with Windows and Mac OS X computers, experience supporting mobile iOS and OS X devices, and knowledge of basic LAN/WAN network support will be keys to success in this growing, stable organization. Will be responsible for providing overall IT support to all end users and maintaining equipment and resources so the Company can run productively and efficiently. Work with support services in the installation, configuration, maintenance, and troubleshooting of Windows-based desktops and Mac OS X computers. Resolve technical problems with desktop computing equipment and software; use a ticketing system to track effort; and use remote software to assist with deployment and troubleshooting of software and hardware. Entertainment venues. Great benefits. Apply for this great position as an IT Support Analyst today! We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          To $57k Detail-Oriented Accountants   
San Diego companies are currently looking for enthusiastic and career-oriented individuals to join their accounting teams! If you're skilled at computing and looking to grow in your career, we want to talk to you!

Successful Candidates will:
• Possess 3-5 years' accounting experience
• Be skilled in multiple accounting software packages (Peachtree, QuickBooks, MAS, Great Plains, and other SAP-based programs)
• Be able to efficiently check figures and documents for mathematical accuracy and proper codes
• Reconcile, or note and report, discrepancies found in records
• Be highly organized and pay great attention to detail
• A bachelor's degree is preferred

Duties will include using accounting software to record, store and analyze information. You may also be responsible for compiling statistical, financial, accounting or auditing reports regarding expenditures, accounts payable and receivable, as well as profits and losses.

Don't miss out on this great opportunity to further your career; apply today for immediate consideration.
We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Getting started with HPC   
High-Performance Computing (HPC) allows you to accomplish your analysis faster by using many parallel CPUs and huge amounts of memory simultaneously.
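As a toy illustration of that idea, the sketch below uses Python's standard multiprocessing module to spread a CPU-bound analysis across all available cores. It is a local-machine stand-in for what a real HPC cluster does at far larger scale (where you would normally submit jobs through a scheduler), and the analyze function is a hypothetical placeholder for your own computation.

```python
import os
from multiprocessing import Pool


def analyze(chunk):
    # Hypothetical stand-in for one unit of analysis work.
    return sum(x * x for x in chunk)


if __name__ == "__main__":
    data = list(range(1_000_000))

    # One chunk per available CPU core.
    n = os.cpu_count() or 1
    chunks = [data[i::n] for i in range(n)]

    # Run the chunks on parallel worker processes and combine the results.
    with Pool(processes=n) as pool:
        total = sum(pool.map(analyze, chunks))
    print(total)
```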
          Sr. Web Developer   
Sr. Web Developer job available in Oak Creek, WI

A nationally recognized and highly respected client of ours is seeking a Sr. Web Developer for direct hire/permanent placement. As a Sr. Web Developer, the position involves working in the UI, core app, and database areas. The project will be developing the application via web, with future plans for automation and, in the next phase, mobile. At this time our client is seeking W2 candidates and not candidates requiring sponsorship or working corp-to-corp.

POSITION SUMMARY
Designs and develops web applications and associated web services in support of our client's Remote Electronic Access Control solutions. The position involves working in the UI, core app, and database areas.

ESSENTIAL DUTIES AND RESPONSIBILITIES include (but are not limited to) the following:
• Analyzes software requirements to determine feasibility of design within time and cost constraints.
• Designs formal software requirements from customer/market-level requirements.
• Consults with hardware engineers and other engineering staff to evaluate and develop interfaces between hardware and software.
• Designs software within the operational and performance requirements of the overall system.
• Responsible for reviews of all software project phases (development requirements, test requirements, code).
• Understands how to insert new code into the software build and follows proper procedures.
• Works with Software Testing to resolve issues to ensure testing can continue.

PREFERRED QUALIFICATIONS
• A solid understanding of networking/distributed computing environment concepts.
• Experience in web design, API, and web services development.
• Solid understanding of the principles of routing and client/server programming.
• As new technologies emerge and impact our systems, expected to learn new technologies and resolve any problems involved in integrating them with our systems.
• Expert knowledge of software engineering design methods and techniques, specifically Agile development methodology.
• Experience and knowledge with the .NET Framework and Visual Studio.
• Experience and knowledge of maintaining and debugging live software systems.
• Ability to determine whether a particular problem is caused by hardware, operating system software, application programs, or network failures.
• Able to look at a problem and develop multiple solution approaches.
• Excellent written and verbal communication skills.
• Working knowledge of security and encryption preferable but not mandatory.

EDUCATION
Bachelor's degree in a Software/Computer Engineering discipline from a four-year college or university, plus 5–10 years of related experience.

TECHNICAL REQUIREMENTS
• C#, JavaScript, Angular.js, CSS, MVC, AJAX, HTML5, XML, HTML, SQL 2008/2012, Cassandra, MongoDB, Linux, Flash, Apache Tomcat, Windows Server 2008, ASP.NET
• Must-have technical requirements: strong Angular.js experience, C#.NET, web development, and web services experience

This opportunity will not last long.
Our client is looking to move quickly to fill this role.
To be considered, you must apply online now with your resume.
We are actively monitoring all of those that apply.
Apply below, and thank you for partnering with Modis!
          Accounting Clerk   
Local Bakersfield company is seeking a dedicated Accounting Clerk with superior organizational skills and strong attention to detail. You will be responsible for computing, classifying, and recording numerical data to keep financial records complete and up to date. Duties will consist of data entry of accounts payable, deposits/postings of journal entries, and posting payable/receivable financial information. This personable, energetic, and self-reliant individual will also be responsible for checking the accuracy of figures, calculations, and postings pertaining to business transactions. A well-founded knowledge of accounting software is expected, and 2+ years' experience in the accounting field is essential. Apply for this great position as an Accounting Clerk today by sending your resume in Word format to Allison Homestead, ahomestead@act-1 dot com (.com) We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          2014-04-05 - SPSPhilly - Authentication and Authorization   

In today's complex marketplace of corporate partnerships and relationships, sharing information is pertinent to ensuring that business operations are conducted in a secure computing environment, with trusted entities being provided access to protected information. In this session, Dan will discuss the basics of authentication and authorization in relation to the SharePoint platform. Further, we will discuss the technical underpinnings of the SharePoint platform's processing of a user's identity, depending on the identity provider and authorization settings. As part of this session we will demonstrate different authentication and authorization configurations that are commonplace in today's business settings, including when to use:
• Integrated Windows Authentication
• Forms-Based Authentication using SQL Server
• ADFS as a Trusted Identity Provider
• Threat Management Gateway with Kerberos (constrained delegation using client certs)
After attending this session, attendees will have a better grasp of the configuration complexities involved with each scenario, as well as the user-experience impacts based on the path taken.
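To make the client-side difference between two of those options concrete, here is a minimal Python sketch (my illustration, not part of the session materials): Integrated Windows Authentication negotiates credentials on the HTTP connection itself, while forms-based authentication trades a posted username/password for a cookie. The site URL, credentials, and form endpoint are hypothetical, and the NTLM call assumes the third-party requests-ntlm package.

```python
import requests
from requests_ntlm import HttpNtlmAuth  # third-party: pip install requests_ntlm

SITE = "https://sharepoint.example.com/sites/demo"  # hypothetical site

# 1) Integrated Windows Authentication: credentials are negotiated via NTLM
#    on the connection; no login form or cookie hand-off is involved.
win = requests.get(
    f"{SITE}/_api/web/title",
    auth=HttpNtlmAuth("DOMAIN\\alice", "secret"),
    headers={"Accept": "application/json;odata=verbose"},
)
print("IWA:", win.status_code)

# 2) Forms-based authentication: POST credentials to a login endpoint first,
#    keep the returned auth cookie in the session, then replay it on later calls.
session = requests.Session()
session.post(
    f"{SITE}/_forms/default.aspx",  # hypothetical FBA login endpoint
    data={"username": "alice", "password": "secret"},
)
fba = session.get(f"{SITE}/_api/web/title")
print("FBA:", fba.status_code)
```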
          'Trimensional' turns your iPhone into a 3D scanner   
Grant Schindler, a research scientist at Georgia Tech's College of Computing, has created Trimensional, an app that can be installed on the iPhone 4, iPad 2, and the most recent iPod touch to scan faces and other objects in three dimensions. What would have cost hundreds or thousands of dollars to develop […]
          Barcelona unveils the third-fastest supercomputer in Europe   


The Barcelona Supercomputing Center-Centro Nacional de Supercomputación (BSC-CNS) today unveils MareNostrum 4, the third-fastest supercomputer in Europe and the thirteenth fastest in the world, multiplying its power tenfold relative to MareNostrum 3.

The post "Barcelona unveils the third-fastest supercomputer in Europe" appeared first on EFE futuro.


          Comment on We can teach women to code, but that just creates another problem: Why Computational Media is so female by Teaching the students isn’t the same as changing the culture: Dear Microsoft: absolutely not. by Monica Byrne | Computing Education Blog   
[…] powerful blog post from Monica Byrne with an important point. I blogged a while back that teaching women computer science doesn’t change how the industry might treat them.  Monica is saying something similar, but with a sharper point. I know I’ve heard from CS […]
          Microservices are Proof that Service-Oriented Architecture is Alive   
Our guest on the podcast this week is Lori MacVittie, Principal Technical Evangelist at F5 Networks. We discuss the trends so far in 2017 in cloud computing, DevOps, IoT, and machine learning....

          Thomas Friedman Whines About His Lost TPP   

Thomas Friedman, who is legendary for his boldly stated wrong assertions, got into the game again making absurd claims about the Trans-Pacific Partnership (TPP) and the great loss the U.S. suffers from it going down. Friedman tells readers:

"It was not only the largest free-trade agreement in history, it was the best ever for U.S. workers, closing loopholes Nafta had left open. TPP included restrictions on foreign state-owned enterprises that dumped subsidized products into our markets, intellectual property protections for rising U.S. technologies — like free access for all cloud computing services — but also anti-human-trafficking provisions that prohibited turning guest workers into slave labor, a ban on trafficking in endangered wildlife parts, a requirement that signatories permit their workers to form independent trade unions to collectively bargain and the elimination of all child labor practices — all to level the playing field with American workers."

This is of course wrong. First, and most importantly, all the provisions on items like human trafficking, child labor, and trading in endangered wildlife depended on action by the administration. In other words, if the TPP had been approved by Congress last year we would be dependent on the Trump administration to enforce these parts of the agreement. Even the most egregious violations could go completely unsanctioned, if the Trump administration opted not to press them. Given the past history with both Democratic and Republican administrations, this would be a very safe bet.

In contrast, the provisions on items like violations of the patent and copyright provisions or the investment rules can be directly enforced by the companies affected. The TPP created a special extra-judicial process, the investor-state dispute settlement system, which would determine if an investor's rights under the agreement had been violated.



          Surface Laptop review: Microsoft’s best?   

Microsoft’s Surface line has been known for its daring and innovative takes on traditional computing formats like the tablet, the all-in-one PC, and the laptop. The Surface team in Redmond has released some amazing products over the past several years. What was once known as the team that introduced $900 million disasters like the Surface […]

Read More: Surface Laptop review: Microsoft’s best?


          Adjunct Faculty – Engineering Technology - RCC Institute of Technology (RCCIT) - Canada   
Fundamental concepts of Robotics. RCC Institute of Technology is looking for adjunct faculty within the Faculty of Engineering Technology and Computing to teach...
From RCC Institute of Technology (RCCIT) - Tue, 06 Jun 2017 06:07:32 GMT - View all Canada jobs
          Essential Computer Security: Everyone's Guide to Email, Internet, and Wireless Security   

Essential Computer Security provides the vast home user and small office computer market with the information they must know in order to understand the risks of computing on the Internet and what they can do to protect themselves. Tony Bradley is the Guide for the About.com site for Internet Network Security. In his role managing the content for a site that has over 600,000 page views per month and a weekly newsletter with 25,000 subscribers, Tony has learned how to talk to people, everyday people, about computer security. Intended for readers with no security background, Essential Computer Security is a source of jargon-free advice everyone needs to operate their computer securely.
* Written in easy-to-understand, non-technical language that novices can comprehend
* Provides detailed coverage of the essential security subjects that everyone needs to know
* Covers just enough information to educate without being overwhelming


          Royal Info Service Offered   
Wondering how you can get a part-time job and earn Rs.10000 monthly from our company? Start your career as an Ad Publishing Worker with Royal Info Service; it requires just simple computing skills. This is a home-based job where you can earn money 100% genuinely. An ideal home-based opportunity for students of both school and college grades, factory workers, stay-at-home moms, and retired couples to earn extra money...
          Hurry up!!! Special Offers on Cloud Computing Summer Training in Aptech Malviya   
Aptech Malviya Nagar's six-week, project-based summer training program on cloud computing will prepare you to step into the rapidly growing cloud computing market by giving you practical knowledge of the actual work required to be performed in the in...
          The future of apprenticeships, education in construction   
FIU in D.C. weekly update: The College of Engineering and Computing hosted the panel discussion “Education + Industry: Partnering for the Future of Construction”; President...
          High Performance Computing Postdoctoral Scholar (Physics) - Lawrence Berkeley National Laboratory - Berkeley, CA   
Interact with LBNL and other investigators working on similar and related scientific problems. Berkeley Lab’s Physics Division has two High Performance...
From Lawrence Berkeley National Laboratory - Fri, 23 Jun 2017 05:06:39 GMT - View all Berkeley, CA jobs
          Mobile Minutes: Venmo underground; Intel acquires Mobileye; Bitcoin miners revolt; Miscue challenges Amazon   
Today in mobile marketing – Venmo: The secret, hip social network you've never heard of; Intel's $15 billion purchase of Mobileye rewrites driverless landscape; Bitcoin miners signal revolt amid sluggish blockchain; Miscue calls attention to Amazon’s dominance in cloud computing.
          Leeds University Business School   

Hi! We are Emma and Maria, the authors of this blog post, two doctoral students in Library and Information Science at BHS (the Swedish School of Library and Information Science), who spent three weeks visiting Professor David Allen and his research group at the AIMTech research centre, Leeds University Business School. The centre has a clear focus on research and teaching in Information Systems and Information Management. Our own research is on new technology in organizations: social media (Emma) and cloud computing (Maria). The trip was extremely valuable, both for making contacts with other researchers in the field and for broadening our perspectives.

During our time in Leeds we also explored the possibilities for future collaboration between the Master's programme in Strategic Information and Communication offered at BHS (where we are both active as programme director/course coordinator/teachers) and their MSc in Information Systems and Information Management. It was very interesting to exchange knowledge and experience about the programmes and to discuss exciting development opportunities.

We did not work the whole time; we also managed to fit in plenty of other things: social activities with other doctoral students, exploring Leeds, a wonderfully charming city with welcoming people, trips to Edinburgh and Manchester, and many walks along the canal near where we stayed. We quickly found connections between Leeds and Borås: the city has a long textile history and a growing profile in sculpture and art. We are sure there are many reasons to come back (hopefully as soon as September), and we are now home in Sweden with our energy replenished. Finally, a big thank-you to Professor David Allen and his colleagues for a fantastically warm welcome!

          Minor Field Studies - Malaysia   
23rd of January 2015 - Singapore - 新加坡共和国 - Singapura

Marina Bay Sands, Marina Bay, Singapore
To be back in Singapore, after working at the National University of Singapore in 2012, is very exciting. The purpose of this trip is to go to Malaysia, but first I stay for two days here in the lovely "Lion City". Since there are no lions in Singapore, one might wonder why its nickname is Lion City. There is a long story, but to make it short: in Malay, Singapore is Singapura, derived from "singa" for lion and "pura" for city.

I have an interesting day ahead, going to work as a volunteer and plant trees at the Pasir Ris Park.


25th of January Malaysia – Johor Bahru

Pulai Springs Resort, JB, Malaysia
I got to Malaysia much later than expected today due to something I ate last night. It was quite troublesome to go through the Woodlands Checkpoint feeling sick; there is what amounts to a highly secured fort at the Singapore/Malaysia border. Well, it was worth the trouble once I arrived at my hotel in Johor Bahru. I was supposed to meet with the faculty today, but that changed at the very last minute. An early night to get ready for a morning meeting at Universiti Teknologi Malaysia, UTM.



26th of January Malaysia – Johor Bahru - Universiti Teknologi Malaysia


Visit at Universiti Teknologi Malaysia, UTM
My morning meeting took place at Media and Game Innovation Centre of Excellence (MaGICX), located on the UTM campus. The director of MaGICX, Dr. Mohd Sharizal Sunar, and Dr. Farhan Mohamed started the meeting with a presentation of their research centre and UTM. 
We discussed possibilities for computer science and information systems students from the University of Borås to do field studies in the region. 


They were interested and are willing to help students from Sweden applying for a Minor Field Study scholarship, great news for students with ambitions of conducting field studies in Malaysia. 

During the morning I was also introduced to other members of MaGICX and had a chance to see what kind of research they were conducting.




At lunchtime I was introduced to Dr. Adrian David Cheok from City University of London. He is a professor of Pervasive Computing and is setting up a new research lab in Iskandar, in the south of Malaysia. This region is very dynamic and interesting things are happening here! 


27th - 28th of January Malaysia – Johor Bahru - Universiti Teknologi Malaysia

I participated in the weekly morning meetings at MaGICX. The staff meet on Tuesdays for three presentations by seniors, PhD students, or master's students. It was an excellent opportunity for me to get insight into their work. I also had a chance to introduce myself, the purpose of my trip, and my own research.
After a nice lunch it was time for a campus tour, UTM has a large and lush campus. They even have stables at the campus!
The last meeting of the day was with the Deputy Dean of the Faculty of Computing, Dr. Siti Zaiton Mohd Hashim. We realized that we had a lot to discuss and booked another meeting the following day.




          C++ Developer - Distributed Computing - Morgan Stanley - Montréal, QC   
Comfortable programming in a Linux environment, familiar with ksh and bash. Morgan Stanley is a global financial services firm and a market leader in investment...
From Morgan Stanley - Wed, 28 Jun 2017 00:14:01 GMT - View all Montréal, QC jobs
          New method could enable more stable and scalable quantum computing   
Scientists have discovered a new topological material which may enable fault-tolerant quantum computing.


          Waycom is recruiting around twenty employees by the end of the year   

A telecom operator, hosting provider, and integrator specializing in cloud computing, network connectivity, managed services, and unified communications solutions, Waycom is continuing its nationwide expansion by recruiting around twenty employees by the end of the year.

To support its growth, Waycom is looking for around twenty new hires, including technical profiles (Linux/DevOps, networks and security, Python) and sales engineers. To attract this new talent, the group has chosen to strengthen its employer brand.

To that end, the telecom operator has set up a careers site for candidates. The platform helps candidates picture themselves at Waycom, in particular through photo and video content and employee testimonials.

Apply directly on the group's recruitment site: https://carrieres.waycom.net/fr


          NASA Seeking Reviewers for Remote Sensing Water Quality and Airborne Instrument Technology Transition Programs   

NASA’s Science Mission Directorate would like to direct scientists' attention to its new volunteer reviewer web form at http://science.nasa.gov/researchers/volunteer-review-panels/.

In addition to some Space Science programs, the Directorate currently is seeking reviewers specifically for the review of proposals to the Remote Sensing Water Quality (RSWQ) and Airborne Instrument Technology Transition (AITT) ROSES programs.

When interested scientists go to the web page and check the box for one of these programs, they also will have the opportunity to indicate their areas of expertise, e.g., shallow water systems, atmospheric correction for RSWQ and different types of suborbital instruments and scientific focus areas for AITT.


          High Performance Computing & Google Cloud Webinar 7/13   

 

The Higher Education & Research Google Cloud Platform team would like to invite you to a 45-minute webinar next month to learn how Google's public cloud can complement and extend your campus HPC to best serve the campus research community. You have made significant investments in high-performance computing; however, meeting ever-expanding computing needs and supporting the latest available technologies can be difficult and expensive. Join us to learn how GCP can further enable your campus community to think bigger and reach insights faster.

 

The webinar will take place on Thursday July 13th at 1pm EST/10am PST. Click here to register.

 

During this webinar we will cover the following:

• Why Google's Cloud?
• High Performance Computing & GCP
  - Extending your environment for batch processing
  - Innovative technologies: GPUs, TPUs
  - Price-to-performance advantage: preemptible machines, custom machines, discounts
• Spotlight: MIT Principal Research Scientist Andrew Sutherland shares his GCP experience
• Answers to your questions


          Productive teamwork   

June 14, 2017

MSU and the University of Michigan team up to advance the state’s autonomous vehicle research 

This year's recent Mackinac Policy Conference (MPC) served as the backdrop to show off some of what Michigan has to offer in autonomous vehicle technology and research.

Photo caption: Team effort -- Greg McGuire of University of Michigan Engineering and Spartan Engineer Garrick Brazil were among those who showcased Michigan's autonomous tech at the 2017 Mackinac Policy Conference in May.

Autonomous vehicles from Michigan State University and the University of Michigan, and some of the engineering students working on them, were under a tent on Shepler’s Ferry dock May 30-31 as visitors headed to the 2017 MPC on Mackinac Island. 

The event was hosted by Michigan Planet M, a statewide effort to elevate Michigan as the national hub of mobility innovations. Planet M, which was launched at last year’s MPC, is part of the Michigan Economic Development Corporation (MEDC). 

State officials attending this year’s MPC visited the tent en route to the Mackinac’s Grand Hotel. One of them was Kirk T. Steudle, director of the Michigan Department of Transportation, who said the focus is on how people and goods move around the state, country, and world. 

“The technology being developed right here in Michigan will be the center of it all,” Steudle said. “It will morph into the entire world in ways that society interacts with machines, and moving goods and products.” 

Steve Arwood, MEDC chief executive officer, said the state’s newest round of mobility innovations are just beginning. 

“When you think about what mobility means in Michigan, it’s our legacy in the auto industry obviously, and our future,” Arwood said. “That’s only one part of it. But when you think about all the different ways that mobility is going to change civil engineering for our roads, legal, insurance, and all the opportunities for people who might not have mobility -- it’s very exciting.” 

To hear more on this mobility conversation, visit the website of the Michigan PBS TV show: Under The Radar. 

Here's a sampling of some of the Twitter tweets that promoted events on the dock at Mackinaw City: 

• #MSU #SpartanEngineer Saif Imron explains some of the intelligent #autonomous devices featured at @MichiganPlanetM @sheplersferry, #mpc17.

• @michiganstateu leads in #autonomous computer vision, radars, antennas & hi-assurance computing. Pleased to rep at #MPC17 w/ @UMichMcity

• From Shepler’s Ferry: There's our own Andrew M. - maintenance extraordinaire and security chief for our @MichiganPlanetM guests. It's a great day on the Straits!

• From PlanetM: "#AutonomousVehicles also open up a whole new world for people with disabilities - and for our seniors." - @onetoughnerd #MPC17 #mobility 

Michigan's two largest universities had autonomous tech on display on Shepler's Ferry dock in Mackinaw City, each drawing loyal alums -- like Spartan Patty Shepler.

More on autonomous research at MSU
Autonomous or self-driving vehicles take a warehouse full of technology – cameras, radars and other sensors, security and recognition technology, and a trunkful of computers – to make it happen. 

MSU's autonomous research technology is part of a project known as CANVAS – Connected and Autonomous Networked Vehicles for Active Safety. Spartan scientists are focusing on the key areas of recognizing and tracking objects such as pedestrians or other vehicles; fusing data captured by radars and cameras; localization, mapping, and advanced artificial intelligence algorithms that allow an autonomous vehicle to maneuver in its environment; and computer software to control the vehicle. 

“Much of our work focuses on technology that integrates the vehicle with its environment,” said Hayder Radha, a professor of electrical and computer engineering and director of CANVAS. “In particular, MSU is a recognized leader in computer vision, radars and antenna design, high-assurance computing, and related technologies, all areas that are at the core of self-driving vehicles.”

CANVAS is part of the larger MSU Mobility Studio initiative, consisting of CANVAS plus smart infrastructure and mobility management for vehicles, pedestrians, and cyclists. Another important aspect of a future connected-and-autonomous vehicle is its ability to communicate with other vehicles and the infrastructure surrounding it. Such communications will, for example, enable a car to detect other vehicles that are approaching an intersection and recognize whether the other vehicle is going to stop in time.