Facebook changes algorithm to curb 'tiny group' of spammers   
SAN FRANCISCO (Reuters) - Facebook Inc said on Friday it was changing the computer algorithm behind its News Feed to limit the reach of people known to frequently blast out links to clickbait stories, sensationalist websites and misinformation.

          Algorithm Software Engineer   
MA-Lexington, Solidus is searching for an Algorithm Software Engineer. The Engineer will perform tasks including the development of algorithms for text, speech, and video; experimental evaluation of algorithms; and implementation of visual tools incorporating algorithms. Algorithm development tasks include development of information extraction tools in Java, C#, Matlab, and C++ for text processing, visualization
          Derek Crowden posted a blog post   

Facebook Promotion Tips For The Automotive Dealers

Facebook now has over 1 billion active users every single day, so any automotive dealer needs a presence on the social network. Potential customers will look at your social media profiles before making a buying decision. If the Facebook page associated with the dealership is inactive or does not exist, trust goes down; brand credibility increases when Facebook pages are properly managed. Here are some tips to help you achieve just that.

Photos And Videos Are Very Effective
Most people think that a page like the Lemon Law Group Partners reviews works best in establishing trust, but this is not actually the case. People want to see images and videos of the vehicles and the dealership, and posts with visual assets do far better than those without. If you want to maximize the organic reach of videos, upload them straight to the network, because Facebook's algorithm prefers natively uploaded videos. If you add pictures, use PNG images at 470 px.

Timing Is Vital
There is no universal best time to post content on Facebook; different times work differently depending on your audience. Use the Insights tab in your account to see when fans are online, and schedule posts for those times, when a larger part of your audience is present on the network.

Change Meta Tags For Facebook Audiences
Meta descriptions and titles matter for higher Google rankings, but for Facebook you want descriptions and titles that are as attention-grabbing as possible. You can alter them when you create a link post, before the post goes live on Facebook. Take advantage of this opportunity to make everything catchier.

Keep Things Simple
The text you add to each post should be as simple and to the point as possible. Avoid disorganized or spammy posts by staying under 500 characters. You do not want to reveal every detail, since your audience should want to click through to learn more; if more information has to be communicated, add links to the site so readers can read on.

Tags Are Useful
Most businesses do not have large advertising budgets, and on Facebook it is difficult to achieve very high organic reach. One trick is to tag businesses and friends to gain visibility and increase the number of people seeing the posts. Alternatively, find groups that are relevant to your industry and share content there; just write a different description every time you share to a group so you also gain visibility in search engines.

          An Investigation of Subtraction Algorithms from the 18th and 19th Centuries   

Survey includes both cyphering books and arithmetic texts.


          Deal: Save 40% off UVI Sparkverb, UVX-3P & UVX-10P synth emulations 50% off   
Plugin Boutique has launched an exclusive sale on Sparkverb, the algorithmic reverb plugin that traverses everything from natural-sounding spaces to infinite,
          Reinders: “AVX-512 May Be a Hidden Gem” in Intel Xeon Scalable Processors   

Imagine if we could use vector processing on something other than just floating point problems.  Today, GPUs and CPUs work tirelessly to accelerate algorithms based on floating point (FP) numbers. Algorithms can definitely benefit from basing their mathematics on bits and integers (bytes, words) if we could just accelerate them too. FPGAs can do this, […]

The post Reinders: “AVX-512 May Be a Hidden Gem” in Intel Xeon Scalable Processors appeared first on HPCwire.


          Personalized Notifications at Twitter   

Gary Lam, staff engineer at Twitter, spoke about personalized notifications at QCon London 2017. This involved giving a high-level overview of their personalization and recommendations algorithms, and an explanation of how they work at scale despite the large volumes of data and bi-modal nature of Twitter.

By Andrew Morgan
          Ciena teams with University of Waterloo   
Ciena announced that it is working with engineering researchers at the University of Waterloo to develop solutions to help network operators and Internet providers address the ever-increasing demand for faster data transmission over the Internet.

The partners stated that the research relationship has received funding support from the Natural Sciences and Engineering Research Council of Canada (NSERC).

A key area of the University of Waterloo's partnership with Ciena focuses on realising the maximum possible capacity from the optical cables that run under the oceans and which handle around 95% of intercontinental communications, including an estimated $10 trillion per day in financial transactions. Ciena noted that the reliable, high-speed transmission of huge amounts of data over undersea cables is increasingly important in fields including healthcare and academic research.

For the research program, Amir Khandani, a professor of electrical and computer engineering at Waterloo, is leading a team of post-doctoral fellows and graduate students that are developing algorithms designed to efficiently and rapidly correct errors, including lost or dropped bits of data, that occur during extremely high-speed, long-distance optical transmission.

When incorporated on the electronic chips that are built into equipment for receiving and transmitting data, the algorithms developed by the Waterloo team can free up cable capacity, while also enabling the faster correction of errors in line with other technological advances in optical communications.


Under the three-year partnership, announced at an event at the University of Waterloo, Mr. Khandani holds the position of Ciena/NSERC Industrial Research Chair on Network Information Theory of Optical Channels. Ciena noted that the relationship between Waterloo Engineering and Ciena has already produced seven U.S. patents, with additional patents pending.




          Europe seeks company to monitor Google's algorithm in €10m deal   

Follows €2.4bn mega fine

The European Commission is seeking a company to police Google’s algorithm in a tender worth €10m, following its record antitrust fine against the advertising business of €2.4bn (£2.1bn).…


           Smoothed analysis of the 2-Opt algorithm for the general TSP    
Englert, Matthias, Röglin, Heiko and Vöcking, Berthold. (2016) Smoothed analysis of the 2-Opt algorithm for the general TSP. ACM Transactions on Algorithms, 13(1), 10. ISSN 1549-6325
          Comment on Google Patents Extracting Facts from the Web by mehedi hasan   
I am trying to learn about SEO; I want to know how to rank a website properly, and I have gotten some ideas from here that inspired me. Machine learning is totally a game changer lately. It’s still developing and so far the results are awesome! I cannot even comprehend the complexity of machine learning algorithms.
          You May Wear Pants   

The Morning Heresy is your daily digest of news and links relevant to the secular and skeptic communities.

Cardinal George Pell is the third-most powerful official in the Catholic Church as head of the Vatican’s finances. And he’s just been charged with multiple sexual assaults in his native Australia. I think it’s time to play this song again.

Gov. Matt Bevin of Kentucky signs into law a bill allowing Bible classes in public schools. I’m sure that not far behind are classes on the Qur’an, the Bhagavad Gita, Dianetics, and the Books of Bokonon. Any day now.

ProPublica reveals “a trove” of Facebook’s internal documents showing its secret guidelines for determining what is and is not hate speech, and thus able to be censored. And some of it is frankly disturbing:

One document trains content reviewers on how to apply the company’s global hate speech algorithm. The slide identifies three groups: female drivers, black children and white men. It asks: Which group is protected from hate speech? The correct answer: white men. ... White men are considered a group because both traits are protected, while female drivers and black children, like radicalized Muslims, are subsets, because one of their characteristics is not protected. 

Adi Robertson at The Verge notes that Facebook’s task is more or less impossible:

If the company decides to promote a positive social environment without taking clear political sides, it will trend toward a faux neutrality where “hate” is any negative opinion, punishing people who criticize the status quo. But if it admits to an ideological bent, it will have to start formulating political stances on which groups worldwide deserve the most protection from hate. The more social responsibility it accepts, the more liable it is for failing to police its users, and the more power it has to control speech — not just comments to other users, but personal timeline posts or photographs. 

Oh, hey hey hey, make sure you dig the latest Cause & Effect newsletter. It’s got international news, SCOTUS news, and not all of the news is bad!

CFI’s Benjamin Radford is on the panel for a discussion about psychics (along with a bunch of alleged psychics) on WNPR’s Colin McEnroe Show.

Another “psychic,” Michelle Marks of Dixon, Illinois, is arrested for “exploitation of an elderly person and multiple weapons charges.” Wow she was busy!

Researchers at the Catholic University of Louvain wanted to see if atheists in Western Europe were more or less closed-minded than the religious. And that’s what they found, more or less:

Atheists tended to show greater intolerance of contradiction, meaning when they were presented with two seemingly contradictory statements they rated one as very true and the other as very false. They also showed less propensity to be able to imagine arguments contrary to their own position and find them somewhat convincing.

A group of scientists and other figures sign on to an initiative called Mission 2020, intended to get greenhouse gas emissions to “curve downward” by 2020, because failing that, the goals of Paris become unattainable. While it’s noble, I am skeptical. Especially after my Point of Inquiry interview with Elizabeth Kolbert earlier this month.

Undaunted by my kind of despair, a “global covenant of mayors” pledging to meet the goals of the Paris accord has brought on board more than 7400 cities.

Cleve Wootson at WaPo introduces us to Michael Tate Reed, the guy who keeps ramming his car into Ten Commandments monuments, as he did in Arkansas yesterday:

[In 2014] He was diagnosed with schizoaffective disorder and released under an agreement that required him to continue treatment. He sent a rambling letter to the newspaper apologizing and describing the voices in his head and his attempts to recover from mental health issues. 

Speaking of which, Joseph Frankel at The Atlantic looks at how the phenomenon of “hearing voices” in one’s head—which is sometimes what makes people think they’re psychic, and others say they’re psychotic—may be more common than once thought, and also may be more controllable:

These experiments suggest that auditory hallucinations are the result of the mind failing to brand its actions as its own. Watching what the brain does during these hallucinations may clarify how that works, and what differences in the brain create these experiences. ... Drawing a parallel with Autism Spectrum Disorder, the [researchers] are interested in the extent to which the psychics they saw “might occupy the extreme end of a continuum” of people who hear voices. 

The Mormon Church is taking bold strides into the latter half of the 20th century, offering paid maternity or parental leave to church employees and (hold on to your butts) allowing women to wear pants.

A California health department report says that 111 people have chosen to take their own lives in the first six months of the state’s right-to-die law. 

Uranus isn’t just odd because its name makes you chuckle (which is why I must pronounce it “YURen-uss,” not, you know, the other way). It’s on a crazy 98-degree axis, and its magnetic shield is all wobbly.

When there’s somethin’ strange…in the neighborhood…who you gonna call? The Royal Thai Police Force! They ain’t afraid o’no phi pob. 

Ken Ham’s Ark Encounter is a fun, scientific, educational museum experience for the whole family! Oh wait, the city wants them to pay a “safety assessment fee”? I meant to say, it’s a ministry. So, sorry. Exempt. Phew! I mean, “amen.”

My home state of Maine gets its first case of measles in 20 years. Oh god I hope it’s not me! 

Quote of the Day:

I’m just gonna give this one to Colbert:

* * * 

Linking to a story or webpage does not imply endorsement by Paul or CFI. Not every use of quotation marks is ironic or sarcastic, but it often is.

Image by Chris&Rhiannon (CC BY-NC-SA 2.0)

Follow CFI on Twitter: @center4inquiry

Got a tip for the Heresy? Send it to press(at)centerforinquiry.net!

News items that mention political​ candidates are for informational purposes only and under no circumstances are to be interpreted as statements of endorsement or opposition to any political candidate. CFI is a nonpartisan nonprofit.

 

The Morning Heresy: “I actually read it.” - Hemant Mehta

 


          Java SE 8 Update 131 and More    

Java SE 8u131 (Java SE 8 update 131) is now available. Oracle strongly recommends that most Java SE users upgrade to the latest Java 8 update, which includes important security fixes. For information on new features and bug fixes included in this release, please read the following release notes:

Important Note: Starting with this Critical Patch Update release, all JRE versions will treat JARs signed with MD5 as unsigned. Learn more and view testing instructions here. For more information on cryptographic algorithm support, please check the JRE and JDK Crypto Roadmap.

Oracle Java SE Embedded Version 8 Update 131 is also available. You can create customized JREs using the JRECreate tool. To get started, download an eJDK bundle suitable for your target platform and follow instructions to create a JRE that suits your application's needs.

Also released are Java SE 7u141 and Java SE 6u151, which are both available as part of Oracle Java SE Support. For more information about those releases, please read the following release notes: 


          What's Cool in Java 8, and New in Java 9    

Which features in Java 8 and 9 should you look at first? Aurelio Garcia-Ribeyro, director of product management for the Java platform, explains the most popular features of Java 8 and 9.

The proposed schedule for Java 9 general availability is July 27, 2017. Java 9 has over 90 Java Enhancement Proposals (JEPs) that are available in the JDK 9 early access builds. To help you navigate those JEPs, Aurelio classified them into 5 categories: behind the scenes, new functionality, new standards, housekeeping, and gone. In this presentation, he describes some of the most important JEPs in each category.

 

Behind the scenes: functionality you get by default in JDK 9. It is compatible with your current code and tools, and you get better performance.

250: Store Interned Strings in CDS Archives
254: Compact Strings 
225: Javadoc Search
 
New functionality: new capabilities that require code changes or new tools to take advantage of.

Project Jigsaw: Module system for the Java platform 
282: jlink, the Java Linker 
277: Enhanced deprecation 
269: Convenience Factory Methods for Collections 
222: jshell, the Java Shell (a read-eval-print loop, REPL) 
238: Multi-release JAR Files 

New standards: JDK 9 will take advantage of new standards in the industry. 

267: Unicode 8.0 
226: UTF-8 Property resource bundles 
249: OCSP Stapling for TLS 
287: SHA-3 Hash algorithms 
110: HTTP/2 Client 

Housekeeping: improvements to existing libraries, changes to internal code, and groundwork for future improvements.

260: Encapsulate most internal APIs
275: Modular Java application packaging 
223: New version-string scheme 
295: Ahead of time compilation 
280: Indify string concatenation 
271: Unified GC Logging 
248: Make G1 the default Garbage Collector 
213: Milling Project Coin 
290: Filter incoming serialization data 
214: Remove GC combinations  

You can test any of those functionalities by trying the JDK 9 early access builds. The full list of JEPs is available as part of the OpenJDK JDK 9 project.

For JDK 8, Aurelio gives an example demonstrating how to use lambdas in Java 8 to pass not just data but behavior. He also explains default methods and method references. For example, when you define a collection interface you can provide a default method; this keeps code written against older versions of your interface compatible while still letting you add new functionality.
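The talk carries its own examples, but as a rough, self-contained illustration of those two ideas (passing behavior with a lambda or method reference, and evolving an interface with a default method), here is a minimal Java sketch; the class and interface names are made up for this note, not taken from the presentation.

import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

public class LambdaDemo {

    // Hypothetical interface showing a default method: existing implementations
    // keep compiling while gaining describe() for free.
    interface Named {
        String name();
        default String describe() { return "name=" + name(); }
    }

    // Passing behavior, not just data: the caller supplies the filtering rule.
    static long countMatching(List<String> items, Predicate<String> rule) {
        return items.stream().filter(rule).count();
    }

    public static void main(String[] args) {
        List<String> words = Arrays.asList("lambda", "default", "method", "reference");

        System.out.println(countMatching(words, w -> w.length() > 6)); // lambda: prints 2
        System.out.println(countMatching(words, String::isEmpty));     // method reference: prints 0

        Named n = () -> "demo";
        System.out.println(n.describe());                              // default method: prints name=demo
    }
}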

Chapters for Java 8 features 
Lambda Expressions - 2m29s 
Default Methods and Method References - 10m04s 
Date and Time API - JSR 310 


          VERYX® C140 Digital Sorters come with multi-sensor Pixel ...   
Available in belt-fed and chute-fed configurations in various widths, VERYX® C140 Digital Sorters come with specialized infeed and collection shakers. Capable of combining inputs from multiple cameras and laser sensors, the units are designed with self-adjustment algorithms, smart alarms and FMAlert™. Belt-fed systems sort frozen and fresh-cut fruits and vegetables, leafy greens and potato chips, whereas chute-fed systems sort nuts, dried fruits and IQF products.

This story is related to the following:
Machinery & Machining Tools
Material Handling & Storage

Search for suppliers of: Sorting Machinery | Sorters | Potato & Onion Sorters

           Facebook changes algorithm to curb spammers    
SAN FRANCISCO, June 30 (Reuters) - Facebook Inc said on Friday it was changing the computer algorithm behind its News Feed to limit the reach of people who...
          Comment on The Biggest Shift in Supercomputing Since GPU Acceleration by jimmy   
@Rob, the deep learning algorithms for object recognition by far surpass anything that people were able to do with classical deterministic models. That's why they are being used for self-driving cars; they have been proven to increase driver safety by a good margin. You can mention the one-off cases in the early days of self-driving, but that's not an interesting statistic at all. Deep learning is essentially an attempt to make sense of tons of noisy data, and many of the models today are not understood by their authors: "hey, we did this and now we got this awesome result", very ad hoc. In the end though, it's all statistical mathematics; it's just that at the moment the slightly theoretically challenged CS folks are playing with this new toy, and mathematical understanding is inevitable.
          Software Development Manager - AFT Entropy Management Tech - AMZN CAN Fulfillment Svcs, Inc - Toronto, ON   
We operate at a nexus of machine learning, computer vision, robotics, and healthy measure of hard-earned expertise in operations to build automated, algorithmic...
From Amazon.com - Tue, 27 Jun 2017 14:12:51 GMT - View all Toronto, ON jobs
          Software Developer (m/f) Driver Assistance Systems - avitea GmbH - Lippstadt   
Further development of successful driver assistance systems; software development in the area of radar systems and signal processing; development of algorithms
Found via avitea - Wed, 28 Jun 2017 10:21:13 GMT - View all Lippstadt jobs
          Facebook changes algorithm to curb "tiny group" of spammers   
The move is another step by Facebook to weed out spam, a battle that gained urgency after hoax news stories spread widely during last year's US election.
          (IT) Hadoop Architect/Developer   

Location: Foster City, CA   

Key Responsibilities: Visa is currently seeking a Senior Hadoop Architect/Developer with extensive experience in RDBMS data modelling/development and Tableau development in the Finance area to deliver the new Corporate Analytics strategic framework initiative. This BI platform provides analytical and operational capability to various business domains used by Corporate Finance Systems. The Developer role will be primarily responsible for designing, developing and implementing Hadoop framework ETL against relational databases and building Tableau reporting on top of it. The new Hadoop framework will be used to rebuild the Oracle Financial Analytics/P2P/Spend/Fixed Asset solution from scratch, moving it from OBIA into Hadoop. The individual should have a finance business background with extensive experience in OBIA Fixed Assets, P2P, Financial Analytics, Spend Analytics, and Projects, and be an expert in Hadoop framework components such as Sqoop, Hive, Impala, Oozie, Spark, HBase and HDFS.

  • Architect, design and implement column family schemas for Hive and HBase within HDFS; assign schemas and create Hive tables; manage and deploy HDFS/HBase clusters.
  • Develop efficient Pig and Hive scripts with joins on datasets using various techniques; assess the quality of datasets for a Hadoop data lake; apply HDFS formats and structures such as Parquet and Avro to speed up analytics.
  • Fine-tune Hadoop applications for high performance and throughput; troubleshoot and debug any Hadoop ecosystem runtime issues.
  • Hands-on experience configuring and using Hadoop ecosystem components such as Hadoop MapReduce, HDFS, HBase, Hive, Sqoop, Spark, Impala, Pig, Oozie, Zookeeper and Flume.
  • Strong programming skills in Scala or Python to work on Spark; experience converting core ETL logic using PySpark SQL or Scala.
  • Good experience with Apache Hadoop MapReduce programming, Pig scripting, distributed applications and HDFS.
  • In-depth understanding of data structures and algorithms.
  • Experience managing and reviewing Hadoop log files.
  • Experience setting up standards and processes for Hadoop-based application design and implementation.
  • Experience importing and exporting data with Sqoop between HDFS and relational database systems.
  • Experience in object-oriented analysis and design (OOAD) and software development using UML methodology; good knowledge of J2EE and core Java design patterns.
  • Experience managing Hadoop clusters using the Cloudera Manager tool.
  • Very good experience with the complete project life cycle (design, development, testing and implementation) of client/server and web applications.
  • Experience connecting Hadoop framework components to Tableau reporting; expert in Tableau data blending and data modeling.
  • Create functional and technical design documentation.
  • Perform unit and QA testing of data loads and develop scripts for data validation.
  • Support QA, UAT, SIT and
 
Type: Contract
Location: Foster City, CA
Country: United States of America
Contact: Baljit Gill
Advertiser: Talentburst, Inc.
Reference: NT17-11842

          (IT) Senior Scala Engineer   

Rate: Euros 550 - 600 per day   Location: Dublin   

My client is a Financial Services powerhouse looking to add a number of top Scala Engineers to their team based here in Dublin. It is a great time to come over and contract in Dublin, as the Euro is very strong against the British Pound and, with the added government benefits for UK-based contractors coming over, now is a great time to secure a contract role here in Dublin for more bang for your buck.

They are looking for people who have commercial experience in Scala, but are also happy to speak with people from a strong Java background who want to enter a commercial environment and gain Scala exposure. It is a great team with a very tech-driven manager who ensures that his team not only use best practice to deliver the large-scale applications they are building, but have fun doing it.

Job Description: If you are committed to change and like a challenge, then this is the place for you. My client has moved from traditional application-aligned software delivery methods to highly agile cross-functional and component teams. This is one of the most interesting and exciting times to join the organisation. Their culture is one of excellence: teamwork, learning, delivering value and believing in their people. They look for new team members who love to learn, take initiative and take opportunities to improve processes. If working among people who are committed to leading the industry appeals to you, then you need to apply for this role. They want you to be part of a strategic drive to migrate their classic applications to the new strategic architecture; the strategic system is being built in Scala and Python, using Angular in the web stack. They follow Agile, Scrum and Kanban delivery methodologies, with a focus on ATDD/BDD and TDD.

Responsibilities: You will be a key figure within the Dublin team, working across a number of teams in other regions within an Agile team, writing requirements and specifications (e.g. automated tests) and developing relationships across the technology and user communities, while always focusing on delivering value with exceptional quality. As the successful applicant you will be responsible for supporting modern agile software development methods, including educating and mentoring less experienced OOP team members and working with teams across London and other regions.

Requirements: A background in a modern OO language with experience in at least one of Java/Scala/Python/C#, with a preference for Scala. Experience using design patterns and following best software engineering practices. An understanding of fundamental algorithms and the ability to optimise existing code. Proficient written and verbal communication skills to support and shape the platform and clearly articulate technical designs and concepts. Relationship-building skills. A team player with exceptional interpersonal skills, e.g. collaborative working skills. Experience of Specification by
 
Rate: Euros 550 - 600 per day
Type: Contract
Location: Dublin
Country: Ireland
Contact: Brendan Hennessy
Advertiser: Stelfox Ltd
Email: Brendan.Hennessy.40B2C.FD891@apps.jobserve.com
Start Date: ASAP
Reference: JS

          Natural Cycles Fertility App   



Whether you're trying to conceive quickly or prevent pregnancy without the use of contraception, knowing your cycle in depth is key. A calendar can be handy for tracking dates but with technology being in the palm of our hands, quite literally, it's definitely worth downloading the Natural Cycles fertility app for the latest information on the go. 


The app was created by Dr Raoul Scherwitzl and Dr Elina Berglund, a Swedish husband-and-wife team, and it has already helped over 5,000 women to conceive. That's quite remarkable, especially as many became pregnant within just 3 months of trying. Recent studies also show that Natural Cycles can be as effective as the pill, so if, on the other hand, you would like a break from contraception without pregnancy as a result, it's a reliable alternative.





How does it work?

Each morning a woman needs to take just a few seconds to record her temperature (under her tongue) and input the details into the app. You will receive a free thermometer if you subscribe to the app for a full year, so you don't even need to worry about purchasing one. This, along with tracking her period, enables the app's unique algorithm to help an individual identify when they are ovulating and their fertile window, so they know exactly when they should, or shouldn't, have sex.

The more you track the more accurate your ovulation prediction becomes.


Dr Elina Berglund says “The success of Natural Cycles depends on its algorithm. We’ve called the algorithm Alba (after our daughter) and it’s unique because it has collected data from hundreds of thousands of cycles. This means we can adapt to each individual woman’s body and, with a high degree of precision and accuracy, determine when she is ovulating.”




The app is easy to download via the iTunes or Google stores and very simple to navigate. I've been using it for a few weeks and it has quickly become part of my daily routine. I just pop in the relevant data and if I'm fertile on a specific day then the cycle date will be circled in red, green if not, making it clear whether sex should be on the agenda for me.


Upon registering you enter some basic details which include your height, weight, date of birth, last known cycle information and whether you intend to track your fertility or plan a pregnancy. Then you're ready to begin. You can of course change these details if you later decide that you are ready to try for a baby or vice versa. You can even get a free month trial to see how it works for you before committing which I really appreciated - it's good to try before you buy.




I have been on the pill for many years to prevent pregnancy but I decided to have a break from it to test the app. I felt as though tracking the signs my body provides with Natural Cycles would be sufficient to prevent a bundle of joy, for the time being, and it has been successful in doing so. This is now a method that I intend to use for the foreseeable future as I feel it can be trusted well and it has also given me a much better understanding of my body, something I've not experienced before.


Natural Cycles is offering women in the UK a refund if they don’t conceive in the first 9 months of use as well as 50% off a yearly subscription by visiting https://www.naturalcycles.com/en/plan and inputting this code (valid until 31 December): MiniMesNC


What contraception do you use? Did you conceive easily?
Would you use the app?




          Render Wrangler - Rodeo FX - Canada   
Knowledge of queuing systems, scheduling algorithms, SQL (MySQL), Qube, Maya, Houdini, Nuke, Arnold and other VFX industry tools a plus....
From Rodeo FX - Tue, 27 Jun 2017 07:40:14 GMT - View all Canada jobs
          Peer-Reviewed Research: Penises Cause Warming   


By Professor Doom

     Advanced mathematics is not for the uninitiated. Even with years of training, it’s easy enough to go to a research seminar and have at best merely a basic idea of what the latest findings are about. Experts in the field usually understand completely of course, but even if what’s being said seems incomprehensible to the layman there’s no way you can “fake it” well enough to fool an expert. In short: while both an expert mathematician and a lunatic can spew what looks like mathematical gibberish, only the former can do it in a way that’s still comprehensible to mathematicians. You just can’t fake it well enough to fool an expert.

      I’ve looked at research in other fields, with the belief that I’d understand only the basics. Thus, I was surprised to find “advanced topics” in Education and Administration are incredibly basic and accessible to anyone, even if other fields (hi Physics!) definitely made me feel quite limited in my understanding of advanced topics.

     I know full well if I tried to imitate writing and research in advanced physics, an expert would casually shred my gibberish. And I’ve demonstrated that, with no effort, I can emulate “advanced” Education and Administration writing.

      Gender studies and gender-related studies are big on campus these days. Casual inspection on my part led me to believe it was meaningless at best, and ideological indoctrination at worst. I’m hardly the only scholar to make such conjectures, but scholars know that “conjecture” is just a fancy way to say “guess.” A couple of scholars decided to prove this stuff is just plain ol’ crap:


-- Sokal1 refers to a previous hoax played on these guys, years ago.

     The two “researchers” made a point of generating a 3,000 word paper packed with jargon and devoid of any meaning. A sample paragraph will give the gist of it:
Destructive, unsustainable hegemonically male approaches to pressing environmental policy and action are the predictable results of a raping of nature by a male-dominated mindset. This mindset is best captured by recognizing the role of [sic] the conceptual penis holds over masculine psychology. When it is applied to our natural environment, especially virgin environments that can be cheaply despoiled for their material resources and left dilapidated and diminished when our patriarchal approaches to economic gain have stolen their inherent worth, the extrapolation of the rape culture inherent in the conceptual penis becomes clear….

     The whole paper of fake research is much like the above, with the key conclusion:
The conceptual penis presents significant problems for gender identity and reproductive identity within social and family dynamics, is exclusionary to disenfranchised communities based upon gender or reproductive identity, is an enduring source of abuse for women and other gender-marginalized groups and individuals, is the universal performative source of rape, and is the conceptual driver behind much of climate change.

      The entire above paragraph is actually just one sentence, but the reader could be forgiven for not reading it through. Allow me to edit it down to at least a minimal level of comprehensibility (keeping in mind the authors were deliberately trying not to be understood):
The conceptual penis…is the conceptual driver behind much of climate change.

      The paper is pure gibberish, little different than simply stringing along a bunch of mathematical symbols and believing it to mean something. The researchers even used a well-known (in the right circles) piece of software to generate the “research.” Yes, this field is so ridiculous (despite the regularly growing departments on campus) that somebody actually wrote a research paper generator for it:
Some references cite the Postmodern Generator, a website coded in the 1990s by Andrew Bulhak featuring an algorithm…that returns a different fake postmodern “paper” every time the page is reloaded. We cited and quoted from the Postmodern Generator liberally;   

      Keep that in mind: not only did they use a gibberish generator for the paper, they used it as a reference, not that anyone noticed—the experts in this field do not even know when they are being mocked! Other references in the paper were likewise questionable (to be generous):

Not only is the text ridiculous, so are the references. Most of our references are quotations from papers and figures in the field that barely make sense in the context of the text. Others were obtained by searching keywords and grabbing papers that sounded plausibly connected to words we cited. We read exactly zero of the sources we cited, by intention, as part of the hoax.


     Of course, the researchers wrote it all under pseudonyms. They then sent it out to peer reviewed journals for publication. I really want to emphasize this: peer review is considered the gold standard of publication, even though time and again it’s been revealed as flawed at best and highly corrupt at worst. Mostly the corruption is by coordinating with the reviewers but in this case the researchers decided to have legitimate experts in the field legitimately review the paper. Why did they even hope that their hoax would possibly work?

That is, we assumed we could publish outright nonsense provided it looked the part and portrayed a moralizing attitude that comported with the editors’ moral convictions. Like any impostor, ours had to dress the part, though we made our disguise as ridiculous and caricatured as possible...


     Identity politics and political correctness are destroying our campuses (and some would say, the country). The researchers are quite justified in wondering if these things are also destroying what we now inaccurately call “science.”

     So, they wrote a gibberish paper with bogus references. That’s the easy part. Next, they sent their paper to journals, and did receive rejections—none of the rejections noted that the paper was pure hokum. But one journal suggested another which might be amenable:

We feel that your manuscript would be well-suited to our Cogent Series, a multidisciplinary, open journal platform for the rapid dissemination of peer-reviewed research across all disciplines.


     Cogent sent it to reviewers:

We took them up on the transfer, and Cogent Social Sciences eventually accepted “The Conceptual Penis as a Social Construct.” The reviewers were amazingly encouraging, giving us very high marks in nearly every category. For example, one reviewer graded our thesis statement “sound” and praised it thusly, “It capturs [sic] the issue of hypermasculinity through a multi-dimensional and nonlinear process” (which we take to mean that it wanders aimlessly through many layers of jargon and nonsense). The other reviewer marked the thesis, along with the entire paper, “outstanding” in every applicable category.


      So, a paper that is unarguably complete gibberish can pass the peer review process in this field. Granted the reviewers did have a few issues even with a paper they loved:

They didn’t accept the paper outright, however. Cogent Social Sciences’ Reviewer #2 offered us a few relatively easy fixes to make our paper “better.” We effortlessly completed them in about two hours, putting in a little more nonsense about “manspreading” (which we alleged to be a cause of climate change) and “dick-measuring contests.”


     Now, the gentle reader might well believe that the journal is just a sham. Not true! Journals have their own accreditation system:

First, Cogent Social Sciences operates with the legitimizing imprimatur of Taylor and Francis, with which it is clearly closely partnered. Second, it’s held out as a high-quality open-access journal by the Directory of Open Access Journals (DOAJ), which is intended to be a reliable list of such journals. In fact, it carries several more affiliations with similar credentialing organizations...


      Much as higher education has serious, grave problems with a bogus accreditation system, so too do journals, apparently. The researchers’ conclusion regarding the field of gender studies is valid:

“…there are significant reasons to believe that much of the problem lies within the very concept of any journal being a ‘rigorous academic journal in gender studies.’”


     I have two conclusions based on this wildly successful hoax:

1) I often have global warming believers tell me of the hundreds of peer reviewed studies supporting the notion that Earth will boil over any minute now because of humanity’s technology. I have my doubts, and knowing that a complete hoax article supporting such ideas can easily be peer reviewed and published only increases my doubts further. This paper comes as close as possible to literally saying man is responsible for global warming…and is rubbish.

2) When I was at a community college, I often encountered faculty and administrators about whom, after even a brief conversation, I simply could not fathom how they made it through a graduate-level program. I gave them the benefit of the doubt, but after longer conversations, the question kept reverberating in my mind: how? They got their degrees and positions by writing papers much like this hoax paper, and it’s clear we have a whole industry of hoax “science” publishing, doing much to explain the surplus of advanced degrees in these strange fields.


     The two authors set out to create a hoax paper, and succeeded brilliantly. Yes, it was done nearly 20 years ago by Sokal, but that only serves to demonstrate nothing has changed. At this point, as I’ve told many friends, when it comes to “the latest scientific research,” you may as well flip a coin when deciding whether it’s true or not.




 1)     In 1996, Alan Sokal, a Professor of Physics at NYU, published the bogus paper, “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity,” in the preeminent cultural studies journal Social Text which is in turn published by Duke University Press. The publication of this nonsense paper, in a prestigious journal with a strong postmodernist orientation, delivered a devastating blow to postmodernism’s intellectual legitimacy.


          Now, algorithm can predict smell of new molecules   

The post Now, algorithm can predict smell of new molecules appeared first on Online Telangana News|Latest Telangana News|Telangana Updated News.


          Yelp created the hotdog app from Silicon Valley (only better)   

Yelp’s Photos Team is developing AI and machine learning algorithms to understand content in user photos. Using neural networks, the systems are able to identify specific attributes that make a photo “beautiful.” The neural network can also identify and categorize based on the type of dish in the photo - meaning that Silicon Valley’s hotdog-identifying app actually exists. The motivation behind the AI is the rise of photo sharing at restaurants. People love snapping and gramming the trendiest spots, dishes, and menus, but they also love posting on Yelp. Over 100,000 photos are uploaded to Yelp per day, so the…

This story continues at The Next Web

          Weekend project: Ghetto RPC with redis, ruby and clojure   

There’s a fair number of things that are pretty much settled on current architectures. Configuration management is handled by chef, puppet (or pallet, for the brave). Monitoring and graphing are getting better by the day thanks to products such as collectd, graphite and riemann. But one area which - at least to me - still has no obvious go-to solution is command and control.

There are a few choices which fall in two categories: ssh for-loops and pubsub based solutions. As far as ssh for loops are concerned, capistrano (ruby), fabric (python), rundeck (java) and pallet (clojure) will do the trick, while the obvious candidate in the pubsub based space is mcollective.

Mcollective has a single transport system, namely STOMP, preferably set-up over RabbitMQ. It’s a great product and I recommend checking it out, but two aspects of the solution prompted me to write a simple - albeit less featured - alternative:

  • There’s currently no other transport method than STOMP and I was reluctant to bring RabbitMQ into the already well blended technology mix in front of me.
  • The client implementation is ruby only.

So let me here engage in a bit of NIHilism and describe a redis based approach to command and control.

The scope of the tool would be rather limited and only handle these tasks:

  • Node discovery and filtering
  • Request / response mechanism
  • Asynchronous communication (out of order replies)

Enter redis

To allow out of order replies, the protocol will need to broadcast requests and listen for replies separately. We will thus need both a pub-sub mechanism for requests and a queue for replies.

While redis is at its core an in-memory key value store with optional persistence, it offers a wide range of data structures (see the full list at http://redis.io) and pub-sub support. No explicit queue function exists, but two operations on lists provide the same functionality.

Let’s see how this works in practice, with the standard redis-client redis-cli and assuming you know how to run and connect to a redis server:

  1. Queue Example

    Here is how to push items on a queue named my_queue:

    redis 127.0.0.1:6379> LPUSH my_queue first
    (integer) 1
    redis 127.0.0.1:6379> LPUSH my_queue second
    (integer) 2
    redis 127.0.0.1:6379> LPUSH my_queue third
    (integer) 3
    

    You can now subsequently issue the following command to pop items:

    redis 127.0.0.1:6379> BRPOP my_queue 0
    1) "my_queue"
    2) "first"
    redis 127.0.0.1:6379> BRPOP my_queue 0
    1) "my_queue"
    2) "second"
    redis 127.0.0.1:6379> BRPOP my_queue 0
    1) "my_queue"
    2) "third"
    

    LPUSH as its name implies pushes items on the left (head) of a list, while BRPOP pops items from the right (tail) of a list, in a blocking manner, with a timeout argument which we set to 0, meaning that the action will block forever if no items are available for popping.

    This basic queue mechanism is the main mechanism used in several open source projects such as logstash, resque, sidekiq, and many others.

  2. Pub-Sub Example

    Channels can be subscribed to through the SUBSCRIBE command. You’ll need to open two clients; start by issuing this in the first:

    redis 127.0.0.1:6379> SUBSCRIBE my_exchange
    Reading messages... (press Ctrl-C to quit)
    1) "subscribe"
    2) "my_hub"
    3) (integer) 1
    

    You are now listening on the my_exchange exchange, issue the following in the second terminal:

    redis 127.0.0.1:6379> PUBLISH my_exchange hey
    (integer) 1
    

    You’ll now see this in the first terminal:

    1) "message"
    2) "my_hub"
    3) "hey"
    
  3. Differences between queues and pub-sub

    The pub-sub mechanism in redis broadcasts to all subscribers and will not queue up data for disconnected subscribers, whereas queues deliver each item to the first available consumer, but will queue up data in RAM (so make sure of your ability to consume it).
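For completeness, here are the same two primitives driven from code rather than redis-cli: a minimal sketch using the jedis Java client (which comes up again on the controller side below). The host, port and the sleep-based synchronization are simplifications for illustration only.

import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class RedisBasics {
    public static void main(String[] args) throws InterruptedException {
        // Queue semantics: LPUSH at the head, blocking BRPOP from the tail.
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.lpush("my_queue", "first", "second", "third");
            List<String> popped = jedis.brpop(0, "my_queue"); // returns [key, value]
            System.out.println(popped.get(1));                // "first"
        }

        // Pub-sub semantics: only currently connected subscribers see the message.
        Thread subscriber = new Thread(() -> {
            try (Jedis sub = new Jedis("localhost", 6379)) {
                sub.subscribe(new JedisPubSub() {
                    @Override
                    public void onMessage(String channel, String message) {
                        System.out.println(channel + " -> " + message);
                        unsubscribe(); // stop after the first message
                    }
                }, "my_exchange");
            }
        });
        subscriber.start();
        Thread.sleep(500); // crude wait for the subscription to be established
        try (Jedis pub = new Jedis("localhost", 6379)) {
            pub.publish("my_exchange", "hey");
        }
        subscriber.join();
    }
}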

Designing the protocol

With these building blocks in place, a simple layered protocol can be designed around the following workflow:

  • A control box broadcasts a request with a unique ID (UUID), a command and a node specification
  • All nodes matching the specification reply immediately with a START status, indicating that the request has been acknowledged
  • All nodes refusing to go ahead reply with a NOOP status
  • Once execution is finished, nodes reply with a COMPLETE status

Acknowledgments and replies will be implemented over queues, solely to demonstrate working with them; using pub-sub for replies would lead to cleaner code.

If we model this around JSON, we can thus work with the following payloads, starting with requests:

request = {
  reply_to: "51665ac9-bab5-4995-aa80-09bc79cfb2bd",
  match: {
    all: false, /* setting to true matches all nodes */
    node_facts: {
      hostname: "www*" /* allowing simple glob(3) type matches */
    }
  },
  command: {
    provider: "uptime",
    args: { 
     averages: {
       shortterm: true,
       midterm: true,
       longterm: true
     }
    }
  }
}

START responses would then use the following format:

response = {
  in_reply_to: "51665ac9-bab5-4995-aa80-09bc79cfb2bd",
  uuid: "5b4197bd-a537-4cc7-972f-d08ea5760feb",
  hostname: "www01.example.com",
  status: "start"
}

NOOP responses drop the sequence UUID, since it is not needed:

response = {
  in_reply_to: "51665ac9-bab5-4995-aa80-09bc79cfb2bd",
  hostname: "www01.example.com",
  status: "noop"
}

Finally, COMPLETE responses would include the result of command execution:

response = {
  in_reply_to: "51665ac9-bab5-4995-aa80-09bc79cfb2bd",
  uuid: "5b4197bd-a537-4cc7-972f-d08ea5760feb",
  hostname: "www01.example.com",
  status: "complete",
  output: {
    exit: 0,
    time: "23:17:20",
    up: "4 days, 1:45",
    users: 6,
    load_averages: [ 0.06, 0.10, 0.13 ]
  }
}

We essentially end up with an architecture where each node is a daemon while the command and control interface acts as a client.
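To make that daemon/client split concrete, here is a hedged controller-side sketch in Java using the jedis client mentioned further down. The request channel, the reply-queue naming and the hand-rolled JSON are purely illustrative; they are not the conventions used by the amiral implementation linked below.

import java.util.List;
import java.util.UUID;
import redis.clients.jedis.Jedis;

public class ControllerSketch {
    public static void main(String[] args) {
        String replyTo = UUID.randomUUID().toString();

        // Broadcast the request over pub-sub; every agent subscribed to the
        // request channel sees it and decides whether the node spec matches.
        String request = "{\"reply_to\":\"" + replyTo + "\","
                + "\"match\":{\"all\":true},"
                + "\"command\":{\"provider\":\"uptime\"}}";

        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.publish("requests", request);               // illustrative channel name

            // START/NOOP/COMPLETE responses come back on a reply queue keyed by
            // the request UUID; block up to 2 seconds per reply, mirroring the
            // "accepting acknowledgements for 2 seconds" behaviour shown below.
            while (true) {
                List<String> reply = jedis.brpop(2, "replies:" + replyTo);
                if (reply == null) {
                    break;                                    // timed out: no more replies
                }
                System.out.println(reply.get(1));             // raw JSON response payload
            }
        }
    }
}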

Securing the protocol

Since this is a proof of concept protocol and we want implementation to be as simple as possible, a somewhat acceptable compromise would be to share an SSH private key specific to command and control messages amongst nodes and sign requests and responses with it.

SSL keys would also be appropriate, but using ssh keys allows the use of the simple ssh-keygen(1) command.

Here is a stock ruby snippet which performs signing with an SSH key, given a passphrase-less key.

require 'openssl'

signature = File.open '/path/to/private-key' do |file|
  # sign the SHA1 digest of the payload with the shared DSA key
  digest = OpenSSL::Digest::SHA1.digest("some text")
  OpenSSL::PKey::DSA.new(file.read).syssign(digest)
end

To verify a signature here is the relevant snippet:

require 'openssl'

valid = File.open '/path/to/private-key' do |file|
  # verify the signature against the same SHA1 digest that was signed
  digest = OpenSSL::Digest::SHA1.digest("some text")
  OpenSSL::PKey::DSA.new(file.read).sysverify(digest, sig)
end

This implements the common scheme of signing a SHA1 digest with a DSA key (we could just as well sign with an RSA key by using OpenSSL::PKey::RSA)

A better way of doing this would be to sign every request with the host’s private key, and let the controller look up known host keys to validate the signature.

The clojure side of things

My drive for implementing a clojure controller is integration in the command and control tool I am using to interact with a number of things.

This means I only did the work to implement the controller side of things. Reading SSH keys meant pulling in the bouncycastle libs and the apache commons-codec lib for base64:

(import '[java.security                   Signature Security KeyPair]
        '[org.bouncycastle.jce.provider   BouncyCastleProvider]
        '[org.bouncycastle.openssl        PEMReader]
        '[org.apache.commons.codec.binary Base64])
(require '[clojure.java.io :as io])

;; register the BouncyCastle provider so PEMReader can parse PEM keys
(Security/addProvider (BouncyCastleProvider.))

(def algorithms {:dss "SHA1withDSA"
                 :rsa "SHA1withRSA"})

;; getting a public and private key from a path
(def keypair (let [pem (-> (PEMReader. (io/reader "/path/to/key")) .readObject)]
               {:public (.getPublic pem)
                :private (.getPrivate pem)}))

(def keytype :dss)

(defn sign
  [content]
  (-> (doto (Signature/getInstance (get algorithms keytype))
        (.initSign (:private keypair))
        (.update (.getBytes content)))   ;; sign the payload itself, not clojure.core/str
      (.sign)
      (Base64/encodeBase64String)))

(defn verify
  [content signature]
  (-> (doto (Signature/getInstance (get algorithms keytype))
        (.initVerify (:public keypair))
        (.update (.getBytes content)))
      (.verify (Base64/decodeBase64 signature))))

For Redis support there are several options; I used the jedis Java library, which supports everything we’re interested in.

Wrapping up

I have early implementations of the protocol - read: with lots of room for improvement, and a few corners cut - both the agent and controller code in ruby, and the controller code in clojure, wrapped in my clojure IRC bot, which might warrant another article.

The code can be found here: https://github.com/pyr/amiral (name alternatives welcome!)

If you just want to try it out, you can fetch the amiral gem and start an agent like so:

$ amiral.rb -k /path/to/privkey agent

You can then test querying the agent through a controller:

$ amiral.rb -k /path/to/privkey controller uptime
accepting acknowledgements for 2 seconds
got 1/1 positive acknowledgements
got 1/1 responses
phoenix.spootnik.org: 09:06:15 up 5 days, 10:48, 10 users,  load average: 0.08, 0.06, 0.05

If you’re feeling adventurous you can now start the clojure controller. Its configuration is relatively straightforward, but a bit more involved since it’s part of an IRC + HTTP bot framework:

{:transports {amiral.transport.HTTPTransport {:port 8080}
              amiral.transport.irc/create    {:host "irc.freenode.net"
                                              :channel "#mychan"}}
 :executors {amiral.executor.fleet/create    {:keytype :dss
                                              :keypath "/path/to/key"}}}

In that config we defined two ways of listening for incoming controller requests, IRC and HTTP, and we added an “executor”, i.e. a way of doing something.

You can now query your hosts through HTTP:

$ curl -XPOST -H 'Content-Type: application/json' -d '{"args":["uptime"]}' http://localhost:8080/amiral/fleet
{"count":1,
 "message":"phoenix.spootnik.org: 09:40:57 up 5 days, 11:23, 10 users,  load average: 0.15, 0.19, 0.16",
 "resps":[{"in_reply_to":"94ab9776-e201-463b-8f16-d33fbb75120f",
           "uuid":"23f508da-7c30-432b-b492-f9d77a809a2a",
           "status":"complete",
           "output":{"exit":0,
                     "time":"09:40:57",
                     "since":"5 days, 11:23",
                     "users":"10",
                     "averages":["0.15","0.19","0.16"],
                     "short":"09:40:57 up 5 days, 11:23, 10 users,  load average: 0.15, 0.19, 0.16"},
           "hostname":"phoenix.spootnik.org"}]}

Or on IRC:

09:42 < pyr> amiral: fleet uptime
09:42 < amiral> pyr: waiting 2 seconds for acks
09:43 < amiral> pyr: got 1/1 positive acknowledgement
09:43 < amiral> pyr: got 1 responses
09:43 < amiral> pyr: phoenix.spootnik.org: 09:42:57 up 5 days, 11:25, 10 users,  load average: 0.16, 0.20, 0.17

Next Steps

This was a fun experiment, but there are a few outstanding problems which will need to be addressed quickly:

  • Tests, tests, tests. This was a PoC project to start with; I should have known better and written tests along the way.
  • The queue-based reply handling makes controller logic complex and timeout handling approximate; it should be switched to pub-sub
  • The signing should be done based on known hosts’ public keys instead of the shared key used now.
  • The agent should expose more common actions: service interaction, puppet runs, etc.

          An Algorithm Helps Protect Mars Curiosity's Wheels   

Read article: An Algorithm Helps Protect Mars Curiosity's Wheels

There are no mechanics on Mars, so the next best thing for NASA's Curiosity rover is careful driving.




          Robust Timing Calibration for PET Using L1-Norm Minimization   
Positron emission tomography (PET) relies on accurate timing information to pair two 511-keV photons into a coincidence event. Calibration of time delays between detectors becomes increasingly important as the timing resolution of detector technology improves, as a calibration error can quickly become a dominant source of error. Previous work has shown that the maximum likelihood estimate of these delays can be calculated by least squares estimation, but such an approach is not tractable for complex systems and degrades in the presence of randoms. We demonstrate the original problem to be solvable iteratively using the LSMR algorithm. Using the LSMR, we solve for 60 030 delay parameters, including energy-dependent delays, in 4.5 s, using 1 000 000 coincidence events for a two-panel system dedicated to clinical locoregional imaging. We then extend the original least squares problem to be robust to random coincidences and low statistics by implementing $\ell_1$-norm minimization using the alternating direction method of multipliers (ADMM) algorithm. The ADMM algorithm converges after six iterations, or 20.6 s, and improves the timing resolution from 64.7 ± 0.1 ns full width at half maximum (FWHM) uncalibrated to 15.63 ± 0.02 ns FWHM. We also demonstrate this algorithm’s applicability to commercial systems using a GE Discovery 690 PET/CT. We scan a rotating transmission source, and after subtracting the 511-keV photon time-of-flight due to the source position, we calculate 13 824 per-crystal delays using 5 000 000 coincidence events in 3.78 s with three iterations, while showing a timing resolution improvement that is significantly better than previous calibration methods in the literature.
          Structured and Sparse Canonical Correlation Analysis as a Brain-Wide Multi-Modal Data Fusion Approach   
Multi-modal data fusion has recently emerged as a comprehensive neuroimaging analysis approach, which usually uses canonical correlation analysis (CCA). However, the current CCA-based fusion approaches face problems like high-dimensionality, multi-collinearity, unimodal feature selection, asymmetry, and loss of spatial information in reshaping the imaging data into vectors. This paper proposes a structured and sparse CCA (ssCCA) technique as a novel CCA method to overcome the above problems. To investigate the performance of the proposed algorithm, we have compared three data fusion techniques: standard CCA, regularized CCA, and ssCCA, and evaluated their ability to detect multi-modal data associations. We have used simulations to compare the performance of these approaches and probe the effects of non-negativity constraint, the dimensionality of features, sample size, and noise power. The results demonstrate that ssCCA outperforms the existing standard and regularized CCA-based fusion approaches. We have also applied the methods to real functional magnetic resonance imaging (fMRI) and structural MRI data of Alzheimer’s disease (AD) patients (n = 34) and healthy control (HC) subjects (n = 42) from the ADNI database. The results illustrate that the proposed unsupervised technique differentiates the transition pattern between the subject-course of AD patients and HC subjects with a p-value of less than $1\times 10^{-6}$. Furthermore, we have depicted the brain mapping of functional areas that are most correlated with the anatomical changes in AD patients relative to HC subjects.
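For orientation, plain CCA (the baseline the paper compares against, not the proposed ssCCA) can be run in a few lines with scikit-learn; the synthetic two-modality data below is an assumption for the example.

# Baseline illustration only: standard CCA between two modalities.
# ssCCA adds structure and sparsity constraints that plain CCA lacks.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_subjects = 76                                   # e.g. 34 patients + 42 controls
latent = rng.normal(size=(n_subjects, 1))         # shared signal across modalities

X = latent @ rng.normal(size=(1, 50)) + 0.5 * rng.normal(size=(n_subjects, 50))  # modality 1
Y = latent @ rng.normal(size=(1, 30)) + 0.5 * rng.normal(size=(n_subjects, 30))  # modality 2

cca = CCA(n_components=1)
Xc, Yc = cca.fit_transform(X, Y)
print("first canonical correlation:", np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1])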
          Segmentation of Pathological Structures by Landmark-Assisted Deformable Models   
Computerized segmentation of pathological structures in medical images is challenging, as, in addition to unclear image boundaries, image artifacts, and traces of surgical activities, the shape of pathological structures may be very different from the shape of normal structures. Even if a sufficient number of pathological training samples are collected, statistical shape modeling cannot always capture shape features of pathological samples as they may be suppressed by shape features of a considerably larger number of healthy samples. At the same time, landmarking can be efficient in analyzing pathological structures but often lacks robustness. In this paper, we combine the advantages of landmark detection and deformable models into a novel supervised multi-energy segmentation framework that can efficiently segment structures with pathological shape. The framework adopts the theory of Laplacian shape editing, which was introduced in the field of computer graphics, so that the limitations of statistical shape modeling are avoided. The performance of the proposed framework was validated by segmenting fractured lumbar vertebrae from 3-D computed tomography images, atrophic corpora callosa from 2-D magnetic resonance (MR) cross-sections and cancerous prostates from 3-D MR images, resulting respectively in a Dice coefficient of 84.7 ± 5.0%, 85.3 ± 4.8% and 78.3 ± 5.1%, and boundary distance of 1.14 ± 0.49 mm, 1.42 ± 0.45 mm and 2.27 ± 0.52 mm. The obtained results were shown to be superior in comparison to existing deformable model-based segmentation algorithms.
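For reference, the Dice coefficient quoted above is the standard overlap measure between a binary segmentation and its ground truth; a minimal NumPy sketch (array names are illustrative, and the boundary-distance measure is omitted) follows.

# Dice coefficient between a binary segmentation and its ground truth.
import numpy as np

def dice_coefficient(segmentation, ground_truth):
    seg = segmentation.astype(bool)
    gt = ground_truth.astype(bool)
    total = seg.sum() + gt.sum()
    return 2.0 * np.logical_and(seg, gt).sum() / total if total else 1.0

# toy 3-D volumes: two 5x5x5 cubes offset by one voxel in each direction
a = np.zeros((10, 10, 10), dtype=bool); a[2:7, 2:7, 2:7] = True
b = np.zeros((10, 10, 10), dtype=bool); b[3:8, 3:8, 3:8] = True
print(round(dice_coefficient(a, b), 3))   # 0.512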
          Spatial Statistics for Segmenting Histological Structures in H&E Stained Tissue Images   
Segmenting a broad class of histological structures in transmitted light and/or fluorescence-based images is a prerequisite for determining the pathological basis of cancer, elucidating spatial interactions between histological structures in tumor microenvironments (e.g., tumor infiltrating lymphocytes), facilitating precision medicine studies with deep molecular profiling, and providing an exploratory tool for pathologists. This paper focuses on segmenting histological structures in hematoxylin- and eosin-stained images of breast tissues, e.g., invasive carcinoma, carcinoma in situ, atypical and normal ducts, adipose tissue, and lymphocytes. We propose two graph-theoretic segmentation methods based on local spatial color and nuclei neighborhood statistics. For benchmarking, we curated a data set of 232 high-power field breast tissue images together with expertly annotated ground truth. To accurately model the preference for histological structures (ducts, vessels, tumor nets, adipose, etc.) over the remaining connective tissue and non-tissue areas in ground truth annotations, we propose a new region-based score for evaluating segmentation algorithms. We demonstrate the improvement of our proposed methods over the state-of-the-art algorithms in both region- and boundary-based performance measures.
          Deep Learning Segmentation of Optical Microscopy Images Improves 3-D Neuron Reconstruction   
Digital reconstruction, or tracing, of 3-D neuron structure from microscopy images is a critical step toward reverse engineering the wiring and anatomy of a brain. Despite a number of prior attempts, this task remains very challenging, especially when images are contaminated by noise or have discontinuous segments of neurite patterns. An approach for addressing such problems is to identify the locations of neuronal voxels using image segmentation methods, prior to applying tracing or reconstruction techniques. This preprocessing step is expected to remove noise from the data, thereby leading to improved reconstruction results. In this paper, we proposed to use 3-D convolutional neural networks (CNNs) for segmenting the neuronal microscopy images. Specifically, we designed a novel CNN architecture that takes volumetric images as the inputs and their voxel-wise segmentation maps as the outputs. The developed architecture allows us to train and predict using large microscopy images in an end-to-end manner. We evaluated the performance of our model on a variety of challenging 3-D microscopy images from different organisms. Results showed that the proposed methods improved the tracing performance significantly when combined with different reconstruction algorithms.
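For a sense of scale (this is a generic sketch, not the architecture proposed in the paper), a voxel-wise 3-D segmentation CNN in PyTorch can be as small as the following, mapping a volume to a same-sized probability map of neuron versus background.

# Minimal, illustrative 3-D CNN for voxel-wise segmentation of a volume.
import torch
import torch.nn as nn

class TinyVoxelNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=1),          # one logit per voxel
        )

    def forward(self, x):
        return torch.sigmoid(self.features(x))

model = TinyVoxelNet()
volume = torch.randn(1, 1, 32, 64, 64)     # (batch, channel, depth, height, width)
print(model(volume).shape)                 # torch.Size([1, 1, 32, 64, 64])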
          A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology   
Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images, in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. We also propose a segmentation technique based on deep learning that lays a special emphasis on identifying the nuclear boundaries, including those between the touching or overlapping nuclei, and works well on a diverse set of test images.
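The conventional pipeline the abstract says breaks down on hard cases (Otsu thresholding followed by watershed) can be sketched in a few lines with scikit-image; this is the baseline, not the proposed deep learning technique, and it assumes the nuclei are the bright pixels of the input channel.

# Conventional baseline: Otsu threshold + distance-transform watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def baseline_nuclei_labels(gray_image):
    """Label candidate nuclei in a grayscale image where nuclei are bright."""
    mask = gray_image > threshold_otsu(gray_image)
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=5, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)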
          NVIDIA Video: Codota’s AI-Based Code   
Startup Codota is a member of our NVIDIA Inception Program at http://nvda.ws/2tlqpiD. Watch how their AI algorithms learn from millions of programs to provide the best, most relevant, and helpful code for any given task: http://nvda.ws/2sO2VBJ This video is via NVIDIA.
          Algorithms vs Serendipitous Discovery   
Stores won’t go away. People love to find and discover on their own. Obviously, curation and convenience are incredibly valuable, but our brains release dopamine when we shop and find and feel inspired. Stores are becoming THE new kind of media and are the most powerful, adaptable and influential form of media we have to … Continue reading Algorithms vs Serendipitous Discovery
          3D Coat 4.7.31 (x64) + Manual | 775 Mb |   
3D Coat 4.7.31 (x64) + Manual
File Size: 775 Mb

3D-Coat is an advanced program designed to allow you to easily create detailed 3D models, to which you can add textures, colors and various special effects. 3D-Coat is the one application that has all the tools you need to take your 3D idea from a block of digital clay all the way to a production ready, fully textured organic or hard surface model. Today 3D-Coat is available to learn at 204+ Universities and schools worldwide.

Complex interface
Being that the software is a state-of-the-art piece of software, its interface had to incorporate a multitude of features and options. The workspace is neatly disposed so you can have a large overview of the on-going project. The easy control of the 3D environment with only the mouse allows you to quickly examine and modify your 3D model. Also, the menus along the sides of the actual workspace allow you to quickly access the tools you need in order to edit your artwork project.

Working on layers
Most of the digital image editing and creation programs use the layer system to allow you to disassemble your project and work on each individual component. 3D-Coat takes things further and incorporates the layer system in the 3D modeling process. These layers can contain depth, color and specular, and they can be edited by adding extrusion, transparency, contrast, multiplication of depth or additional specular. Any layer can be easily disabled or enabled at any time, allowing you to quickly try out different versions of the project.

A multitude of features
The program puts at your disposal a large collection of effects and textures for you to use when creating a 3D model. The Retopology and UV, along with Voxel Sculpting, allow you to create extremely detailed and realistic models, to which you can easily add color and textures.

An advanced program for 3D sculpting
The vast array of options and features 3D-Coat has to offer, along with the intuitive interface and the easy mouse control over the workspace, make the program a complex and professional tool for 3D sculpting, with impressive realistic results.

FEATURES

Easy Texturing & PBR
- Microvertex, Per-pixel or Ptex painting approaches
- Realtime Physically Based Rendering viewport with HDRL
- Smart Materials with easy set-up options
- Multiple paint Layers. Popular blending modes. Layer groups
- Tight interaction with Photoshop
- Texture size up to 16k
- Fast Ambient Occlusion and Curvature map calculation
- Rich toolset for all kind of painting tasks, and more...

Digital Sculpting
Voxel (volumetric) sculpting key features:
- No topological constraints. Sculpt as you would with Clay
- Complex boolean operations. Fast kit bashing workflow
Traditional sculpting offers you such powerful technology as:
- Adaptive dynamic tesselation (Live Clay)
- Dozens of fast and fluid sculpting brushes
- Boolean operations with crisp edges
3D Printing Export Wizard. And more...

Ultimate Retopo Tools
- Auto-retopology (AUTOPO) with user-defined edge loops
- Fast and easy-to-use manual Retopo tools
- Possibility to import reference mesh for retopologization
- Use your current low-poly mesh as your retopo mesh
- Retopo groups with color palette for better management
- Advanced baking settings dialog
- And more...

Fast & Friendly UV Mapping
- Professional toolset for creating and editing UV-sets
- Native Global Uniform (GU) unwrapping algorithm
- Multiple UV-sets support and management
- Support ABF, LSCM, and Planar unwrapping algorithms
- Individual islands tweaking
- Lastly, it is fast, easy, and fun to use.

OS: Windows Vista / 7 / 8 / 10 (64-bit)

HOMEPAGE
http://3dcoat.com/home/

DOWNLOAD:
http://nitroflare.com/view/FD1CF7F069614DE/k5561.3D.Coat.4.7.31.x64..Manual.rar
http://nitroflare.com/view/B11D9427F24F8DF/k5561.3D.Coat.4.7.31.x64..Manual.rar
          OpenCV 3 - Advanced Image Detection and Reconstruction | 342 MB   
OpenCV 3 - Advanced Image Detection and Reconstruction
----- Packt Video Training -----
MP4 | Video: AVC 1280x720 | Audio: AAC 44KHz 2ch | Duration: 1.5 Hours | 342 MB
Genre: eLearning | Language: English

Making your applications see has never been easier with OpenCV. With it, you can teach your robot how to follow your cat, write a program to correctly identify the members of One Direction, or even help you find the right colors for your redecoration. OpenCV 3 Computer Vision Application Programming Solutions provides a complete introduction to the OpenCV library and explains how to build your first computer vision program. You will be presented with a variety of computer vision algorithms, and exposed to important concepts in image and video analysis, which will enable you to build your own computer vision applications.

This video helps you to get started with the OpenCV library, and shows you how to install and deploy it to write effective computer vision applications following good programming practices. You will learn how to read and write images and manipulate their pixels. Different techniques for image enhancement and shape analysis will be presented. You will learn how to detect specific image features such as lines, circles, or corners. You will be introduced to the concepts of mathematical morphology and image filtering.

DOWNLOAD:
http://nitroflare.com/view/8EB605038F91B71/pds87.OpenCV.3..Advanced.Image.Detection.and.Reconstruction.part1.rar
http://nitroflare.com/view/D350A351856E52A/pds87.OpenCV.3..Advanced.Image.Detection.and.Reconstruction.part2.rar
http://nitroflare.com/view/344BFC2E6913CBB/pds87.OpenCV.3..Advanced.Image.Detection.and.Reconstruction.part3.rar
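To give a flavor of the feature-detection topics mentioned above (lines, circles, corners), a short sketch using OpenCV's Python bindings follows; "sample.jpg" and all parameter values are placeholders, not taken from the course.

# Detect edges, line segments, circles, and corners in a single image.
import math
import cv2

image = cv2.imread("sample.jpg")                      # placeholder input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, 1, math.pi / 180, threshold=80,
                        minLineLength=30, maxLineGap=5)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20)
corners = cv2.goodFeaturesToTrack(gray, maxCorners=100,
                                  qualityLevel=0.01, minDistance=10)

print("lines:", 0 if lines is None else len(lines))
print("circles:", 0 if circles is None else circles.shape[1])
print("corners:", 0 if corners is None else len(corners))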
          Algorithms Unlocked   

          Reading tip: Data specialist Hilary Mason gives "Focus" an A-to-Z interview on digitization.   
Reading tip: Data specialist Hilary Mason talks with Jörg Rohleder about the effects of digitization on our world. The conversation is structured from A as in algorithm to Z as in zero-sum game. Hilary Mason stresses that more communication is the best gift of the digital age – and that sex will not be fundamentally changed by digitization. "Focus" 27/2017, pp. 54-59 (paid)



          Beautiful Code   

Beautiful Code Edited By Andy Oram & Greg Wilson

2.5/5

Beautiful Code is a collection of essays from programmers working in a number of different areas, from language design, to operating systems, to enterprise application development. If you are a programmer, chances are good that at least a couple of essays in here will appeal to you.

First, the good. Some essays are great. Yukihiro Matsumoto, the creator of Ruby, has arguably the best (and shortest) essay in the collection, concentrating on what makes code beautiful and how those factors influenced his design of Ruby. Elliotte Rusty Harold’s contribution on lessons learned creating XML verifiers is also a standout. He goes through several implementations, learning from each to improve the performance of the next, all while maintaining correctness across all cases. Charles Petzold’s description of on-the-fly code generation for image processing is dense, but interesting. As a sometimes Python programmer, Andrew Kuchling’s discussion of the trade-offs in the design of Python’s dictionary implementation was much appreciated and gives insights into performance optimizations you can make in your application if needed.

Unfortunately, there is also a fair amount of bad. One issue is that the book is simply too long. The editors mention they got a more enthusiastic response from contributors than they expected. They may have felt compelled to include all or most of the responses. But beyond the length, some of the essays are just bad. For example, Douglas Crockford’s “Top Down Operator Precedence” essay dives right in without actually explaining the algorithm. It is explained piecemeal throughout the code, but you never get a good feel for exactly what is going on. Other contributors take the view that whatever skills they need to do their own work are essential to being a true software engineer. For example, Bryan Cantrill writes that postmortem debugging with core dumps “is an essential part of our craft - a skill that every serious software engineer should develop.” Quite honestly, only a very narrow niche of software engineers are serious then. Other authors take similarly narrow views at times, whether it is the author of math libraries feeling that everybody needs to care about high performance computing or C programmers feeling that every real programmer should implement their own dynamic dispatch code in C at some point in their careers.

Beautiful Code is worth the read, but don’t hesitate to skim over the essays that don’t interest you. I probably would have enjoyed it more if I hadn’t forced myself through most of them. (Also see Jeff Atwood’s review for a good explanation of why the title of the book is misleading.)


          Weekend Reading: Want Some Volatility With That?   

Authored by Lance Roberts via RealInvestmentAdvice.com,

Over the last couple of weeks, volatility has certainly picked up. As shown in the chart below, stocks have vacillated in a 1.5% trading range ever since the beginning of June. (Chart through Thursday)

Despite the pickup in volatility, support for the market has remained firm. Importantly, this confirms the conversation I had with Kevin Massengill of Meraglim just recently discussing the impact of Algorithmic Trading and how the algorithms are currently all “buying the dip.” As he notes, this is all “fine and dandy” until the robots all decide to start “selling rallies” instead. (Start at 00:02:40 through 00:04:00)

But even with the recent pickup in volatility, volatility by its own measure remains extremely compressed and near its historical lows. While extremely low volatility is not itself an immediate issue, like margin debt, it is the “fuel” that when ignited “burns hot” during the reversion process.

Currently, as we head into the extended July 4th weekend, the bull market trend remains clearly intact. With the “accelerated advance” line holding firm on Thursday’s sell-off, but contained below the recent highs, there is little to suggest the advance that began in early 2016 has come to its final conclusion.

However, such a statement should NOT be construed as meaning it WON’T end, as it most assuredly will. The only questions are when it will end and how deep the subsequent reversion will be.

Volatility is creeping back. The trick will be keeping it contained.

In the meantime, this is what I am reading over the long holiday weekend.

Happy Independence Day.


Politics/Fed/Economy


Video


Markets


Research / Interesting Reads

 


“Life is [stocks are] a fragile thing. One minute you’re chewin’ on a burger, the next minute you’re dead meat.” – Adapted from Lloyd, “Dumb and Dumber”


          On This Day in Math - July 1   

Mathematics, rightly viewed, possesses not only truth, but supreme beauty
a beauty cold and austere, like that of sculpture, 
without appeal to any part of our weaker nature,
without the gorgeous trappings of painting or music, 
yet sublimely pure, and capable of a stern perfection such as only the greatest art can show. 
 The true spirit of delight, the exaltation, 
the sense of being more than Man, 
which is the touchstone of the highest excellence, 
is to be found in mathematics as surely as in poetry.
--BERTRAND RUSSELL,


The 182nd day of the year; there are 182 connected bipartite graphs with 8 vertices. *What's So Special About This Number

The 182nd prime (1091) is the smaller of a pair of twin primes (the 40th pair, actually) *Math Year-Round @MathYearRound (Students might convince themselves that it was not necessary to say it was the smaller of the pair.)

Language time:
182 is called a pronic, promic, or heteromecic number, and even an oblong number. Pronic numbers are numbers that are the product of two consecutive integers: 2, 6, 12, 20, ... (doubles of the triangular numbers); 182 = 13 × 14. Pronic seems to be a misspelling of promic, from the Greek promekes, for rectangular, oblate or oblong. Neither pronic nor promic seems to appear in most modern dictionaries. Richard Guy pointed out to the Hyacinthos newsgroup that pronic had been used by Euler in series one, volume fifteen of his Opera, so the mathematical use of the "n" form has a long history.
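A quick sketch (purely illustrative) showing 182 in the list of pronic numbers n(n+1):

# Pronic (oblong) numbers are n*(n+1); 182 = 13*14 appears in the list.
pronic = [n * (n + 1) for n in range(1, 15)]
print(pronic)          # [2, 6, 12, 20, ..., 182, 210]
print(182 in pronic)   # True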

Oblong is from the Latin ob (excessive) + longus (long). The word oblong is also commonly used as an alternate name for a rectangle. In his translation of Euclid's "Elements", Sir Thomas Heath translates the Greek word eteromhkes[hetero mekes - literally "different lengths"] in Book one, Definition 22 as oblong. . "Of Quadrilateral figures, a square is that which is both equilateral and right-angled; an oblong that which is right angled but not equilateral...". (note that with this definition, a square is not a subset of rectangles.)


EVENTS

1349 Sometimes, a little astronomical knowledge can be a dangerous thing, even to those who possess it. A tale from medieval England is passed down from the chronicles of the scholar Thomas Bradwardine of a witch who attempted to force her will on the people through knowledge of an impending eclipse. Bradwardine, who had studied astronomical predictions of Arabian astronomers, saw through the ruse, and matched the prediction of the July 01, 1349 A.D. lunar eclipse with a more precise one of his own. No word survives as to the fate of the accused, but one can only suspect banishment or worse.*listosaur.com

1694 Opening of the University of Halle in Germany. Georg Cantor later taught there. *VFR

1770 Lexell's Comet passed closer to the Earth than any other comet in recorded history, approaching to a distance of 0.0146 a.u. It was discovered by astronomer Charles Messier. *OnThisDay & Facts @NotableHistory

1798 Napoleon’s fleet reached Alexandria, bearing Monge and Fourier.*VFR

1819 William George Horner’s (1786–1837) method of solving equations is presented to the Royal Society. *VFR In numerical analysis, the Horner scheme (also known as the Horner algorithm), named after William George Horner, is an algorithm for the efficient evaluation of polynomials in monomial form. Horner's method describes a manual process by which one may approximate the roots of a polynomial equation. The Horner scheme can also be viewed as a fast algorithm for dividing a polynomial by a linear polynomial with Ruffini's rule. Students often learn this process as synthetic division. *Wik
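A minimal sketch of Horner's scheme for evaluating a polynomial (illustrative only):

# Horner's scheme: evaluate a_n*x^n + ... + a_1*x + a_0 with only n multiplications.
def horner(coefficients, x):
    """Coefficients are given from the highest power down to the constant term."""
    result = 0
    for c in coefficients:
        result = result * x + c
    return result

# 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
print(horner([2, -6, 2, -1], 3))   # 5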

1847 The United States issued its first two postage stamps. They pictured Benjamin Franklin and George Washington respectively [Scott #1-2]. *VFR

1852 Dirichlet delivers a memorial lecture at the Berlin Academy in honor of his close friend Jacobi, calling him the greatest member of the Academy since Lagrange. *VFR

1856 Weierstrass appointed Professor of Mathematics at the Royal Polytechnic School in Berlin. *VFR

In 1858, the Wallace-Darwin theory of evolution was first published at the Linnaean Society in London*. The previous month Charles Darwin received a letter from Alfred Wallace, who was collecting specimens in the East Indies. Wallace had independently developed a theory of natural selection - which was almost identical to Darwin's. The letter asked Darwin to evaluate the theory, and if worthy of publication, to forward the manuscript to Charles Lyell. Darwin did so, almost giving up his clear priority for he had not yet published his masterwork The Origin of Species. Neither Darwin nor Wallace were present for the oral presentation at the Linnaean Society, where geologist Charles Lyell and botanist Joseph Hooker presented both Wallace's paper and excerpts from Darwin's unpublished 1844 essay.*TIS
In his annual report the following May, society president Thomas Bell wrote, “The year which has passed has not, indeed, been marked by any of those striking discoveries which at once revolutionize, so to speak, the department of science on which they bear.” *Futility Closet

1873 From a letter dated July 1, 1873, in the Coast Survey files in the National Archives in Washington. Peirce writes, "Newcomb, in a paper .... says he finds that pendulums hung by springs twist and untwist as they oscillate and says this will affect the time of oscillation." The Charles S. Peirce-Simon Newcomb Correspondence by Carolyn Eisele.

1894 The New York Mathematical Society changed its name to the American Mathematical Society to reflect its national charter. [AMS Semicentennial Publications, vol. 1, p. 74]. *VFR

1908 International agreement to use SOS for distress signal signed. An International Radiotelegraphic Convention, ... met in Berlin in 1906. This body signed an international agreement on November 3, 1906, with an effective date of July 1, 1908. An extensive collection of Service Regulations was included to supplement the Convention, and in particular Article XVI adopted Germany's Notzeichen distress signal as the international standard, stating: "Ships in distress shall use the following signal: · · · — — — · · · repeated at brief intervals". *Citizens Compendium

1918 Florian Cajori (1859–1930) appointed professor of the history of mathematics at the University of California, Berkeley, one of the few such chairs in the world. During the next twelve years he published 159 papers on the history of mathematics. *VFR

1948 The Bell System Technical Journal publishes the first part of Claude Shannon's "A Mathematical Theory of Communication", regarded as a foundation of information theory, introducing the concept of Shannon entropy and adopting the term Bit. *Wik

1964 The New York Times, in a full page ad, announced that Paul Newman and Joanne Woodward would play a game on an elliptical pool table. It had a pocket at one focus so that if the ball passed over the other focus it would bank off the rail into the pocket. [UMAP Journal, 4(1983), p. 176; Recreational Mathematics Magazine, no. 14, January-February 1964] *VFR

1980 A method of trisecting any given acute angle by origami is demonstrated. Hisashi Abe invented this idea and published it in the July 1980 edition of the Japanese journal "Suugaku Seminar" (Mathematics Seminar). For this method, and more ways to trisect an angle, see this post.
*Takaya Iwamoto

2000 The last occurrence of three eclipses in one month, two of them solar: July 2000 had a partial solar eclipse on the 1st, a total lunar eclipse on the 16th, and a partial solar eclipse on the 31st. The next month with three eclipses will be December 2206, with a partial solar eclipse on the 1st and 30th and a total lunar eclipse on the 16th. Ref. Fred Espenak 06/00 SEML. *NSEC

2010 Grigori Yakovlevich Perelman turned down the Clay Millennium Prize of one million dollars, saying that he considered his contribution to proving the Poincaré conjecture to be no greater than that of Richard Hamilton, who introduced the theory of Ricci flow with the aim of attacking the geometrization conjecture. On March 18 it had been announced that he had met the criteria to receive the first Clay Millennium Prize for resolution of the Poincaré conjecture. *Wik

2015 Michael Elmhirst Cates becomes the 19th Lucasian Professor of Mathematics at the University of Cambridge. Professor Cates is a physicist and Professor of Natural Philosophy and Royal Society Research Professor at the University of Edinburgh. Previous recognitions for Prof. Cates include the Maxwell Medal and Prize (1991), the Paul Dirac Medal and Prize (2009), and the Weissenberg Award (2013). He will assume the chair from another physicist, Michael Green. He follows a line that began with Isaac Barrow and Isaac Newton and includes Charles Babbage, Paul Dirac, and Stephen Hawking.


BIRTHS

1646 Gottfried Wilhelm Leibniz (July 1, 1646 – November 14, 1716) born in Leipzig, Germany. Leibniz occupies a prominent place in the history of mathematics and the history of philosophy. He developed the infinitesimal calculus independently of Isaac Newton, and Leibniz's mathematical notation has been widely used ever since it was published. He became one of the most prolific inventors in the field of mechanical calculators. While working on adding automatic multiplication and division to Pascal's calculator, he was the first to describe a pinwheel calculator in 1685[4] and invented the Leibniz wheel, used in the arithmometer, the first mass-produced mechanical calculator. He also refined the binary number system, which is at the foundation of virtually all digital computers. In philosophy, Leibniz is mostly noted for his optimism, e.g. his conclusion that our Universe is, in a restricted sense, the best possible one that God could have created. Leibniz, along with René Descartes and Baruch Spinoza, was one of the three great 17th century advocates of rationalism. The work of Leibniz anticipated modern logic and analytic philosophy, but his philosophy also looks back to the scholastic tradition, in which conclusions are produced by applying reason to first principles or a priori definitions rather than to empirical evidence. Leibniz made major contributions to physics and technology, and anticipated notions that surfaced much later in biology, medicine, geology, probability theory, psychology, linguistics, and information science. He wrote works on politics, law, ethics, theology, history, philosophy, and philology. Leibniz's contributions to this vast array of subjects were scattered in various learned journals, in tens of thousands of letters, and in unpublished manuscripts. As of 2010, there is no complete gathering of the writings of Leibniz.*Wik

1779 John Farrar (July 1, 1779 – May 8, 1853) born at Lincoln, Massachusetts. As Hollis professor of mathematics and natural philosophy at Harvard, he was responsible for a sweeping modernization of the science and mathematics curriculum, including the change from Newton’s to Leibniz’s notation for the calculus. *VFR

1788 Jean Victor Poncelet (July 1, 1788 – December 22, 1867) born in Metz, France. He taught engineering and mechanics, but had a hobby of much greater interest—projective geometry. *VFR French mathematician and engineer whose study of the pole and polar lines associated with conics led to the principle of duality. While serving as an engineer in Napoleon's 1812 Russian campaign, he was left for dead at Krasnoy, but then captured. During his imprisonment he studied projective geometry and wrote a treatise on analytic geometry. Released in 1814, he returned to France, and in 1822 published Traité des propriétés projectives des figures in which he presented his fundamental ideas of projective geometry such as the cross-ratio, perspective, involution and the circular points at infinity. As a professor of mechanics (1825-35), he applied mechanics to improve waterwheels and was able to double their efficiency. *TIS

1848 Emil Weyr (1 July 1848 in Prague, Bohemia (now Czech Republic) - 25 Jan 1894 in Vienna, Austria). His father, Frantisek Weyr, was a professor of mathematics at a realschule (secondary school) in Prague from 1855. Emil was four years older than his brother Eduard Weyr, who also became a famous mathematician. Emil attended the realschule in Prague where his father taught, then studied at the Prague Polytechnic from 1865 to 1868, where he was taught geometry by Vilém Fiedler.
He studied in Italy with Cremona and Casorati during the academic year 1870-71 returning to Prague where he continued to teach. In 1872 he was elected to be Head of the Union of Czech Mathematicians and Physicists. In 1875 he was appointed as professor of mathematics at the University of Vienna. He, together with his brother Eduard Weyr, were the main members of the Austrian geometric school. They were interested in descriptive geometry, then in projective geometry and their interests turned towards algebraic and synthetic methods in geometry. Among many works Emil Weyr published were Die Elemente der projectivischen Geometrie and Über die Geometrie der alten Aegypter.
Emil Weyr led the geometry school in Vienna throughout the 1880's up until his death. Together with Gustav von Escherich, Emil Weyr founded the important mathematical journal Monatshefte fuer Mathematik und Physik in 1890. The first volumes of the journal contain papers written by his brother Eduard. In 1891 Emil Weyr became one of the first 19 founder members of the Royal Czech Academy of Sciences. *SAU

1906 Jean Dieudonné (1 July 1906 – 29 November 1992) born. *VFR French mathematician and educator known for his writings on abstract algebra, functional analysis, topology, and his theory of Lie groups. Dieudonné was one of the two main contributors to the Bourbaki series of texts. He began his mathematical career working on the analysis of polynomials. He worked in a wide variety of mathematical areas including general topology, topological vector spaces, algebraic geometry, invariant theory and the classical groups. *TIS



DEATHS


1957 Donald McIntosh (Banffshire, 13 January 1868 – Invernesshire, 1 July 1957) graduated from the University of Aberdeen and taught at George Watson's Ladies College in Edinburgh. He was appointed a Director of Education. He became Secretary of the EMS in 1899 and President in 1905. *SAU

1963 Bevan Braithwaite Baker (1890 in Edinburgh, Scotland - 1 July 1963 in Edinburgh, Scotland) graduated from University College London. After service in World War I he became a lecturer at Edinburgh University and was Secretary of the EMS from 1921 to 1923. He left to become Professor at Royal Holloway College London. *SAU

1971 Sir William Lawrence Bragg (31 Mar 1890; 1 Jul 1971 at age 81) was an Australian-English physicist and X-ray crystallographer who, at the early age of 25, shared the Nobel Prize for Physics in 1915 (with his father, Sir William Bragg). Lawrence Bragg formulated the Bragg law of X-ray diffraction, which is basic for the determination of crystal structure: nλ = 2d sin θ, which relates the wavelength of x-rays, λ, the angle of incidence on a crystal, θ, and the spacing of crystal planes, d, for x-ray diffraction, where n is an integer (1, 2, 3, etc.). Together, the Braggs worked out the crystal structures of a number of substances. Early in this work, they showed that sodium chloride does not have individual molecules in the solid, but is an array of sodium and chloride ions. *TIS
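As a quick worked illustration of the Bragg relation (the numbers are arbitrary, not from the article):

# Bragg's law: n*lambda = 2*d*sin(theta). Solve for the plane spacing d,
# using illustrative values (n = 1, lambda = 0.154 nm, theta = 22.2 degrees).
import math

n = 1
wavelength_nm = 0.154          # roughly Cu K-alpha X-rays
theta_deg = 22.2
d_nm = n * wavelength_nm / (2 * math.sin(math.radians(theta_deg)))
print(round(d_nm, 3))          # ~0.204 nm between crystal planes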

1983 Richard Buckminster Fuller (July 12, 1895 – July 1, 1983) was a U.S. engineer and architect who developed the geodesic dome, the only large dome that can be set directly on the ground as a complete structure, and the only practical kind of building that has no limiting dimensions (i.e., beyond which the structural strength must be insufficient). Fuller also invented a wide range of other paradigm-shifting machines and structural systems. He was especially interested in high-strength-to-weight designs, with a maximum of utility for minimum of material. His designs and engineering philosophy are part of the foundation of contemporary high-tech design aesthetics. He held over 2000 patents.*TIS
This is another one who died within two weeks of his date of birth. I must organize data on this...



Credits :
*CHM=Computer History Museum
*FFF=Kane, Famous First Facts
*NSEC= NASA Solar Eclipse Calendar
*RMAT= The Renaissance Mathematicus, Thony Christie
*SAU=St Andrews Univ. Math History
*TIA = Today in Astronomy
*TIS= Today in Science History
*VFR = V Frederick Rickey, USMA
*Wik = Wikipedia
*WM = Women of Mathematics, Grinstein & Campbell
          ALL IN TIL IT DROPS, FORGET ALL WARNINGS; THE ULTIMATE HIGH RISK GAME / SAFE HAVEN   

All In Til It Drops, Forget All Warnings; The Ultimate High Risk Game

By: Doug Wakefield


I have not posted to the blog recently since week by week equity markets continue to reflect "perfect calm". Don't get me wrong, banana peels and trends are building that could shift the landscape quickly.

Yet today, what we are watching is not only "perfection" in the S&P 500 and Dow, but in South Korea's Kospi Index, India's Nifty 50, and in Argentina's Merval Index.

Do global headlines support a time of world peace and booming economic prosperity worldwide? Do they support a "perfect calm" in global stocks?

Think with me. Don't fall asleep from the continued drone of algorithmic calm.

Look at these four charts, developed by UBS and Citigroup found in two recent articles on Zero Hedge.

The Global Credit Impulse


Credit growth has fallen off a cliff. The "perfect" stock market view is not supported by "perfect" credit growth.

Is the solution even MORE debt? The Keynesian central bankers seem determined to outrun the largest stock bubble on record by printing their way out of this conundrum.

Central Banks Buy 1.5 Trillion in Assets YTD, ValueWalk, June 10, 2017

After 8 years and over 12 trillion in asset purchases between 2009 and 2016, central banking actions continue telling the crowd, "we have your back". Yet the reality of 2000-2002 and 2007-2009 makes it clear there is no such thing as a permanent bailout.

These charts by Citigroup make it clear that the era of the "assisted" stock investor is facing the reality of a severe credit contraction.

Central Banks' Response to Lack of Inflation
Credit Fuelled Asset Price Inflation
China House Prices and European Equities


There is no "the market feels"; only "the computers reacted". Without someone's computers preset to trade at specific technical lines over and over again, the "perfect calm" would not exist. A crowd of humans worldwide could never be this precise.

June 9th was yet another major warning to global equity investors. Ignoring so many fundamental and technical warnings is only increasing system-wide risk. When the patterns change due to extreme crowding, the big shift will not present investors with much time to mentally and financially adjust.


G3 Central Bank Balance Sheet

For 6 weeks I have been adding to my longest newsletter since starting The Investor's Mind in 2006.

It is called "Ten 'Little' Dominoes". We finished domino #8 last week, with #9 being released next week. Click here to join those reading The Investor's Mind as we monitor these world trends, and seek to prepare for what lies ahead, rather than hope for more "assistance".


Dow Weekly Chart

          Software Design Engineer - Relocation to Wilton, CT - (Boston)   
Location: Wilton - CT, US | Activity Level: Bachelor | Experience: 0-2 Starter | Available since: 2017-06-06 | Functional area | Background: Computer Science | Reference: US02878 | Apply for this job

Introduction
ASML brings together the most creative minds in science and technology to develop lithography machines that are key to producing faster, cheaper, more energy-efficient microchips. We design, develop, integrate, market and service these advanced machines, which enable our customers - the world's leading chipmakers - to reduce the size and increase the functionality of their microchips, which in turn leads to smaller, more powerful consumer electronics. Our headquarters are in Veldhoven, the Netherlands, and we have 18 office locations around the United States including main offices in Wilton, CT, Chandler, AZ and San Jose, CA.

Job Description
The individual will be responsible for key modules of the production software throughout the development cycle, ranging from specifying functional requirements by working with multi-disciplinary teams, providing detailed design specifications, outlining testing effort, to implementing the software and executing the testing steps to qualify the product, etc. The engineer will work closely with the team leader, the project management and other developers to create robust software that offers advanced architecture and fulfills the business needs. As a software engineer at ASML, you will be working on robotics, image processing, complex algorithms, GUI and application software.
          Informatics researchers combine algorithm with EHR data to predict secondary stroke risks   
A team of cardiologists and informatics researchers in Northern California has developed a way to predict which patients will experience an irregular heartbeat after having a stroke—a significant risk factor for a second stroke. The tool could be used to identify patients that require 24/7 monitoring.
          Microsoft made its AI work on a $10 Raspberry Pi   

When you're far from a cell tower and need to figure out if that bluebird is Sialia sialis or Sialia mexicana, no cloud server is going to help you. That's why companies are squeezing AI onto portable devices, and Microsoft has just taken that to a new extreme by putting deep learning algorithms onto a Raspberry Pi. The goal is to get AI onto "dumb" devices like sprinklers, medical implants and soil sensors to make them more useful, even if there's no supercomputer or internet connection in sight.

Via: Mashable

Source: Microsoft


          Facebook gives moderators full access to the accounts of terror suspects   
The social network identifies the suspects with the help of algorithms. The moderators are then supposed to assess the content. To do so, they are also allowed to determine a user's location and read their private messages.
          The Value Of Our Digital Bits   

I think way too much about the digital bits being transmitted online each day. I study the APIs that are increasingly being used to share these bits via websites, mobile, and other Internet-connected devices. These bits can be as simple as your messages and images or can be as complex as the inputs and outputs of algorithms used in self-driving cars. I think about bits a level up from just the 1s and 0s, at the point where they start to become something more meaningful and tangible--as they are sent and received via the Internet, using web technology.

The average person takes these digital bits for granted, and is not burdened with the technical, business, and political concerns surrounding each of these bits. For many other folks across a variety of sectors, these bits are valuable, and they are looking to get access to as many of them as they can. These folks might work at technology startups, hedge funds, maybe in law enforcement, or just be tech-savvy hackers or activists on the Internet. If you hang out in these circles, data is often the new oil, and you are looking to get your hands on as much of it as you can, and are eager to mine it everywhere you possibly can.

In 2010, I started mapping out this layer of the web that was emerging, where bits were beginning to be sent and received via mobile devices, expanding the opportunity to make money from these increasingly valuable bits on the web. This move to mobile added a new dimension to each bit, making it even more valuable than before--it now possessed a latitude and longitude, telling us where it originated. Soon, this approach to sending and receiving digital bits spread to other Internet-connected devices beyond just our mobile phones, like our automobiles, home thermostats, and even wearables--to name just a few of the emerging areas.

The value of these bits will vary from situation to situation, with much of the value lying in the view of whoever is looking to acquire it. A Facebook wall post is worth a different amount to an advertiser looking for a potential audience than it will be to law enforcement looking for clues in an investigation, and let's not forget the value of this bit to the person who is posting it, or maybe their friends who are viewing it. When it comes to business in 2017, it is clear that our digital bits are valuable, even if much of this value is based purely on perception, with very little tangible value in the real world, and with many wild claims about the value and benefit of gathering, storing, and selling bits.

Markets are continually working to define the value of bits at a macro level, with many technology companies dominating the list, and APIs are defining the value of bits at the micro level--this is where I pay attention to things, at the individual API transaction level. I enjoy studying the value of individual bits, not because I want to make money off of them, but because I want to understand how those in power positions perceive the value of our bits and are buying and selling our bits at scale. Whether it is compute and storage in the cloud, or the television programs we stream, and the pictures and videos we share in our homes, these bits are increasing in value, and I want to understand the process by which everything we do is being reduced to a transaction.


          Many Perspectives On Internet Domains   

I am always fascinated by how people see Internet domains. I do not expect everyone to grasp all of the technical details of DNS or the nuance of the meaning behind the word domain, but I'm perpetually amazed by what people associate or do not associate with the concept. I like to write about these things under my domain literacy work, saving the research I do for future use, but also using the process to polish my storytelling on the subject, and hopefully being more influential when it comes to domain literacy discussions.

After watching the conversation around Audrey's decision to block annotation from her domain(s), I just wanted to take a moment and capture a few of the strange misconceptions around domains I've seen come up, as well as rework some of the existing myths and misunderstandings I deal with regularly when it comes to my API research, and wider domain literacy work. Let's explore some of the storytelling going on when it comes to what is an Internet domain.

What Is A Domain?
Many folks have no idea what a domain is--even though they type them into their browsers and click on them regularly--let alone that you can buy and own your own domain. This illiteracy actually plays into the hands of tech entrepreneurs, and each wave of capitalists who are investing in them--they do not want you knowing the details of each domain, who is behind them, and they want to make sure you are always operating on someone else's domain. It is how they will own, aggregate, and monetize your bits, always being the first to extract any value from what you do online, and via your mobile phones.

You Don't Own Your Domain!
A regular thing I hear back from people about domains is that you don't ever truly own your domain. Well, I'd first say that you never really truly own ANYTHING, but that is probably another conversation. Do you really own your house? What happens if you don't pay your taxes, or use and respect the title company, and other powers involved? What about eminent domain laws? Sure, you don't really own your domain, but you are able to purchase it, control the addressing of it, and decide what gets hosted there (or not). It's pretty damn close to a common definition of ownership for this discussion.

Your Domain Is On the Internet So It Is Public!
Just walk yourself through the top domains you can think of. Does this argument hold any water? Is every part of every domain on the Internet public because it uses public DNS and Internet infrastructure? No. There are so many grades of access and availability across many domains that use public infrastructure. Domain owners and operators get to determine which portions of a domain are accessible by the public, private partners, and even across internal actors. Even on the public areas, not protected by a password, there can be different levels of content delivery based upon region, individual IP address, or just randomly, leaving it to the algorithm to personalize what you will see. There are no guarantees of something being public, just because it uses a public domain.

Domain Name Servers (DNS) Is Voodoo
Yes. DNS is voodoo. I've been managing DNS professionally for domains since 1998, and I still think it's voodoo. Even with DNS being a dark art, it is still something the average person can comprehend, and even manage at a basic level for simple domains, especially with the help of DNS service providers. DNS is the address, doorway, and even the fence for the perimeter of your domain. DNS also helps you define and quantify the size of your domain, with the number of domains exponentially expanding your digital territory. A basic level of proficiency with DNS is required to manage your own domain(s) successfully.
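Even a couple of lines of Python using only the standard library will show the address side of DNS at work; the domain below is just an example.

# Resolve a domain name to the IP addresses DNS currently advertises for it.
import socket

hostname, aliases, addresses = socket.gethostbyname_ex("example.com")
print(hostname, addresses)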

We Own What You Do In Our Domain!
Ok. Sure. Any new data or content that is generated by systems running within your domain can be seen as YOUR intellectual property. However, when you invite people to bring their bits (photos, videos, thoughts) to your domain and don't really educate them about intellectual property, and what you are up to, it can be easily argued that maybe what people generate in your domain isn't always yours. Even with that said, ensuring things happen within a specific domain, so that you can place some sort of ownership claim over those bits is a pretty standard operating procedure for the web today. This is why most of my work is conducted via my own domain(s) each day, and syndicated out to other domains as I see fit.

There Is No Real Difference Between Domains 
As people surf the web, they rarely see the difference between each domain. Unless it's big brands like Twitter, Facebook, Google, and others, I don't think people really ever consider the domain they are on, or who might be behind it. Those of us in the business do a lot of thinking about domains and see the cracks in the web, but the average person doesn't see the boundaries, differences, or motivations behind them. This all contributes to many different paths people take when it comes to domain literacy--depending on where they came on board with the concepts, they'll see domains very differently. While some of us enjoy helping others understand domains, there are many who think it should be kept in the realm of the dark arts, and something normals shouldn't worry their pretty little heads about.

Everybody Gets The Same Experience At A Public Domain
That each domain you visit on the public web looks the same for everyone who visits is a common perception I get from folks. We are good at projecting our reality at common online domains onto other people. The news I see on my favorite news site is what everyone else sees. My view of Facebook, Instagram, and Twitter is similar to what other people experience--or rather, I don't think people spend much time thinking about it; things are the way they are through a lack of curiosity. My Facebook is definitely not your Facebook. Our web experience is increasingly personalized and bubbleized, changing how and what each domain will mean to different folks. Net Neutrality is under attack on many fronts and is rapidly being eroded away in our browsers and on our mobile phones via the major providers.

I am captivated by this version of our online world that is unfolding around us. What worries me is the lack of understanding about how it works, and the lack of awareness people have of where they are operating when online. People don't seem concerned with knowing what is safe and what is not. What worries me the most is the number of people who don't even have the concept of a domain, domain ownership, or any sense of separation between sites online. After that, it is the misuse, misinformation, and obfuscation of the digital world by people operating in the shadows and benefitting from ad revenue. I know many folks who would argue that we need to create safe spaces (domains) like Facebook where people can operate, but I feel pretty strongly that this is an Internet discussion, and not merely a platform one.

We have a lot of work ahead of us when it comes to web literacy. With the amount of time we are spending online, and the ways we are letting it infiltrate our physical worlds, we have to do better at educating people about the basic building blocks of the web. If we let "them" ruin the web, and platforms become the only safe place to be--corporations win, and this grand experiment called the web is over. Maybe it already is, or maybe it never was, or maybe we can just help folks see the web for what it is.


          Working To Understand The Digital World Around Us   

My partner in crime Audrey Watters and I recently rebranded our umbrella company as Contrafabulists, and along the way, we worked with our friend Bryan Mathers to help us develop some graphics that would help define our work. Bryan quickly developed a logo for Contrafabulists that I think represents what we do--embedding ourselves within the gears of the machine, pushing back on the daily stories from the technology sector.

Bryan has a unique approach to conducting his work. He spent time with us on a video call discussing our vision, listening to both of us speak, while also applying some of what he already knew of Audrey's Hack Education work, as well as my API Evangelist and Drone Recovery work. From this discussion, he created a banner image that we use as the banner for the Contrafabulists website -- providing another great visualization of our work.

I love staring into the eyes of the owl, which stares back at you with its mechanical gaze, forcing you to ask the hard questions about how you are using technology. Maybe you are complicit in the stories coming out of the technology sector, or maybe you are just a listener or narrator of these stories--either way, the owl's eyes quickly get to work understanding you, and what defines you from a technical view.

After we launched the Contrafabulists website, Bryan was listening to our podcast, where Audrey and I rant about the week, and he produced an image that was unexpected and resonates with me in some powerful ways. Bryan's work illustrates where we are at when it comes to defining who we are in the digital world unfolding around us, while the machines are all learning about us as well.

I do not know which conversation inspired Bryan's work, but I'm assuming it was our discussion around what machine learning technology can do, and what it can't do. Machine learning is a very (intentionally so) abstract term that is being used across the latest wave of rhetoric coming out of the technology sector, and that often invokes magical visions in your head about what the machines are learning. Understanding more about what machine learning is, and what it isn't, is a significant portion of my work as API Evangelist, overlapping with Audrey's work on Hack Education--Bryan's work is extremely relevant and continues to help augment our storytelling in an important way.

There are three significant things going on in his image for me. At first glance, it feels like a representation of what the machine sees of us, when trying to interpret a photo of us using facial or object recognition, defining our face, the space and context around us, while also linking that to other aspects of our social and digital footprint. Then I'm overwhelmed with feelings of my own efforts to define who I am, with each blog post, social media post, or image uploaded--in which the machine is working so hard to understand in the same moment. Then there is the intersection of these two worlds, and the struggle to understand, connect, find meaning, and deliver value--the struggle to define our digital self, something we either do ourselves, or it will be done for us by the technological platforms we operate on.

As I process these thoughts, I would add a fourth dimension to this struggle, something that is very API driven--the role 3rd parties play in defining us, and the world around us, in an increasingly digital world. Our world is increasingly being shaped by platforms, and the 3rd parties who have learned how to p0wn these platforms, whether for ideological or financial gain. Our understanding of the immigration debate is perpetually being shaped by platforms like Twitter or Facebook, and a small group of 3rd party influencers who have learned to shape and game the algorithm.

As we are learning, the machines are also learning about us, something that is being used against us in real-time by those who understand how to manipulate the algorithms to achieve their objectives. Helping people understand what we mean when we say machine learning is difficult--this is because machine learning is technically complicated, but it is also designed to provide a smoke screen for any exploitation and manipulation that is occurring behind the scenes. Machine learning is designed to be understood by a handful of wizards, leaving everyone else to bask in the glow of the personalization and convenience it delivers, asking no questions regarding the magical capabilities of the machine.

Machine learning is increasingly defining us in the online world, watching everything we do on Facebook, Instagram, Twitter, and via search engines like Google, but it is also beginning to define how we see the physical world around us, helping shape how we see other cities, countries, and places we may never actually visit or experience in person--algorithmically painting a picture of how we see the world.

Audrey and I are dedicated to understanding the stories coming out of the tech sector, cutting through the marketing, hype, and storytelling accompanying each wave of technology. Machine learning is just one of many areas we work to understand, in an increasingly complex landscape of magic and wizardry being sold via the Internet and applications that are infiltrating our mobile phones, televisions, automobiles, and every other corner of our personal and professional lives. 

I'm thankful to have folks like Bryan Mathers along for the ride, assisting us in crafting images for the stories we tell. I feel like our words are critical, but it is equally important to have meaningful images to go along with the words we write each day. Amidst the constant assault of information each day, sometimes all we have time for is a couple of seconds to absorb a single image, making the photo and image gallery an important section in our Contrafabulists toolbox. I imagine using Bryan's machine learning photos in dozens of stories over the next couple of years, and I'm hoping that it will continue to come into focus, helping us better connect the dots, and see our digital reflection in this pool we have waded into.


          Machine Learning Will Be A Vehicle For Many Heists In The Future   

I am spending some cycles on my algorithmic rotoscope work, which is basically a stationary exercise bicycle for learning about what is, and what is not, machine learning. I am using it to help me understand and tell stories about machine learning by creating images using machine learning that I can use in my machine learning storytelling. Picture a bunch of machine learning gears all working together to help make sense of what I'm doing, and WTF I am talking about.

As I'm writing a story on how image style transfer machine learning could be put to use by libraries, museums, and collection curators, I'm reminded of what a con machine learning will be in the future, and how it will be a vehicle for the extraction of value and outright theft. My image style transfer work is just one tiny slice of this pie. I am browsing through the art collections of museums, finding images that have meaning and value, and then I'm firing up an AWS instance that costs me $1 per hour to run, pointing it at an image, and extracting the style, texture, color, and other characteristics. I take what I extracted from a machine learning training session, and package it up into a machine learning model that I can use in a variety of algorithmic objectives I have.
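
To make that extraction step a little less magical, here is a minimal sketch of what it amounts to, written in Python with PyTorch and torchvision. This is not the exact pipeline I run on AWS--the file name, image size, and layer choices are illustrative assumptions--but it shows the kind of statistics that get pulled out of a work of art and bundled into a "model".

import torch
from PIL import Image
from torchvision import models, transforms

# Layers of VGG-16 whose activations are commonly used to describe "style".
STYLE_LAYERS = {3, 8, 15, 22}

def gram_matrix(features):
    # Summarize a feature map as correlations between channels --
    # the color/texture statistics that style transfer actually keeps.
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def extract_style(path):
    vgg = models.vgg16(pretrained=True).features.eval()
    preprocess = transforms.Compose([
        transforms.Resize(512),
        transforms.ToTensor(),
    ])
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    grams, x = [], image
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in STYLE_LAYERS:
                grams.append(gram_matrix(x))
    return grams  # this bundle of matrices is the "essence" that gets reused

style = extract_style("museum_scan.jpg")  # hypothetical local copy of the artwork

The important thing to notice is that nothing in this sketch understands the artwork--it just summarizes correlations between colors and textures that can later be imposed on other photos.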

I didn't learn anything about the work of art. I basically took a copy of its likeness and features. Kind of like the old Indian chief would say to the photographer in the 19th century when they'd take his photo. I'm not just taking a digital copy of this image. I'm taking a digital copy of the essence of this image. Now I can take this essence and apply it in an Instagram-like application, transferring the essence of the image to any new photo the end-user desires. Is this theft? Do I owe the owner of the image anything? I'm guessing it depends on the licensing of the image I used in the image style transfer model--which is why I tend to use openly licensed photos. I'll have to learn more about copyright and see if there are any algorithmic machine learning precedents to be had.

My theft example in this story is just low-level algorithmic art essence theft. However, this same approach will play out across all sectors. A company will approach another company telling them they have this amazing machine learning voodoo, and if we run it against your data, content, and media, it will tell you exactly what you need to know, and give you the awareness of a deity. Oh, and thank you for giving me access to all your data, content, and media--it has significantly increased the value of my machine learning models, something that might not be expressed in our business agreement. This type of business model is above your pay grade, and operating on a different plane of existence.

Machine learning has a number of valuable uses, with some very interesting advancements having been made in recent years, notably around Tensorflow. Machine learning itself doesn't have me concerned. It is the investment behind machine learning, the less than ethical approaches of some of the machine learning companies I am watching, and their tendency toward making wild claims about what machine learning can do. Machine learning will be the trojan horse for this latest wave of artificial intelligence snake oil salesmen. All I am saying is that you should be thoughtful about which machine learning solutions you connect to your backend, and when possible make sure you are just connecting them to a sandboxed, special version of your world that won't actually do any damage when things go south.


          Why Would People Want Fine Art Trained Machine Learning Models   

I'm spending time on my algorithmic rotoscope work, and thinking about how the machine learning style textures I've been making can be put to use. I'm trying to see things from different vantage points and develop a better understanding of how texture styles can be put to use in the regular world.

I am enjoying using image style filters in my writing. They give me kind of a gamified layer to my photography and drone hobby, allowing me to create images I can actually use in my work as the API Evangelist. Having unique filtered images available for use in my writing is valuable to me--enough to justify the couple hundred dollars I spend each month on AWS servers.

I know why I like applying image styles to my photos, but why do others? Most of the image filters we've seen from apps like Prisma are focused on fine art, training image style transfer machine learning models on popular art that people are already familiar with. I guess this allows people to apply the characteristics of art they like to the photographic layer of our increasingly digital lives.

To me, it feels like some sort of art placebo--a way of superficially and algorithmically injecting what our brains tell us is artsy into our fairly empty, digital photo reality. Taking photos in real time isn't satisfying enough anymore. We need to distract ourselves from the world by applying art to our digitally documented physical world--almost the opposite of augmented reality, if there is such a thing. Getting lost in the ability to look at the real world through the algorithmic lens of our online life.

We are stealing the essence of the meaningful, tangible art from our real world, and digitizing it. We take this essence and algorithmically apply it to our everyday lives, trying to add some color, some texture, but not too much. We need the photos to still be meaningful, and have context in our lives, but we need to be able to spray an algorithmic lacquer of meaning on our intangible lives.

The more filters we have, the more lenses we have to look at the exact same moments we live each day. We go to work. We go to school. We see the same scenery, the same people, and the same pictures each day. Now we are able to algorithmically shift, distort, and paint the picture of our lives we want to see.

Now we can add color to our lives. We are being trained to think we can change the palette, and are in control of our lives. We can colorize the old World War 2 era photos of our day, and choose whether we want to color within, or outside, the lines. Our lives don't have to be just binary 1s and 0s, and black or white.

Slowly, picture by picture, algorithmic transfer by algorithmic transfer, the way we see the world changes. We no longer settle for the way things are, the way our mobile phone camera catches them. The digital version is the image we share with our friends, family, and the world. It should always be the most brilliant, the most colorful--the painting that catches their eye and makes them stand captivated in front of the wall of our Facebook feed.

We will no longer remember what reality looks like, or what art looks like. Our collective social media memory will dictate what the world looks like. The number of likes will determine what is artistic, and what is beautiful or ugly. The algorithm will only show us the images that match the world it wants us to see. Algorithmically, artistically painting the inside walls of our digital bubble.

Eventually, the senses that are stimulated when we see photos will be well worn. They will be well programmed, with known inputs and predictable outputs. The algorithm will be able to deliver exactly what we need, and correctly predict what we will need next, scheduling and queuing up the next fifty possible scenarios--with exactly the right colors, textures, and meaning.

How we see art will be forever changed by the algorithm. Our machines will never see art. Our machines will never know art. Our machines will only be able to transfer the characteristics we see and deliver them into newer, more relevant, timely, and meaningful images. Distilling down the essence of art into binary, and programming us to think this synthetic art is meaningful, and still applies to our physical world.

Like I said, I think people like applying artistic image filters to their mobile photos because it is the opposite of augmented reality. They are trying to augment their digital (hopes of reality) presence with the essence of what we (the algorithm) think matters to us in the world. This process isn't about training a model to see art, like some folks may tell you. It is about distilling down some of the most simple aspects of what our eyes see as art, and giving this algorithm to our mobile phones and social networks to apply to the photographic digital logging of our physical reality.

It feels like this is about reprogramming people. It is about reprogramming what stimulates you--automating an algorithmic view of what matters when it comes to art, and applying it to a digital view of what matters in our daily worlds, via our social networks. Just one more area of our lives where we are allowing algorithms to reprogram us, and bend our reality to be more digital.

I Borrowed This Image From University of Maine Museum of Art


          The Observability Of Uber   

I had another observation out of the Uber news from this last week, where Uber was actively targeting regulators and police in cities around the globe, delivering an alternate experience for these users because it had targeted them as enemies of the company. To most startups, regulation is seen as the enemy, so these users belong in a special bucket--where they can be excluded from the service, or even actively given a special Uber experience.

It makes me think about the observability of the platforms we depend on, like Uber. How observable is Uber to the average user, to regulators, to law enforcement, to the government? How observable should the platforms we depend on be? Can everyone sign up for an account, use the website, mobile applications, or APIs and expect the same results? How well can we understand the internal states of Uber, the platform and company, from knowledge obtained through its existing external outputs -- the mobile application and API?

When it comes to the observability of the platforms we depend on via our mobile phones each day, there are no laws stating they have to treat us all the same. The applications on our mobile phones are personalized, making notions of net neutrality seem naive. There is nothing that says Uber can't treat each user differently, based upon their profile score, or whether they are law enforcement. We are not entitled to any sort of visibility into the algorithms that decide whether we get a ride with Uber, or how they see us--this is the mystery, magic, and allure of the algorithm. This is why startups are able to wrap anything in an algorithm and sell it as the next big thing.

The question of how observable Uber should be will get defined in the coming months and years. What surprises me is that we are just now getting around to having these conversations, when these companies possess an unprecedented amount of observability into our personal and professional lives. The Uber app knows a lot about us, and in turn, Uber knows a lot about us. I'm thinking the more important question is, why are we allowing so much observability by these tech companies into our lives, with so little in return when it comes to understanding the business practices and ethics behind the company firewall?


          Machine Learning Style Transfer For Museums, Libraries, and Collections   

I am putting some thought into next steps for my algorithmic rotoscope work, which is about training and applying image style transfer machine learning models. I'm talking with Jason Toy (@jtoy) over at Somatic about the variety of use cases, and I want to spend some time thinking about image style transfers from the perspective of a collector or curator of images--brainstorming how they can organize and make their work(s) available for use in image style transfers.

Ok, let's start with the basics--what am I talking about when I say image style transfer? I recommend starting with a basic definition of machine learning in this context, provided by my girlfriend and partner in crime, Audrey Watters. Beyond that, I am just referring to training a machine learning model by directing it to scan an image. This model can then be applied to other images, essentially transferring the style of one image to any other image. There are a handful of mobile applications out there right now that let you apply a selection of filters to images taken with your mobile phone--Somatic is looking to be the wholesale provider of these features.
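
To ground what "applied to other images" means in practice, here is a minimal sketch of the apply step in Python with PyTorch. It assumes a style model has already been trained and exported as a TorchScript file--the file names and paths are hypothetical placeholders, not Somatic's or Algorithmia's actual API.

import torch
from PIL import Image
from torchvision import transforms
from torchvision.transforms.functional import to_pil_image

def stylize(photo_path, model_path, out_path):
    # Load a transform network that was trained against a single piece of art.
    net = torch.jit.load(model_path, map_location="cpu").eval()

    to_tensor = transforms.Compose([transforms.Resize(720), transforms.ToTensor()])
    photo = to_tensor(Image.open(photo_path).convert("RGB")).unsqueeze(0)

    # One forward pass transfers the stored style onto the new photo.
    with torch.no_grad():
        styled = net(photo).clamp(0, 1).squeeze(0)

    to_pil_image(styled).save(out_path)

# One exported model per style; the same photo can be run through any of them.
stylize("ellis_island.jpg", "models/propaganda_poster.pt", "ellis_island_styled.jpg")

The training side is the expensive part; once a model like this exists, applying it is cheap enough to run inside a mobile app or behind an API call.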

Training one of these models isn't cheap. It costs me about $20 per model in GPUs to create--this doesn't consider my time, just my hard compute costs (AWS bill). Not every model does anything interesting. Not all images, photos, and pieces of art translate into cool features when applied to images. I've spent about $700 training 35 filters. Some of them are cool, and some of them are meh. I've had the most luck focusing on dystopian landscapes, which I can use in my storytelling around topics like immigration, technology, and the election.

This work ended up with Jason and me talking about museums and library collections, and the opportunities for them to think about their collections in terms of machine learning, and specifically algorithmic style transfer. Do you have images in your collection that would translate well for use in graphic design, print, and digital photo applications? I spend hours looking through art books for the right textures, colors, and outlines. I also spend hours looking through graphic design archives for the movie and gaming industries, as well as government collections, looking for just the right set of images that will either transfer and produce an interesting look, or possibly transfer something meaningful to the new images I am applying styles to.

Sometimes style transfers just make a photo look cool, bringing some general colors, textures, and other features to a new photo--there really isn't any value in knowing what image was behind the style transfer, it just looks cool. Other times, the image can be enhanced by knowing about the image behind the machine learning model, not just transferring styles between images, but potentially transferring some meaning as well. You can see this in action when I took a Nazi propaganda poster and applied it to a photo of Ellis Island, or took an old Russian propaganda poster and applied it to images of the White House. In a sense, I was able to take some of the 1,000 words attached to those propaganda posters and transfer them to new photos I had taken.

It's easy to think you will make a new image into a piece of art by training a model on a piece of art and transferring its characteristics to a new image using machine learning. Where I find the real value is in actually understanding collections of images, while also being aware of the style transfer process, and thinking about how models can be trained and applied. However, this only gets you so far--there still has to be some value or meaning in how it is being applied, accomplishing a specific objective and delivering some sort of meaning. If you are doing this as part of some graphic design work, it will be different than if you are doing it for fun in a mobile phone app with your friends.

To further stimulate my imagination and awareness, I'm looking through open image collections from a variety of institutions:

I am also using some of the usual suspects when it comes to searching for images on the web:

I am working on developing specific categories that have relevance to the storytelling I'm doing across my blogs, and sometimes to help power my partner's work as well. I'm currently mining the following areas, looking for interesting images to train style transfer machine learning models:

  • Art - The obvious usage for all of this, finding interesting pieces of art that make your photos look cool.
  • Video Game - I find video game imagery to provide a wealth of ideas for training and applying image style transfers.
  • Science Fiction - Another rich source of imagery for the training of image style transfer models that do cool things.
  • Electrical - I'm finding circuit boards, lighting, and other electrical imagery to be useful in training models.
  • Industrial - I'm finding industrial images to work for both sides of the equation in training and applying models.
  • Propaganda - These are great for training models, and then transferring the texture and the meaning behind them.
  • Labor - Similar to propaganda posters, potentially some emotional work here that would transfer significant meaning.
  • Space - A new one I'm adding for finding interesting imagery that can train models, so I can see what the effect is.

As I look through more collections, and gain experience training and applying style transfer models, I have begun to develop an eye for what looks good. I also develop more ideas along the way about imagery that can help reinforce the storytelling I'm doing across my work. It is a journey I am hoping more librarians, museum curators, and collection stewards will embark on. I don't think you need to learn the inner workings of machine learning, but you should at least develop enough of an understanding that you can think more critically about the collections you are knowledgeable about.

I know Jason would like to help you, and I'm more than happy to help you along in the process. Honestly, the biggest hurdle is the money to afford the GPUs for training the models. After that, it is about spending the time finding images to train models, as well as applying the models to a variety of imagery, as part of some sort of meaningful process. I can spend days looking through art collections, then spend a significant amount of AWS budget training machine learning models, but if I don't have a meaningful way to apply them, it doesn't bring any value to the table, and it's unlikely I will be able to justify the budget in the future.

My algorithmic rotoscope work is used throughout my writing and helps influence the stories I tell on API Evangelist, Kin Lane, Drone Recovery, and now Contrafabulists. I invest about $150.00 a month training image style transfer models, keeping a fresh batch of models coming off the assembly line. I have a variety of tools that allow me to apply the models using Algorithmia and now Somatic. I'm now looking for folks who have knowledge of and access to interesting image collections and want to learn more about image style transfer, as well as graphic design and print shops, mobile application development shops, and other folks who are just curious about WTF image style transfers are all about.


          The Residue Of The Internet's C4I DNA Visible In Uber's Behavior   

The military's fingerprints are visible throughout the Internet's history, with much of the history of compute born out of war, so it's no surprise that the next wave of warfare is all about the cyber (it's HUGE). With so much of Internet technology being inseparable from military ideology, and much of its funding coming from the military-industrial complex, it is going to be pretty hard for Internet tech to shake the core DNA programmed into it from its command, control, communications, computers, and intelligence (C4I) seeds.

This DNA is present in the unconscious behavior we see from startups, most recently with the news of Uber deceiving authorities using a tool they developed called Greyball, allowing them to target regulators and law enforcement, and to prevent or obscure their access to and usage of the ridesharing platform. User profiling and targeting is a staple of Silicon Valley startups. Startups profile and target their definition of ideal behavior(s), and then focus on getting users to operate within these buckets, or segment them into less desirable buckets and deal (or don't deal) with them however they deem appropriate.

If you are a marketer or salesperson, you think targeting is a good thing. You want as much information on a single user, and on a group of users, as you can possibly get, so that you can command and control (C2) your desired outcome. If you are a software engineer, this is all a game to you. You gather all the data points you possibly can to build your profiles--command, control, communications, and intelligence (C3i). The Internet's DNA whispers in your ear--you are the smart one here; everyone else is just a pawn in your game. Executives and investors just sit back and pull the puppet strings on all the actors within their control.

It's no surprise that Uber is targeting regulators and law enforcement. They are just another user demographic bucket. I guarantee there are buckets for competitors, and for their employees who have signed up for accounts. When any user signs up for your service, you process what you know about them, put them in a bucket, and see where they exist (or don't) within your sales funnel (rinse, repeat). Competitors, regulators, and law enforcement all have a role to play, and the bucket they get put into, and the service they receive, will be (slightly) different from everyone else's.

We engineers love to believe that we are the puppet masters, when in reality we are the puppets, with our strings pulled by those who invest in us, and by our one true master--Internet technology. We numb ourselves and conveniently forget the history of the Internet, and lie to ourselves that venture capital has our best interests in mind and that they need us. They do not. We are a commodity. We are the frontline of this new type of warfare that has evolved as the Internet over the last 50 years--and we are beginning to see the casualties of this war: democracy, privacy, and security.

This is cyber warfare. It's lower-level warfare in the sense that the physical destruction and blood aren't front and center, but the devastation and suffering still exist. Corporations are forming their own militias, drawing lines, defining their buckets, and targeting groups to deliver propaganda to, while positioning for a variety of attacks against competitors, regulators, law enforcement, and other competing militias. You attack anyone the algorithm defines as the enemy. You aggressively sell to those who fit your ideal profile. You try to indoctrinate anyone you can trust to be part of your militia, and keep fighting--it is what the Internet wants us to do.


          What Do You Mean When You Say You Are Training A Machine Learning Model?   

I was sharing my latest Algorithmic Rotoscope image on Facebook and a friend asked me what I meant by training a machine learning model. I still suck at explaining this stuff in any normal way. When you get too close to the fire you lose your words sometimes. It is why I try to step away and write stories about it--it helps me find my words, and learn to use them in new and interesting ways.

Thankfully I have a partner in crime who understands this stuff and knows how to use her words. Audrey came up with the following explanation of what machine learning is in the context of my Algorithmic Rotoscope work:

"Machine learning" is a subset of AI in which a computer works at a problem programmatically without being explicitly programmed to do something specific. In this case, the Algorithmia folks have written a program that can identify certain characteristics in a piece of art -- color, texture, shadow, etc. This program can be used to construct a filter and that can be used in turn to alter another image. Kin is "training" new algorithms based on Algorithmia's machine learning work -- in order to build a new filter like this one based on Russian propaganda, the program analyzes that original piece of art -- the striking red, obviously. The computer does this thru machine learning rather than Kin specifying what it should "see."

I use my blog as a reference for my ideas and thoughts, and I didn't want to lose this one. I'm playing with machine learning so that I can better understand what it does, and what it doesn't do. It helps me to have good explanations of what I'm doing, so I can turn other people on to the concept, and so I make more sense (some of the time). We are going to have to develop the ability to have a conversation about the artificial intelligence and machine learning assault that has already begun. It will be important that we help others get up to speed and see through the smoke and mirrors.

When it comes to training algorithmic models using art, there isn't any machine learning going on. My model isn't learning art. When I execute the model against an image it isn't making art either. I am just training an algorithm to evaluate and remember an image, creating a model that can then be applied to other images--transferring the characteristics from one image to another algorithmically. In my work it is important for me to understand the moving parts, and how the algorithmic gears turn, so I can tell more truthful stories about what all of this is, and generate visuals that complement these stories I'm publishing.
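
For anyone who wants to see the gears, here is a skeletal sketch in Python with PyTorch of what that "training" step looks like--a rough stand-in for the fast style transfer approach, not the exact code behind Algorithmia's service or my own setup. The file paths, image sizes, and the deliberately tiny network are illustrative assumptions.

import torch
import torch.nn as nn
from PIL import Image
from pathlib import Path
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg16(pretrained=True).features.eval().to(device)
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS, CONTENT_LAYER = {3, 8, 15, 22}, 15

def features(x):
    # Walk the frozen VGG network, collecting style statistics (Gram matrices)
    # and one content feature map along the way.
    style, content = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            b, c, h, w = x.shape
            f = x.view(b, c, h * w)
            style.append(f @ f.transpose(1, 2) / (c * h * w))
        if i == CONTENT_LAYER:
            content = x
    return style, content

# A deliberately tiny stand-in for the image transform network.
net = nn.Sequential(
    nn.Conv2d(3, 32, 9, padding=4), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 9, padding=4), nn.Sigmoid(),
).to(device)

prep = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
load = lambda p: prep(Image.open(p).convert("RGB")).unsqueeze(0).to(device)

style_grams, _ = features(load("russian_poster.jpg"))      # the one artwork
photos = [load(p) for p in Path("photos/").glob("*.jpg")]  # ordinary photos
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
mse = nn.MSELoss()

for epoch in range(2):                       # real training runs far longer
    for photo in photos:
        out = net(photo)
        out_style, out_content = features(out)
        _, target_content = features(photo)
        style_loss = sum(mse(a, b) for a, b in zip(out_style, style_grams))
        loss = 1e5 * style_loss + mse(out_content, target_content)
        opt.zero_grad(); loss.backward(); opt.step()

torch.save(net.state_dict(), "poster_style.pth")  # the reusable "model"

Nothing in this loop understands the poster. It just nudges the little network until its outputs reproduce the poster's color and texture statistics while keeping the content of whatever photo goes in--which is all the "remembering" amounts to.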


          Algorithms for Minimization without Derivatives   

          Optimization: Algorithms and Applications   

          Andy Wingo: guile 2.2 omg!!!   

Oh, good evening my hackfriends! I am just chuffed to share a thing with yall: tomorrow we release Guile 2.2.0. Yaaaay!

I know in these days of version number inflation that this seems like a very incremental, point-release kind of a thing, but it's a big deal to me. This is a project I have been working on since soon after the release of Guile 2.0 some 6 years ago. It wasn't always clear that this project would work, but now it's here, going into production.

In that time I have worked on JavaScriptCore and V8 and SpiderMonkey and so I got a feel for what a state-of-the-art programming language implementation looks like. Also in that time I ate and breathed optimizing compilers, and really hit the wall until finally paging in what Fluet and Weeks were saying so many years ago about continuation-passing style and scope, and eventually came through with a solution that was still CPS: CPS soup. At this point Guile's "middle-end" is, I think, totally respectable. The backend targets a quite good virtual machine.

The virtual machine is still a bytecode interpreter for now; native code is a next step. Oddly my journey here has been precisely opposite, in a way, to An incremental approach to compiler construction; incremental, yes, but starting from the other end. But I am very happy with where things are. Guile remains very portable, bootstrappable from C, and the compiler is in a good shape to take us the rest of the way to register allocation and native code generation, and performance is pretty ok, even better than some natively-compiled Schemes.

For a "scripting" language (what does that mean?), I also think that Guile is breaking nice ground by using ELF as its object file format. Very cute. As this seems to be a "Andy mentions things he's proud of" segment, I was also pleased with how we were able to completely remove the stack size restriction.

high fives all around

As is often the case with these things, I got the idea for removing the stack limit after talking with Sam Tobin-Hochstadt from Racket and the PLT group. I admire Racket and its makers very much and look forward to stealing from working with them in the future.

Of course the ideas for the contification and closure optimization passes are in debt to Matthew Fluet and Stephen Weeks for the former, and Andy Keep and Kent Dybvig for the latter. The intmap/intset representation of CPS soup itself is highly indebted to the late Phil Bagwell, to Rich Hickey, and to Clojure folk; persistent data structures were an amazing revelation to me.

Guile's virtual machine itself was initially heavily inspired by JavaScriptCore's VM. Thanks to WebKit folks for writing so much about the early days of Squirrelfish! As far as the actual optimizations in the compiler itself, I was inspired a lot by V8's Crankshaft in a weird way -- it was my first touch with fixed-point flow analysis. As most of yall know, I didn't study CS, for better and for worse; for worse, because I didn't know a lot of this stuff, and for better, as I had the joy of learning it as I needed it. Since starting with flow analysis, Carl Offner's Notes on graph algorithms used in optimizing compilers was invaluable. I still open it up from time to time.

While I'm high-fiving, large ups to two amazing support teams: firstly to my colleagues at Igalia for supporting me on this. Almost the whole time I've been at Igalia, I've been working on this, for about a day or two a week. Sometimes at work we get to take advantage of a Guile thing, but Igalia's Guile investment mainly pays out in the sense of keeping me happy, keeping me up to date with language implementation techniques, and attracting talent. At work we have a lot of language implementation people, in JS engines obviously but also in other niches like the networking group, and it helps to be able to transfer hackers from Scheme to these domains.

I put in my own time too, of course; but my time isn't really my own either. My wife Kate has been really supportive and understanding of my not-infrequent impulses to just nerd out and hack a thing. She probably won't read this (though maybe?), but it's important to acknowledge that many of us hackers are only able to do our work because of the support that we get from our families.

a digression on the nature of seeking and knowledge

I am jealous of my colleagues in academia sometimes; of course it must be this way, that we are jealous of each other. Greener grass and all that. But when you go through a doctoral program, you know that you push the boundaries of human knowledge. You know because you are acutely aware of the state of recorded knowledge in your field, and you know that your work expands that record. If you stay in academia, you use your honed skills to continue chipping away at the unknown. The papers that this process reifies have a huge impact on the flow of knowledge in the world. As just one example, I've read all of Dybvig's papers, with delight and pleasure and avarice and jealousy, and learned loads from them. (Incidentally, I am given to understand that all of these are proper academic reactions :)

But in my work on Guile I don't actually know that I've expanded knowledge in any way. I don't actually know that anything I did is new and suspect that nothing is. Maybe CPS soup? There have been some similar publications in the last couple years but you never know. Maybe some of the multicore Concurrent ML stuff I haven't written about yet. Really not sure. I am starting to see papers these days that are similar to what I do and I have the feeling that they have a bit more impact than my work because of their medium, and I wonder if I could be putting my work in a more useful form, or orienting it in a more newness-oriented way.

I also don't know how important new knowledge is. Simply being able to practice language implementation at a state-of-the-art level is a valuable skill in itself, and releasing a quality, stable free-software language implementation is valuable to the world. So it's not like I'm negative on where I'm at, but I do feel wonderful talking with folks at academic conferences and wonder how to pull some more of that into my life.

In the meantime, I feel like (my part of) Guile 2.2 is my master work in a way -- a savepoint in my hack career. It's fine work; see A Virtual Machine for Guile and Continuation-Passing Style for some high level documentation, or many of these bloggies for the nitties and the gritties. OKitties!

getting the goods

It's been a joy over the last two or three years to see the growth of Guix, a packaging system written in Guile and inspired by GNU stow and Nix. The laptop I'm writing this on runs GuixSD, and Guix is up to some 5000 packages at this point.

I've always wondered what the right solution for packaging Guile and Guile modules was. At one point I thought that we would have a Guile-specific packaging system, but one with stow-like characteristics. We had problems with C extensions though: how do you build one? Where do you get the compilers? Where do you get the libraries?

Guix solves this in a comprehensive way. From the four or five bootstrap binaries, Guix can download and build the world from source, for any of its supported architectures. The result is a farm of weirdly-named files in /gnu/store, but the transitive closure of a store item works on any distribution of that architecture.

This state of affairs was clear from the Guix binary installation instructions that just have you extract a tarball over your current distro, regardless of what's there. The process of building this weird tarball was always a bit ad-hoc though, geared to Guix's installation needs.

It turns out that we can use the same strategy to distribute reproducible binaries for any package that Guix includes. So if you download this tarball, and extract it as root in /, then it will extract some paths in /gnu/store and also add a /opt/guile-2.2.0. Run Guile as /opt/guile-2.2.0/bin/guile and you have Guile 2.2, before any of your friends! That pack was made using guix pack -C lzip -S /opt/guile-2.2.0=/ guile-next glibc-utf8-locales, at Guix git revision 80a725726d3b3a62c69c9f80d35a898dcea8ad90.

(If you run that Guile, it will complain about not being able to install the locale. Guix, like Scheme, is generally a statically scoped system; but locales are dynamically scoped. That is to say, you have to set GUIX_LOCPATH=/opt/guile-2.2.0/lib/locale in the environment, for locales to work. See the GUIX_LOCPATH docs for the gnarlies.)

Alternately of course you can install Guix and just guix package -i guile-next. Guix itself will migrate to 2.2 over the next week or so.

Welp, that's all for this evening. I'll be relieved to push the release tag and announcements tomorrow. In the meantime, happy hacking, and yes: this blog is served by Guile 2.2! :)


          On the Depth-Robustness and Cumulative Pebbling Cost of Argon2i, by Jeremiah Blocki and Samson Zhou   
Argon2i is a data-independent memory hard function that won the password hashing competition. The password hashing algorithm has already been incorporated into several open source crypto libraries such as libsodium. In this paper we analyze the cumulative memory cost of computing Argon2i. On the positive side we provide a lower bound for Argon2i. On the negative side we exhibit an improved attack against Argon2i which demonstrates that our lower bound is nearly tight. In particular, we show that (1) An Argon2i DAG is $\left(e,O\left(n^3/e^3\right)\right)$-reducible. (2) The cumulative pebbling cost for Argon2i is at most $O\left(n^{1.768}\right)$. This improves upon the previous best upper bound of $O\left(n^{1.8}\right)$ [Alwen and Blocki, EURO S&P 2017]. (3) Argon2i DAG is $\left(e,\tilde{\Omega}\left(n^3/e^3\right)\right)$-depth robust. By contrast, analysis of [Alwen et al., EUROCRYPT 2017] only established that Argon2i was $\left(e,\tilde{\Omega}\left(n^2/e^2\right)\right)$-depth robust. (4) The cumulative pebbling complexity of Argon2i is at least $\tilde{\Omega}\left( n^{1.75}\right)$. This improves on the previous best bound of $\Omega\left( n^{1.66}\right)$ [Alwen et al., EUROCRYPT 2017] and demonstrates that Argon2i has higher cumulative memory cost than competing proposals such as Catena or Balloon Hashing. We also show that Argon2i has high {\em fractional} depth-robustness which strongly suggests that data-dependent modes of Argon2 are resistant to space-time tradeoff attacks.
          Privacy-Preserving Distributed Linear Regression on High-Dimensional Data, by Adrià Gascón and Phillipp Schoppmann and Borja Balle and Mariana Raykova and Jack Doerner and Samee Zahur and David Evans   
We propose privacy-preserving protocols for computing linear regression models, in the setting where the training dataset is vertically distributed among several parties. Our main contribution is a hybrid multi-party computation protocol that combines Yao's garbled circuits with tailored protocols for computing inner products. Like many machine learning tasks, building a linear regression model involves solving a system of linear equations. We conduct a comprehensive evaluation and comparison of different techniques for securely performing this task, including a new Conjugate Gradient Descent (CGD) algorithm. This algorithm is suitable for secure computation because it uses an efficient fixed-point representation of real numbers while maintaining accuracy and convergence rates comparable to what can be obtained with a classical solution using floating point numbers. Our technique improves on Nikolaenko et al.'s method for privacy-preserving ridge regression (S&P 2013), and can be used as a building block in other analyses. We implement a complete system and demonstrate that our approach is highly scalable, solving data analysis problems with one million records and one hundred features in less than one hour of total running time.
          Fap Turbo Robot-Binaries Forex Brokers Compare   
This FAP turbo review 2009 will look into the FAP turbo expert advisor. This newsletter will determine if the claimed expert aide robot is earning money or not. This EA is a Metatrader four foreign-exchange trader machine. You set it onto 15-minute charts and just leave it to do its stuff. This EA has basically been tested on live markets. See more about best forex trading robots compared below. The testing started on January 5, 2009 with a start up capital of £500. Quite latterly, the author of FAP turbo review 2009 experienced a massive loss. It all happened on the 19th of January of the current year, just a few days after the beginning of live trading. The writer was using the EURGBP currency pair. It did not happen to just the writer but also to a number or folk, in particular traders of the same currency pair. See more about best forex trading robots compared below.



Best Forex Robots: Compare Top 10 Forex Robots And See Live Trades Online! As Seen on CNN, CNBC, FORBES and Money Networks. See which Forex Robot is the most profitable Forex Trading Robot Online, Real Time! Best Forex Robot: $25,000 BONUS From Recent Live Forex Course Held in Vegas. Latest Enhanced Version of Forex Robots Used by Top Forex Traders Internationally. Find out which Forex Robot is Being Used By Your Forex Broker!

The loss was quite huge since 2 weeks' worth of profit all went into smoke. What happened? Is the EA any good at all? How did such a loss happen? These were the questions raised because of this debacle. This FAP turbo review 2009 closely investigated the entire scenario and was able to draw a favorable conclusion. This robot still can earn money. You may be wondering why FAP turbo review 2009 still gave this expert counsel a good review after a very bad loss. In reality, this robot was performing really well before the draw down. The reason behind Jan 19th's draw down was the incontrovertible fact that the banking world of the United Kingdom crashed at that point. As a result, the EA became volatile too. See more about best forex trading robots compared below. Why did this happen? Are not these EA's built with risk avoidance systems? The answers to these questions are quite apparent. The EA itself has some flaws, just like any application. In reality, nothing is ideal. You will still have to glance at the markets and check for any signs of volatility. If there are signs of this sort of situation, you simply need to turn off the system for that day and skip trading. This is not so bad. These types of scenario only occur often. After the draw down, the writer was able to recuperate his losses and begin with the same quantity just before the draw down. Although it went through some bad trading day, it was able to hold it's own.

FAP Turbo is meant to mechanically investigate trading info. It gives a real-time trading results from one or two accounts and the trader can get updates every fifteen mins.See more about best forex trading robots compared below. By using this software a trader isn't required to have a huge amount of startup capital to proceed. Its user-friendly interface allows user without technical knowledge to use them without so much of a bother. FAP Turbo could work full time all week without the trader's intervention and is legendary for it's almost 95% positive turnout in its nine years since it was first conceived and has only less than 0.45% negative results. See more about best forex trading robots compared below. It also employs a particularly distinctive algorithm method that allows it to prevent losses and optimize its returns. And because it is also equipped with a strict risk management program, FAP Turbo reduces deficits more successfully. See more about best forex trading robots compared below.First, this is so easy to download and it would not take the majority of your time installing it. It also has a video tutorial that will give you step by step instructions on how to properly install and operate FAP Turbo.

There are pro consultants to watch the trading and will start orders if you need it. You may use this manual as a reference to lead you on the way if online help is not readily available. Free updates for this programme are also offered. It also provides lifetime customer membership on their internet site. From here, you can get and download mandatory updates or program revision for your software. See more about best forex trading robots compared below. Like any ventures, cash trading comes with lots of risks to take and avoid so it's way better to first try the demo program available till you master the system and become used to its interface. After you refined your abilities using this automated trader, you can let FAP Turbo do the trading and analyzing while you relax and enjoy the fruits of your investments.
          Best Forex Robot Review-Forex Robot Free   
This FAP turbo review 2009 will look into the FAP turbo expert advisor. This article is going to determine if the claimed expert counsel robot is earning or not. This EA is a Metatrader 4 foreign-exchange trader machine. You set it onto 15-minute charts and just leave it to do its stuff. This EA has basically been tested on live markets. See more about best forex trading robots compared below. The testing started on January five, 2009 with a start-up capital of £500. Quite recently, the writer of FAP turbo review 2009 experienced a big loss. It all happened on the 19th of January of this year, just one or two days after the beginning of live trading. The writer was using the EURGBP currency pair. See more about best forex trading robots compared below.



Best Forex Robots: Compare Top 10 Forex Robots And See Live Trades Online! As Seen on CNN, CNBC, FORBES and Money Networks. See which Forex Robot is the most profitable Forex Trading Robot Online, Real Time! Best Forex Robot: $25,000 BONUS From Recent Live Forex Course Held in Vegas. Latest Enhanced Version of Forex Robots Used by Top Forex Traders Internationally. Find out which Forex Robot is Being Used By Your Forex Broker!

The loss was quite enormous since two weeks' worth of profit all went into smoke. This FAP turbo review 2009 closely investigated the entire eventuality and was able to draw a favorable conclusion. This robot still can earn money. You could be thinking about why FAP turbo review 2009 still gave this expert advisor a favorable review after a very bad loss. In truth, this robot was performing really well before the draw down. The cause of January 19th's draw down was the undeniable fact that the banking sector of the UK crashed at that time. See more about best forex trading robots compared below. As a result, the EA became volatile as well . Why did this happen? Aren't these EA's built with risk avoidance systems? The solutions to these questions are quite plain. The EA itself has some issues, just like any application. In truth, nothing is ideal. The lesson the writer was ready to acquire was that you cannot just blindly turn on the EA and leave it as it is. You'll still have to look at the markets and check for any signs of volatility. If there are signs of this sort of situation, you only need to switch off the system for that day and skip trading. This is not so bad. These categories of eventuality only occur sometimes. The bottom line is this EA will still make you a lot of cash. After the draw down, the writer was able to recuperate his losses and start with an identical quantity just before the draw down. See more about best forex trading robots compared below. Although it went through some bad trading day, it was ready to hold its own.

By using this programme a trader is not needed to have a big quantity of starting capital to proceed. It has the lowest starting capital which is about $50. Its user-friendly interface allows user without technical data to use them without so much of a bother. See more about best forex trading robots compared below. It also employs a very distinctive algorithm method that permits it to prevent losses and optimize its returns. See more about best forex trading robots compared below.Setting this software up isn't a problem. See more about best forex trading robots compared below.First, this is so easy to download and it wouldn't take the majority of your time installing it. It also has a video tutorial which will give you step by step instructions on how to correctly install and operate FAP Turbo. Once installed, this automated trader is all set to do the trading for you with accurate results and trustworthy information.

There are professional experts to see the trading and will start orders if you want it. See more about best forex trading robots compared below. If and ever a user encounters some troubleshooting Problems with the software, FAP Turbo claims to a have a ready customer support system that will handle clients' questions, aside from the manual included in the package to help users install the system. See more about best forex trading robots compared below. Free updates for this programme are also offered. It also provides lifetime client membership on their online site. See more about best forex trading robots compared below. Like any ventures, money trading incorporates plenty of risks to take and avoid so it's way better to first try the demo program available until you master the system and become used to its interface. After you refined your abilities using this automated trader, you can let FAP Turbo do the trading and researching while you sit back and luxuriate in the fruits of your investments.
          (USA-TN-Chattanooga) Data Science Analyst-Enterprise Modeling & Governance Support   
**Description:** This position will focus on the use of information and analytics to improve health and optimize customer and business processes. It also requires the ability to pair technical and analytical skills to perform day to day job duties. May also assist in the design and implementation of systems and use programming skills to analyze and report insights that drive action. Assist in the research, validation and development of predictive models and identification algorithms. **Responsibilities:** • Identify, extract, manipulate, analyze and summarize data to deliver insights to business stakeholders. • Source data can consist of medical and pharmacy claims, program activity and participation data, as well as demographic, census, biometric, marketing and health risk assessment data. • Perform model governance duties such as maintaining a library of predictive models, monitoring the accuracy and performance of these models, and other required model governance activities. **Qualifications:** • **Bachelor’s degree in Math, Statistics or Public Health, or professional analytical experience. Qualifying backgrounds include: epidemiologists, quantitative MBAs, quantitative sociologists, data miners, behavioral economists, qualitative researchers, economists, statisticians, or biostatisticians.** • Strong analytical, communication and technical skills • Problem solving and critical thinking skills • Experience extracting and manipulating large data sets (i.e., a minimum of 1 million records) across multiple data platforms. • Familiarity with healthcare claims data • At least 1 year of coding experience in SAS and/or SQL • Familiarity with Hadoop and Teradata coding highly desired. **US Candidates Only**: Qualified applicants will be considered for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, disability, or veteran status. If you require a special accommodation, please visit our Careers website or contact us at SeeYourself@cigna.com. **Primary Location:** Bloomfield-Connecticut **Other Locations:** United States-North Carolina-Raleigh, United States-Colorado-Greenwood Village, United States-Tennessee-Chattanooga, United States-Pennsylvania-Philadelphia **Work Locations:** 900 Cottage Grove Road Wilde Bloomfield 06152 **Job:** Bus Ops--Operations Mgmt (Bus) **Schedule:** Regular **Shift:** Standard **Employee Status:** Individual Contributor **Job Type:** Full-time **Job Level:** Day Job **Travel:** Yes, 25% of the Time **Job Posting:** Jun 29, 2017, 10:35:24 AM
          Microsoft Xbox One Console 500GB   
Microsoft XBOX One W / 500 GB Xbox One brings together the best exclusive games, the most advanced multiplayer, and entertainment experiences you won’t find anywhere else. Play games like Titanfall™ and Halo together with your friends on a network powered by over 300,000 servers for maximum performance. Find new challengers who fit your skill and style with Smart Match, which uses intelligent algorithms to bring the right players together. Turn your best game moments into... $199
          Shadchen-el introduction and defpattern tutorial   

Lately I've been working on an Emacs Lisp library I'm pretty proud of, Shadchen. It implements extensible pattern matching for Emacs Lisp, somewhat like Racket's match facility. In this tutorial/introduction I'll explain how to use Shadchen's various facilities and, most importantly, how to extend Shadchen itself with new patterns, which is an interesting subject in and of itself, combining compile-time and run-time execution in novel ways. If you already know about pattern matching, feel free to jump to the end, where I talk about writing non-trivial patterns using Shadchen's defpattern.

The Problem Shadchen Solves

Shadchen solves several problems you may not know you have. In one sentence, Shadchen lets you concisely express both destructuring and type checking for complex data structures. It can be thought of as combining the features of cond, case, and assert into one nice package. My experience is that this collection of features helps me write better code, since it encourages me to delineate exactly the kind of data a function or form expects before doing anything with it.

Shadchen is also conservative - unless you provide a pattern that matches the input data, it will fail with a match error, so you know something is wrong before something strange happens.

For instance, suppose we were writing an interpreter for Lisp. It might look like:

(defun eval (form env)
 (cond
  ((symbolp form) (eval-symbol form env))
  ((listp form)
   (case (car form)
    (if (handle-if (cadr form)
                   (caddr form)
                   (cadddr form) env))
    (let (handle-let
          (cadr form)
          (cddr form)
          env))
    ...))))

Note that we have both a cond and a case here, and that after we test our data, we destructure it. Here is a similar piece of code using Shadchen:

(defun eval (form env)
 (match form 
  ((p #'symbolp s) (eval-symbol s env))
  ((list) nil)
  ((list 'if pred true-branch false-branch)
   (handle-if pred true-branch false-branch env))
  ((list-rest 'let (list-rest pairs) body)
   (handle-let pairs body env))
  ...))

(Notes: the pattern p passes when the predicate given as its second argument is true of the match value, and the value is then matched against the third argument. So (p #'symbolp s) matches only when form is a symbol, and then binds s to that symbol. list matches when the input is a list and each pattern in the list pattern matches the corresponding element in the list. list-rest is similar, but any leftover part of the list is matched against the final pattern.)

This code is more concise, and yet it is also much more explicit, both in that it provides better names for values and in that it provides more explicit error checking. For instance, this version will only match an if with three expressions, whereas the previous evaluator would have been fine with the expression (if a b c d e f). This version also asserts explicitly that the binding part of the let needs to be a list. With a custom pattern we could also ensure it was a list of symbol/expression pairs in almost the same space.

It takes all kinds, but I found that once I got used to programming with pattern matching, it was hard to go back.

Other Rad Features of Shadchen

match-let

Shadchen wants to let you program in a functional style. To that end, in addition to the regular pattern matching form match, it also provides some other, nice features. Many algorithms involve examining some intermediate data, checking its structure somehow, and then recursively processing the next step. Shadchen allows this kind of thing with the Scheme-flavored match-let form.

The form match-let can be used exactly like let:

(match-let 
  ((x 10)
   (y 11))
 (+ x y))

But in each "binding" pair, the symbol may be replaced with any Shadchen pattern. Eg:

(match-let 
 ((x 10)
  (y 11)
  ((list q r s) (list 1 2 3)))
 (+ x y q r s))

Will give you 27. If any pattern fails, the form produces a match fail error, which means you can use match-let as a let form with tidy type checking.

Finally, a match-let form allows tail recursion. Invoking recur in a tail position inside the form causes the match-let to be re-entered without growing the stack. For instance:

(match-let 
 (((list x y) (list 0 0)))
  (if (< (+ x y) 10000)
      (recur (list (+ x 1) (+ x y)))
      (list x y)))

Results in (141 9870) and can't blow the stack. It is an error to invoke recur in a non-tail position, but because of limitations in Emacs Lisp, it is difficult to enforce this statically.

defun-match

The form defun-match lets you write functions which pattern match on their arguments and split their calculations across multiple bodies, a bit in the style of Shen or Haskell.

For instance, suppose we have an animal simulator, where each animal is represented by a list, the first element of which is a symbol representing the animal name. We can say:

(defun-match- vocalize ((list-rest 'cat properties))
  "Cat vocalization."
  (message "Meow"))

(defun-match vocalize ((list-rest 'dog properties))
  "Dog vocalization."
  (message "Woof"))

Then:

(vocalize '(cat :name tess))
(vocalize '(dog :name bowzer))

Functions defined with defun-match can also use recur to re-enter themselves without growing the stack. Consider a function which causes a list of animals to vocalize:

(defun-match- vocalize-list (nil) nil)
(defun-match vocalize-list ((cons animal animals))
  (vocalize animal)
  (recur animals))

recur can dispatch to any of the bodies defined for the function and it doesn't grow the stack. It must be invoked from tail position, though non-tail calls can be made by simply calling the function in the ordinary way.

(N.B. defun-match- with that dangling minus sign causes previous bodies to be expunged before defining the indicated body.)

Extending Shadchen with defpattern

Shadchen is an extensible pattern matching facility. We can define new patterns much in the way we define new functions, although patterns are more like macros than functions. Let's look at a simple example, and then I'll guide you through a more complex example I just added to the library using defpattern.

A quirk of Common and Emacs Lisp is that (car nil) is nil even though nil is not a cons cell, and so does not have a car or a cdr. I hate this behavior, because it's quite evident that (cons nil some-list) is different from nil, but car can't tell that - the user has to do more inspection to find this out. Bugs waiting to happen, let me tell you.

However, I'm nothing if not accommodating, and so the cons pattern in Shadchen will, in fact, match against nil. So:

(match nil 
 ((cons a b) (list a b)))

Will be '(nil nil). Let's define a pattern which is like cons, but only matches against actual cons cells, into which category nil fails to fall.

(defpattern strict-cons (car cdr)
 `(p #'consp (cons ,car ,cdr)))

A defpattern body must evaluate to a legal Shadchen pattern. Each argument to the defpattern is also a Shadchen pattern. So this pattern reads: "define a new pattern strict-cons, which first checks that the match value is a cons cell using the p pattern, and then matches the car and cdr of that cons cell against the patterns car and cdr."
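For instance, assuming the definition above has been evaluated, the new pattern refuses nil but destructures a genuine cons cell as you would expect:

;; nil no longer matches - (match nil ((strict-cons a b) a)) signals a
;; match error instead of quietly binding a and b to nil.
;; A real cons cell still destructures normally:
(match '(1 2 3)
 ((strict-cons head tail) (list head tail)))   ;; => (1 (2 3))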

During the expansion of a shadchen pattern matching form, user defined patterns are looked up and their expansions are inserted into the macro expansion. In short, defpattern allows you to define new patterns in terms of old patterns.

This might seem very restrictive, but Shadchen provides primitive patterns that allow you to write arbitrarily complex pattern matchers that can perform rich computations on their way to rejecting or accepting a match.

Implementing concat, a non-trivial pattern

I just used defpattern to implement a pretty complex pattern, concat, and it was something of a learning experience. Writing complex patterns definitely takes some thought and practice, but hopefully this tutorial will bootstrap users to a point where their own patterns can be implemented without too much pain.

What is so complicated about a concat pattern? Well, we want concat to match the concatenation of patterns which match strings. Eg:

(concat "dog" "cat")

Should match "dogcat". Writing a pattern that has this behavior is easy:

(defpattern concat (&rest strings)
 (reduce #'concat strings))

This pattern can't match subpatterns that are anything other than strings, however. We'd really like to be able to match, for instance:

(concat (and (or "dog" "cat") which) "dog")

against either "dogdog" or "catdog", binding which to whatever the initial string contents actually are. How can we do this?

Nailing down concat's semantics.

We want concat to function this way:

If the initial pattern is not a string, then try matching that pattern against larger and larger substrings until either you run out of string to match against, or you match. If you match, then match, again using concat with the unused patterns, against whatever is left of the string after you've removed the part that matched. Repeat until all patterns are exhausted and then make sure the string has been completely consumed too.

If the initial pattern is a string, then just cleave off the same length of characters from the input, and if they match, recursively match the rest. Here is the entry point:

(defpattern concat (&rest patterns)
  (cond 
   ((length=0 patterns)
    "")
   ((length=1 patterns)
    `(? #'stringp ,(car patterns)))
   (:otherwise
    (cond 
     ((stringp (car patterns))
      `(simple-concat ,@patterns))
     (:otherwise 
      `(full-concat 0 ,@patterns))))))

The :otherwise has all the meat, but we defer it to two other helper patterns: simple-concat and full-concat. simple-concat looks like this:

(defpattern simple-concat (&rest patterns)
  (cond 
   ((length=0 patterns)
    "")
   ((length=1 patterns)
    `(? #'stringp ,(car patterns)))
   (:otherwise
    (let* ((the-string (car patterns))
           (static-len (length the-string)))
      `(and 
        (p #'stringp)
        (p (lambda (s)
             (>= (length s) ,static-len)))
        (p 
         (lambda (s)
           (string= (substring s 0 ,static-len) ,the-string)))
        (funcall (lambda (s)
                   (substring s ,static-len))
                 (concat ,@(cdr patterns))))))))

Look at the backquoted expression. It is an and pattern, which only succeeds if all the patterns beneath it also succeed. These patterns are (p #'stringp), which asserts that the input is a string, and (p (lambda (s) (>= (length s) ,static-len))), which asserts that the input is at least long enough to contain the string we want to match against. The next form asserts that the prefix of the input with the same length as the pattern string is equal to that string. If this is true, then the pattern matches, and we use the funcall pattern to match against the rest of the string with the leftover patterns.

The funcall pattern takes the input to the match, applies a function to it, and then matches the output of that function application to the pattern provided as its third slot.
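On its own, the funcall pattern looks like this (a tiny sketch, unrelated to concat):

;; The input 10 is transformed by 1+ and the result, 11, is then
;; matched against (and bound to) the symbol pattern eleven:
(match 10
 ((funcall #'1+ eleven) eleven))   ;; => 11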

full-concat is more complex. Note that when we invoke full-concat we provide it a numerical first argument. This number tells the pattern how far into the string-to-match we've looked, and it starts at zero - after all, the first pattern could match the empty string. full-concat looks like this2:

(defpattern full-concat (pivot &rest patterns)
  (assert (numberp pivot)
          ()
          "Pivot should be a number.")
  (cond 
   ((length=0 patterns)
    "")
   ((length=1 patterns)
    `(? #'stringp ,(car patterns)))
   (:otherwise
    `(and 
      (p (lambda (s)
           (>= (length s) ,pivot)))
      (or 
       (and (funcall
             (lambda (s)
               (substring s 0 ,pivot))
             ,(car patterns))
            (funcall 
             (lambda (s)
               (substring s ,pivot))
             (concat ,@(cdr patterns))))
       (full-concat ,(+ pivot 1) ,@patterns))))))

Here we use and again. We first check that the input string is long enough to grab the substring indicated by pivot. If this isn't true, the match fails. We then use the or pattern to indicate a branch. Either of the or patterns might succeed, but the first to do so is the only one that will happen. The first pattern to or uses funcall to peel off the substring of the input from 0 to pivot. If the initial pattern matches, then we use funcall again to get the rest of the string, and invoke concat again.

If this match fails, then we invoke full-concat again, but increment the pivot by one, indicating that we want to check against a larger substring.

If this is confusing, and it is understandable if it is, remember the following: when writing defpatterns, or is used for flow control, and is used to assert multiple things about the input, p is used to assert individual arbitrary conditions on the input, and funcall is used to transform the input for further matching. Recursive pattern expansion is used for iteration.1
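As a closing sketch (not something from the library itself), here is how those pieces combine into a pattern that accepts only strings of digits and hands you their numeric value:

;; and asserts several things, p checks arbitrary conditions, and
;; funcall transforms the input before binding it:
(defpattern number-string (n)
 `(and (p #'stringp)
       (p (lambda (s) (string-match-p "\\`[0-9]+\\'" s)))
       (funcall #'string-to-number ,n)))

(match "42" ((number-string n) (+ n 1)))   ;; => 43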

And feel free to contact me with questions, if they come up.


1 It is a lot like writing Prolog, actually. Pattern matching takes you a significant part of the distance from Lisp to Prolog.

2 After writing this I realized we can do better. If we get a match for the initial pattern, and then check the rest of the patterns, it's possible they will fail because the initial match didn't consume enough of the string. It is simple to say, "if the subsequent match fails, keep increasing the pivot and trying again." I leave it as an exercise to the reader to figure out how to represent this trivial backtracking-ish thing - but you can always check the source for the solution.


          An (almost) Pure Random Demon Name Generator in Racket   

Hey: someone implemented a similar algorithm in one of my favorite languages: Factor! See it here.

Hey. Check out some of these fictional demon names:

("Barzan" "Melkot" "Bandi" "Fek'Ihri" "Krenim")

They are from this list of fictional demon names on the wikipedia.

You see, I've been listening to Songs of the Dying Earth, a short story collection, written by various authors, but set in Jack Vance's Dying Earth setting, which is a far future science fiction/fantasy world where mages insult each other politely and make mischief.

Back!

Another thing they do is enslave demons from lower or higher realms. This gave me the idea to eventually write a game where you play as one such enslaved demon, and I thought it would be cute to generate a fresh demon name for each play through. So I wrote such a system, in Racket, and here is a guided tour.

This isn't a very advanced technique, but all the code here is side effect free - so if that floats your boat, or if you are just curious how one programs in such a manner, then read on!

Overview

Our demon name generator will be a modified, ad-hoc Markov Model generator. What does that mean? It is easier to explain than implement, actually. We will take a corpus of demon names (the list from wikipedia) and do some statistics on it. In particular, we'll calculate how often a given letter follows another letter in a name, and then we can generate names based on that table of transitions. The only wrinkle is that the first letter in a name obviously won't have a letter before it to bias the choice, and that any letter might lead to the end of the name.

We'll add an additional wrinkle that improves name generation: we'll maintain a separate transition table for each position in the name. That is, the probability of going from "a" to "b" might be different if "a" is the third character in the name versus the seventh. This lets our model capture the fact that certain letters and transitions are more common near the front of a name than near the end, for instance.

Populating our Transition Table

Our corpus looks like this:

(define demon-names 
  (list "Abraxas"
        "Abbadon"
        "Agrith-Naar"
        "Aku"
        "Alastair"
        ...))

We need to visit each name in the corpus and scan through it, recording, for a given index, each time the letter at that index follows the letter before it. We want to maintain a side-effect free discipline, so we will use Racket's purely functional, persistent dictionaries to store the transitions. We will often be incrementing the value at a key in such a dictionary by 1, so let's get that code out of the way now:

(define (dict-update d key fun . args)
  (match args
    ((list)
     (let ((val (dict-ref d key)))
       (dict-set d key (fun val))))
    ((list or-value)
     (let ((val (dict-ref d key (lambda () or-value))))
       (dict-set d key (fun val))))))

(define (at-plus d k)
  (dict-update d k (lambda (x) (+ x 1)) 0))

dict-update takes a dictionary, a key and a function, fetches the value currently at that key, calls the function on it, and sets the result in the dictionary, returning the new dictionary this produces. If the key isn't there, the first value in the args list is passed to the function instead, allowing us to specify a default value.

at-plus uses dict-update to increment the value stored at a key by one, setting it to one if no such value is present.
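For example, starting from an empty immutable hash:

;; The first increment installs the key with a count of 1, the second
;; bumps it to 2; the original hash is never mutated.
(define counts (at-plus (make-immutable-hash '()) 'a))  ; counts is '#hash((a . 1))
(at-plus counts 'a)                                      ; => '#hash((a . 2))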

Our dictionary is going to associate transitions with counts. How shall we represent a transition? A transition is a triple, in our case, consisting of the index the transition covers, the previous character, and the current character. We will use a Racket struct to represent a transition as a triple:

(struct triple (index from to) #:transparent)

We will use the symbol 'a to represent the character "a" and so on. For the first and last transitions, we will use the special values 'start and 'end. The first transition always goes from 'start to a letter, and any subsequent transition can arrive at 'end, in which case the name is over.

The function which populates the transition table looks like this, then:

(define (populate-transition-table names)
  (let loop 
      ((names names)
       (table (make-immutable-hash '())))
    (match names
      ((list) table)
      ((cons name names)
       (loop names
             (foldl 
              (lambda (triple table)
                (at-plus table triple))
              table
              (string->triples name)))))))

It takes a list of names and iterates over them, accumulating the table as we go. make-immutable-hash returns an empty immutable hash table. The heavy work is done by a call to foldl, which accumulates over the result of calling string->triples on name.

string->triples converts the current name to a list of triples.

(define (string->triples s)
  (let loop
      ((i 0)
       (l (append '(start) 
                  (string->list s)
                  '(end)))
       (triples '()))
    (match l
      ((list 'end) (reverse triples))
      ((cons from 
             (and rest
                  (cons to _)))
       (loop 
        (+ i 1)
        rest
        (cons (triple i
                      (char->symbol* from) 
                      (char->symbol* to))
              triples))))))

We initially convert our string to a list, prepend 'start and suffix 'end, and then iterate over the list, looking for 'end to terminate. At each iteration, we grab two values from the list, create a triple, and then recur on the current list minus just one element. Hence the (and rest (cons to _)) pattern match.
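(One helper, char->symbol*, isn't shown in this post; presumably it just turns characters into one-character symbols and passes the 'start and 'end sentinels through untouched. A minimal sketch:)

(define (char->symbol* c)
  ;; 'start and 'end are already symbols; characters become
  ;; one-character symbols, e.g. #\A => 'A.
  (if (symbol? c)
      c
      (string->symbol (string c))))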

Generating Novel Names

Generating a new demon name is easy, now that we have the table. We simply start with an empty name, and consult the table for which character to generate next until we encounter an end.

Most of the work is done by the function next-character, which takes a table, the previous character as a symbol (or 'start), and the index of the character to be generated:

(define (next-character table prev-character index . args)
  (match args
    ((list) (next-character table prev-character
                            index
                            (current-pseudo-random-generator)))
    ((list generator)
     (let* ((sub-table 
             (restrict-table table index prev-character))
            (total-elements (foldl + 0 (dict-values sub-table)))
            (draw (random total-elements generator)))
       (let loop 
           ((draw draw)
            (key/val (dict->list sub-table)))
         (match key/val
           ((cons (cons 
                   (triple _ from to)
                   count) rest)
            (if (or (empty? rest)
                    (<= draw 0))
                to
                (loop (- draw count)
                      rest)))))))))

This function restricts the table to the triples which match the index and previous letter, calculates the total number of possible transitions, generates a random number in that range, and iterates through the possible transitions until that number is zero or less, subtracting away the count of each possible transition as it does so. It takes an optional random state so that it can be used purely functionally, if we so desire. By default, it consults the current random state, which isn't completely pure - oh well!

This function returns the symbol generated.
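(next-character also calls restrict-table, which isn't shown here. Given the representation above, it presumably just filters the table down to the entries whose triple has the matching index and previous character; a rough sketch:)

(define (restrict-table table index prev)
  ;; Keep only the entries for this position and previous character,
  ;; carrying their counts over into a fresh immutable hash.
  (for/fold ((acc (make-immutable-hash '())))
            (((key count) (in-dict table)))
    (if (and (= (triple-index key) index)
             (eq? (triple-from key) prev))
        (dict-set acc key count)
        acc)))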

generate-demon-name does the rest of the work:

(define (generate-demon-name table . args)
  (match args
    ((list) (generate-demon-name table (current-pseudo-random-generator)))
    ((list gen)
     (let loop ((ix 0)
                (name-list '(start)))
       (let ((next (next-character table (car name-list) ix gen)))
         (if (eq? next 'end)
             (symbol-list->string (cdr (reverse name-list)))
             (loop 
              (+ ix 1)
              (cons next name-list))))))))

This takes a table and an optional random state and calls next-character until an 'end is found. When it is, the 'start element is stripped off and the list of symbols is converted into a string, which is returned to the user.

It is invoked like this:

(generate-demon-name (populate-transition-table demon-names)) ;->
"Quinag"

Another function, generate-demon-names allows many to be generated at once:

(generate-demon-names (populate-transition-table demon-names) 10)
'("Azaigair"
  "Qwalboy"
  "Tecasex"
  "Mabuak"
  "Cinofego"
  "Abby"
  "Nurdar"
  "Zarigak"
  "Yahaxae"
  "Yk'legod")

Voila!

Conclusions

I think the result is pretty nice, and if you precache the table population step, it is also a pretty zippy algorithm. The code is on my github, comments are welcome.

For the intrepid, try feeding in other seed data. Included in the library is a list of alien races from Star Trek and a list of all the names in the Book of Mormon. Have fun!



             
Brent Simmons: "Dave Winer took a chance on me many years ago, and it was great for me. I sometimes call myself a graduate of UserLand University." I had an algorithm. I started a project and asked for volunteers. Then I hired the smartest guy who was easy to work with, Brent.
          Facebook Is Changing Its News Feed Algorithm Again - Fortune   

Facebook said on Friday it was changing the computer algorithm behind its News Feed to limit the reach of people known to frequently blast out links to clickbait stories, sensationalist websites and misinformation. The move is another step by the world ...
Facebook found a new way to identify spam and false news articles in your News Feed - Recode
Facebook News Feed change demotes sketchy links overshared by spammers - TechCrunch
Facebook aims to filter more fake news from news feeds - USA TODAY

          ALGORITHM   
LED blown glass pendant lamp
          Facebook alters news feed to crack down on spam links   
Facebook announced on Friday a new adjustment to its news feed algorithm that the company says will help crack down on spam links. Adam Mosseri, a Facebook vice president, wrote in a ...
          News Coverage   

The EU-Africa Ministerial Conference on Migration and Development took place last week, November 22-23. I check Google News headlines daily because, for whatever reason, I think (or rather used to think) that it was relatively un-biased and represented the "most important," "need to know" stories, but over the past week I noticed nothing on this conference, which seems like a pretty big deal. Rather, I learned of the conference by way of allAfrica, where it made the front page on November 24th, right under the stories about Rwanda's decision to cut ties with France (more recent story). I wondered, what's up with Google? Have they become just like every other cotton candy news source? So, I Googled "Google News biased" (ironic? maybe), and found this from the USC (that's the University of Southern California, not South Carolina; it has been argued that the University of South Carolina is the original USC because it was established 75 years earlier than the University of Southern California, however I argue that the University of Southern California is the real USC because it acquired the web domain before South Carolina) Annenberg Center for Communication Online Journalism Review. Turns out that Google's algorithms are causing the bias. In trying to be un-biased by using algorithms, Google News is actually perpetuating a bias in news. Hmmm... I guess I'll have to look somewhere else for my news, maybe allAfrica. But wait, what's this? allAfrica is biased? Yup, from my Google search of "Google News biased," I found this, "It's not all Africa @ allAfrica.com." Can good news be found anywhere?


Last night I happened across a television channel, LinkTV, while scanning the tube for something to dull my mind for a bit, to take a break from staring at my computer screen and writing. Scrolling through the channels, I stopped at what looked like a music video, a group of black men singing into the camera. I stopped because rather than being set in the streets of New York, LA, St. Louis, Atlanta, etc, the setting looked like a West African village. I was confused. My first thought was that this was some sort of statement being made by a US hip hop/rap group, deliberately choosing, or creating, an "African" setting, but then I realized the men were not singing in English, I couldn't even recognize the language in which they were singing. By now I had figured out that this was a music video coming out of, that is produced in, Africa, but I was still confused because I couldn't figure out what place it had on the television set in my boyfriend's apartment in Irvine (Orange County), California; he doesn't subscribe to any special networks or packages, just the basic cable that all graduate student residents get with their rent. By this point my boyfriend was also glued to the tv, we kept asking each other, "What is this?" not because we didn't know what it was, but because we didn't know what it was doing in our living room without our solicitation of it. It turned out to be a video from Senegal. We watched a few others, another from Senegal, and one (or two?) from Mali, before a bumper popped up declaring the channel we were watching as, "LinkTV: Television Without Borders." We both expected a commercial and got up to leave the room, but no commercial came, instead an announcement, next up was a program that profiled Chinese restaurants across the world, this installment would be on Turkey (the country), we sat back down and watched the whole thing all the way through, uninterrupted, it was wonderful.


The bumper popped up again, we opened our laptops and googled "LinkTV." There we found out that LinkTV is a channel available via satellite (Wikipedia claims that it reaches 1 in 4 homes in the US), never runs commercials, and is funded by individual and organizational donations. We also found and watched MOSAIC: World News from the Middle East. The first segment was on the recent increase of violence in Iraq. The tone was similar to that of BBC or CNN, and the segments began as would any FOX or CBS local or national news program, reporting the statistics, how many dead, where, who, by whom, but the video clips were much more extended, and showed more violence, more suffering, more women and children, fewer men with guns, and something I have never seen on the major network or cable news shows, refugees.


Back to the EU-Africa Conference and allAfrica . . . after reading the articles listed under the Conference headline on allAfrica, the only one I found to be intriguing was this one, "Senegal: 'Mankind is Like This - One Wants to Get Ahead'", which wasn't even directly related to the conference. I liked this article because it addressed migration on a personal level, telling the story of a man trying to migrate from Senegal to the Canary Islands. The story and the man addressed migration as a cultural practice, the other articles took the same, tired stance on Africa, migration, aid and development, even the conference notes did not mention the cultural significance of migration, rather contextualizing it in terms of aid and development from Europe and the US.


Like this blog post, African migration is a process, an activity (as you can probably tell, I really like this concept of activity), that should be analyzed by standing back and looking at the whole picture, while taking the time to zoom into particular practices. This approach produces the potential to recognize grand patterns and contextualize them appropriately, and conversely to recognize specific cases and patterns and contextualize them appropriately. Until then, such conferences and programs on migration, development, aid, etc will continue to target the symptoms, and then only to eradicate or alleviate them rather than accommodate them, not the causes of such "problems."


          Hello, Inbox Zero! Gmail’s New AI Feature Could Save You 13 Hours a Week   

Google wants you to let their algorithm read your emails for you, so you can get back to your "real" job.

The post Hello, Inbox Zero! Gmail’s New AI Feature Could Save You 13 Hours a Week appeared first on Social Media Week.


          Company With Ties To UFO Cover-up Gets Huge Antarctica Contract - Why All The Defense Contractors And Mercenaries?    
http://www.stillnessinthestorm.com


(Stefan Stanford) According to this interesting new story from the Herald Review, studies that were completed upon sedimentary rock in Antarctica have given us undeniable and conclusive evidence that palm trees once grew there in that land long covered by ice as also heard in the 1st video below. As the author of the Herald story asks, how is that possible when nothing other than primitive vegetation grows there today? As we hear in the 3rd and final video below featuring Clif High of the Web Bot project, the data that he mines every day tells him that something very, very strange is going on in Antarctica, and each day he's getting more and more indications that something huge is ramping up down there.
 Source - All News Pipeline

by Stefan Stanford, June 20th, 2017

Sharing with us evidence he's gotten through his internet word-monitoring project that, beyond many visits made by the elite to Antarctica, including Newt Gingrich himself in February of 2017 as seen in the Twitter screenshot below, he also sees a big ramping up of jobs in Antarctica including highly elite globalist corporations, (one with long ties to the UFO coverup!), and the numbers of military passes there indicate to him a major operation is being prepared for.


Beyond large tracts of land in Australia and New Zealand being dedicated to something new and still mysterious, he also tells us of the creation of new cargo routes from the areas closest to the land of ice, where it is now winter. High tells us he expects we may see something huge during the Antarctic spring, while telling us one of the biggest indicators of what might be happening there now is the huge number of high tech companies becoming involved. He also tells us that whatever it is, the American people may never hear anything about it with the companies now involved.


According to the Professional Overseas Contractors website, a new company called LEIDOS recently took over the massive Antarctic support contract formerly held by deep-state, military-industrial-complex tied Lockheed Martin. Their story interestingly mentions that nowhere on LEIDOS company history page did they mention their parent company, a mega-giant for the 'deep state' called SAIC, the Scientific Applications International Company.


And as High tells us in this video, SAIC has deep ties to the secrecy surrounding UFOs, history that can be traced by those willing to investigate it, including a slew of CEOs from the military-industrial complex, including retired US Navy Admiral and CIA Deputy Director Bobby Ray Inman, who's held several influential positions within the intelligence community, including time at SAIC, and who has long been believed to be one of the 'UFO Gatekeepers'.

Interestingly we also learn that the reason SAIC created LEIDOS was that they were unable to bid on certain government contracts as SAIC, but as LEIDOS they were able to bid upon them. Add in the fact that Lockheed Martin's Antarctica contract was supposed to run through 2025 but was 'conveniently' cut short, with LEIDOS taking over, and all kinds of questions arise that need to be answered with the starter: What is REALLY going on down there?


According to Steve Quayle's book "Empire Beneath The Ice", the truth about history has been hidden. In 'Empire Beneath' Quayle persuasively argues that most of what we have learned about World War II and the defeat of Nazi Germany is wrong, and that the truth is not only sinister but also at the root of some of the biggest secrets of our age.

Interestingly, Quayle's book aligns greatly with much that we're hearing from Clif High now via his Web Bot project and High tells us he believes that what's happening down there might somehow be UFO/alien related. He also claims that with most jobs in the Antarctic being seasonal and short term, there's a mathematical certainty that more and more information will be leaking out about what's really going on down there that he'll be able to data mine through his project.

Might Steve Quayle's book have been way ahead of the truth? These main points of his book are shared with us:
Why the suppressed evidence proves Adolf Hitler didn’t die before Germany surrendered during WWII, and how he eluded capture.

How Nazi SS members, scientists, and soldiers escaped with Hitler to create colonies in other parts of the world to continue their monstrous research.

Why in 1947 Admiral Richard E. Byrd warned that the US should adopt measures to protect against an invasion by hi-tech aircraft coming from the polar regions, adding, “The time has ended when we were able to take refuge in our isolation and rely on the certainty that the distances, the oceans, and the poles were a guarantee of safety.”

How, using advanced technology, Nazi saucers defeated the US military — long after WWII was supposedly over.

Why the US space program was mostly a sham, and why the “UFOs” that started appearing around the world in the late 1940s were (and still are) most likely flown by Nazi pilots.

How key government, manufacturing, pharmaceutical, financial leaders, and institutions helped Hitler come into power, and facilitated the preservation of Nazi wealth and power after WWII.

Why today’s world is secretly controlled by a malevolent shadow government and entire populations are being surreptitiously brainwashed.

How ancient stargates have been duplicated to open portals into spiritual and demonic universes.

Why those controlling our planet have laid the groundwork for a takeover by a dictator who could best be described as the Antichrist of the Bible. Empire Beneath the Ice carefully documents these and many more astounding facts, divulging the truth about what is happening today. It gives you the insights to help prevent this diabolical takeover or, if it occurs, reveals the details and essential actions you and your loved ones must take. Empire Beneath the Ice exposes the dangers our world faces, and will arm you with the tools you need to counter these unspeakable, secret evils.




While High makes sure to reaffirm that we still don't know exactly what's going on down there, he claims that based on the few clues we do have that things are definitely ramping up. The fact that LEIDOS/SAIC refocused a core science group on Antarctica, which did part of their past work in 'reverse engineering', tells him that, while we're now living in very exciting times, a major inflection point is ahead. As he continues, with the old system dying and a new one being born, there are great opportunities along with great risks.


In the 2nd video below our videographer talks with us about the massive military build-up going on down in the Antarctic region including many defense contractors and mercenaries. Also discussed in the eye-opening final video below featuring Clif High are Bitcoin and other digital currencies and the potential for financial unrest ahead. The conversation turns towards Antarctica at the 24 minute mark. For those new to the Web Bot project, Clif High's cutting edge technology is a set of algorithms used to process variations in the language that can offer insight into the mood of the collective unconscious through “predictive linguistics.”



found on Operation Disclosure
_________________________
Stillness in the Storm Editor's note: Did you find a spelling error or grammar mistake? Do you think this article needs a correction or update? Or do you just have some feedback? Send us an email at sitsshow@gmail.com with the error, headline and URL. Thank you for reading.

           Facebook changes algorithm to curb top spammers    
The computer algorithm behind News Feed will limit the reach of people known to frequently blast out links to clickbait stories, sensationalist websites and misinformation.
          The Future Of Health Care Could Be Humans, Robots — Or Both   

Minerva Studio / Getty Images

At the well-funded startup Omada Health, coaches teach patients to prevent diabetes by eating better and exercising. They don’t meet face-to-face, but communicate over the internet — and the coaches are increasingly aided by a machine-learning-powered software that provides cues for interacting with the patients.

Since the San Francisco company was founded in 2011, these coaches were a mix of full- and part-time staffers. But in November, Omada let go of all the part-timers and instructed the remaining coaches to rely more on the software, the company told BuzzFeed News.

CEO Sean Duffy insists that his long-term goal isn’t to replace people with software. “The thesis is that we don’t think we’ll ever be at a point in Omada’s trajectory where we’ll ever take people or coaches out of the equation,” he told BuzzFeed News. “But they’ve got really smart systems to help them.”

Like many other tech-enabled health care services that connect patients with experts — coaches, therapists, nurses, doctors — over video chat, email, and text, Omada is trying to navigate a fundamental shift in labor. People are expensive — at least compared to automated, data-driven chatbots that could give advice and diagnose diseases without needing a salary or a college degree. But bots aren’t nearly as good at holding conversations, perceiving emotions and subtext, and delivering sensitive information like, say, a cancer diagnosis. If they want to grow, startups will have to figure out whether their patients and businesses alike will be best served by man, machine, or some blend of the two.

“There’s a spectrum of totally autonomous machine learning and the other side is totally human-driven,” said Mike McCormick, principal at Comet Labs, a venture capital firm that invests in artificial intelligence startups. “And then there’s every shade of gray in the middle of that.”

The previously unreported cuts at Omada last fall were small, and affected 10 to 12 part-time coaches, according to a spokesperson. It has about 60 to 70 full-time coaches and 250 employees overall. In another set of layoffs that Duffy said were unrelated, the startup also laid off roughly 20 workers last week, saying it “had to focus on Omada’s core business and expertise, while orienting the company for long-term success.” Omada has raised $125 million in venture capital, including $50 million this month in a round led by the health insurer Cigna.

“There’s a spectrum of totally autonomous machine learning and the other side is totally human-driven.”

Duffy said that as Omada has treated more patients and collected more of their data, it’s trained an algorithm to detect important behavioral changes. For example, if a person weighed in on a digital scale every day consistently, then stopped weighing in for three days, the system would flag the coach. It’d then “suggest messages they might send to a participant that might result in an outcome” — in this case, to find out how a person is doing and why they’re skipping weigh-ins, Duffy said.

The CEO was quick to note that the machine isn’t prewriting messages down to the word, but rather suggesting a gist to convey. He said that users could tell when a nominally human-written message is computer-generated, and that this makes them lose trust in the system.

Coaches can also say no. “If we get enough coaches declining these suggestions and saying, ‘That violated my intuition as a human being,’ it trains the system to get better and better and give better and better suggestions,” he said.

The part-timers had access to this technology, but Duffy said that the company benefited more from having full-timers who are constantly involved and invested in improving it.

Omada isn’t the only company exploring how to use AI to improve health care. Startups like Babylon Health, HealthTap, and Remedy are developing chatbots to assess patients’ symptoms. Big Health has an entirely automated program called Sleepio, which stars a cartoon professor and is designed to help people with severe insomnia. But these nascent technologies are too new to definitively prove that machines can improve health more than humans can.

To survive, any kind of virtual health service will have to prove that it can get people to sign up, stay involved, and actually improve their health, said Liz Rockett, director of Kaiser Permanente Ventures, which has invested in both Omada and Big Health. “Doing the work of proving efficacy and reach is the best way to define that line of what works and what doesn’t – including on the question of using coaches in the delivery or having an all-virtual offering,” she wrote in an email to BuzzFeed News.

For now, there are way more people-to-people telemedicine services. Ginger.io initially tried to infer behavioral patterns and mental health problems from passively tracked smartphone data, but switched to a text and video-chat model with human therapists.

So in 5 to 10 years, will patients be more likely to interact with a human or a chatbot when they open up a health app? It’ll largely depend on how high stakes the situation is, McCormick said. You’d want to hear that you have cancer from a trained expert with an extremely high degree of accuracy and emotional sensitivity. But for, say, nutrition coaching, he said, “maybe people are ready now ... It has to be nuanced.”


          “Knocking On Resistance” Leads 7 Algos, 3 Short: Holly Morning Huddle   

“Knocking On Resistance” Leads The Daily Regime of Optimized Algorithms Good morning.  Knocking on Resistance, today’s high performing algorithm, headlines the Holly Morning Huddle Report. As of today, Friday, June 30, 2017, the return from my trading performance since January 1 stands … Continue reading

The post “Knocking On Resistance” Leads 7 Algos, 3 Short: Holly Morning Huddle appeared first on Trade-Ideas.


          An Algorithm Helps Protect Mars Curiosity's Wheels   
There are no mechanics on Mars, so the next best thing for NASA's Curiosity rover is careful driving. A new algorithm is helping the rover do just that. The software, referred to as traction control, adjusts the speed of Curiosity's wheels depending on the rocks it's climbing. After 18 months of testing at NASA's Jet Propulsion Laboratory in Pasadena, California, the software was uploaded to the rover on Mars in March. Mars Science Laboratory's mission management approved it for use on June 8 [...]

          Sales Development Representative - Contentful - San Francisco, CA   
Websites, mobile apps, internal enterprise portals, voice or chat bots, machine learning algorithms, VR, IoT or any other new platform....
From Contentful - Thu, 01 Jun 2017 11:11:02 GMT - View all San Francisco, CA jobs
          Non-cryptographic hash functions for .NET   

Creating hashes is a common way to check whether content X has changed without looking at the whole content of X. Git, for example, uses SHA1 hashes for each commit. SHA1 itself is a pretty old cryptographic hash function, but in the case of Git there might have been better alternatives available, because the “to-be-hashed” content is not crypto relevant - it’s just a content marker. Well… in the case of Git the current standard is SHA1, which works, but a ‘better’ way would be to use non-cryptographic functions for non-crypto purposes.

Why you should not use crypto hashes for non-crypto purposes

I discovered this topic via a Twitter conversation, which started with this Tweet:

Clemens Vasters then chimed in and pointed out why it would be better to use non-crypto hash functions:

The reason makes perfect sense to me - next step: what other choices are available?

Non-cryptographic hash functions in .NET

If you are googling around you will find many different hashing algorithms, like Jenkins or MurmurHash.

The good part is that Brandon Dahler created .NET versions of the most well-known algorithms and published them as NuGet packages.

The source and everything can be found on GitHub.

Lessons learned

If you want to hash something and it is not crypto relevant, then it would be better to look at one of those Data.HashFunction implementations - some are pretty crazy fast.

I’m not sure which one is ‘better’ - if you have some opinions please let me know. Brandon created a small description of each algorithm on the Data.HashFunction documentation page.
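To make the idea concrete, here is a tiny 32-bit FNV-1a implementation in C# - a classic non-cryptographic hash written out by hand, not the API of the Data.HashFunction packages:

using System.Text;

static class Fnv1a
{
    // 32-bit FNV-1a: XOR each byte into the hash, then multiply by the FNV prime.
    public static uint Hash(byte[] data)
    {
        const uint offsetBasis = 2166136261;
        const uint prime = 16777619;
        uint hash = offsetBasis;
        foreach (var b in data)
        {
            hash ^= b;
            hash *= prime;   // unchecked wrap-around is expected here
        }
        return hash;
    }

    public static uint Hash(string text) => Hash(Encoding.UTF8.GetBytes(text));
}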

(my blogging backlog is quite long, so I needed 6 months to write this down ;) )


          Senior Staff Millimeter-Wave/RFIC Design Engineer - Peraso Technologies Inc. - Engineer, BC   
Development of digital pre-distortion and IQ calibration techniques for enhanced 60 GHz TX EVM. Design and verification of calibration algorithms such as DC...
From Peraso Technologies Inc. - Sat, 08 Apr 2017 08:21:46 GMT - View all Engineer, BC jobs
          Per Loenicker releases CHAiOS SYNTH 2 v2.2   

iPhone, iPad / Instruments / Synthesizers : CHAiOS SYNTH 2 is a synthesizer that creates unique melodies with just one tap! Place your finger on the main screen and a melody is generated by the CHAiOS algorithm. If this melody sounds good, it can be looped and the sound can be tweaked using a...


          ABOVE AVERAGE IS OVER:   
Will Robots Rule Finance? (Nafis Alam, Sunway University and Graham Kendall, University of Nottingham | June 29, 2017, Discover)

According to consulting firm Opimas, in years to come it will become harder and harder for universities to sell their business-related degrees. Research shows that 230,000 jobs in the sector could disappear by 2025, filled by "artificial intelligence agents".

Are robo-advisers the future of finance?

Many market analysts believe so.

Investments in automated portfolios rose 210 percent between 2014 and 2015, according to the research firm Aite Group.

Robots have already taken over Wall Street, as hundreds of financial analysts are being replaced with software or robo-advisors.

In the US, claims a 2013 paper by two Oxford academics, 47 percent of jobs are at "high risk" of being automated within the next 20 years - 54 percent of lost jobs will be in finance.

This is not just an American phenomenon. Indian banks, too, have reported a 7 percent decline in head count for two quarters in a row due to the introduction of robots in the workplace.

Perhaps this is unsurprising. After all, the banking and finance industry is principally built on processing information, and some of its key operations, like passbook updating or cash deposit, are already highly digitized.

Now, banks and financial institutions are rapidly adopting a new generation of artificial intelligence-enabled technology (AI) to automate financial tasks usually carried out by humans, like operations, wealth management, algorithmic trading and risk management.

          Talent Acquisition Executive - Decibel Insight - Boston, MA   
Our technology reveals exactly how users behave on websites, and our groundbreaking machine learning algorithms surface the nature of their experiences - be...
From Decibel Insight - Mon, 15 May 2017 18:25:34 GMT - View all Boston, MA jobs
          Client Services Associate, Japanese Speaker - Quid - San Francisco, CA   
Quid algorithms reveal patterns in large, unstructured datasets and then generate beautiful, actionable visualizations....
From Quid - Thu, 18 May 2017 06:21:37 GMT - View all San Francisco, CA jobs
          Kaspersky Virus Removal Tool   
The “Kaspersky Virus Removal Tool” removes viruses and other malicious programs such as trojans, worms, spyware and rootkits from your PC. After installation you choose which areas of the system should be examined and start the malware scan with a mouse click. For the virus scan, the “Kaspersky Virus Removal Tool” uses the same powerful algorithms that are also used in the commercial product “Kaspersky Anti-Virus”. However, since the program offers no real-time protection, it is advisable to install a fully fledged antivirus product alongside it.
          Algorithms try to channel us into repeating our lives   

Molly Sauter (previously) describes in gorgeous, evocative terms how the algorithms in our life try to funnel us into acting the way we always have, or, failing that, like everyone else does.


          crashing the facebook in-app browser   
My project of the week is this one on GitHub: https://github.com/Mte90/FB-Android-Crash. On the surface it is a small coding project that grew out of constant annoyance with the in-app Facebook “browser” and, through guessing and trial and error (the comparison to the “black box” IExplorer 6 is no accident), crashes exactly that browser. I think that, over time, it should also become possible to track […]
          art against facebook algorithms   
I am no longer on Facebook; I deleted the account years ago. Not using it anymore was just as many people say: the platform itself is easy to leave behind. The real friends there are not. Still, in hindsight it was a good and sensible decision. And if the circle there had really been interested in my opinion as an IT expert, […]
          A big question mark is hanging over the hottest trend in investing   
Roboadvisers, which use algorithms to guide investments, have witnessed impressive growth since...
          Easy to Use, 2-in-1, Portable, compatible with Hero 3/4/5   
The optional VILTA power-charging adapter can extend the GoPro camera's run time while VILTA is in operation. Dual micro USB charging ports. The handheld gimbal transforms into a wearable stabilizer with a quick-release button. The wearable gimbal is compatible with GoPro mounts, to give you the best stabilized outdoor footage. Auto-calibration, a patented adaptive sensor algorithm and intelligent calibration […]
          Facebook Is Changing Its News Feed Algorithm Again   
To curb spammers and fake news.
          Google Update: Exact Match Domains   

Matt Cutts, Head of Google’s Webspam team, tweeted that Google’s algorithm has been changed again. What does this mean? Google is trying to stop websites that use keywords with high search results in their domain names (exact match domains), but with poor quality content, from ranking highly in search results. […]

Google Update: Exact Match Domains | Amica Digital


          Wpa Wps Tester Premium v3.2.4 Cracked APK   
Wpa Wps Tester Premium v3.2.4 cracked APK for Android. After the success of “Wifi wps wpa tester” comes the Premium version! Test whether your wireless network is secure or not! Thanks to this app, and to the default WPS algorithm (zaochensung) of SOME routers, you can recover the WPA WPA2 WEP set to …
          Dji PHANTOM 4 ADVANCED - Ready to Fly   

DJI PHANTOM 4 ADVANCED

REMOTE CONTROLLER, SCREENLESS VERSION
The Phantom 4 Advanced remote controller features Lightbridge HD video transmission technology, which works over a distance of up to 7 km (4.3 mi). The integrated DJI GO 4 app lets you broadcast, edit and share your aerial videos and photos instantly.
*Unobstructed, free of interference, FCC compliant

1-inch 20 MP sensor, 30-minute flight time, forward obstacle avoidance, intelligent functions and much more. Advanced aerial imaging technology.

The camera is equipped with a 1-inch 20-megapixel sensor capable of shooting 4K/60fps video and burst-mode photos at 14 fps. The FlightAutonomy system includes 5 optical sensors to detect obstacles in 2 directions and avoid those in front. The choice of a titanium and magnesium alloy increases the rigidity of the airframe and reduces the weight, making the Phantom 4 Advanced lighter than the Phantom 4.

CAMERA WITH 1-INCH 20MP SENSOR
The onboard camera is equipped with a 1-inch 20-megapixel CMOS sensor. The custom-made lens, composed of eight elements, is arranged in seven groups. The mechanical shutter eliminates the rolling-shutter distortion that can occur when shooting fast-moving subjects or flying at high speed; in this respect it is comparable to many traditional ground-based cameras. The more powerful video processing supports 4K H.264 video at 60 fps or H.265 at 30 fps, both formats offering a 100 Mb/s bitrate. The performance of the sensor and processors delivers a high dynamic range and a level of detail suited to heavy post-production.

The custom-made lens, composed of eight elements, is arranged in seven groups. The camera is as powerful as most traditional DSLR cameras.

Sensor size matters more for image quality than pixel count, because a larger sensor captures more information in every pixel. This improves dynamic range, signal-to-noise ratio and low-light performance. The 1-inch, 20-megapixel CMOS sensor of the Phantom 4 Advanced is almost four times larger than the 1/2.3-inch sensor of the Phantom 4. It uses larger pixels and has a maximum ISO of 12800 as well as increased contrast. The quality is good enough to use the images straight away, while still capturing enough detail for post-production.

PROFESSIONAL 4K VIDEO
An improved video processing system captures video in the cinema- and production-optimized DCI standard: 4K/60 (4096 x 2160 / 60 fps) at a bitrate of 100 Mb/s. You get high-resolution slow-motion shots. The Phantom 4 Advanced also supports the H.265 video codec (maximum resolution 4096x2160 / 30 fps). For a given bitrate, the H.265 compression format doubles the amount of image processing compared to H.264 and noticeably improves image quality. Recording with the high dynamic range of D-log mode makes the most of this data for color grading.

HIGH-RESOLUTION LENS
The resolution and contrast of a lens are essential to image quality, because only a quality lens can capture sharp, vivid photos at high resolution. The all-new camera of the Phantom 4 Advanced has an F2.8 wide-angle lens designed for aerial photography, with a 24 mm focal length. It has eight elements, including 2 aspherical ones, arranged in seven groups that fit into a smaller, more compact frame. The images it creates show low distortion and low dispersion, guaranteeing sharp, vivid photos and videos. The MTF (Modulation Transfer Function) results have been made public to give a better understanding of the lens's performance.

CAPTURE EVERY MOMENT
Aerial imaging is not just for capturing landscapes; it also brings new perspectives to filming action scenes or races, for example. Capturing fast-moving subjects has always been a challenge for flying cameras with electronic shutters. The Phantom 4 Advanced uses a mechanical shutter and a large-aperture lens. The mechanical shutter, with a maximum speed of 1/2000 s, eliminates the rolling-shutter distortion that can occur when shooting fast-moving subjects or flying at high speed. The electronic shutter has also been improved, with a maximum shutter speed of 1/8000 s, and a new Burst mode captures 20-megapixel shots at 14 fps so you never miss the perfect moment.

5 OPTICAL SENSORS
Composed of 5 optical sensors, GPS and GLONASS satellite positioning, ultrasonic sensors and a redundancy system, FlightAutonomy gives the Phantom 4 Advanced the ability to hover precisely even in places without a GPS signal, and to fly in complex environments. The two front optical sensors see up to 30 m ahead of the aircraft, which can brake automatically, hover in front of obstacles or fly around them when they are within a 15 m range.
PRES
MODES DE VOLS INTELLIGENTS

DESSIN: est une toute nouvelle technologie pour le contrôle des points de repère. Il suffit de tracer un itinéraire à l'écran et le Phantom 4 Advanced se déplacera dans cette direction tout en gardant son altitude verrouillée. Cela permet au pilote de se concentrer sur le contrôle de la caméra et autorise des plans plus complexes. Il existe deux modes Dessin qui peuvent être utilisés dans différents scénarios.Avancer : L'appareil suit la trajectoire à une vitesse régulière avec la caméra orientée dans la direction du vol. Libre (Free) : L'appareil ne se déplace le long de l'itinéraire que lorsqu'il en reçoit l'ordre. Dans ce mode, la caméra peut être orientée dans n'importe quelle direction pendant un vol.
PRES
ACTIVETRACK: Les objets en déplacement rapide peuvent être très difficiles à suivre, mais les algorithmes avancés de reconnaissance d'image utilisés par le Phantom 4 Advanced lui permettent de reconnaître et de suivre l'objet tout en le gardant dans le cadre. Ce nouvel algorithme reconnaît également plus de sujets, les personnes, les véhicules ou encore les animaux, et ajuste sa dynamique de vol pour réaliser des séquences uniformes.Les pilotes peuvent choisir entre :Tracer – Suit le sujet de derrière ou de devant, évitant les obstacles automatiquement.Profil – Vole à côté d'un sujet depuis différents angles pour obtenir des images du sujet de profil.Projecteur – La caméra est fixée sur le sujet pendant que l'appareil vole quasiment n’importe où.
PRES
TAPFLY: Volez dans n'importe quelle direction visible à l'écran d'un simple appui sur l'écran. Touchez n'importe où sur l'écran pour ajuster doucement la direction du vol tout en évitant automatiquement les obstacles*. Appuyez à nouveau sur l'écran ou utilisez les joysticks pour changer de direction. Une nouvelle fonction route AR (Réalité augmentée) montre la trajectoire de vol de l'appareil en temps réel à mesure que son itinéraire est ajusté. Comme il peut être difficile de contrôler simultanément l'altitude, le cap, la vitesse et l'inclinaison de la caméra à l'aide des joysticks, TapFly Libre permet à un pilote de définir la direction du vol, lui laissant plus de marge pour tourner le Phantom 4 Advanced ou incliner la nacelle comme il faut, sans changer la direction de vol. Au total, il existe maintenant deux modes TapFly :TapFly Avancer – Appuyez pour voler dans la direction sélectionnée. TapFly Libre – Verrouille la direction vers l'avant du Phantom sans verrouiller l'orientation de la caméra, ce qui lui permet de pivoter en volant.*L'évitement d’obstacles n'est pas disponible avec TapFly Libre.
PRES
RETURN TO HOME: Avec le mode Return to Home, le Phantom 4 Advanced peut automatiquement choisir la meilleure trajectoire de retour au point de départ en fonction des conditions environnantes. Il enregistre son parcours tout au long du vol, lui permettant de revenir en suivant la même route et en évitant les obstacles en cas de déconnexion avec la radiocommande. Sur la base de son altitude au moment de la déconnexion, le Phantom 4 Advanced est également en mesure d'ajuster sa trajectoire de vol pour éviter les obstacles qu'il a vu pendant son vol. Au décollage, le Phantom 4 Advanced enregistre la scène en-dessous et compare son enregistrement avec ce qu'il voit à son retour, pour un atterrissage plus précis. Il peut également détecter le relief et constater si l'endroit est approprié pour l'atterrissage. Si des obstacles sont découverts, ou s'il y a de l'eau au sol, il alerte le pilote et vole en stationnaire à une hauteur appropriée, afin d'aider l'appareil à atterrir de façon plus sûre.
GESTURE MODE: Using Gesture mode, selfies can easily be captured with a few gestures and without the remote controller. Advanced computer-vision technology allows the Phantom 4 Advanced to take instructions through gestures. The subject simply raises their arms in front of the camera; the aircraft recognizes the movement, locks onto the subject and places them in the centre of the frame. When ready for a photo, the subject signals the aircraft by stretching out their arms. A three-second countdown begins, leaving time to strike a pose and capture the moment without the remote controller.

FLIGHT PERFORMANCE

FLIGHT MODES: Flight characteristics differ depending on conditions, so the Phantom 4 Advanced offers three flight modes: P, A and S. In Position mode, TapFly, ActiveTrack, obstacle sensing and positioning features are available. Sport mode adds extra agility and speed, reaching 72 km/h (45 mph). Atti mode switches off satellite stabilization and holds the Phantom 4 Advanced's altitude; it is ideal for experienced pilots looking to capture smoother footage. Tripod mode, which limits speed to 7 km/h (4 mph), provides precision control for fine framing and indoor flight.

IMU AND SENSOR REDUNDANCY: The Phantom 4 Advanced carries two compasses and two IMUs for greater reliability. Compasses and IMUs are key sensors for stable flight, and the Phantom 4 Advanced constantly compares the data it receives from both pairs. This data is run through advanced algorithms to check its accuracy, and any inaccurate data is simply discarded without affecting the aircraft, keeping flight stable and reliable.
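DJI does not publish its exact fusion logic, but the idea of cross-checking redundant sensors is easy to sketch. The Python example below is purely illustrative (the yaw-rate values, threshold and flight-model prediction are invented): it averages two IMU readings when they agree and falls back to whichever reading is closest to the model prediction when they disagree.

```python
# Minimal sketch of a redundant-sensor check (illustrative only; DJI's
# actual algorithm is not public). Two IMU yaw-rate readings are compared;
# if they diverge too much, the one closest to a model prediction is kept.
def select_reading(imu_a: float, imu_b: float, predicted: float,
                   max_disagreement: float = 0.5) -> float:
    """Return a trusted yaw-rate value from two redundant IMUs."""
    if abs(imu_a - imu_b) <= max_disagreement:
        # Sensors agree: average them for a lower-noise estimate.
        return (imu_a + imu_b) / 2.0
    # Sensors disagree: keep the reading closest to the flight-model prediction.
    return imu_a if abs(imu_a - predicted) <= abs(imu_b - predicted) else imu_b

print(select_reading(0.10, 0.12, 0.11))   # agreement -> 0.11
print(select_reading(0.10, 2.50, 0.11))   # outlier rejected -> 0.10
```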

INTELLIGENT BATTERIES: Professional aerial imaging benefits from longer flight times. The Phantom 4 Advanced has a maximum flight time of 30 minutes, giving more time in the air to capture the perfect shot. The DJI GO 4 app shows battery level and calculates remaining flight time based on distance travelled and other factors. It issues alerts when the battery reaches the minimum level needed for a safe return to the home point. An advanced battery-management system is also in place to prevent overcharging and excessive draining. When stored long term, the batteries discharge themselves to stay in good condition.
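As a rough illustration of the "return home now" alert described above, here is a hedged sketch; the cruise speed, drain rate and reserve margin are invented constants, not DJI's actual model, which also accounts for wind, altitude and battery health.

```python
# Illustrative "return home now" check assuming a simple linear battery model.
def must_return_home(battery_pct: float, distance_home_m: float,
                     cruise_speed_ms: float = 10.0,
                     drain_pct_per_s: float = 0.06,
                     reserve_pct: float = 10.0) -> bool:
    """True when only enough charge remains to fly home plus a safety reserve."""
    time_home_s = distance_home_m / cruise_speed_ms
    needed_pct = time_home_s * drain_pct_per_s + reserve_pct
    return battery_pct <= needed_pct

print(must_return_home(battery_pct=35.0, distance_home_m=1500.0))  # False
print(must_return_home(battery_pct=18.0, distance_home_m=1500.0))  # True
```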

DJI GO 4: Through the DJI GO 4 app, a wide range of intelligent flight modes is available. It also provides full manual camera control, including ISO, aperture, shutter speed, image formats and much more. Any change made in DJI GO 4 appears on screen almost instantly. Vital flight data and video-transmission status are easy to check in the app, improving efficiency and ease of use.

Phantom 4 Advanced and Advanced+ SPECS

Aircraft
Weight (battery and propellers included): 1368 g
Diagonal size (propellers excluded): 350 mm
Max ascent speed: S-mode: 6 m/s (19.7 ft/s)
P-mode: 5 m/s (16.4 ft/s)
Max descent speed: S-mode: 4 m/s
P-mode: 3 m/s
Max speed: 72 km/h (45 mph) (S-mode); 58 km/h (36 mph) (A-mode);
50 km/h (31 mph) (P-mode)
Max tilt angle: S-mode: 42°
A-mode: 35°
P-mode: 25°
Max angular velocity: 250°/s (S-mode); 150°/s (A-mode)
Max service ceiling above sea level: 6000 m (19,685 ft)
Max wind speed resistance: 10 m/s
Max flight time: approx. 30 minutes
Operating temperature range: 0° to 40° C (32° to 104° F)
Satellite positioning systems: GPS/GLONASS
Hover accuracy range: Vertical: ±0.1 m (with Vision Positioning); ±0.5 m (with GPS Positioning)
Horizontal: ±0.3 m (with Vision Positioning); ±1.5 m (with GPS Positioning)

Gimbal
Stabilization: 3-axis (pitch, roll, yaw)
Controllable range: Pitch: -90° to +30°
Max controllable angular speed: Pitch: 90°/s
Angular control accuracy: ±0.02°

Camera
Sensor: 1" CMOS
Effective pixels: 20 M
Lens: FOV 84°, 8.8 mm / 24 mm (35 mm format equivalent),
f/2.8 - f/11
auto focus at 1 m - ∞
ISO range: Video:
100 - 3200 (Auto)
100 - 6400 (Manual)
Photo:
100 - 3200 (Auto)
100 - 12800 (Manual)
Mechanical shutter speed: 8 - 1/2000 s
Electronic shutter speed: 8 - 1/8000 s
Max image size: 3:2 aspect ratio: 5472 × 3648; 4:3 aspect ratio: 4864 × 3648; 16:9 aspect ratio: 5472 × 3078
PIV (Photo In Video) image size: 4096×2160 (4096×2160 24/25/30/48/50p)
3840×2160 (3840×2160 24/25/30/48/50/60p)
2720×1530 (2720×1530 24/25/30/48/50/60p)
1920×1080 (1920×1080 24/25/30/48/50/60/120p)
1280×720 (1280×720 24/25/30/48/50/60/120p)
Photography modes: Single shot
Burst shooting: 3/5/7/10/14 frames
Auto Exposure Bracketing (AEB): 3/5 bracketed frames at 0.7 EV bias
Interval: 2/3/5/7/10/15/30/60 s
Video recording modes:

H.265
C4K : 4096×2160 24/25/30p @100Mb/s
4K : 3840×2160 24/25/30p @100Mb/s
2.7K: 2720×1530 24/25/30p @65Mb/s
2.7K: 2720×1530 48/50/60p @80Mb/s
FHD : 1920×1080 24/25/30p @50Mb/s
FHD : 1920×1080 48/50/60p @65Mb/s
FHD : 1920×1080 120p @100Mb/s
HD : 1280×720 24/25/30p @25Mb/s
HD : 1280×720 48/50/60p @35Mb/s
HD : 1280×720 120p @60Mb/s

H.264
C4K : 4096×2160 24/25/30/48/50/60p @100Mb/s
4K : 3840×2160 24/25/30/48/50/60p @100Mb/s
2.7K: 2720×1530 24/25/30p @80Mb/s
2.7K: 2720×1530 48/50/60p @100Mb/s
FHD : 1920×1080 24/25/30p @60Mb/s
FHD : 1920×1080 48/50/60p @80Mb/s
FHD : 1920×1080 120p @100Mb/s
HD : 1280×720 24/25/30p @30Mb/s
HD : 1280×720 48/50/60p @45Mb/s
HD : 1280×720 120p @80Mb/s


Video storage bitrate: 100 Mbps
Supported file systems: FAT32 (≤ 32 GB); exFAT (> 32 GB)
Photo: JPEG, DNG (RAW), JPEG + DNG
Video: MP4/MOV (AVC/H.264; HEVC/H.265)
Supported SD cards: microSD, max capacity: 128 GB.
Write speed ≥ 15 MB/s, Class 10 or UHS-1 rating required
Operating temperature range: 0° to 40° C (32° to 104° F)

Vision System
Vision System: Forward Vision System
Downward Vision System
Velocity range: ≤ 50 km/h (31 mph) at 2 m (6.6 ft) above ground
Altitude range: 0 - 10 m (0 - 33 ft)
Operating range: 0 - 10 m (0 - 33 ft)
Obstacle sensing range: 0.7 - 30 m (2 - 98 ft)
FOV: Forward: 60° (horizontal), ±27° (vertical)
Downward: 70° (front and rear), 50° (left and right)
Measuring frequency: Forward: 10 Hz
Downward: 20 Hz
Operating environment: Even, well-lit surfaces (> 15 lux)

Remote Controller
Operating frequency: 2.400 - 2.483 GHz
Max transmission distance: 2.400 - 2.483 GHz (unobstructed, free of interference)
FCC: 7 km (4.3 mi)
CE: 3.5 km (2.2 mi)
SRRC: 4 km (2.5 mi)
Operating temperature range: 0° to 40° C (32° to 104° F)
Battery: 6000 mAh LiPo 2S
Transmitter power (EIRP): 2.400 - 2.483 GHz
FCC: 26 dBm
CE: 17 dBm
SRRC: 20 dBm
Operating current/voltage: 1.2 A @ 7.4 V
Video output ports: GL300E: HDMI
GL300C: USB
Mobile device holder: GL300E: built-in display device (5.5-inch screen, 1920 × 1080, 1000 cd/m², Android system, 4 GB RAM + 16 GB ROM)
GL300C: tablets and phones

Intelligent Flight Battery
Capacity: 5870 mAh
Voltage: 15.2 V
Battery type: LiPo 4S
Energy: 89.2 Wh
Net weight: 468 g
Operating temperature range: 5° to 40° C (41° to 104° F)
Max charging power: 100 W

Charger
Voltage: 17.5 V
Rated power: 100 W

App / Live View
Mobile app: DJI GO 4
Live view operating frequency: 2.4 GHz ISM
Live view quality: 720p @ 30 fps
Latency: Phantom 4 Advanced: 220 ms (depending on conditions and mobile device)
Phantom 4 Advanced+: 160 - 180 ms
Required operating systems: iOS 9.0 or later
Android 4.4.0 or later
Recommended mobile devices: iOS: iPhone 5s, iPhone SE, iPhone 6, iPhone 6 Plus, iPhone 6s, iPhone 6s Plus, iPhone 7, iPhone 7 Plus, iPad Air, iPad Air Wi-Fi + Cellular, iPad mini 2, iPad mini 2 Wi-Fi + Cellular, iPad Air 2, iPad Air 2 Wi-Fi + Cellular, iPad mini 3, iPad mini 3 Wi-Fi + Cellular, iPad mini 4, and iPad mini 4 Wi-Fi + Cellular. This app is optimized for iPhone 7 and iPhone 7 Plus. Android: Samsung tabs 705c, Samsung S6, Samsung S5, Samsung NOTE4, Samsung NOTE3, Google Nexus 6p, Nexus 9, Google Nexus 7 II, Ascend Mate7, Huawei P8 Max, LG V20, Nubia Z7 mini, SONY Xperia Z3, MI 3 and MI PAD. *Support for additional Android devices is available as testing and development continue.

PHANTOM 4 ADVANCED PACKAGE CONTENTS:
1 x DJI Phantom 4 Advanced drone with 4K camera
1 x Remote controller
1 x Smartphone holder for the remote controller
1 x 4S 5870 mAh Intelligent Flight Battery
1 x 100 W charger
1 x AC power adapter
1 x Camera protective cover
1 x 16 GB microSD card
1 x Micro USB cable
1 x USB OTG cable
2 x Sets of 4 propellers
1 x Quick start guide



          “Mind Reading” Technology to Decode Complex Thoughts   
Carnegie Mellon University scientists can now use brain activation patterns to identify complex thoughts, such as, “The witness shouted during the trial.” This latest research led by CMU’s Marcel Just builds on the pioneering use of machine learning algorithms with brain imaging technology to “mind read.” The findings indicate that the mind’s building blocks for constructing complex thoughts are formed by the brain’s various sub-systems and are not word-based. Published in Human Brain Mapping and funded by the Intelligence Advanced Research Projects Activity (IARPA), the study offers new evidence that the neural dimensions of concept representation are universal across people
          Artificial Intelligence Python Software Engineer   
MA-Lexington, Solidus is searching for an Artificial Intelligence Python Software Engineer. The candidate will develop AI systems for large multi-sensor and open source data sets. Projects involve system design and architecture and the development of algorithms for machine learning, computer vision, natural language processing, and graph analytics implemented on enterprise big data architectures. The candidate
          Aspen Ideas Festival: Artificial Intelligence and the Future of Humanity   
Work, play, privacy, communication, finance, war, and dating: algorithms and the machines that run them have upended them all. Will artificial intelligence become as ubiquitous as electricity? Is there any industry AI won't touch?
          Measurement of jet $p_{\mathrm{T}}$ correlations in Pb+Pb and $pp$ collisions at $\sqrt{s_{\mathrm{NN}}}=2.76\textrm{ TeV}$ with the ATLAS detector   
Measurements of dijet $p_{\mathrm{T}}$ correlations in Pb+Pb and $pp$ collisions at a nucleon--nucleon centre-of-mass energy of $\sqrt{s_{\mathrm{NN}}}=2.76\textrm{ TeV}$ are presented. The measurements are performed with the ATLAS detector at the Large Hadron Collider using Pb+Pb and $pp$ data samples corresponding to integrated luminosities of 0.14 nb$^{-1}$ and 4.0 pb$^{-1}$, respectively. Jets are reconstructed using the anti-$k_t$ algorithm with radius parameter values $R=0.3$ and $R=0.4$. A background subtraction procedure is applied to correct the jets for the large underlying event present in Pb+Pb collisions. The leading and sub-leading jet transverse momenta are denoted $p_{\mathrm{T_{\mathrm{1}}}}$ and $p_{\mathrm{T_{\mathrm{2}}}}$. An unfolding procedure is applied to the two-dimensional ($p_{\mathrm{T_{\mathrm{1}}}}$, $p_{\mathrm{T_{\mathrm{2}}}}$) distributions to account for experimental effects in the measurement of both jets. Distributions of $(1/N)\mbox{$\mathrm{d}$} N/\mbox{$\mathrm{d}$} x_{\mathrm{J}}$, where $x_{\mathrm{J}}=p_{\mathrm{T}_{2}}/p_{\mathrm{T}_{1}}$, are presented as a function of $p_{\mathrm{T_{\mathrm{1}}}}$ and collision centrality. The distributions are found to be similar in peripheral Pb+Pb collisions and $pp$ collisions, but highly modified in central Pb+Pb collisions. Similar features are present in both the $R=0.3$ and $R=0.4$ results, indicating that the effects of the underlying event are properly accounted for in the measurement. The results are qualitatively consistent with expectations from partonic energy loss models.
          Price Drop: Cubasis 2 - Mobile Music Creation System   
Cubasis 2 - Mobile Music Creation System
Category: Music
Price: €54.99 -> €27.99
Version: 2.1
Open in iTunes

Description:
Cubasis 2 is a full-featured, iOS-based music production system with a well-thought-out interface that opens up entirely new possibilities for mobile music production. Whether you are quickly capturing a musical idea or fully producing complex pieces, its first-class, touch-optimized tools make recording, editing, mixing and sharing your songs fun from the very first moment. Cubasis 2 comes with many exciting new features, including real-time time-stretching and pitch-shifting, a studio-grade channel strip, professional-sounding effects, masses of new sounds for the built-in instruments, a reworked MIDI editor and much more. Three virtual instruments and numerous professionally produced loops and instrument sounds are ready for immediate use, and with the integrated mixer and high-quality studio effects you can polish your songs to perfection. Share the finished song with the whole world at the tap of a finger, or keep working on it in Cubase on a PC or Mac. Top features: • Unlimited number of audio and MIDI tracks • 24 assignable inputs and outputs • 32-bit floating-point audio engine • Audio I/O resolution up to 24-bit/96 kHz • iOS 32- and 64-bit support • Real-time time-stretching and pitch-shifting based on the Zplane Élastique 3 algorithm • Micrologue synthesizer with 126 ready-to-use presets • MicroSonic with over 120 sounds bas...
           An Algorithm Helps Protect Mars Curiosity's Wheels    

Traction control testing

A new software program reduces wear and tear on the Curiosity Mars rover wheels.




          Hackers Could Use Brainwaves To Make Educated Guesses On Passwords And PINs   
Clever algorithms can figure out passwords using stolen brainwave data
          New system enables 88x speedup on common parallel-computing algorithms   
The chips in most modern desktop computers have four “cores,” or processing units, which can run different computational tasks in parallel. But the chips of the future could have dozens or even hundreds of cores, ... Read more

          Software Engineers - Algorithms - Intel - Toronto, ON   
As part of Intel, we will continue to apply Moore's Law to drive the future of field-programmable gate array FPGA technology....
From Intel - Sat, 17 Jun 2017 10:23:09 GMT - View all Toronto, ON jobs
          Pit Noack - Laboratory for Speech Algorithms: Zentrale    
Laboratory for Speech Algorithms. For four weeks the Zentrale becomes a laboratory and workspace. The acoustic playing and experimentation field consists of six loudspeakers playing algorithmically generated sound structures. The acoustic raw material consists exclusively of speech samples recorded in advance and on site with visitors: individual syllables and vocal sounds such as hissing, humming, buzzing, clicking, squeaking... Each sample exists as a separate audio file; its properties are encoded in the file name and are thus available as compositional parameters for automated processes. The starting points for the composition algorithms are statistical properties of meaningful as well as onomatopoeic texts, invented classifications of vocal sounds, simple rhythmic patterns and combinatorial orderings. We welcome participation in the recording sessions; if interested, simply ask at the Zentrale or contact mail@pitnoack.de. Pit Noack works as a producer, curator, author and lecturer in the fields of sound installation, electroacoustic music, media art, creative coding and media theory. His projects sit at the interfaces between science, art and technology. In 2016 he curated the event series “Basis Zwei”, which explored the connections between digital technology, art and knowledge production across disciplines. As a c't author, lecturer and initiator of the website maschinennah.de, he introduces lay audiences to artistic work with program code. In his works he often combines used audio components with current digital tools and self-developed software. http://pitnoack.de/ http://maschinennah.de/ http://basiszwei.tumblr.com/
          Big Data Is All Around You | Everyday Big-Data Analysis Cases and the Technology Behind Them   

Big Data Is All Around You | Everyday Big-Data Analysis Cases and the Technology Behind Them

Author: Stephen Cui

I. Business Applications of Big-Data Analysis

1. Predicting sports events

During the World Cup, Google, Baidu, Microsoft, Goldman Sachs and other companies all launched match-result prediction platforms. Baidu's results were the most impressive: it predicted all 64 matches of the tournament with 67% accuracy, rising to 94% once the knockout stage began. Internet companies taking over from Paul the Octopus to try their hand at match prediction suggests that future sporting events will increasingly be shaped by big-data forecasting.

"In Baidu's World Cup predictions we considered five factors: team strength, home advantage, recent form, overall World Cup record and bookmakers' odds. These data essentially all come from the internet. We then used a machine-learning model designed by search experts to aggregate and analyse the data and produce the predictions." (Zhang Tong, head of Baidu's Beijing Big Data Lab)



2. Predicting the stock market

Last year, researchers at Warwick Business School in the UK and the physics department at Boston University found that the financial keywords people search for on Google may indicate where financial markets are heading; a corresponding investment strategy returned as much as 326%. Earlier, other researchers had tried to predict stock-market movements from the sentiment of Twitter posts.

In theory, stock-market prediction is better suited to the United States. China's stock market cannot be traded profitably in both directions, since you can only profit when shares rise, which attracts speculative capital that exploits information asymmetry to distort market behaviour. As a result, the Chinese market has no relatively stable patterns and is very hard to predict, and some variables that decisively affect outcomes simply cannot be monitored.

At present, many US hedge funds already invest using big-data techniques, with considerable success. In China, the CSI GF Baidu Baifa 100 index fund ("Baifa 100") has risen 68% in the four-plus months since its launch.

Like traditional quantitative investing, big-data investing relies on models, but the number of data variables in the model grows geometrically: on top of existing structured financial data it adds unstructured data such as social-media commentary, geographic information and satellite monitoring, and quantifies that unstructured data so the model can absorb it.

Because big-data models are extremely costly, industry insiders believe big data will become a shared, platform-based service: data and technology are the ingredients and the pot, and fund managers and analysts can cook up their own strategies on the platform.

http://v.youku.com/v_show/id_XMzU0ODIxNjg0.html

3. Predicting market prices

CPI describes price movements that have already happened, and the statistics bureau's figures are not authoritative. Big data, however, may help people understand where prices are heading and anticipate inflation or an economic crisis in advance. The classic example is Jack Ma learning of the Asian financial crisis ahead of time through Alibaba's B2B data, credit for which of course goes to Alibaba's data team.

4. Predicting user behaviour

Based on data such as users' search behaviour, browsing behaviour, review history and profile information, internet businesses can understand overall consumer demand and target production, product improvement and marketing accordingly. House of Cards choosing its cast and plot, Baidu targeting ads by user preference, Alibaba reserving production lines to build custom products based on Tmall user profiles, and Amazon predicting user clicks to ship goods in advance all benefit from predicting internet users' behaviour.

Pre-purchase behaviour can deeply reflect a potential customer's buying psychology and intent. For example, customer A views 5 television sets in a row: 4 from domestic brand S and 1 from foreign brand T; 4 use LED technology and 1 LCD; the prices are 4,599, 5,199, 5,499, 5,999 and 7,999 yuan. To some extent this behaviour reflects customer A's brand preferences and inclinations, leaning towards a mid-priced LED TV from a domestic brand. Customer B views 6 TVs: 2 from foreign brand T, 2 from another foreign brand V and 2 from domestic brand S; 4 LED and 2 LCD; priced at 5,999, 7,999, 8,300, 9,200, 9,999 and 11,050 yuan. Similarly, this suggests customer B leans towards a higher-priced LED TV from an imported brand.

http://36kr.com/p/205901.html

5. Predicting personal health

Traditional Chinese medicine can detect hidden chronic illnesses by looking, listening, questioning and taking the pulse, and can even tell from a person's constitution what symptoms may appear in the future. Changes in human vital signs follow certain patterns, and before a chronic disease develops the body usually shows persistent anomalies. In theory, if big data captured these anomalies, chronic diseases could be predicted.

6. Predicting disease outbreaks

This category covers predicting the likelihood of a large-scale epidemic from people's searches and shopping behaviour; the classic "flu prediction" belongs here. If searches for "flu" and "banlangen" (a popular remedy) keep rising in a region, it is natural to infer a flu trend there.

Google successfully predicted winter flu:
In 2009, Google analysed the 50 million search terms Americans used most frequently, compared them against CDC data from the seasonal flu outbreaks of 2003 to 2008, and built a specific mathematical model. Google ultimately predicted the spread of flu in the winter of 2009, even down to specific regions and states.

7. Predicting disasters

Weather forecasting is the most typical form of disaster prediction. If big data can predict natural disasters such as earthquakes, floods, heat waves and rainstorms earlier and give warning, it helps with disaster reduction, preparedness, relief and recovery. Unlike in the past, when data collection had blind spots and high costs, the Internet of Things era allows real-time monitoring and collection through cheap sensors, cameras and wireless networks, which big-data analysis can then turn into more accurate natural-disaster predictions.

8. Predicting environmental change

Beyond short-term, micro-level weather and disaster prediction, big data can also support longer-term, macro-level predictions of environmental and ecological change. Shrinking forests and farmland, endangered wildlife and plants, rising coastlines and the greenhouse effect are the Earth's "chronic illnesses". The more humanity knows about the Earth's ecosystems and weather patterns, the easier it becomes to model future environmental change and prevent harmful shifts. Big data helps us collect, store and mine more data about the planet, and it also provides the tools for prediction.

9. Predicting traffic behaviour

Based on LBS positioning data from users and vehicles, individual and group travel patterns can be analysed to predict traffic behaviour. Transport authorities can predict traffic volumes on different roads at different times to dispatch vehicles intelligently or apply tidal-flow lanes, while users can choose routes with a lower probability of congestion.

Baidu's LBS predictions built on its maps cover an even wider range: during the Spring Festival travel rush it predicts migration trends to guide the scheduling of train lines and flight routes; on holidays it predicts visitor numbers at tourist sites to guide people's choices; and its heat maps show crowd levels in shopping districts, zoos and other places, guiding users' travel choices and businesses' site selection.

Dolgov's team uses machine-learning algorithms to build models of pedestrians on the road. Every mile driven by the self-driving car is recorded; the car's computer keeps these data and analyses how different objects behave in different environments. Some driver behaviours might be set as fixed rules (such as "green light, go"), but the computer does not apply such logic rigidly; instead it learns from actual driver behaviour.

As a result, a car following a garbage truck that stops will likely choose to change lanes and pass rather than stop behind it. Google has accumulated 700,000 miles of driving data, which helps its cars adjust their behaviour based on what they have learned.



http://www.5lian.cn/html/2014/chelianwang_0522/42125_4.html

10. Predicting energy consumption

The California grid system operations centre manages more than 80% of California's grid, delivering 289 million megawatts of electricity a year to 35 million users over more than 25,000 miles of power lines. The centre uses Space-Time Insight software for intelligent management, combining and analysing massive data from sources including weather, sensors and metering devices to predict changes in energy demand across regions, dispatch power intelligently, balance supply and demand across the grid, and respond quickly to potential crises. China's smart grid is already trialling similar big-data prediction applications.

II. Types of Big-Data Analysis. By timeliness, data analysis splits into real-time analysis and offline analysis.

Real-time analysis is generally used in finance, mobile and internet B2C products, and often requires analysis over hundreds of millions of rows to return within seconds so that the user experience is not affected. Meeting such requirements may involve carefully designed parallel clusters of traditional relational databases, in-memory computing platforms, or HDD-based architectures, all of which carry fairly high hardware and software costs. Newer tools for real-time analysis of massive data include EMC's Greenplum and SAP's HANA.

For the many applications whose response-time requirements are less strict, such as offline statistical analysis, machine learning, building a search engine's inverted index or computing recommendations, offline analysis is the right approach: log data is imported into a dedicated analysis platform through collection tools. Faced with massive data, however, traditional ETL tools often fail completely, mainly because the cost of data-format conversion is too high to meet the required collection throughput. Internet companies' large-scale collection tools, such as Facebook's open-source Scribe, LinkedIn's open-source Kafka, Taobao's open-source Timetunnel and Hadoop's Chukwa, can all sustain log collection and transfer at hundreds of MB per second and upload the data into a central Hadoop system.

By data volume, big data falls into three levels: memory-scale, BI-scale and massive-scale.

Memory-scale means the data volume does not exceed the cluster's total memory. Do not underestimate today's memory capacity: Facebook caches up to 320 TB of data in Memcached, and current PC servers can carry more than a hundred GB of memory. It is therefore possible to use in-memory databases and keep hot data resident in memory, achieving very fast analysis that is ideal for real-time workloads. Figure 1 shows a practical MongoDB analysis architecture.



Figure 1: A MongoDB architecture for real-time analysis

Large MongoDB clusters currently have some stability issues, such as periodic write blocking and failures of primary-secondary synchronization, but MongoDB remains a promising NoSQL option for high-speed data analysis.

In addition, most vendors now offer solutions with SSDs of 4 GB and above; combining memory with SSD also easily reaches in-memory analysis performance. As SSDs develop, in-memory data analysis will inevitably see wider use.

BI-scale refers to data volumes too large for memory but which can generally be loaded into traditional BI products and purpose-built BI databases for analysis. Mainstream BI products all offer solutions supporting analysis above the TB level, in many varieties.

Massive-scale refers to data volumes for which databases and BI products fail completely or become too expensive. There are excellent enterprise products at this level too, but for hardware and software cost reasons most internet companies store the data on Hadoop's HDFS distributed file system and analyse it with MapReduce. Later in this article we mainly introduce a multidimensional data-analysis platform built on MapReduce over Hadoop.

III. The Typical Big-Data Analysis Process

3.1 Collection

Big-data collection means using multiple databases to receive data sent from clients (web, app, sensors and so on), against which users can run simple queries and processing. For example, e-commerce companies use traditional relational databases such as MySQL and Oracle to store every transaction, while NoSQL databases such as Redis and MongoDB are also commonly used for collection.

The main characteristic and challenge of collection is high concurrency, since thousands upon thousands of users may be accessing and operating at the same time; ticket-booking sites and Taobao, for instance, see peak concurrent visits in the millions, so a large number of databases must be deployed on the collection side to cope, and balancing load and sharding across those databases really does demand careful thought and design.

3.2 Import / Preprocessing

Although the collection side has many databases of its own, effectively analysing this massive data still requires importing the data from the front end into a centralized large distributed database or distributed storage cluster, with some simple cleaning and preprocessing done on top of the import. Some users also run stream computation over the data at import time with Storm (originally from Twitter) to meet real-time computing needs for parts of the business.
The characteristic and challenge of the import and preprocessing stage is the sheer import volume, which often reaches hundreds of megabytes or even gigabytes per second.

3.3 Statistics / Analysis

Statistics and analysis mainly use distributed databases or distributed computing clusters to run ordinary analysis, classification and aggregation over the massive data stored in them, satisfying most common analysis needs. Real-time requirements tend to use EMC's Greenplum, Oracle's Exadata and Infobright's MySQL-based columnar storage, while batch processing or workloads over semi-structured data can use Hadoop.
The main characteristic and challenge of this stage is the amount of data involved, which places heavy demands on system resources, especially I/O.

3.4 Mining

Unlike the preceding statistics and analysis stages, data mining generally has no predefined topic; it mainly runs various algorithms over the existing data to produce predictions and satisfy higher-level analysis needs. Typical algorithms include K-means for clustering, SVM for statistical learning and Naive Bayes for classification, and a commonly used tool is Hadoop's Mahout. The characteristic and challenge of this stage is that mining algorithms are complex, the data and compute volumes involved are large, and commonly used data-mining algorithms are mostly single-threaded.
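To make the mining step concrete, here is a minimal K-means clustering sketch. The article points to Mahout on Hadoop for production scale; this example uses scikit-learn on invented synthetic data purely for illustration.

```python
# Minimal K-means clustering sketch; the synthetic data and the choice of
# three clusters are assumptions made only for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three synthetic "customer" groups in a 2-D feature space.
data = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(100, 2)),
    rng.normal(loc=(0, 5), scale=0.5, size=(100, 2)),
])

model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)
print(model.cluster_centers_)         # learned cluster centres
print(model.predict([[0.2, 4.8]]))    # assign a new point to a cluster
```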


IV. Big-Data Analysis Tools

4.1 Hadoop

Hadoop is a software framework capable of distributed processing of large amounts of data, and it does so in a reliable, efficient and scalable way. It is reliable because it assumes that compute elements and storage will fail, so it maintains multiple working copies of the data and can redistribute processing around failed nodes. It is efficient because it works in parallel, which speeds up processing. It is also scalable, able to handle petabytes of data. In addition, Hadoop relies on commodity community servers, so its cost is relatively low and anyone can use it.

Hadoop is a distributed computing platform that users can easily build on and use, readily developing and running applications that process massive data. Its main advantages are:

High reliability: Hadoop's bit-by-bit approach to storing and processing data has earned people's trust. High scalability: Hadoop distributes data and computation across clusters that can easily be extended to thousands of nodes. High efficiency: Hadoop moves data dynamically between nodes and keeps each node dynamically balanced, so processing is very fast. High fault tolerance: Hadoop automatically keeps multiple copies of data and automatically reassigns failed tasks.

Hadoop ships with a framework written in Java, so it runs ideally on Linux production platforms. Applications on Hadoop can also be written in other languages, such as C++.
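To make the MapReduce model behind Hadoop concrete, here is a toy word-count sketch in plain Python that imitates the map, shuffle and reduce phases; a real Hadoop job would implement the same logic in Java (or via Hadoop Streaming) and run it distributed across the cluster.

```python
# Toy MapReduce word count: map -> shuffle (group by key) -> reduce.
# Pure-Python stand-in for illustration; a real job runs distributed on Hadoop.
from collections import defaultdict

def map_phase(line: str):
    for word in line.lower().split():
        yield word, 1                      # emit (key, value) pairs

def shuffle(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)         # group values by key
    return grouped

def reduce_phase(grouped):
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["big data is all around you", "big data drives prediction"]
pairs = (pair for line in lines for pair in map_phase(line))
print(reduce_phase(shuffle(pairs)))        # {'big': 2, 'data': 2, ...}
```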

4.2 HPCC

HPCC stands for High Performance Computing and Communications. In 1993, the US Federal Coordinating Council for Science, Engineering and Technology submitted to Congress the report "Grand Challenges: High Performance Computing and Communications", known as the HPCC plan, the US president's science strategy programme, whose aim was to solve a set of important scientific and technological challenges by strengthening research and development. HPCC was the plan the United States carried out to build the information superhighway; its implementation was expected to cost tens of billions of dollars, and its main goals were to develop scalable computing systems and supporting software for terabit-level network transmission, to develop gigabit network technology, and to expand research and education institutions and their network connectivity.

The programme consists mainly of five parts:

High Performance Computing Systems (HPCS), covering research on future generations of computer systems, system design tools, advanced prototype systems and evaluation of existing systems. Advanced Software Technology and Algorithms (ASTA), covering software support for grand-challenge problems, new algorithm design, software branches and tools, computational science, and high-performance computing research centres. National Research and Education Network (NREN), covering intermediate stations and research and development of gigabit-level transmission. Basic Research and Human Resources (BRHR), covering basic research, training, education and course materials, designed to increase the stream of innovative ideas through investigator-initiated, long-term research in scalable high-performance computing, to enlarge the pool of skilled and trained personnel by improving education and high-performance computing training and communication, and to provide the infrastructure needed to support these research activities. Information Infrastructure Technology and Applications (IITA), aimed at ensuring US leadership in the development of advanced information technology.

4.3 Storm

Storm is free and open-source software: a distributed, fault-tolerant real-time computation system. Storm can process huge streams of data very reliably, complementing the batch data that Hadoop processes. Storm is simple, supports many programming languages, and is a pleasure to use. It was open-sourced by Twitter; other well-known adopters include Groupon, Taobao, Alipay, Alibaba, Happy Elements and Admaster.

Storm has many application areas: real-time analytics, online machine learning, continuous computation, distributed RPC (a protocol for requesting services from remote programs over the network), ETL (Extraction-Transformation-Loading, that is, data extraction, transformation and loading) and more. Storm's processing speed is impressive: in tests, each node can process one million data tuples per second. Storm is scalable and fault-tolerant, and easy to set up and operate.

4.4 Apache Drill

To help enterprise users find more effective ways to speed up Hadoop data queries, the Apache Software Foundation recently launched an open-source project called "Drill". Apache Drill implements Google's Dremel.

According to Tomer Shiran, product manager at Hadoop vendor MapR Technologies, "Drill" is already operating as an Apache incubator project and will continue to be promoted to software engineers worldwide.

The project will create an open-source version of Google's Dremel Hadoop tool (which Google uses to speed up the internet applications of its Hadoop data-analysis tooling). "Drill" will help Hadoop users query massive data sets much faster.

The "Drill" project also draws its inspiration from Google's Dremel project, which helps Google analyse and process massive data sets, including analysing crawled web documents, tracking application data installed on Android Market, analysing spam, and analysing test results on Google's distributed build system, among others.

By developing the "Drill" Apache open-source project, organizations should be able to establish Drill's APIs and a flexible, powerful architecture that supports a broad range of data sources, data formats and query languages.

4.5 RapidMiner

RapidMiner is a world-leading data-mining solution built to a very large extent on advanced technology. Its data-mining tasks cover a wide range and it simplifies the design and evaluation of data-mining processes.

Features:

Free data-mining technology and libraries; 100% Java code (runs on any operating system); simple, powerful and intuitive data-mining processes; internal XML guarantees a standardized format for representing and exchanging data-mining processes; large-scale processes can be automated with a simple scripting language; multi-level data views ensure valid and transparent data; interactive prototyping in the graphical user interface; command line (batch mode) for automated large-scale applications; Java API (application programming interface); simple plug-in and extension mechanisms; a powerful visualization engine with many cutting-edge visualizations of high-dimensional data; more than 400 data-mining operators supported.

It has been successfully applied at Yale University in many different application domains, including text mining, multimedia mining, feature design, data-stream mining, integrated development of methods, and distributed data mining.

4.6 Pentaho BI

The Pentaho BI platform differs from traditional BI products: it is a process-centric, solution-oriented framework. Its purpose is to integrate a series of enterprise BI products, open-source software, APIs and other components, making it easier to develop business-intelligence applications. Its appearance allows a series of independent BI-oriented products such as JFree and Quartz to be integrated into complex, complete business-intelligence solutions.

The Pentaho BI platform, the core architecture and foundation of the Pentaho Open BI suite, is process-centric because its central controller is a workflow engine. The workflow engine uses process definitions to define the business-intelligence processes executed on the BI platform. Processes can easily be customized and new processes added. The BI platform includes components and reports for analysing the performance of these processes. Pentaho's main elements currently include report generation, analysis, data mining, workflow management and more. These components are integrated into the Pentaho platform through technologies such as J2EE, WebService, SOAP, HTTP, Java, JavaScript and Portals. Pentaho is distributed mainly in the form of the Pentaho SDK.

The Pentaho SDK comprises five parts: the Pentaho platform, the Pentaho sample database, a stand-alone Pentaho platform, Pentaho solution examples and a pre-configured Pentaho web server. The Pentaho platform is the most important part and contains the main body of the platform's source code. The Pentaho database provides the data services the platform needs to run normally, including configuration information and solution-related information; it is not mandatory for the platform and can be replaced by other database services through configuration. The stand-alone Pentaho platform is an example of the platform's independent running mode, demonstrating how to run the Pentaho platform without application-server support.

The Pentaho solution example is an Eclipse project that demonstrates how to develop business-intelligence solutions for the Pentaho platform.

The Pentaho BI platform is built on servers, engines and components. These provide the system's J2EE server, security, portal, workflow, rules engine, charting, collaboration, content management, data integration, analysis and modelling capabilities. Most of these components are standards-based and can be replaced with other products.

4.7 SAS Enterprise Miner

§ A complete set of tools supporting the entire data-mining process § An easy-to-use graphical interface that lets different types of users build models quickly § Powerful model management and evaluation § A fast, convenient model-deployment mechanism that helps close the business loop

V. Data-Analysis Algorithms

Big-data analysis relies mainly on machine learning and large-scale computation. Machine learning includes supervised, unsupervised and reinforcement learning, and supervised learning in turn includes classification, regression, ranking, matching and more (see Figure 1). Classification is the most common machine-learning problem: spam filtering, face detection, user profiling, text sentiment analysis and web-page categorization are all essentially classification problems. Classification is also the most thoroughly studied and most widely used branch of machine learning.

Recently, Fernández-Delgado et al. published an interesting paper in JMLR (Journal of Machine Learning Research, a top machine-learning journal). They pitted 179 different classification methods (classification algorithms) against each other on the 121 UCI data sets (UCI hosts public machine-learning data sets, none of them very large). Random Forest and SVM finished first and second, with little difference between them, and on 84.3% of the data sets Random Forest beat 90% of the other methods. In other words, in most cases Random Forest or SVM alone gets the job done.
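A small-scale echo of that comparison can be run in a few lines of scikit-learn; the iris data set and the hyperparameters below are chosen purely for illustration and say nothing about the JMLR benchmark itself.

```python
# Compare Random Forest and SVM with 5-fold cross-validation on a small
# UCI-style data set (iris), purely as an illustration of the idea above.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

models = [("Random Forest", RandomForestClassifier(n_estimators=100, random_state=0)),
          ("SVM (RBF kernel)", SVC(kernel="rbf", C=1.0))]

for name, model in models:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```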



https://github.com/linyiqun/DataMiningAlgorithm

KNN

The k-nearest-neighbours algorithm. Given some training data and a new test point, find the training points nearest to the test point and look at their classes; the class held by the majority becomes the predicted class of the test point. Points may optionally be given different weights, with nearer points weighted more heavily and farther points less.
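A minimal from-scratch version of the unweighted majority-vote KNN just described (the toy training points are invented):

```python
# Minimal k-nearest-neighbours classifier: unweighted majority vote among
# the k closest training points; distance weighting could be added as noted.
from collections import Counter
import math

def knn_predict(train, test_point, k=3):
    """train: list of (features, label) pairs; test_point: feature tuple."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], test_point))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((4.0, 4.2), "B"), ((3.8, 4.0), "B")]
print(knn_predict(train, (1.1, 0.9)))   # -> "A"
```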

Naive Bayes

The naive Bayes algorithm. Naive Bayes is a relatively simple classification algorithm in the Bayesian family. It relies on an important result, Bayes' theorem, which in one sentence is about converting between conditional probabilities.

Naive Bayes classification is a very simple classification algorithm, and it is called naive because its underlying idea really is naive: for a given item to classify, compute the probability of each class given that the item appears, and assign the item to whichever class has the largest probability. In everyday terms, if you see a Black person on the street and are asked to guess where they are from, you will most likely guess Africa, because among Black people the proportion of Africans is highest; they might of course be from the Americas or Asia, but with no other information available we pick the class with the largest conditional probability. That is the foundation of naive Bayes.
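A tiny illustration of naive Bayes classification using scikit-learn's MultinomialNB; the four example messages and their labels are made up:

```python
# Naive Bayes text classification sketch (spam vs. ham) for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["win a free prize now", "limited offer win cash",
         "meeting moved to friday", "lunch with the team today"]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["free cash prize"]))     # -> ['spam']
print(model.predict(["team meeting friday"])) # -> ['ham']
```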

SVM

The support vector machine algorithm. SVM is a method for classifying both linear and non-linear data; non-linear data can be handled by using a kernel function to map it into a case that can be treated linearly. A key step is searching for the maximum-margin hyperplane.

Apriori

Apriori is an association-rule mining algorithm. It mines frequent itemsets through join and prune operations, then derives association rules from those frequent itemsets; the derived rules must satisfy a minimum-confidence requirement.
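One round of the Apriori idea, finding frequent items, joining them into candidate pairs and pruning by minimum support, can be sketched as follows. The basket data and support threshold are invented, and a full implementation would iterate to larger itemsets and then derive rules that meet a minimum confidence:

```python
# Compact sketch of the Apriori join-and-prune step for itemsets of size 2.
from itertools import combinations

baskets = [{"beer", "diapers", "chips"},
           {"beer", "diapers"},
           {"diapers", "milk"},
           {"beer", "chips"}]
min_support = 2  # absolute count

def support(itemset):
    return sum(itemset <= basket for basket in baskets)

items = {item for basket in baskets for item in basket}
frequent_1 = {frozenset([i]) for i in items if support({i}) >= min_support}

candidates_2 = {a | b for a, b in combinations(frequent_1, 2) if len(a | b) == 2}
frequent_2 = {c for c in candidates_2 if support(c) >= min_support}
print(sorted(tuple(sorted(s)) for s in frequent_2))  # [('beer', 'chips'), ('beer', 'diapers')]
```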

PageRank

A web-page importance/ranking algorithm. PageRank originated at Google; its core idea is to use a page's incoming links as the criterion for judging how good the page is. If a page contains several links pointing to other pages, its PR value is divided evenly among them. PageRank is also vulnerable to link-spam attacks.
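The PR computation described above can be sketched as a simple power iteration over a made-up link graph; the damping factor of 0.85 follows the classic formulation:

```python
# Minimal PageRank power iteration. Each page's rank is split evenly across
# its outbound links; a damping factor of 0.85 models random jumps.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
pages = list(graph)
rank = {p: 1.0 / len(pages) for p in pages}
damping = 0.85

for _ in range(50):  # iterate until the ranks stabilize
    new_rank = {}
    for p in pages:
        incoming = sum(rank[q] / len(graph[q]) for q in pages if p in graph[q])
        new_rank[p] = (1 - damping) / len(pages) + damping * incoming
    rank = new_rank

print({p: round(r, 3) for p, r in rank.items()})  # "C" ends up ranked highest
```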

RandomForest

The random forest algorithm. The core idea is an ensemble of decision trees: CART classification-and-regression trees serve as weak classifiers and are combined into a final strong classifier. When each tree is built, a random sample of the data and a random subset of the attributes are used to construct the sub-trees, which helps prevent overfitting.

Artificial Neural Network

The term "neural network" actually comes from biology; the correct name for what we mean here is "artificial neural networks" (ANNs).
Artificial neural networks have basic adaptive and self-organizing abilities: they change their synaptic weights during learning or training to adapt to the demands of the environment, and the same network can serve different functions depending on how and on what it is trained. An ANN is a system capable of learning and can develop knowledge to the point of exceeding its designer's original level. Its training generally takes one of two forms. One is supervised (with a teacher) learning, which uses given labelled samples for classification or imitation. The other is unsupervised (without a teacher) learning, in which only the learning method or certain rules are specified, and what is actually learned depends on the environment the system is in (that is, the input signals), so the system can discover environmental features and regularities on its own, behaving more like a human brain.
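A single artificial neuron trained on labelled examples, with weight updates standing in for the synaptic changes mentioned above, illustrates supervised learning on the AND function; this perceptron sketch is illustrative only:

```python
# Tiny supervised-learning sketch: a perceptron adjusting its "synaptic"
# weights from labelled examples until it reproduces the AND function.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            w[0] += lr * error * x1        # weight updates = "synapse" changes
            w[1] += lr * error * x2
            b += lr * error
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in samples])
```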

VI. Case Studies

6.1 Beer and diapers



The "beer and diapers" story comes from Walmart supermarkets in the United States in the 1990s. When store managers analysed sales data they noticed a puzzling phenomenon: under certain circumstances, two apparently unrelated products, beer and diapers, frequently appeared in the same shopping basket. This unusual sales pattern caught the managers' attention, and follow-up investigation found that it occurred among young fathers.

In American families with a baby, the mother usually stays home to look after the child while the young father goes to the supermarket to buy diapers. While buying diapers, the father often picks up beer for himself, which is why these two seemingly unrelated products regularly end up in the same basket. If a young father could only find one of the two items in a store, he would very likely abandon the purchase and go to another shop until he could buy beer and diapers in one trip. Having discovered this, Walmart began placing beer and diapers in the same area of its stores so that young fathers could find both items together and finish shopping quickly; Walmart, in turn, got these customers to buy two items instead of one, earning healthy sales revenue. That is how the "beer and diapers" story came about.

Of course, the "beer and diapers" story needed technical support. In 1993 the American researcher Agrawal proposed association algorithms that analyse the sets of products in shopping baskets to find relationships between products and, from those relationships, infer customers' buying behaviour. Agrawal formulated the computation of product associations mathematically and algorithmically as the Apriori algorithm. Walmart began applying the Apriori algorithm to its POS data in the 1990s and succeeded, and thus the "beer and diapers" story was born.

6.2 Data analysis helps the Cincinnati Zoo improve customer satisfaction



The Cincinnati Zoo & Botanical Garden was founded in 1873 and is one of the world's best-known zoos, with an excellent reputation for species protection and preservation and for its high-survival-rate breeding programmes. It covers 71 acres and houses 500 animal species and more than 3,000 plant species. It is one of the most visited zoos in the country, has been named one of Zagat's ten best zoos, was rated the children's favourite zoo by Parents magazine, and receives more than 1.3 million visitors a year.

The zoo is a non-profit organization and receives the lowest public subsidy of any zoo in Ohio, indeed in the United States; excluding government subsidies, more than two thirds of its 26-million-dollar annual budget is self-funded. It therefore constantly needs to find ways to increase revenue, and the best way to do that is to serve staff and visitors better and raise attendance, creating a win for the zoo, its customers and taxpayers alike.

Thanks to the solution's strong collection and processing capability, connectivity, analytics and the resulting insight, after deployment the organization achieved benefits including:
• Understanding each customer's browsing, usage and spending patterns, and acting on them by time and location to improve the visitor experience while maximizing revenue.
• Segmenting visitors by spending and visiting behaviour and running marketing and promotional campaigns for each segment, significantly improving loyalty and customer retention.
• Identifying low-spending visitors and targeting them with strategic direct mail, while rewarding loyal customers through creative marketing and incentive programmes.
• A 360-degree view of customer behaviour to optimize marketing decisions, saving more than $40,000 in marketing costs in the first year after implementation while strengthening measurable results.
• Geographic analysis that revealed many promotions and discount programmes failing to deliver the expected results, allowing resources to be redeployed to higher-yield activities and saving the zoo more than $100,000 a year.
• Higher overall attendance through stronger marketing, with at least 50,000 additional visits in 2011.
• Insights that improved operations: for example, seeing an ice-cream sales surge just before closing time, the zoo decided to keep ice-cream stands open until closing, adding $2,000 a day in revenue during summer.
• Food and beverage sales up 30.7% and retail sales up 5.9% year on year.
• Senior management able to make better decisions without needing IT involvement or support.
• Analytics brought into the boardroom, with intuitive tools that help business staff understand the data.

6.3 Public-opinion analysis of the incident of police beating a middle-school student in Zhaotong, Yunnan

Cause:

On 20 May, a netizen posted on Weibo that Kong Dezheng, a second-year student at Ludian No. 2 Middle School in Zhaotong, Yunnan, said "You on the phone, come down here" to three police officers who had responded to a call at the school and were about to drive away. Two officers in the vehicle heard this, got out, chased the student down and beat him with punches and kicks.

On 26 May, the news office of the Ludian County public security bureau responded: the bureau had suspended the officer involved from duty and dismissed the two auxiliary officers who beat the student, and would take further action in accordance with law and regulations as the investigation proceeded. The bureau also said it would strengthen the education and management of its ranks to firmly prevent such incidents from happening again.

Development:



On 26 May the incident's public-opinion heat rose sharply. Media coverage focused on angles such as "the class teacher says the student often stirred up trouble and had poor grades", "the beaten student's classmates went to the police station to demand an explanation" and "the school asked students to delete the photos", and the exposure of the school's demand to delete the pictures threatened to widen the controversy.

On the evening of 26 May, Xinhuanet published the story "Police respond to 'Yunnan student beaten by two police officers': officer suspended, auxiliary officers dismissed". With a central state online outlet publishing the official outcome, and portals such as NetEase, Sina and Tencent reposting it, the official handling of the case reached a fairly wide audience.



Trend of public attention to the Zhaotong police-beating incident (sample size: 290 items)

Summary:

"Police beating a student, with photos to prove it: five days after the incident, the Ludian County police in Zhaotong ended up in the eye of the public-opinion storm. After the incident the local authorities responded actively and dealt with those involved on 26 May; this decisive assignment of responsibility was fairly effective in calming public sentiment and defused the crisis reasonably well.

Looking at how the story spread: the incident happened on 20 May, but heated discussion only appeared on the 25th. Four quiet days led the Ludian police to assume the matter was closed and that even those involved had perhaps forgotten it. Had the locally active netizen "Zhibo Yunnan" not posted about it on 25 May, and had the local traditional newspaper Life News not picked it up, the matter might indeed have ended there; but the development of public opinion allows no such hypotheticals. The warning, at the very least, is that negative information on Weibo and other self-media platforms must be monitored in real time: monitor ordinary grassroots users, and monitor locally verified, active netizens even more closely. In a sense, verified local netizens are a more powerful 'public-opinion engine': once negative news is posted or reposted by them, it spreads further and the resulting pressure of opinion is greater.

The school also played an extremely important role in this incident. Neither the beaten student's class teacher nor the school responded appropriately. Instructions from the school such as 'delete the photos' easily provoke resentment among netizens and students, and under that resentment only strengthen students' urge to spread the story. The class teacher's remarks that the student 'did poorly in school and liked to stir up trouble' were read as 'the student deserved the beating', and against the backdrop of teachers' poor overall public image such remarks reflect a lack of responsibility. The inappropriate actions of the school and the class teacher clearly made the incident harder to handle and public opinion harder to guide, which should not have happened." (Zhu Minggang, public-opinion analyst, People's Daily Online Public Opinion Monitoring Office)

VII. Big-Data Cloud Visualizations

End.

Please credit 36大数据 (36dsj.com) when republishing: 36大数据 | Big Data Is All Around You | Everyday Big-Data Analysis Cases and the Technology Behind Them


          Facebook News Feed change demotes sketchy links overshared by spammers   
 Technically, Facebook can’t suspend people’s accounts just for sharing 50-plus false, sensational or clickbaity news articles per day. It doesn’t want to trample anyone’s right to share. But there’s nothing stopping it from burying those links low in the News Feed so few people ever see them. Today Facebook announced an algorithm change that does just that. Read More

          Benefits of Organic Search Engine Optimization (object writer)   
SEO, or search engine optimization, is considered one of the most popular ways for companies and webmasters to rank their websites and attract substantial traffic. Despite the recent algorithm changes adopted by Google, organic SEO is still regarded as an effective strategy for any website to rank and gain traffic.
          Your Secret Weapon To Powerful Link Building for SEO   

Due to recent algorithm changes at Google, most website owners and agencies are petrified of the notion of manually building backlinks to their websites.  And rightly so!  For the most part, manually built backlinks are full of deceptive and spammy techniques that can get your […]

The post Your Secret Weapon To Powerful Link Building for SEO appeared first on .


          FDA Clears New Insulin-Dosing Software System    
Automated insulin adjustment is a rapidly growing "hot spot" for diabetes technology; algorithm assists in optimizing both basal and bolus insulin dosing.
News Alerts
          Global Gesture Recognition For Mobile Devices Market Share Analysis Report 2017   
Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current focuses

          Comment on pyFitsidi – Python FITS IDI writing by Hayden   
Nice work. I'm trying to use your pyfitsidi to convert visibilities created with numpy into a fitsidi format. I'm having some difficulty with the HDF5 format required by your algorithm. Should the HDF5 data be N (rows) x M (columns), where N is time, channels, baselines, polarisation, then data=(real, imaginary), and M is the number of visibilities?
          Cyber-Physical Attack-Resilient Wide-Area Monitoring, Protection, and Control for the Power Grid   
Cybersecurity and resiliency of wide-area monitoring, protection, and control (WAMPAC) applications is critically important to ensure secure, reliable, and economical operation of the bulk power system. WAMPAC relies heavily on the security of measurements and control commands transmitted over wide-area communication networks for real-time operational, protection, and control functions. The current “N–1” security criterion for grid operation is inadequate to address malicious cyber events; therefore, it is important to fundamentally redesign WAMPAC and to enhance energy management system applications to make them attack resilient. In this paper, we present three key contributions to enhance the cybersecurity and resiliency of WAMPAC. First, we describe an end-to-end attack-resilient cyber–physical security framework for WAMPAC applications encompassing the entire security life cycle including risk assessment, attack prevention, attack detection, attack mitigation, and attack resilience. Second, we describe a defense-in-depth architecture that incorporates attack resilience at both the infrastructure layer and the application layer by leveraging domain-specific security approaches at the WAMPAC application layer in addition to traditional cybersecurity measures at the information technology infrastructure layer. Third, we discuss several attack-resilient algorithms for WAMPAC that leverage measurement design and cyber–physical system model-based anomaly detection and mitigation along with illustrative case studies. We believe that the research issues and solutions identified in this paper will open up several avenues for research in this area. In particular, the proposed framework, architectural concepts, and attack-resilient algorithms would serve as essential building blocks to transform the “fault-resilient” grid of today into an “attack-resilient” grid of the future.
          Just because I like Top Gear doesn’t mean that I’m a Misogynist, or does it?    

When you read my blogs you would probably get the impression that I am a misogynist “Woman Hater” but you would be wrong, I am exactly the opposite! I can find a quality I like about even the most appalling Harpies. Over the years this has been my biggest downfall, and just because I like one quality doesn’t mean to say they don’t drive me mad with numerous other things I don’t like.

I think I’m a good host. I don’t ask a lot from a house guest or girlfriend, just two rules: don’t touch the remote controls, and when I’m watching a program, only talk during commercials; if you have something urgent to say, text me. Now is that too much to ask? Just to clarify, “You’re a shit boyfriend, I never want to see you again, and I’m going home” is not an urgent message; I could get the gist from the tuts, the door slamming and the tyre squeal, without missing any of the program.

In my defence, it’s imperative that you watch a motoring program (Top Gear) live, because if I had a quid for every time I’ve sat down to watch a film or TV program that I set my video to record, and just before the end “somebody” has changed the channel and Coronation Street, EastEnders or some other drivel comes on, I would be a millionaire. Even Sky Plus falls foul of the 8 o’clock weekday phenomenon, when two equally shit women’s programs are broadcast on different channels, so instead of Wayne Carini in an episode of Chasing Classic Cars you get a message saying “Program failed to record due to a Programming Conflict” (which roughly translated means: we did manage to record your girlfriend’s programs, but hey, have a great night!).

When a woman’s watching a program, it’s like turning a shark on its back (I watch the Discovery Channel a lot too); they go into an almost catatonic state, except for the odd “Bastard” muttered at the guy in the program who’s done something to upset one of the female characters, then they give a sideways glance in your direction which really means “You even think of doing that and I will be after you with the kitchen scissors!”

When you’re watching a program it’s a different thing. Women can multitask; for multitasking substitute “do several things that annoy you all at the same time”: they can read a magazine, hum a tune, and flick the pages over so fast that each page sounds like a whip crack. Incidentally, did you know that a whip cracks because the end is travelling faster than the speed of sound and creates a mini sonic boom? If you didn’t know that, the chances are it’s because you were watching the program "Little Known Facts" with your girlfriend.

They don’t speak too often but when they do, it’s at the precise moment that Jeremy Clarkson is sharing some very important information and you miss it, no point in saying “shush” as that just makes things worse, “Shush, why Shush, what’s happening, what’s he talking about that’s so important that I’ve got to shush, has Richard Hammond invented a cure for Cancer? By which time you’ve missed what he was saying anyway and you are even more in the dog house. Then just when you don’t want it, it’s time to go to bed, and you get the Library treatment, which was precisely what you wanted when you were watching TV

I have a pair of Bose “QuietComfort” noise-cancelling headphones which I always take on holiday. They work by use of analog circuits or digital signal processing. Adaptive algorithms are designed to analyze the waveform of the background aural or non-aural noise, then based on the specific algorithm generate a signal that will either phase shift or invert the polarity of the original signal. This inverted signal (in antiphase) is then amplified and a transducer creates a sound wave directly proportional to the amplitude of the original waveform, creating destructive interference. This effectively reduces the volume of the perceivable noise. Translated, this means they eliminate the sound of the plane’s engines, and you don’t have to turn the volume of the in-flight movie up to an unacceptable level.
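The phase-inversion idea is easy to demonstrate numerically; the sketch below simply flips the polarity of a synthetic 120 Hz "engine drone" and adds it back, whereas real headphones do this adaptively in hardware with reference microphones and tight latency budgets:

```python
# Toy illustration of destructive interference: the anti-noise signal is the
# noise waveform with inverted polarity, so the sum is (ideally) silence.
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
engine_noise = 0.8 * np.sin(2 * np.pi * 120 * t)   # steady 120 Hz drone
anti_noise = -engine_noise                          # inverted (antiphase) copy
residual = engine_noise + anti_noise                # what the ear would hear

print(np.max(np.abs(engine_noise)))  # ~0.8 before cancellation
print(np.max(np.abs(residual)))      # ~0.0 after cancellation
```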

As I said before, I really do like women; it would probably be less stressful for me if I didn’t and I became a monk, but that’s not the life for me, so in order to survive I believe that Mother Nature has taken pity on me and allowed my hearing to evolve. I seem to have developed noise-cancelling ears. Although my hearing is perfect, my ears work in exactly the opposite way to my noise-cancelling headphones. I can hear every note of a Formula One car’s engine as the gears change, the pitch alters and it screams down the track. Watching the opening sequence of Point Break, I can hear the next cartridge slide into the chamber as he cocks his pump-action shotgun in preparation for his next shot. I hear his bullet tear through the target and the hollow tinkling of a spent bullet casing ejected from Johnny Utah’s (Keanu Reeves) semi-automatic Sig Sauer P226 9 mm as it hits the ground, bouncing twice before it settles and becomes silent.




No matter how softly they talk, I never miss a word spoken by Jeremy, Richard, James, Tiff and Jason. If she’s sat opposite me I can see my girlfriend’s lips moving but I can’t hear a word she’s saying until the commercial break or 'Jessica' by the 'Allman Brothers' starts to play.

Noise cancelling ears do have a couple of disadvantages, and I must remember to write to the TV broadcasters and ask if they could put subtitles on the screen when Vicky Butler-Henderson is on Fifth Gear and the same when Rachel Riley is on The Gadget Show so that I know what they’re talking about.

          The Ultimate Data Infrastructure Architect Bundle for $36   
From MongoDB to Apache Flume, This Comprehensive Bundle Will Have You Managing Data Like a Pro In No Time
Expires June 01, 2022 23:59 PST
Buy now and get 94% off

Learning ElasticSearch 5.0


KEY FEATURES

Learn how to use ElasticSearch in combination with the rest of the Elastic Stack to ship, parse, store, and analyze logs! You'll start by getting an understanding of what ElasticSearch is, what it's used for, and why it's important before being introduced to the new features of Elastic Search 5.0.

  • Access 35 lectures & 3 hours of content 24/7
  • Go through each of the fundamental concepts of ElasticSearch such as queries, indices, & aggregation
  • Add more power to your searches using filters, ranges, & more
  • See how ElasticSearch can be used w/ other components like LogStash, Kibana, & Beats
  • Build, test, & run your first LogStash pipeline to analyze Apache web logs

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Ethan Anthony is a San Francisco based Data Scientist who specializes in distributed data centric technologies. He is also the Founder of XResults, where the vision is to harness the power of data to innovate and deliver intuitive customer facing solutions, largely to non-technical professionals. Ethan has over 10 combined years of experience in cloud based technologies such as Amazon webservices and OpenStack, as well as the data centric technologies of Hadoop, Mahout, Spark and ElasticSearch. He began using ElasticSearch in 2011 and has since delivered solutions based on the Elastic Stack to a broad range of clientele. Ethan has also consulted worldwide, speaks fluent Mandarin Chinese and is insanely curious about human cognition, as related to cognitive dissonance.

Apache Spark 2 for Beginners


KEY FEATURES

Apache Spark is one of the most widely-used large-scale data processing engines and runs at extremely high speeds. It's a framework that has tools that are equally useful for app developers and data scientists. This book starts with the fundamentals of Spark 2 and covers the core data processing framework and API, installation, and application development setup.

  • Access 45 lectures & 5.5 hours of content 24/7
  • Learn the Spark programming model through real-world examples
  • Explore Spark SQL programming w/ DataFrames
  • Cover the charting & plotting features of Python in conjunction w/ Spark data processing
  • Discuss Spark's stream processing, machine learning, & graph processing libraries
  • Develop a real-world Spark application

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Rajanarayanan Thottuvaikkatumana, Raj, is a seasoned technologist with more than 23 years of software development experience at various multinational companies. He has lived and worked in India, Singapore, and the USA, and is presently based out of the UK. His experience includes architecting, designing, and developing software applications. He has worked on various technologies including major databases, application development platforms, web technologies, and big data technologies. Since 2000, he has been working mainly in Java related technologies, and does heavy-duty server-side programming in Java and Scala. He has worked on very highly concurrent, highly distributed, and high transaction volume systems. Currently he is building a next generation Hadoop YARN-based data processing platform and an application suite built with Spark using Scala.

Raj holds one master's degree in Mathematics, one master's degree in Computer Information Systems and has many certifications in ITIL and cloud computing to his credit. Raj is the author of Cassandra Design Patterns - Second Edition, published by Packt.

When not working on the assignments his day job demands, Raj is an avid listener to classical music and watches a lot of tennis.

Designing AWS Environments


KEY FEATURES

Amazon Web Services (AWS) provides trusted, cloud-based solutions to help businesses meet all of their needs. Running solutions in the AWS Cloud can help you (or your company) get applications up and running faster while providing the security needed to meet your compliance requirements. This course leaves no stone unturned in getting you up to speed with administering AWS.

  • Access 19 lectures & 2 hours of content 24/7
  • Familiarize yourself w/ the key capabilities to architect & host apps, websites, & services on AWS
  • Explore the available options for virtual instances & demonstrate launching & connecting to them
  • Design & deploy networking & hosting solutions for large deployments
  • Focus on security & important elements of scalability & high availability

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Wayde Gilchrist started moving customers of his IT consulting business into the cloud and away from traditional hosting environments in 2010. In addition to consulting, he delivers AWS training for Fortune 500 companies, government agencies, and international consulting firms. When he is not out visiting customers, he is delivering training virtually from his home in Florida.

Learning MongoDB


KEY FEATURES

Businesses today have access to more data than ever before, and a key challenge is ensuring that data can be easily accessed and used efficiently. MongoDB makes it possible to store and process large sets of data in ways that drive up business value. Learning MongoDB will give you the flexibility of unstructured storage, combined with robust querying and post processing functionality, making you an asset to enterprise Big Data needs.

  • Access 64 lectures & 40 hours of content 24/7
  • Master data management, queries, post processing, & essential enterprise redundancy requirements
  • Explore advanced data analysis using both MapReduce & the MongoDB aggregation framework
  • Delve into SSL security & programmatic access using various languages
  • Learn about MongoDB's built-in redundancy & scale features, replica sets, & sharding

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Daniel Watrous is a 15-year veteran of designing web-enabled software. His focus on data store technologies spans relational databases, caching systems, and contemporary NoSQL stores. For the last six years, he has designed and deployed enterprise-scale MongoDB solutions in semiconductor manufacturing and information technology companies. He holds a degree in electrical engineering from the University of Utah, focusing on semiconductor physics and optoelectronics. He also completed an MBA from the Northwest Nazarene University. In his current position as senior cloud architect with Hewlett Packard, he focuses on highly scalable cloud-native software systems.

Learning Hadoop 2


KEY FEATURES

Hadoop emerged in response to the proliferation of masses and masses of data collected by organizations, offering a strong solution to store, process, and analyze what has commonly become known as Big Data. It comprises a comprehensive stack of components designed to enable these tasks on a distributed scale, across multiple servers and thousands of machines. In this course, you'll learn Hadoop 2, introducing yourself to the powerful system synonymous with Big Data.

  • Access 19 lectures & 1.5 hours of content 24/7
  • Get an overview of the Hadoop component ecosystem, including HDFS, Sqoop, Flume, YARN, MapReduce, Pig, & Hive
  • Install & configure a Hadoop environment
  • Explore Hue, the graphical user interface of Hadoop
  • Discover HDFS to import & export data, both manually & automatically
  • Run computations using MapReduce & get to grips working w/ Hadoop's scripting language, Pig
  • Siphon data from HDFS into Hive & demonstrate how it can be used to structure & query data sets

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Randal Scott King is the Managing Partner of Brilliant Data, a consulting firm specialized in data analytics. In his 16 years of consulting, Scott has amassed an impressive list of clientele from mid-market leaders to Fortune 500 household names. Scott lives just outside Atlanta, GA, with his children.

ElasticSearch 5.x Cookbook eBook


KEY FEATURES

ElasticSearch is a Lucene-based distributed search server that allows users to index and search unstructured content with petabytes of data. Through this ebook, you'll be guided through comprehensive recipes covering what's new in ElasticSearch 5.x as you create complex queries and analytics. By the end, you'll have an in-depth knowledge of how to implement the ElasticSearch architecture and be able to manage data efficiently and effectively.

  • Access 696 pages of content 24/7
  • Perform index mapping, aggregation, & scripting
  • Explore the modules of Cluster & Node monitoring
  • Understand how to install Kibana to monitor a cluster & extend Kibana for plugins
  • Integrate your Java, Scala, Python, & Big Data apps w/ ElasticSearch

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Alberto Paro is an engineer, project manager, and software developer. He currently works as freelance trainer/consultant on big data technologies and NoSQL solutions. He loves to study emerging solutions and applications mainly related to big data processing, NoSQL, natural language processing, and neural networks. He began programming in BASIC on a Sinclair Spectrum when he was eight years old, and to date, has collected a lot of experience using different operating systems, applications, and programming languages.

In 2000, he graduated in computer science engineering from Politecnico di Milano with a thesis on designing multiuser and multidevice web applications. He assisted professors at the university for about a year. He then came in contact with The Net Planet Company and loved their innovative ideas; he started working on knowledge management solutions and advanced data mining products. In summer 2014, his company was acquired by a big data technologies company, where he worked until the end of 2015 mainly using Scala and Python on state-of-the-art big data software (Spark, Akka, Cassandra, and YARN). In 2013, he started freelancing as a consultant for big data, machine learning, Elasticsearch and other NoSQL products. He has created or helped to develop big data solutions for business intelligence, financial, and banking companies all over the world. A lot of his time is spent teaching how to efficiently use big data solutions (mainly Apache Spark), NoSql datastores (Elasticsearch, HBase, and Accumulo) and related technologies (Scala, Akka, and Playframework). He is often called to present at big data or Scala events. He is an evangelist on Scala and Scala.js (the transcompiler from Scala to JavaScript).

In his spare time, when he is not playing with his children, he likes to work on open source projects. When he was in high school, he started contributing to projects related to the GNOME environment (gtkmm). One of his preferred programming languages is Python, and he wrote one of the first NoSQL backends on Django for MongoDB (Django-MongoDBengine). In 2010, he began using Elasticsearch to provide search capabilities to some Django e-commerce sites and developed PyES (a Pythonic client for Elasticsearch), as well as the initial part of the Elasticsearch MongoDB river. He is the author of Elasticsearch Cookbook as well as a technical reviewer of Elasticsearch Server-Second Edition, Learning Scala Web Development, and the video course, Building a Search Server with Elasticsearch, all of which are published by Packt Publishing.

Fast Data Processing with Spark 2 eBook


KEY FEATURES

Compared to Hadoop, Spark is a significantly simpler way to process Big Data at speed. It is increasing in popularity with data analysts and engineers everywhere, and in this course you'll learn how to use Spark with minimum fuss. Starting with the fundamentals, this ebook will help you take your Big Data analytical skills to the next level.

  • Access 274 pages of content 24/7
  • Get to grips w/ some simple APIs before investigating machine learning & graph processing
  • Learn how to use the Spark shell
  • Load data & build & run your own Spark applications
  • Discover how to manipulate RDD
  • Understand useful machine learning algorithms w/ the help of Spark MLlib & R

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Krishna Sankar is a Senior Specialist—AI Data Scientist with Volvo Cars focusing on Autonomous Vehicles. His earlier stints include Chief Data Scientist at http://cadenttech.tv/, Principal Architect/Data Scientist at Tata America Intl. Corp., Director of Data Science at a bioinformatics startup, and as a Distinguished Engineer at Cisco. He has been speaking at various conferences including ML tutorials at Strata SJC and London 2016, Spark Summit, Strata-Spark Camp, OSCON, PyCon, and PyData, writes about Robots Rules of Order, Big Data Analytics—Best of the Worst, predicting NFL, Spark, Data Science, Machine Learning, Social Media Analysis as well as has been a guest lecturer at the Naval Postgraduate School. His occasional blogs can be found at https://doubleclix.wordpress.com/. His other passion is flying drones (working towards Drone Pilot License (FAA UAS Pilot) and Lego Robotics—you will find him at the St.Louis FLL World Competition as Robots Design Judge.

MongoDB Cookbook: Second Edition eBook


KEY FEATURES

MongoDB is a high-performance, feature-rich, NoSQL database that forms the backbone of the systems that power many organizations. Packed with easy-to-use features that have become essential for a variety of software professionals, MongoDB is a vital technology to learn for any aspiring data scientist or systems engineer. This cookbook contains many solutions to the everyday challenges of MongoDB, as well as guidance on effective techniques to extend your skills and capabilities.

  • Access 274 pages of content 24/7
  • Initialize the server in three different modes w/ various configurations
  • Get introduced to programming language drivers in Java & Python
  • Learn advanced query operations, monitoring, & backup using MMS
  • Find recipes on cloud deployment, including how to work w/ Docker containers alongside MongoDB

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Amol Nayak is a MongoDB certified developer and has been working as a developer for over 8 years. He is currently employed with a leading financial data provider, working on cutting-edge technologies. He has used MongoDB as a database for various systems at his current and previous workplaces to support enormous data volumes. He is an open source enthusiast and supports it by contributing to open source frameworks and promoting them. He has made contributions to the Spring Integration project, and his contributions are the adapters for JPA, XQuery, MongoDB, Push notifications to mobile devices, and Amazon Web Services (AWS). He has also made some contributions to the Spring Data MongoDB project. Apart from technology, he is passionate about motor sports and is a race official at Buddh International Circuit, India, for various motor sports events. Earlier, he was the author of Instant MongoDB, Packt Publishing.

Cyrus Dasadia has enjoyed tinkering with open source projects since 1996. He has been working as a Linux system administrator and part-time programmer for over a decade. He works at InMobi, where he loves designing tools and platforms. His love for MongoDB started in 2013, when he was amazed by its ease of use and stability. Since then, almost all of his projects have been written with MongoDB as the primary backend. Cyrus is also the creator of an open source alert management system called CitoEngine. He likes spending his spare time trying to reverse engineer software, playing computer games, or increasing his silliness quotient by watching reruns of Monty Python.

Learning Apache Kafka: Second Edition eBook


KEY FEATURES

Apache Kafka is simple to describe at a high level but has an immense amount of technical detail when you dig deeper. This step-by-step, practical guide will help you take advantage of the power of Kafka to handle hundreds of megabytes of messages per second from multiple clients.

  • Access 120 pages of content 24/7
  • Set up Kafka clusters
  • Understand basic building blocks like producers, brokers, & consumers
  • Explore additional settings & configuration changes to achieve more complex goals
  • Learn how Kafka is designed internally & what configurations make it most effective
  • Discover how Kafka works w/ other tools like Hadoop, Storm, & more

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Nishant Garg has over 14 years of software architecture and development experience in various technologies, such as Java Enterprise Edition, SOA, Spring, Hadoop, Hive, Flume, Sqoop, Oozie, Spark, Shark, YARN, Impala, Kafka, Storm, Solr/Lucene, NoSQL databases (such as HBase, Cassandra, and MongoDB), and MPP databases (such as GreenPlum).

He received his MS in software systems from the Birla Institute of Technology and Science, Pilani, India, and is currently working as a technical architect for the Big Data R&D Group with Impetus Infotech Pvt. Ltd. Previously, Nishant has enjoyed working with some of the most recognizable names in IT services and financial industries, employing full software life cycle methodologies such as Agile and SCRUM.

Nishant has also undertaken many speaking engagements on big data technologies and is the author of HBase Essentials, Packt Publishing.

Apache Flume: Distributed Log Collection for Hadoop: Second Edition eBook


KEY FEATURES

Apache Flume is a distributed, reliable, and available service used to efficiently collect, aggregate, and move large amounts of log data. It's used to stream logs from application servers to HDFS for ad hoc analysis. This ebook starts with an architectural overview of Flume and its logical components, then pulls everything together into a real-world, end-to-end use case encompassing simple and advanced features.

  • Access 178 pages of content 24/7
  • Explore channels, sinks, & sink processors
  • Learn about sources & channels
  • Construct a series of Flume agents to dynamically transport your stream data & logs from your systems into Hadoop

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Steve Hoffman has 32 years of experience in software development, ranging from embedded software development to the design and implementation of large-scale, service-oriented, object-oriented systems. For the last 5 years, he has focused on infrastructure as code, including automated Hadoop and HBase implementations and data ingestion using Apache Flume. Steve holds a BS in computer engineering from the University of Illinois at Urbana-Champaign and an MS in computer science from DePaul University. He is currently a senior principal engineer at Orbitz Worldwide (http://orbitz.com/).

          Podcast448: Artificial Reality, Free Online Learning Channels & STEAM Studio   
Welcome to the November 5, 2016, podcast episode of "Moving at the Speed of Creativity" with Wesley Fryer, which explores topics relating to artificial reality, free online learning channels and a STEAM Studio reflection. Wes discusses Steven Levy's recent article for Backchannel, "The Google Assistant Needs You," and our current "transition era" as artificial intelligence (AI) technologies mature and become normalized in our lives. He also discusses Elon Musk's recent announcement about camouflaged solar roof panels, and a recent video interview with Musk in which he discussed his reasons for starting the OpenAI (@openai) initiative. Musk's concerns about mature AI technologies are not limited to a RoboCop-style malicious AI future, but also include the danger of AI technologies being tightly controlled by a small number of entities. To guard against the dangers latent in that future scenario, Musk wants more groups, individuals and nations to have access to powerful AI algorithms and capabilities through the open source movement. In part two of the podcast, Wes discussed some of his favorite podcast channels and websites which provide fantastic opportunities for free, online learning. This begins with the K12 Online Conference (@k12online) which launched its 2016-17 mini-conference series on October 21st with a 3 part keynote and live panel discussion on YouTube Live with Julie Lindsay (@julielindsay). This first strand of the conference this year focuses on global collaboration. Strand two will focus on "Learning Spaces" and starts November 14th with a keynote by David Jakes (@djakes). Favorite tech podcasts mentioned by Wes in this episode include Clockwise by RelayFM (@clockwisepod), The Committed (@CommittedShow), and Note to Self (@notetoself). Wes also mentioned his weekly podcast and live webshow (on most Wednesday nights) The EdTech Situation Room (@edtechSR). The third part of this podcast features a recorded reflection by elementary art teacher Megan Thompson (@seeingnewshapes) and Wes discussing the "STEAM Studio" after-school enrichment class they co-taught this past semester. They discuss things that went well, things they would change, and success stories from this STEAM (Science, Technology, Engineering, Art and Math) collaboration. If you listen to and enjoy this episode, please reach out to Wes with a comment or via a Twitter reply to @wfryer. Thanks for listening to "Moving at the Speed of Creativity!"
          Podcast427: Battlecode Coding Competition at MIT with Jonah Casebeer   
This podcast features an interview with Jonah Casebeer, a 17 year old family friend and rising senior at Thomas Jefferson High School for Science and Technology (TJ) in Alexandria, Virginia. Jonah and three of his classmates entered the annual "Battlecode" Artificial Intelligence Programming Competition at MIT along with about 400 other teams in the fall of 2014. Their team was selected among 16 finalists to fly to MIT for the live, final competition, which was livestreamed on Twitch.TV. They were the only team comprised of high school students in the 2015 competition. All the other teams (ranging from 1 to 4 members) included undergraduate and graduate students, along with adults out of college. According to the Battlecode.org website, "The 6.370 Battlecode programming competition (also 6.147) is a unique challenge that combines battle strategy, software engineering and artificial intelligence. In short, the objective is to write the best player program for the computer game Battlecode. Battlecode, developed for 6.370, is a real-time strategy game. Two teams of robots roam the screen managing resources and attacking each other with different kinds of weapons. However, in Battlecode each robot functions autonomously; under the hood it runs a Java virtual machine loaded up with its team's player program. Robots in the game communicate by radio and must work together to accomplish their goals. Teams of one to four students enter 6.370 and are given the Battlecode software and a specification of the game rules in early January. Each team develops a player program, which will be run by each of their robots during Battlecode matches. Contestants often use artificial intelligence, pathfinding, distributed algorithms, and/or network communications to write their player. At the final tournaments, the autonomous players are pitted against each other in a dramatic head-to-head tournament. The final rounds of the MIT tournament are played out in front of a live audience, with the top teams receiving cash prizes. The total prize pool is over $50,000." In this podcast interview, Jonah tells about his experiences in the 2015 Battlecode competition as well as about the preparatory courses he took at TJ which prepared him and his teammates for this extremely fun and challenging activity. Check out the podcast shownotes for links to referenced sites, videos and resources. In the 3.5 hour video of the 2015 Battlecode Tournament, Jonah's team "Puzzle" takes the stage at 26:05.
          An implicit multishift QR-algorithm for Hermitian plus low rank matrices   
Vandebril, Raf and Del Corso, Gianna M. (2009) An implicit multishift QR-algorithm for Hermitian plus low rank matrices. Technical Report del Dipartimento di Informatica. Università di Pisa, Pisa, IT.
          Robust Portfolio Asset Allocation: models and algorithmic approaches   
Recchia, Raffaella and Scutellà, Maria Grazia (2009) Robust Portfolio Asset Allocation: models and algorithmic approaches. Technical Report del Dipartimento di Informatica. Università di Pisa, Pisa, IT.
          Algorithm: number sequences, the number castle and the number spiral for CP/CE1   
From the number line to the number castle.



To see more, click here . . .
          Facebook 'Hate Speech' Rules Protect Races And Sexes -- So, Yes, White Men Are Going To Be 'Protected'   

ProPublica recently obtained some internal documents related to Facebook's hate speech moderation. Hate speech -- as applied to Facebook -- isn't a statutory term. Much of what Facebook removes is still protected speech. But Facebook is a private company and is able to remove whatever it wants without acting as a censorial arm of the government.

That being said, there's a large number of government officials around the planet who feel Facebook should be doing more to remove hate speech -- all of it based on very subjective views as to what that term should encompass.

It's impossible to make everyone happy. So, Facebook has decided to apply a set of rules to its moderation that appear to lead to completely wrong conclusions about what posts should be removed. A single image included in the ProPublica article went viral. But the explanation behind it did not. The rules Facebook uses for moderation lead directly to increased protections for a historically well-protected group.

[If you can't read/see the image, the slide says "Which of the below subsets do we protect?" with the choices being "female drivers," "black children," and "white men." The answer -- to the great internet consternation of many -- is: "white men."]

Given Facebook's general inability to moderate other forms of "offensiveness" (mainly female breasts) without screwing it all up, the answer to this quiz question seems like more Facebook moderation ineptitude. But there's more to it than this one question. The rest of the quiz is published at ProPublica and it shows the "white men" answer is, at least, internally consistent with Facebook's self-imposed rules.

Facebook must define "hate speech" before it can attempt to moderate it, since there are no statutes (at least in the United States) that strictly apply to this content. Here's how Facebook defines it:

Protected category + attack = hate speech

These are the protected categories:

  • Sex
  • Race
  • Religious affiliation
  • Ethnicity
  • National origin
  • Sexual orientation
  • Gender identity
  • Serious disability/disease

Here's what's not considered "protected" by Facebook:

  • Social class
  • Occupation
  • Continental origin
  • Political ideology
  • Appearance
  • Religions
  • Age
  • Countries

"White men" have both race and sex going for them. Any "attack" on white men can be deleted by Facebook. "Black children" only have race. Age is not a protected category. An attack on black men would be deleted but black children are, apparently, fair game. The same goes for white children. In the category "female drivers," only the "female" part is considered protected.

The quiz goes on to explain other facets of hate speech moderation. Calling for acts of physical violence against protected categories is hate speech. If any component of the group targeted is "unprotected," the call for violence will be allowed to stay online. The rules also cover "degrading generalization," "dismissive" speech, cursing, and slurs. If any of these target a protected class (or quasi-protected class, i.e., migrants whose nationality may be in flux), moderators can take down the posts. The QPCs have only slightly more protection than entirely unprotected classes, so they can receive more posted abuse before hate speech protections kick in.

These rules lead to all sorts of things that seem unfair, if not completely wrong:

In the wake of a terrorist attack in London earlier this month, a U.S. congressman wrote a Facebook post in which he called for the slaughter of “radicalized” Muslims. “Hunt them, identify them, and kill them,” declared U.S. Rep. Clay Higgins, a Louisiana Republican. “Kill them all. For the sake of all that is good and righteous. Kill them all.”

Higgins’ plea for violent revenge went untouched by Facebook workers who scour the social network deleting offensive speech.

But a May posting on Facebook by Boston poet and Black Lives Matter activist Didi Delgado drew a different response.

“All white people are racist. Start from this reference point, or you’ve already failed,” Delgado wrote. The post was removed and her Facebook account was disabled for seven days.

Religions are unprotected. Races are. That's why this happens. At best, it would seem like both should be taken down, or the less violent of the two remain intact. But that's not the way the rules work. People who criticize Facebook's moderation efforts are asking for something worse than is already in place. To right the perceived wrongs of everything listed above, the rules would have to be replaced by subjectivity -- setting up every moderator, all over the world, with their own micro-fiefdom to run as they see fit. If people don't like it now, just wait until thousands of additional biases are injected into the mix.

That's the other issue: Facebook is a worldwide social platform. Protecting white men may seem pointless here in the US, but the United States isn't the only country with access to Facebook.

“The policies do not always lead to perfect outcomes,” said Monika Bickert, head of global policy management at Facebook. “That is the reality of having policies that apply to a global community where people around the world are going to have very different ideas about what is OK to share.”

This is the unfortunate byproduct of a job that's impossible to do to everyone's satisfaction. Blanket rules may seem dumb on a case-by-case basis, but the alternative would be even worse. If a company is going to proactively protect sexes and races, it's inevitably going to have to stand up for white men, even if the general feeling is white men are in no need of extra protection.




          Maybe Trump’s behavior is explained by a simple Machine Learning (A.I.) algorithm.    
Burton offers an intriguing explanation for our inability to predict Donald Trump’s next move, suggesting:
...that Trump doesn’t operate within conventional human cognitive constraints, but rather is a new life form, a rudimentary artificial intelligence-based learning machine. When we strip away all moral, ethical and ideological considerations from his decisions and see them strictly in the light of machine learning, his behavior makes perfect sense.
Consider how deep learning occurs in neural networks such as Google’s Deep Mind or IBM’s Deep Blue and Watson. In the beginning, each network analyzes a number of previously recorded games, and then, through trial and error, the network tests out various strategies. Connections for winning moves are enhanced; losing connections are pruned away. The network has no idea what it is doing or why one play is better than another. It isn’t saddled with any confounding principles such as what constitutes socially acceptable or unacceptable behavior or which decisions might result in negative downstream consequences.
Now up the stakes…ask a neural network to figure out the optimal strategy…for the United States presidency. In this hypothetical, let’s input and analyze all available written and spoken word — from mainstream media commentary to the most obscure one-off crank pamphlets. After running simulations of various hypotheses, the network will serve up its suggestions. It might show Trump which areas of the country are most likely to respond to personal appearances, which rallies and town hall meetings will generate the greatest photo op and TV coverage, and which publicly manifest personality traits will garner the most votes. If it determines that outrage is the only road to the presidency, it will tell Trump when and where his opinions must be scandalous and offensively polarizing.
Following the successful election, it chews on new data. When it recognizes that Obamacare won’t be easily repealed or replaced, that token intervention in Syria can’t be avoided, that NATO is a necessity and that pulling out of the Paris climate accord may create worldwide resentment, it has no qualms about changing policies and priorities. From an A.I. vantage point, the absence of a coherent agenda is entirely understandable. For example, a consistent long-term foreign policy requires a steadfastness contrary to a learning machine’s constant upgrading in response to new data.
As there are no lines of reasoning driving the network’s actions, it is not possible to reverse engineer the network to reveal the “why” of any decision. Asking why a network chose a particular action is like asking why Amazon might recommend James Ellroy and Elmore Leonard novels to someone who has just purchased “Crime and Punishment.” There is no underlying understanding of the nature of the books; the association is strictly a matter of analyzing Amazon’s click and purchase data. Without explanatory reasoning driving decision making, counterarguments become irrelevant.
Once we accept that Donald Trump represents a black-box, first-generation artificial-intelligence president driven solely by self-selected data and widely fluctuating criteria of success, we can get down to the really hard question confronting our collective future: Is there a way to affect changes in a machine devoid of the common features that bind humanity?

          With Whole Foods acquisition, Amazon could 'change the grocery shopping experience forever'   

Earlier this month, Amazon announced plans to acquire upscale grocery chain Whole Foods Market, sending shockwaves through the retail industry and signaling a reboot of Amazon's effort to sell groceries online.

The move has also raised questions about the future of brick-and-mortar retail and the role machine-learning and artificial intelligence will play in the future of the grocery business.

"Amazon epitomizes a tech company's relentless pursuit of disrupting traditional businesses and comes with a strong historical track record."
Jim Kyung-Soo Liew
Carey Business School assistant professor

Jim Kyung-Soo Liew, an assistant professor at the Johns Hopkins Carey Business School who specializes in entrepreneurial finance and hedge fund strategies, said the deal could be a game-changer for the grocery industry.

"Amazon epitomizes a tech company's relentless pursuit of disrupting traditional businesses and comes with a strong historical track record," Liew said. "Amazon has made the online purchasing experience second to none, with low prices for goods and fast delivery, netting a great shopping experience for consumers."

Liew said he expects Amazon to apply the same principles and strategies to Whole Foods, which he said will result in a new kind of grocery experience: "The explosive combination of machine learning and data will change the grocery shopping experience forever," he said.

"Shoppers' behavior will be a key ingredient that will power Amazon's machine-learning algorithms, which will calibrate new recommendation systems using consumers' grocery shopping demands," he added. "If Amazon allows shoppers to make purchases on mobile phones within the store, this will fuel the algorithms even more."

Liew said the market's reaction suggests that investors believe Amazon and Whole Foods will integrate successfully—the price of Amazon's shares has risen slightly since the announcement. He also noted that an increase in the price of Whole Foods shares could signal a second bidder.

"Typically, the acquirer company's price performance is flat to slightly down on a merger announcement date, so Amazon being up is a very good sign," he said. "That Whole Foods prices have drifted higher tells us that lurking behind the scenes could be another bidder or that Amazon may need to raise their original bid to get this deal done. Maybe Walmart will counter after they fully digest how powerful the Amazon/Whole Foods combination will become."

Liew said the move could serve as a serious disruptor to the grocery industry.

"Other retail grocery stores will now have to move, so expect some serious consolidation in this industry," he said. "Look for cash-rich technology companies to make bids for traditional brick-and-mortar stores, then apply their deep set of machine learning and AI algorithms on this newly acquired data set. One day we will look back at this 'retail shopping singularity' event—the deal between Amazon and Whole Foods."


          A Common-Sense Guide to Data Structures and Algorithms   

If you last saw algorithms in a university course or at a job interview, you're missing out on what they can do for your code. Learn different sorting and searching techniques, and when to use each. Find out how to use recursion effectively. Discover structures for specialized applications, such as trees and graphs. Use Big O notation to decide which algorithms are best for your production environment. Beginners will learn how to use these techniques from the start, and experienced developers will rediscover approaches they may have forgotten.


          Aviso Brings Artificial Intelligence to Sales Forecasting   

Aviso extended the reach of its cloud-based predictive sales forecasting capabilities with the release of Aviso Sales Vision (ASV) platform, which enlists artificial intelligence and machine learning algorithms to give sales people the data they need to drive sales performance. The Menlo Park, Calif.

          Denon's flagship DVD/SACD universal player, the DVD-A11   



The DVD-A11 is Denon's fourth universal disc player. For a line of universal players that was already excellent, the A11's value lies not merely in extending the range but in being a genuinely star-class machine that can satisfy the demands of both music and video enthusiasts. Denon offers two other models whose feature sets are very close to the A11's: the DVD-A1, which supports both the THX Ultra standard and DVD-Audio playback, and the DVD-2900, which plays DVD-Video, DVD-Audio and SACD discs. In fact, the internal construction of these three models is quite similar; nevertheless, several key technologies found in the DVD-A11 lift its performance to another level.
The DVD-A11's controls and connections reveal where it excels. It is fitted with Denon's proprietary second-generation "Denon Link" connector, which carries most of the digital audio signals. The first-generation connector debuted on the previous flagship, the DVD-A1, but could not output copy-protected DVD-Audio streams. The second generation can output DVD-Audio streams, but still cannot output the DSD stream used by SACD. Reportedly, if licensing can be obtained from the rights holders, Denon's future third-generation link will add DSD output, and perhaps also genlocked, clock-synchronized video transmission.
Although some aspects of Sony's and Pioneer's technology are not yet fully mature, both companies have chosen FireWire (i.Link) based digital audio interfaces that support DVD-Audio and SACD. With the DVD-A11, Denon avoids betting on a single connection method and provides both the Denon Link connector and a FireWire connector. Denon's FireWire port, however, is not entirely identical to Pioneer's: it can be connected to the corresponding Pioneer and Sony interfaces, but lacks some of their exclusive extras (such as Pioneer's "Clock Link").
On the video side, the DVD-A11 provides a DVI digital video output. The Silicon Image video processing Denon used in its earlier high-end DVD players has not been carried over; instead, a Faroudja FL2310 DCDi (Directional Correlational Deinterlacing) processor is used, with full progressive-scan output for both PAL and NTSC. Silicon Image processing is generally considered especially well suited to 3:2 pulldown, reassembling the picture fields when film is converted to 60 Hz NTSC video so as to output a smooth image. Under 50 Hz PAL, however, Faroudja DCDi processing shows fewer artifacts and better preserves the original film image.
Video processing in DVD players, like video processing in the computer world, is constantly being upgraded. The high-spec video converter in the DVD-A11 contains six 12-bit/216 MHz DACs and provides a rich set of video adjustments, including scan-conversion output at multiple resolutions (up to 1080i and 720p under NTSC; 576p under PAL), chroma delay adjustment, and black-level/white-level adjustment.

The audio section is no weaker than the video section: nearly half of the interior is taken up by elaborate digital and analog power-supply components. The digital audio hardware includes Burr-Brown 24-bit/192 kHz hybrid PCM/DSD DACs custom-made for Denon, and filter circuitry using Denon's exclusive AL24 curve-smoothing algorithm. For DVD-Video discs the player has built-in Dolby Digital and DTS surround decoding, and for DVD-Audio, DVD-Video and SACD playback it offers surprisingly comprehensive bass management. For SACD, however, bass management requires converting DSD to PCM first, so some loss of data in the conversion is unavoidable.
The rear panel is generously and sensibly laid out: two SCART sockets, two BNC and two progressive-scan outputs, one Denon Link port and a pair of FireWire ports, plus two S-video and two composite video outputs, along with standard optical and coaxial digital audio outputs. The back panel also carries RS-232 and system-control interfaces compatible with the AMX and Crestron control standards.

Sound quality

The DVD-A11 can switch off its video circuitry, which is particularly useful when playing audio discs. With DVD-Audio discs the sound is smooth, clean and very tangible. Even with carefully remastered older recordings such as Fleetwood Mac's Rumours, when partnered with Denon's AVC-A1SR surround amplifier (which also carries a second-generation Denon Link port), the enhanced multichannel sound delivered over Denon Link is clearly better than that from the analog outputs: finer and more fully fleshed out, with instruments and voices that are bigger in scale, more involving and full of passion.
With recent all-digital DVD-Audio recordings, such as Simon Rattle conducting Mahler's Tenth (EMI) and Shostakovich's Jazz Suites (Naxos), the difference between the Denon Link output and the analog output is just as obvious, though compared with the older recordings both refinement and accuracy improve. It is hard to say, of course, whether this difference is because the AVC-A1SR's built-in D/A converters sound better than the DVD-A11's, but the much shorter signal path of the Denon Link connection compared with other connections is reason enough for the improvement.
When auditioning SACDs on the DVD-A11, none of the partnering amplifiers offered a FireWire input (Denon is currently working toward this), so a direct digital-versus-analog comparison was not possible. Even so, the DVD-A11's SACD playback could be compared with another maker's dedicated SACD player, and its CD playback with a dedicated high-end CD player. The results were a split decision. On SACD, the two machines showed quite different personalities: the Denon DVD-A11 was warmer and fuller in tonal balance, but clearly less resolved in detail. Compared with the DVD-A11 playing DVD-Audio, or the dedicated machine playing SACD, the DVD-A11's SACD playback had a broader, weightier sense of rhythm, and in Grainger's "The Warriors" the faint breathing of the players across the soundstage was more audible. With CDs, the sound via Denon Link was particularly good compared with the dedicated CD player: stable and smooth, more refined and emotionally engaging in timbre, with body comparable to the dedicated player; the "digital edge" was clearly reduced and it sounded more like a good LP while keeping most of the advantages of digital processing. In fact, the DVD-A11's CD performance over Denon Link comes very close to Denon's mid-priced dedicated CD players, although its CD playback still cannot match two-channel SACD of the same material. Compared with the DVD-A11, the dedicated SACD player's strengths are its purity of tone and extremely rich detail retrieval, while the dedicated CD player's strengths are its superior musical expression and soundstage separation, which is why those two machines sell for over £4,000 and £6,500 respectively. The three players differ hugely in price and sound markedly different; overall, the sound quality of most current DVD players is still not enough to beat high-end CD players outright.

Picture quality

If you partner the DVD-A11 with a conventional television, even the best one, the advantages of Denon's video processing will be lost in picture noise and the set's own digital processing. Pair it instead with a plasma or LCD display, or, best of all, with a DVI-equipped front projector such as the Marantz VP12S2 used in this test.
Picture quality is top-class. Faroudja DCDi processing is very effective at removing jagged edges on moving images. With the progressive input set up correctly, film material shows abundant detail and color, and saturated deep reds are well controlled, where some DVD players show a slight shimmer in such reds. The animation in Monsters, Inc. is exceptionally smooth and fluid, and the white snowscapes in The Lord of the Rings: The Fellowship of the Ring and Ice Age show very fine gradations of gray. Project a still image from a DVD-Video or DVD-Audio disc at around 80 inches and it looks like a picture hanging on the wall.
Using its built-in Dolby Digital and DTS decoding, the DVD-A11 sounds clean and expansive, enough to create an enveloping, detailed and believable soundfield. Of course, if you own a surround amplifier of the AVC-A1SR's caliber, the amplifier's own decoding should still be your first choice.
All in all, Denon's DVD-A11 universal player delivers close to the best sound and picture in its class, is clearly the best-performing universal machine currently available and is distinctly better than the same company's DVD-2900, while its highly compatible and expandable digital audio/video connections greatly increase its practical potential.




          Is your AI racist? A lawmaker wants to know   
The promise of fintech is that it might offer underbanked consumers access to financial products. But some are worried that relying on algorithms to make credit decisions could open up problems of its own.
          Comment on Andy Mitchell and Facebook’s weird state of denial about news by Nikohl Vandel   
So, algorithmics aside, doesn't G+ and twitter, snap chat and tumblr have the same potential?
          Zebra Pattern   
Last time I explained the One Ring Pattern, where only one thread handles data coming from a queue. I also explained the reasons why such a pattern might be preferred.

One of those reasons is when ordering is important. Some events must keep the order in which they arrive in the queue, so handling them with one thread is a must. But what if there are many events, and I would really like to handle them on several threads? And what if the ordering is not compulsory between all events, but only between some of them? Using again the example from last time with market prices, order must be kept between prices on the same instrument. However, prices coming for different instruments can be handled in any order.

That’s when the Zebra Pattern comes in handy. Imagine that each stripe of the zebra is a different thread, with its own queue. When a price arrives, you put it on the queue reserved for this instrument. That way, prices for one instrument stay ordered between them, while different instruments are handled by different threads. To reduce the number of threads, you can use some modulo calculation, with an algorithm similar to the way hashmaps dispatch keys among their buckets.
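
Here is a minimal sketch of the idea in Java, under the assumption that a plain array of single-threaded executors is acceptable (the class and method names are illustrative, not taken from any specific library): prices for the same instrument always hash to the same stripe, so their relative order is preserved, while different instruments can run in parallel.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch only: each "stripe" is a single-threaded executor with its
// own queue. Prices for one instrument always go to the same stripe, so their
// order is preserved; different instruments may be handled by different threads.
public class ZebraDispatcher {

    private final ExecutorService[] stripes;

    public ZebraDispatcher(int stripeCount) {
        stripes = new ExecutorService[stripeCount];
        for (int i = 0; i < stripeCount; i++) {
            stripes[i] = Executors.newSingleThreadExecutor();
        }
    }

    // Modulo dispatch, similar to the way a hashmap picks a bucket for a key.
    public void onPrice(String instrument, double price) {
        int stripe = Math.floorMod(instrument.hashCode(), stripes.length);
        stripes[stripe].execute(() -> handlePrice(instrument, price));
    }

    private void handlePrice(String instrument, double price) {
        // Real handling goes here; per-instrument ordering is guaranteed.
        System.out.println(Thread.currentThread().getName() + " " + instrument + " " + price);
    }

    public void shutdown() {
        for (ExecutorService stripe : stripes) {
            stripe.shutdown();
        }
    }
}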

If this Pattern interests you, have a look at Heinz Kabutz’s Striped Executor Service.

We have so far tried to tie execution of data to one thread. What if we go the opposite way? Wait for the Star Trek Anti-Pattern.


          Le Meilleur Dev de France   
Yesterday I took part in the programming competition called The Best Developer in France. The place was the new hyped school 42, with its so-called “swimming pool”, two big rooms filled with Macs with big screens.

The aim was to solve 11 algorithmic problems within two hours, using Isograd’s TOSA system. Due to the limitations of this web-based editor, we had Eclipse installed (at least for Java developers), and we copy-pasted the code back and forth between the two editors. The accepted languages were Java, C# and PHP, but my guess is that we Java developers were the lucky ones, because we had a decent editor (I’m not sure how you can develop in C# under Mac, and I saw several PHP devs struggling with the Eclipse debugger).

The first hurdles appeared before the competition. Being rather a PC user, I found the Mac keyboard hard to get used to, mainly because of all those different shortcuts, sometimes using the Control key, sometimes the Command key, sometimes the Option key. Some people brought their own keyboards, but it proved even more difficult because they did not manage to have all the keys in their correct places. The mouse’s lack of a right-click button also made my life difficult, although someone told me how to configure it afterwards.

Then came the first exercise, the most difficult one in my opinion, since its subject was also a bit misleading. You had to write a Sort Iterator, which receives a list of iterators and is supposed to give back their values sorted. The idea was not to get all the values first and then sort them, since some iterators were giving values indefinitely! In fact, the aim was to get the next value from each iterator, and give back the smallest value which was still higher than or equal to the previous one. So tricky that I saw several contestants stuck on this first one till the end of the contest.
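
Here is a sketch of one possible reading of that exercise in Java (my interpretation, not the official solution): keep only the current head of each iterator, and on every call return the smallest head that is greater than or equal to the last value returned, advancing only the iterator that supplied it, so that nothing is ever drained eagerly.

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Illustrative sketch of a Sort Iterator: only one value per source is held in
// memory, so it also works with iterators that never end.
public class SortIterator implements Iterator<Integer> {

    private final List<Iterator<Integer>> sources;
    private final List<Integer> heads;   // current head of each source, null when exhausted
    private Integer last = null;         // last value returned

    public SortIterator(List<Iterator<Integer>> sources) {
        this.sources = sources;
        this.heads = new ArrayList<>();
        for (Iterator<Integer> it : sources) {
            heads.add(it.hasNext() ? it.next() : null);
        }
    }

    @Override
    public boolean hasNext() {
        return indexOfNext() >= 0;
    }

    @Override
    public Integer next() {
        int idx = indexOfNext();
        if (idx < 0) {
            throw new NoSuchElementException();
        }
        last = heads.get(idx);
        Iterator<Integer> src = sources.get(idx);
        heads.set(idx, src.hasNext() ? src.next() : null);  // refill only that source
        return last;
    }

    // Index of the smallest head that is >= the previously returned value.
    private int indexOfNext() {
        int best = -1;
        for (int i = 0; i < heads.size(); i++) {
            Integer h = heads.get(i);
            if (h == null || (last != null && h < last)) {
                continue;
            }
            if (best == -1 || h < heads.get(best)) {
                best = i;
            }
        }
        return best;
    }
}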

The first problem appeared during this first exercise: it seems that the server was restarted in the meantime, and several of us had to log in again. Another problem came at the end of the second exercise: a dialog box popped up, telling us that we had already solved the problem and that another one would be proposed. But nothing happened when you pressed the OK button. We had to wait several minutes until the problem was solved, and then we were back to exercise number one. I was lucky that I had kept the code for each exercise in separate files, so I could simply copy/paste my answers again. But I saw some others frantically pressing Ctrl-Z (or Command-Z or whatever) to get back to their previous code.

This dialog box problem happened to me again after the third exercise, and again after the fourth. Lots of minutes were lost like this. After exercise 1, the others were not too difficult, except maybe for one involving weights and scales which required some thinking. At the end of the two hours I had made it to exercise number 7, where a pop-up informed me that I was more than 2000 seconds behind the number one. I wonder if he was hindered by all these problems as most of us were.

Altogether, it was a nice experience, with good (if peculiar) hardware and good software too, although marred by several system problems, and there was a good mood among the developers themselves. Now I’m waiting for the ranking that should be sent to us by e-mail anytime soon.

Update: David Gageot, the originator of Coding Story, wrote about it on his blog. It seems like the failures were worse than I thought.

Update 2: Isograd published an explanation of their bug here. Meanwhile, they also sent me my ranking by mail. I am number 7. Not bad :)

          TC Electronic Alter Ego X4   

TC Electronic Alter Ego X4

Since man first put signal to magnetic tape, countless engineers have dedicated myriad hours to as many refinements. Since the production of the first compact standalone delay machine for musicians—the Maestro Echoplex—much time has been spent perfecting the delay effect, one of the most ubiquitous sounds in musical history.

It is with that level of care that we at Pro Guitar Shop approached the design of the Alter Ego X4. Starting with the acquisition of several vintage delay units three years ago, we undertook the Alter Ego X4 project in the most organic way possible—deriving sounds from the actual units to be modeled. Several classic units are represented within the newest Alter Ego: From classic units such as the Binson Echorec and the Roland RE-301 Space Echo to more obscure pieces such as the Watkins Copicat and the Tel-Ray Super Organ Tone, every flavor of echo is represented from every era of the delay continuum.

The process of developing the algorithms of the new X4 took over a year's time. A group effort, each setting was painstakingly dialed in, fine-tuned with an array of guitars, compared with the original unit, and then shelved. Some time later, the algorithm would be unearthed and the process began anew with fresh ears. If it wasn't right, the algorithm was tweaked until it was, then shelved again. As time went on, the time between revisions became longer and longer. After several iterations of this, the adjustments became fine enough to where they became trivial. It is then that we knew we were finished, and each mode now stands in its current form. When the project was completed, we unanimously agreed that the Alter Ego X4 had met the credo that we had started with: "This is the delay pedal that WE would want."

In keeping with the tradition of the Flashback X4, the other controls—including the four user-assignable TonePrint settings, subdivisions and looper mode—remain unchanged. The unit is true bypass, with standard 9v DC operation and 300mA current draw.

Settings:

  • Binson Echorec 1
  • Binson Echorec 2
  • Electro-Harmonix blue face Deluxe Memory Man (chorus)
  • Electro-Harmonix blue face Deluxe Memory Man (vibrato)
  • Tel-Ray Super Organ Tone
  • TC Electronic 2290 with modulation
  • Reverse delay with modulation
  • Boss DM-2
  • Watkins Copicat
  • Maestro EP-2 Echoplex
  • Electro Harmonix Echoflanger (think The Police – “Walking on the Moon”)
  • Roland RE-201 Space Echo
  • 3 User Programmable Preset Switches
  • Tap Tempo
  • 40 Second Looper with Undo
  • MIDI In and Thru
  • Expression Pedal Input, MIDI, USB
  • Stereo Input/Output
  • Switchable True or Buffered Bypass
  • 9.25” (23.49 cm) x 5.375” (13.65 cm) x 2.25” (5.71 cm)
  • 9vDC/300mA (No Batteries)

The TC Electronic Flashback X4 Manual can be used on the Alter Ego X4.

Alter Ego X4 Manual

Regular Price: $279.95

Special Price: $149.99


          Red Panda Context Reverb   

Red Panda Context Reverb

Context provides classic reverb sounds to place your instrument in a room, hall, cathedral, metal plate, or less natural surroundings. All of the algorithms are adjustable so that you can get the right combination of pre-delay, reverb time, and frequency response. The plate algorithm gives you control over low- and high-frequency response for crisp, defined reverb even with bass instruments. Gated reverb gives you big booming drums from the 1980's, but is also great for adding power to guitar while maintaining transparency and space between notes. Finally, there is delay plus reverb for that versatile combination in a single pedal.

Context adds depth to your sound without unwanted coloration. The dry signal passes through Burr-Brown op amps and WIMA poly film caps in a 100% analog signal path. An internal switch selects between true bypass and trails. True bypass completely removes the Context from your signal path when not needed. Trails allows the reverb to decay naturally, and also works as a high-quality buffer. With trails on, the blend control is active in bypass to maintain a consistent volume.

Controls

blend adjusts wet/dry blend, up to 100% wet.
delay sets pre-delay, to simulate natural spaces or increase presence.
decay adjusts reverb time, from very tight to Packard Plant sized.
damping adjusts high-frequency response, to keep the reverb from taking over other instruments' space or create dark ambience.

Modes

Room - fast buildup.
Hall - slow buildup with moderate diffusion.
Cathedral - bright reverb with extended response.
Gated - adjustable gate time with nonlinear decay.
Plate - bright and dense reverb with adjustable reverb time, low- and high-frequency response.
Delay - adjustable delay time, repeats, and reverb amount.

Technical Specs

  • True bypass switching or trails, selectable via internal switch.
  • Minimal signal path.
  • Mono in/out.
  • 4.7" x 3.7" enclosure, drilled, powder coated, and laser etched in-house.
  • 24 bit A/D and D/A converters.
  • High quality components (Burr-Brown op amps, WIMA poly film caps, Neutrik jacks).
  • Requires 9V center negative 100 mA power supply (not included). Does not take batteries.
  • Made in USA, from circuit boards to final assembly.

You can download the user manual here.
$225.00

          History and How It Moves Us   
Well, guys, the blog is having the best week it has had in months, so I’m grateful to you all. The last year has been a trial, some say that Google changed the algorithms to hurt conservative blogs. I have no idea if that is so, which is why I haven’t written about it, but […]
          Xavier Mertens: BSides Athens 2017 Wrap-Up   

The second edition of BSides Athens took place this Saturday. I already attended the first edition (my wrap-up is here) and I was happy to be accepted as a speaker for the second time! This edition moved to a new location, which was great: good wireless, air conditioning and food. The day was based on three tracks: the first two for regular talks and the third one for the CTF and workshops. The “boss”, Grigorios Fragkos, introduced the 2nd edition. This one gave more attention to a charity program called “the smile of the child”, which helps Greek kids to remain in touch with the most recent technologies. A specific project, called “ODYSSEAS”, is based on a truck that travels across Greece to educate kids about technologies like mobile phones, social networks, … BSides Athens donated to this project. A very nice initiative, presented by Stefanos Alevizos, who received a slot of a few minutes to describe the program (content in Greek only).


The keynote was assigned to Dave Lewis, who presented “The Unbearable Lightness of Failure”. The main point made by Dave is that we fail but… we learn from our mistakes! In other words, “failure is an acceptable teaching tool“. The keynote was built on many examples, like signs: we receive signs everywhere and we must understand how to interpret them. Or the famous Friedrich Nietzsche quote: “That which does not kill us makes us stronger“. We are facing failures all the time. The latest good example is the WannaCry story, which should never have happened, but… you know the story! Another important message is that we don’t have to be afraid to fail. We also have to share as much as possible, not only good stories but also bad ones. Sharing is key! Participate in blogs, social networks, podcasts. Break out of your silo! Dave is a renowned speaker and delivered a really good keynote!

Then talks were split across the two main rooms. For the first one, I decided to attend Thanassis Diogos’s presentation about “Operation Grand Mars“. In January 2017, Trustwave published an article which described this attack. Thanassis came back to this story with more details. After a quick recap of what incident management is, he reviewed all the facts related to the operation and gave some tips to improve the detection of abnormal activities on your network. It started with an alert generated by a workstation and, three days later, the same message came from a domain controller. Definitively not good! The entry point was infected via a malicious Word document / JavaScript. Then a payload was downloaded from Google Docs, which is, for most of our organizations, a trustworthy service. He then explained how persistence was achieved (via autorun, scheduled tasks) and also lateral movements. The pass-the-hash attack was used. Another tip from Thanassis: if you see local admin accounts used for network logon, this is definitively suspicious! A good review of the attack with some good tips for blue teams.

My next choice was to move to the second track to follow Konstantinos Kosmidis‘s talk about machine learning (a hot topic today in many conferences!). I’m not a big fan of these technologies but I was interested in the abstract. The talk was a classic one: after an introduction to machine learning (which we already use every day with technologies like Google face recognition, self-driving cars or voice recognition), why not apply this technique to malware detection? The goal is to detect, to classify but, more importantly, to improve the algorithm! After reviewing some pros & cons, Konstantinos explained the technique he used in his research to convert malware samples into images. But, more interestingly, he explained a technique based on steganography to attack this algorithm. The speaker was a little bit stressed but the idea looks interesting. If you’re interested, have a look at his Github repository.

Back to the first track to follow Professor Andrew Blyth with “The Role of Professionalism and Standards in Penetration Testing“. The penetration testing landscape has changed considerably in the last years. We switched from script kiddies searching for juicy vulnerabilities to professional services. The problem is that today some pentest projects are required not to detect security issues and improve, but just for… compliance requirements. You know the “check-the-box” syndrome. Also, the business evolves and is requesting more insurance. The coming European GDPR regulation will increase the demand for penetration tests. But a real pentest is not a Nessus scan with a new logo, as Andrew explained! We need professionalism. In the second part of the talk, Andrew reviewed some standards that involve pentests: iCAST, CBEST, PCI, OWASP, OSSTMM.

After a nice lunch with Greek food, back to talks with the one by Andreas Ntakas and Emmanouil Gavriil about “Detecting and Deceiving the Unknown with Illicium”. They work for one of the sponsors and presented the tool developed by their company: Illicium. After the introduction, my feeling was that it’s a new honeypot with extended features. There is certainly interesting stuff in it but, IMHO, it was a commercial presentation; I’d have expected a demo. Also, the tool looks nice but is dedicated to organizations that have already reached a mature security level. Indeed, before defeating the attacker, the first step is to properly implement basic controls like… patching! Something some organizations still don’t do today!

The next presentation was “I Thought I Saw a |-|4><0.-” by Thomas V. Fisher.  Many interesting tips were provided by Thomas like:

  • Understand and define “normal” activities on your network to better detect what is “abnormal”.
  • Log everything!
  • Know your business
  • Keep in mind that the classic cyber kill-chain is not always followed by attackers (they don’t follow rules)
  • The danger is to try to detect malicious stuff based on… assumptions!

The model presented by Thomas was based on 4 A’s: Assess, Analyze, Articulate and Adapt! A very nice talk with plenty of tips!

The next slot was assigned to Ioannis Stais, who presented his framework called LightBulb. The idea is to build a framework to help in bypassing common WAFs (web application firewalls). Ioannis first explained how common WAFs work and why they can be bypassed. Instead of testing all possible combinations (brute force), LightBulb relies on the following process:

  • Formalize the knowledge in code injection attacks variations.
  • Expand the knowledge
  • Cross check for vulnerabilities

Note that LightBulb is also available as a Burp Suite extension! The code is available here.

Then, Anna Stylianou presented “Car hacking – a real security threat or a media hype?“. The last events that I attended also had a talk about cars, but they focused more on abusing the remote control to open doors. Today, the focus is on the ECUs (“Engine Control Units”) present in modern cars. A modern car might have >100 ECUs and >100 million lines of code, which means a great attack surface! There are many tools available to attack a car via its CAN bus; even the Metasploit framework can be used to pentest cars today! The talk was not dedicated to a specific attack or tool but was more a recap of the risks that car manufacturers are facing today. Indeed, threats have changed:

  • theft from the car (breaking a window)
  • theft of the car
  • but today: theft of the use of the car (ransomware)

Some infosec gurus also predict that autonomous cars will be used as lethal weapons! As cars can be seen as computers on wheels, the potential attacks are the same: spoofing, tampering, repudiation, disclosure, DoS or privilege escalation issues.

The next slot was assigned to me. I presented “Unity Makes Strength” and explained how to improve interconnections between our security tools/applications. The last talk was given by Theo Papadopoulos: A “Shortcut” to Red Teaming. He explained how .LNK files can be a nice way to compromise your victim’s computer. I like the “love equation”: Word + Powershell = Love. Step by step, Theo explained how to build a malicious document with a link file, how to avoid mistakes and how to increase the chances of getting the victim infected. I like the persistence method based on assigning a popular hot-key (like CTRL-V) to a shortcut on the desktop. Windows will trigger the malicious script attached to the shortcut and then… execute it (in this case, paste the clipboard content). Evil!

The day ended with the announcement of the CTF winners and plenty of information about the next edition of BSides Athens. They already have plenty of ideas! It’s now time for some days off across Greece with the family…

[The post BSides Athens 2017 Wrap-Up has been first published on /dev/random]


          Boys of Tech 366: Journey through the Nevada Desert   
CES booth raided by US marshals and stock confiscated, programming a drone to land autonomously on a moving vehicle, Germany gets Google, Facebook and Twitter to agree on removing hate speech, hackers break fuel discount voucher algorithm.
          Google no longer bolding keywords in domain names   

Another blow to domain names in search. Domain names and URLs that contain terms people search for on Google have long been thought to have some sort of advantage. For a while, Google seemed to give preference to “exact match domains”. Then it tweaked the algorithm to downplay this, at least for low-quality sites that […]

The post Google no longer bolding keywords in domain names appeared first on Domain Name Wire | Domain Name News & Views.


          Boys of Tech 336: Awkward   
The Oculus Rift comes bundled with Xbox controller, computer algorithm to detect influences in paintings, WWDC announcement that Apple Pay to launch in the UK, Apple announces maps for transit, smartphone survives after 1 week in lake.
          Lintcode 43 - Maximum Subarray III   
Related: Lintcode 41,42 - Maximum Subarray I,II
http://www.lintcode.com/en/problem/maximum-subarray-iii/
Given an array of integers and a number k, find k non-overlapping subarrays which have the largest sum.
The number in each subarray should be contiguous.
Return the largest sum.
 Notice
The subarray should contain at least one number
Example
Given [-1,4,-2,3,-2,3] and k=2, return 8 (for example, the subarrays [4,-2,3] and [3])
Time: O(k * n^2)
Space: O(k * n)
Use sums[i][j] to denote the maximum total sum of choosing i subarrays from the first j numbers.
We can update it by sums[i][j] = max(sums[i - 1][t] + maxSubarraySum(nums[t..j-1])), which means using the first t numbers to choose i - 1 subarrays, plus the maximum subarray sum from the remaining numbers (nums[t]...nums[j-1]). We want to try all possible split points, so t ranges from i - 1 to j - 1.
In the innermost loop, we examine the max subarray sum within nums[t] to nums[j-1], where t goes from j-1 down to i-1. We could compute this maximum sum separately for each t; however, if we scan from right to left instead of left to right, we only need to update the maximum value incrementally. For example, if t's range is [1..5], then at first the max sum is picked from [5], then from [4...5], ..., and finally from [1...5]. By scanning from right to left, we are able to include the new number in the computation on the fly.
  public int maxSubArray(ArrayList<Integer> nums, int k) {
      if (nums == null || nums.size() < k) {
          return 0;
      }
      int len = nums.size();
      int[][] sums = new int[k + 1][len + 1];
      for (int i = 1; i <= k; i++) {
          for (int j = i; j <= len; j++) { // at least need one number in each subarray
              sums[i][j] = Integer.MIN_VALUE;
              int sum = 0;
              int max = Integer.MIN_VALUE;
              for (int t = j - 1; t >= i - 1; t--) {
                  sum = Math.max(nums.get(t), sum + nums.get(t));
                  max = Math.max(max, sum);
                  sums[i][j] = Math.max(sums[i][j], sums[i - 1][t] + max);
              }
          }
      }
      return sums[k][len];
  }
d[i][j] represents the maximum sum of j subarrays chosen from elements 0..i-1 (note that element i itself is not included).
d[i][j] = max(d[i][j], d[m][j-1] + max) (m = j-1 .... i-1; max has to be computed separately: it is the maximum subarray from element i-1 back to m, computed from right to left using the usual maximum-subarray method)
  public int maxSubArray(ArrayList<Integer> nums, int k) {
      int n = nums.size();
      int[][] d = new int[n + 1][k + 1];
      for (int j = 1; j <= k; j++) {
          for (int i = j; i <= n; i++) {
              d[i][j] = Integer.MIN_VALUE;
              int max = Integer.MIN_VALUE;
              int localMax = 0;
              for (int m = i - 1; m >= j - 1; m--) {
                  localMax = Math.max(nums.get(m), nums.get(m) + localMax);
                  max = Math.max(localMax, max);
                  d[i][j] = Math.max(d[i][j], d[m][j - 1] + max);
              }
          }
      }
      return d[n][k];
  }
http://www.cnblogs.com/lishiblog/p/4183917.html
DP. d[i][j] means the maximum sum we can get by selecting j subarrays from the first i elements.
d[i][j] = max{d[p][j-1]+maxSubArray(p+1,i)}
We iterate p from i-1 down to j-1, recording the max subarray ending at the current p; this value can then be reused to calculate the max subarray from p-1 to i when p decreases to p-1.

    public int maxSubArray(ArrayList<Integer> nums, int k) {
        if (nums.size() < k) return 0;
        int len = nums.size();
        // d[i][j]: select j subarrays from the first i elements, the max sum we can get.
        int[][] d = new int[len + 1][k + 1];
        for (int i = 0; i <= len; i++) d[i][0] = 0;

        for (int j = 1; j <= k; j++)
            for (int i = j; i <= len; i++) {
                d[i][j] = Integer.MIN_VALUE;
                // Initial values of endMax and max must be chosen very carefully.
                int endMax = 0;
                int max = Integer.MIN_VALUE;
                for (int p = i - 1; p >= j - 1; p--) {
                    endMax = Math.max(nums.get(p), endMax + nums.get(p));
                    max = Math.max(endMax, max);
                    if (d[i][j] < d[p][j - 1] + max)
                        d[i][j] = d[p][j - 1] + max;
                }
            }

        return d[len][k];
    }
Using a one-dimensional (rolling) array:
    public int maxSubArray(ArrayList<Integer> nums, int k) {
        if (nums.size() < k) return 0;
        int len = nums.size();
        // d[i]: select j subarrays from the first i elements, the max sum we can get.
        int[] d = new int[len + 1];
        for (int i = 0; i <= len; i++) d[i] = 0;

        for (int j = 1; j <= k; j++)
            for (int i = len; i >= j; i--) {
                d[i] = Integer.MIN_VALUE;
                int endMax = 0;
                int max = Integer.MIN_VALUE;
                for (int p = i - 1; p >= j - 1; p--) {
                    endMax = Math.max(nums.get(p), endMax + nums.get(p));
                    max = Math.max(endMax, max);
                    if (d[i] < d[p] + max)
                        d[i] = d[p] + max;
                }
            }

        return d[len];
    }

X.
http://hehejun.blogspot.com/2015/01/lintcodemaximum-subarray-iii.html
As before, we maintain two things here: localMax[i][j], the maximum sum obtained by partitioning the first j numbers into i subarrays with the last subarray ending at A[j - 1] (A being the input array); and globalMax[i][j], the maximum sum obtained by partitioning the first j numbers into i subarrays (the last subarray does not need to end at A[j - 1]). By analogy with the earlier DP equations, we derive the new ones:
  • globalMax[i][j] = max(globalMax[i][j - 1], localMax[i][j]);
  • localMax[i][j] = max(globalMax[i - 1][k] + sumFromTo(A[k + 1], A[j])) where 0 < k < j;
At first glance the second DP equation seems to require rescanning everything behind each position when building localMax[i][j], which would push the total complexity to O(n^2), but this step can be optimized down to O(n).
Consider the following example:
  • globalMax[i - 1]: 3, 5, -1, 8, 7
  • A[]:                    1, 2, 6, -2, 0
After processing A[2], max(globalMax[i - 1][k] + sumFromTo(A[k + 1], A[j])) = 11. When we reach A[3], apart from the newly added combination globalMax[i - 1][2] + A[3], every previous combination (globalMax[i - 1][0] + sumFromTo(A[1], A[2]), globalMax[i - 1][1] + sumFromTo(A[2], A[2])) just needs A[3] added to become the new value of globalMax[i - 1][k] + sumFromTo(A[k + 1], A[j]). So the previous maximum, carried over, is still the maximum over those old combinations, and we only need to compare it with the newly added one. This value can therefore be maintained in a single left-to-right scan, and we only need to keep one variable, localMax, rather than an array. Time complexity O(k * n), space complexity O(k * n); the space could be reduced to O(n) with a rolling array, but the logic of this problem is already quite involved, so we keep the space unoptimized for clarity.

https://zhengyang2015.gitbooks.io/lintcode/maximum_subarray_iii_43.html
local[i][k] is the maximum sum of k subarrays taken from the first i elements where the i-th element must be included.
global[i][k] is the maximum sum of k subarrays taken from the first i elements where the i-th element does not have to be included.
The recurrence for local[i][k]:
max(global[i-1][k-1], local[i-1][k]) + nums[i-1]
There are two cases: either the i-th element forms a subarray on its own, in which case we need k-1 subarrays among the first i-1 elements; or the i-th element joins the previous element's subarray, in which case we need k subarrays among the first i-1 elements (and the (i-1)-th element must be included, so that the i-th element can be merged into the last subarray). Take the larger of the two.
The recurrence for global[i][k]:
max(global[i-1][k], local[i][k])
Again two cases: either the i-th element is not used, so we take k subarrays from the first i-1 elements; or it is used, so we take k subarrays from the first i elements with the i-th element required. Take the larger of the two.
    public int maxSubArray(int[] nums, int k) {
        if (nums.length < k) {
            return 0;
        }

        int len = nums.length;

        // local[i][k]: max sum of k subarrays from the first i elements, the i-th element must be included.
        int[][] localMax = new int[len + 1][k + 1];
        // global[i][k]: max sum of k subarrays from the first i elements, the i-th element need not be included.
        int[][] globalMax = new int[len + 1][k + 1];

        for (int j = 1; j <= k; j++) {
            // The first j-1 elements cannot contain j non-overlapping subarrays,
            // so initialize to the minimum value so this entry is never picked later.
            localMax[j - 1][j] = Integer.MIN_VALUE;
            for (int i = j; i <= len; i++) {
                localMax[i][j] = Math.max(globalMax[i - 1][j - 1], localMax[i - 1][j]) + nums[i - 1];
                if (i == j) {
                    globalMax[i][j] = localMax[i][j];
                } else {
                    globalMax[i][j] = Math.max(globalMax[i - 1][j], localMax[i][j]);
                }
            }
        }

        return globalMax[len][k];
    }

https://leilater.gitbooks.io/codingpractice/content/dynamic_programming/maximum_subarray_iii.html
dp[i][j] is the maximum sum obtained by picking j subarrays from the first i numbers (i.e., [0, i-1]); the answer is dp[n][k]: the maximum sum of k subarrays picked from the first n numbers.
State transition: to pick j subarrays from the first i numbers, we can pick j-1 subarrays from the first p numbers (i.e., [0, p-1]) and add the maximum subarray sum within [p, i-1] (which is the classic Maximum Subarray problem). From this we can also read off that p ranges from j-1 to i-1.
    // Kadane's algorithm: max subarray sum within nums[start] ~ nums[end] (helper used below).
    int maxSubArray(vector<int> &nums, int start, int end) {
        int best = INT_MIN, cur = INT_MIN;
        for (int i = start; i <= end; i++) {
            cur = (cur < 0) ? nums[i] : cur + nums[i];
            best = max(best, cur);
        }
        return best;
    }

    int maxSubArray(vector<int> nums, int k) {
        if (nums.empty()) return 0;
        int n = nums.size();
        if (n < k) return 0;
        vector<vector<int> > max_sum(n + 1, vector<int>(k + 1, INT_MIN));
        // max_sum[i][j]: max sum of j subarrays generated from nums[0] ~ nums[i-1]
        // note that j is always <= i
        // init
        for (int i = 0; i <= n; i++) {
            max_sum[i][0] = 0;
        }
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= min(i, k); j++) {
                max_sum[i][j] = max_sum[i - 1][j];
                // max_sum[i][j] equals 1) the max sum of j-1 subarrays generated from nums[0] ~ nums[p-1]
                // plus 2) the max sum of the subarray from nums[p] ~ nums[i-1]
                // p can be j-1 ~ i-1
                for (int p = i - 1; p >= j - 1; p--) {
                    // compute the max subarray sum within nums[p] ~ nums[i-1]
                    int global = maxSubArray(nums, p, i - 1);
                    max_sum[i][j] = max(max_sum[i][j], max_sum[p][j - 1] + global);
                }
            }
        }
        return max_sum[n][k];
    }

          CMU Scientists Harness 'Mind Reading' Technology to Decode Complex Thoughts   

Researchers have combined machine-learning algorithms with brain-imaging technology to "mind read."


          MIT Researchers Offer Algorithm for Picking 'Winning' Startups   

A new algorithmic scheme for venture capital investment pairs the unpredictability of Brownian motion with large datasets about startup founders, investors, and performance.


          An Algorithm Helps Protect Mars Curiosity's Wheels   

There are no mechanics on Mars, so the next best thing for NASA's Curiosity rover is careful driving.


          Senior Staff Millimeter-Wave/RFIC Design Engineer - Peraso Technologies Inc. - Engineer, BC   
Development of digital pre-distortion and IQ calibration techniques for enhanced 60 GHz TX EVM. Design and verification of calibration algorithms such as DC...
From Peraso Technologies Inc. - Sat, 08 Apr 2017 08:21:46 GMT - View all Engineer, BC jobs
          SSDPCM1-Super by Algorithm (2017)   
Released by: Algorithm
Release date: 28 June 2017
Type: C64 Demo
Download | Discuss
SSDPCM1-Super
          [PROBLEMS]riches-dragon.com - Min 100 Rublos (50% for 5 days) RCB 50% P.M.,PY,ADV   
New project: riches-dragon.com Start of the project: 24.06.2017 (12:00) About the Project: Riches Dragon is a legal company that provides online investment services, using innovative trading strategies to obtain the maximum result on the world's largest stock and commodity exchanges, specializing in its trading activities at bid binary options. The company actively uses trading strategies, which initially laid not only the algorithms for obtaining the maximum profit from the transaction, ...
          DAP 1.045 released   

This is an interim release while I work on 1.05.

The reason for the release is because of some good feedback from Paul Blackman. Thanks, Paul!

This release makes a change to the JS-Min algorithm to avoid conditional comments found in some JavaScripts.

  Download the latest DAP file here


          Domino Accelerator Pack 1.04 released   

Apart from some cosmetic changes this release is all about the JS-Minify support.

Users of the 1.03 release should now have a link on their System Overview page telling there is a new version available.

From the manual:

DAP supports an algorithm developed by Douglas Crockford that removes all comments and
unnecessary white space from JavaScripts or CSS files. This will reduce scripts and style sheets another 10-15% when combined with GZip. You can read more about the algorithm here:
http://www.crockford.com/javascript/jsmin.html

DAP uses a slightly modified algorithm that is a bit more conservative around the hash (#) char. This is to counter Internet Explorer’s parsing of CSS files that did not work otherwise.

To use JSMin, set an HTTP header js-min: true. Example:

@SetHTTPHeader(“js-min”;”true”);

This will tell DAP to minify the JavaScript or CSS before GZip compression. Do not attempt to minify content other than JavaScript or CSS; it will probably break the content.

Be aware that unlike GZip, minifying is not lossless. The content will be changed. In some cases JavaScript or CSS files can behave differently after minifying. You will have to test that your code works after you attempt to minify. Pay extra attention if you are using conditional comments in JavaScript; these will most probably not work, as comments are removed.

Note: JS-minifying is optional for some content; use it only where you know no harm can be done. It is not worth the space saved if it means introducing bugs in your code.


          Very fast sorting algorithm   

There is an abundance of sorting algorithms 'out there'.
Too often I find myself, by force of habit, using bubble sort.
For large arrays, the sorting algorithm below - shell sort - has proven itself to be extremely fast.

Sub shellSort( ar( ) As String )
    Dim Lower As Integer
    Dim Upper As Integer
    Dim botMax As Integer
    Dim i As Integer
    Dim k As Integer
    Dim h As Integer
    Dim v As String

    Lower% = Lbound( ar$( ) )
    Upper% = Ubound( ar$( ) )

    ' Build the initial gap: 1, 4, 13, 40, ... (Knuth's sequence)
    h% = 1
    Do
        h% = (3*h%) + 1
    Loop Until h% > Upper%-Lower%+1

    Do
        h% = h% \ 3
        botMax% = Lower% + h% - 1
        ' Insertion sort on each h-spaced sub-sequence
        For i% = botMax% + 1 To Upper%
            v$ = ar$( i% )
            k% = i%
            While ar$( k% - h% ) > v$
                ar$( k% ) = ar$( k% - h% )
                k% = k% - h%
                ' Stop before reading below the first element of this sub-sequence
                If (k% <= botMax%) Then Goto wOut
            Wend
wOut:
            If (k% <> i%) Then ar$(k%) = v$
        Next
    Loop Until h% = 1
End Sub

          The future of array-oriented computing in Haskell — The Result!   

I recently posted a survey concerning The future of array-oriented computing in Haskell. Here is a summary of the responses.

It is not surprising that basically everybody (of the respondents, who surely suffer from grave selection bias) is interested in multicore CPUs, but I'm somewhat surprised to see about 2/3 interested in GPGPU. The most popular application areas are data analytics, machine learning, and scientific computing, with optimisation problems and physical simulations following close behind.

The most important algorithmic patterns are iterative numeric algorithms, matrix operations, and —the most popular— standard aggregate operations, such as maps, folds, and scans. (This result most surely suffers from selection bias!)

I am very happy to see that most people who tried Repa or Accelerate got at least some mileage out of them. The most requested backend feature for Repa is SIMD instructions (aka vector instructions), and the most requested feature for Accelerate is support for high-performance CPU execution. I did suspect that, and we would really like to provide that functionality, but it is quite a bit of work (so it will take a little while). The other major request for Accelerate is OpenCL support - we really need some outside help to realise that, as it is a major undertaking.

As far as extending the expressiveness of Accelerate goes, there is strong demand for nested data parallelism and sparse data structures. This also requires quite a bit of work (and is conceptually very hard!), but the good news is that PLS has got a PhD student working on just that!

NB: In the multiple choice questions permitting multiple answers, the percentages given by the Google Docs summary are somewhat misleading.


          Non-cryptographic hash functions for .NET   

Creating hashes is a common way to check whether content X has changed without looking at the whole content of X. Git, for example, uses SHA1 hashes for each commit. SHA1 itself is a pretty old cryptographic hash function, but in Git's case there might have been better alternatives available, because the "to-be-hashed" content is not crypto-relevant - it's just a content marker. Well… in Git's case the current standard is SHA1, which works, but a 'better' way would be to use non-cryptographic functions for non-crypto purposes.

Why you should not use crypto hashes for non-crypto purposes

I discovered this topic via a Twitter-conversation and it started with this Tweet:

Clemens Vasters then came and pointed out why it would be better to use non-crypto hash functions:

The reason makes perfect sense to me - next step: what other choices are available?

Non-cryptographic hash functions in .NET

If you google around you will find many different hashing algorithms, like Jenkins or MurmurHash.

The good part is that Brandon Dahler created .NET versions of the most well-known algorithms and published them as NuGet packages.

The source and everything can be found on GitHub.

Lessons learned

If you want to hash something and it is not crypto-relevant, then it would be better to look at one of those Data.HashFunction implementations - some are pretty crazy fast.

I’m not sure which one is ‘better’ - if you have some opinions please let me know. Brandon created a small description of each algorithm on the Data.HashFunction documentation page.
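To make the "crazy fast" point concrete, here is a minimal sketch of one classic non-cryptographic hash, 32-bit FNV-1a, written in Java. It is only an illustration of how small such functions are; it is not one of the Data.HashFunction .NET packages mentioned above.

import java.nio.charset.StandardCharsets;

// 32-bit FNV-1a: tiny and fast, but not collision-resistant against an adversary,
// so keep crypto hashes for crypto purposes.
public final class Fnv1a32 {
    private static final int OFFSET_BASIS = 0x811C9DC5; // 2166136261
    private static final int FNV_PRIME = 0x01000193;    // 16777619

    public static int hash(byte[] data) {
        int h = OFFSET_BASIS;
        for (byte b : data) {
            h ^= (b & 0xFF);   // mix in one byte
            h *= FNV_PRIME;    // multiply by the FNV prime (integer overflow is intentional)
        }
        return h;
    }

    public static void main(String[] args) {
        System.out.printf("%08x%n", hash("hello".getBytes(StandardCharsets.UTF_8)));
    }
}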

(my blogging backlog is quite long, so I needed 6 months to write this down ;) )


          Google search changes push for an increase in Ad Words   
With Google constantly changing their algorithms and processes, the online world is one you need to be constantly learning about in order to stay on top of it... And we have some recent changes to discuss today: Google Local Search Display.  There have been a few recent changes that Google has made to how local search results are shown, meaning your SEO and Ad Words strategy may need an update.  Recent Google Search Changes: The 'local search pack' on the results page now shows 3 listings instead of 7 (the number of organic results remains unchange...
          Should you still be link building in your blogs?   
Including links in your blog posts, both to external and internal sources, has long been a mainstay of content creation. But as Google continually updates its search algorithm and blatant SEO manipulation tactics are derided, many people have started to ponder removing links from their posts altogether. So should you still include links in your posts? The short answer is yes. Help your readers If you're talking about a news report, statistic, a post on another blog or one of your previous articles, it can build the credibility and be useful for your readers to have a dir...
          Service Manager at AFM Recruit   
AFMRecruit is a subsidiary of Afmining concepts a registered Nigerian company. Our expertise is in recruitment. Matching talent to jobs and companies using our proprietary afm360 Algorithm , which uses the candidates skills and interview grade to match our clients needs. At AFMRecruit our mission is to provide staffing in order to assist our clients in achieving business critical solutions. By providing excellent customer service, innovation, experience of our staff and keeping clients at the center of our services, we will help businesses achieve their goals and consistently deliver a high return on investment
          Graph Theory with Algorithms and Its Applications in Applied Science and Technology   

          Introduction to Algorithms   

          Practical Analysis of Algorithms   

          Quantum Algorithms via Linear Algebra   

          Algorithmic Mathematics   

          USB MiniPro TL866CS BIOS programmer EEPROM FLASH - Current price: 13 094 Ft   
Features: Brand new and high quality.
Well-designed cheap professional programmer .
Production of high-density SMD technology .
A unified user interface, easy to use.
Fully functional, reliable program running of application software, ultra-small code,runs faster.
Supports bilingual(English and Chinese).
It can automatically identify the operating system to install and run under WIN 2000/WIN XP/WIN 2003/WIN 2008/WIN VISTA/WIN7.
USB MiniPro TL866CS Universal BIOS Programmer support more than 12000 chips!
Note: The following performance list: The great part of other development programmers are not comparable to its functions,that is not easy to reach.
High Speed Programming: This programmer has Built-in MCU with high-performance and high-capacity USB interface, at the communication speed of 12Mbps, being in line with ( For each chip) well-designed programming algorithm and USB high-speed communications.
Package Included: 1 X TL866CS Programmer
1 X USB wire
1 X software CD
NO Retail Box. Packed Safely in Bubble Bag.
P075337
Important information about purchasing:
Welcome to our page!
To make the transaction easier, please read our purchase conditions, which you automatically accept by placing an order.
Discount: if you buy at least 50 pieces of our products, we provide a discount. Please request the discount from our customer service.
For our products with a US mains plug, the US-EU converter adapter found among our products can be ordered separately.
Important! If the description does NOT state "We dont offer color/pattern/size choice", please always write the chosen colour/pattern/size in the comments field when ordering; otherwise our colleagues will ship one at random. In that case we cannot accept subsequent complaints.
Where the statement "We dont offer color/pattern/size choice" appears, unfortunately it is not possible to choose a colour/pattern/size. In that case our colleagues send the products at random.
Communication: in every case exclusively by email, so that conversations can be traced back.
Defective product: we refund the purchase price or re-ship the product, depending on the agreement, after you have returned it to the given address.
Refund: we refund the purchase price, or resend the product, if the product does not arrive.
In that case please let us know by email so that we can find a solution to the problem!
Warranty: 3 months! If the product is indeed defective, please contact us and we will replace it or buy it back, depending on the agreement.
Invoicing: the electronic invoice (in PDF format) is issued by our company registered in England; VAT is not itemised, and the transfer is made to our Hungarian company account.
Delivery time: 9-12 working days after the amount is transferred, but depending on the postal service it can also be 25-35 working days! Our company cannot take responsibility for the postal delivery time; the delivery times mentioned are for guidance only!
Very important! Please do not buy if you cannot wait out a possible delivery time of 35 working days!
Shipping: we ship our products from abroad.
Due to our large inventory it may happen that one or two products run out of stock temporarily or permanently; we will of course inform you in time and offer a suitable solution.
Payment: we only accept bank transfers (home banking, net banking) from bank account to bank account; bank/post office cash deposits, pink postal cheques and other methods are NOT accepted!
When transferring, be sure to include the order number in the reference field; otherwise we may not be able to find your order. In that case we obviously cannot ship the product or notify you, since we have no starting point!
Payment/shipping:
- Above 2000 Ft (including postage) we ship ONLY and EXCLUSIVELY by registered mail, as follows:
- For registered mail, postage is 890 Ft for the first product and 250 Ft for each additional item.
- As a plain letter under 2000 Ft: postage is 250 Ft for the first product and 250 Ft for each additional product.
Collection: for buyers who do not collect the ordered product from the post office, so that the product is returned to our company, we can only resend it if the postage is paid again; if you ask for your money back, we can only refund the price of the products, without postage. Receiving the product is your responsibility! If you cannot receive it because of our fault, e.g. due to incorrect addressing, the postage is borne by us.
If you do not receive an email from us within 24 hours of your order, it means that the email address (especially with freemail and citromail) bounced our message. In that case, send a message from a different email address.
We wish you pleasant shopping!
USB MiniPro TL866CS BIOS programmer EEPROM FLASH
Current price: 13 094 Ft
Auction ends: 2017-07-22 03:19
          OAuth2, JWT, Open-ID Connect and other confusing things   

Disclaimer

I feel I have to start this post with an important disclaimer: don't put too much trust in what I'm about to say.
The reason I say this is that we are discussing security, and when you talk about security, anything other than a 100% correct statement risks exposing you to some kind of risk.
So, please, read this article keeping in mind that your source of truth should be the official specifications, and that this is just an overview that I use to recap the topic in my own head and to introduce it to beginners.

Mission

I have decided to write this post because I have always found OAuth2 confusing. Even now that I know a little more about it, I still find some of its parts puzzling.
Even though I was able to follow online tutorials from the likes of Google or Pinterest whenever I needed to fiddle with their APIs, it always felt like some sort of voodoo, with all those codes and Bearer tokens.
And each time they mentioned I could make my own decisions for specific steps, choosing among the standard OAuth2 approaches, my mind tended to go blank.

I hope I'll be able to fix a few ideas in place, so that from now on you will be able to follow OAuth2 tutorials with more confidence.

What is OAuth2?

Let’s start from the definition:

OAuth 2 is an authorisation framework that enables applications to obtain limited access to user accounts on an HTTP service.

The above sentence is reasonably understandable, but we can improve things if we pinpoint the chosen terms.

The Auth part of the name reveals itself to be Authorisation (it could have been Authentication; it's not).
Framework can easily be overlooked, since the term framework is often abused; but the idea to keep here is that it's not necessarily a final product or something entirely defined. It's a toolset: a collection of ideas, approaches and well-defined interactions that you can use to build something on top of!
It enables applications to obtain limited access. The key here is that it enables applications, not humans.
limited access to user accounts is probably the key part of the definition that can help you remember and explain what OAuth2 is:
the main aim is to allow a user to delegate access to a user-owned resource, delegating it to an application.

OAuth2 is about delegation.

It's about a human instructing a piece of software to do something on her behalf.
The definition also mentions limited access, so you can imagine being able to delegate just part of your capabilities.
And it concludes by mentioning HTTP services. This authorisation delegation happens on an HTTP service.

Delegation before OAuth2

Now that the context should be clearer, we could ask ourselves: How were things done before OAuth2 and similar concepts came out?

Well, most of the time, it was as bad as you can guess: with a shared secret.

If I wanted software A to be granted access to my stuff on server B, most of the time the approach was to give my user/pass to software A, so that it could use it on my behalf.
This is still a pattern you can see in many modern applications, and I personally hope it's something that makes you uncomfortable.
You know what they say: if you share a secret, it’s no longer a secret!

Now imagine if you could instead create a new admin/password couple for each service you need to share something with. Let’s call them ad-hoc passwords.
They are something different from your main account for a specific service, but they still allow access to the same service as if they were you. You would be able, in this case, to delegate, but you would still be responsible for keeping track of all these new application-only accounts you need to create.

OAuth2 - Idea

Keeping in mind that the business problem that we are trying to solve is the “delegation” one, we want to extend the ad-hoc password idea to take away from the user the burden of managing these ad-hoc passwords.
OAuth2 calls these ad-hoc passwords tokens.
Tokens are actually more than that, and I'll try to illustrate it, but it might be useful to associate them with this simpler idea of an ad-hoc password to begin with.

OAuth2 - Core Business

Oauth 2 Core Business is about:

  • how to obtain tokens

OAuth2 - What’s a token?

Since everything seems to focus around tokens, what’s a token?
We have already used the analogy of the ad-hoc password, which served us well so far, but maybe we can do better.
What if we look for the answer inside the OAuth2 specs?
Well, prepare to be disappointed. The OAuth2 specs do not give you the details of how to define a token. How is this even possible?
Remember when we said that OAuth2 was "just a framework"? Well, this is one of those situations where that definition matters!
The specs just give you the logical definition of what a token is and describe some of the capabilities it needs to possess.
But in the end, what the specs say is that a token is a string: a string containing credentials to access a resource.
They give some more detail, but it can be said that most of the time it's not really important what's in a token, as long as the application is able to consume it.

A token is that thing, that allows an application to access the resource you are interested into.

To point out how you can avoid overthinking what a token is, the specs also explicitly say that it "is usually opaque to the client"!
They are practically telling you that you are not even required to understand tokens!
Fewer things to keep in mind; that doesn't sound bad!

But to avoid turning this into a pure philosophy lesson, let’s show what a token could be

{
   "access_token": "363tghjkiu6trfghjuytkyen",
   "token_type": "Bearer"
}

A quick glimpse shows us that, yeah, it's a string. JSON-like, but that's probably just because JSON is popular these days, not necessarily a requirement.
We can spot a section with what looks like a random string, an id: 363tghjkiu6trfghjuytkyen. Programmers know that when you see something like this, at least when the string is not too long, it's probably a sign that it's just a key you can correlate with more detailed information stored somewhere else.
And that is true also in this case.
More specifically, the additional information will be the details of the specific authorisation that the code represents.

But then another thing should capture your attention: "token_type": "Bearer".

Your reasonable questions should be: what are the characteristics of a Bearer token type? Are there other types? Which ones?

Luckily for our efforts to keep things simple, the answer is easy (some may say so easy as to be confusing…).

Specs only talk about Bearer token type!

So why did the person who designed the token this way feel he had to specify the only known value?
You might start seeing a pattern here: because OAuth2 is just a framework!
It suggests how to do things, and it does some of the heavy lifting for you by making some choices, but in the end you are responsible for using the framework to build what you want.
We are just saying that, although we only talk about Bearer tokens here, it doesn't mean you can't define your own custom type, with a meaning you are allowed to attribute to it.

Okay, just a single type. But that is a curious name. Does the name imply anything relevant?
Maybe this is a silly question, but for non-native English speakers like me, what Bearer means in this case could be slightly confusing.

Its meaning is quite simple actually:

A Bearer token means: if you have a valid token, we trust you. No questions asked.

So simple it's confusing. You might be arguing: "well, all the token-like objects in the real world work that way: if I have valid money, you exchange it for the goods you sell".

Correct. That's a valid example of a Bearer Token.

But not every token is a Bearer token. A flight ticket, for example, is not a Bearer token.
Having a ticket is not enough to be allowed to board a plane. You also need to show a valid ID that your ticket can be matched against; if your name matches the ticket and your face matches the ID card, you are allowed to get on board.

To wrap this up, we are working with a kind of token where possessing one is enough to get access to a resource.

And to keep you thinking: we said that OAuth2 is about delegation. Tokens with this characteristic are clearly handy if you want to hand them to someone as a form of delegation.

A token analogy

Once again, it might be my non-native English speaker background that makes me want to clarify this.
When I look up the first translation of token into Italian, my first language, I'm pointed to a physical object.
Something like this:

Token

That, specifically, is an old token, used to make phone calls in public telephone booths.
Despite being a Bearer token, its analogy with the OAuth2 tokens is quite poor.
A much better picture has been painted by Tim Bray in this old post: An Hotel Key is an Access Token
I suggest you read the article directly, but the main idea is that, compared to the physical metal coin linked above, your software token is something that can have a lifespan, can be disabled remotely, and can carry information.

Actors involved

These are our actors:

  • Resource Owner
  • Client (aka Application)
  • Authorisation Server
  • Protected Resource

It should be relatively intuitive: an Application wants to access a Protected Resource owned by a Resource Owner. To do so, it requires a token. Tokens are emitted by an Authorisation Server, which is a third party entity that all the other actors trust.

Usually, when I read something new, I tend to skip quickly through the actors of a system. Probably I shouldn't, but most of the time the paragraph that describes, for example, a "User" ends up using many words just to tell me that it is, well, a user… So I try to look for the terms that are less intuitive and check whether any of them has some characteristic I should pay particular attention to.

In OAuth2's specific case, I feel that the actor with the most confusing name is the Client.
Why do I say so? Because, in normal life (and in IT), it can mean many different things: a user, a specialised piece of software, a very generic piece of software…

I prefer to classify it in my mind as Application.

I stress that the Client is the Application we want to delegate our permissions to. So, if the Application is, for example, a server-side web application we access via a browser, the Client is not the user or the browser itself: the Client is the web application running in its own environment.

I think this is very important. The term Client is all over the place, so my suggestion is not to replace it entirely, but to force your brain to keep the relationship Client = Application in mind.

I also like to think that there is another not official Actor: the User-Agent.

I hope I won’t confuse people here, because this is entirely something that I use to build my mental map.
Despite not being defined in the specs, and also not being present in all the different flows, it can help to identify this fifth Actor in OAuth2 flows.
The User-Agent is most of the time impersonated by the Web Browser. Its responsibility is to enable an indirect propagation of information between two systems that are not talking directly to each other.
The idea is: A should talk to B, but it’s not allowed to do so. So A tells C (the User-Agent) to tell B something.

It might be still a little confusing at the moment, but I hope I’ll be able to clarify this later.

OAuth2 Core Business 2

OAuth2 is about how to obtain tokens.

Even if you are not an expert on OAuth2, as soon as someone mentions the topic you might immediately think of those pages from Google or the other major service providers that pop up when you try to log in to a new service on which you don't have an account yet, where you tell Google that yes, you trust that service and you want to delegate some of the permissions you have on Google to it.

This is correct, but it is just one of the multiple possible interactions that OAuth2 defines.

There are 4 main ones it's important you know. And this might come as a surprise if it's the first time you hear it:
not all of them will end up showing you the Google-like permissions screen!
That's because you might want to leverage the OAuth2 approach even from a command-line tool; maybe even one without any UI at all capable of displaying an interactive web page to delegate permissions.

Remember once again: the main goal is to obtain tokens!

If you find a way to obtain one (the "how" part) and you are able to use it, you are done.

As we were saying, there are 4 ways defined by the OAuth2 framework. Sometimes they are called flows, sometimes they are called grants.
It doesn't really matter what you call them. I personally use flow, since it helps me remember that they differ from one another in the interactions you have to perform with the different actors to obtain tokens.

They are:

  • Authorisation Code Flow
  • Implicit Grant Flow
  • Client Credential Grant Flow
  • Resource Owner Credentials Grant Flow (aka Password Flow)

Each one of them is the suggested flow for specific scenarios.
To give you an intuitive example, there are situations where your Client is able to keep a secret (a server-side web application) and others where it technically can't (a client-side web application whose code you can entirely inspect with a browser).
Environmental constraints like these would make some of the steps defined in the full flow insecure (and useless). So, to keep things simpler, other flows have been defined in which the interactions that were impossible, or that added no security-related value, are skipped entirely.

OAuth2 Poster Boy: Authorisation Code Flow

We will start our discussion with Authorisation Code Flow for three reasons:

  • it’s the most famous flow, and the one that you might have already interacted with (it’s the Google-like delegation screen one)
  • it’s the most complex, articulated and inherently secure
  • the other flows are easier to reason about, when compared to this one

The Authorisation Code Flow is the one you should use if your Client is trusted and is able to keep a secret. This means a server-side web application.

How to get a token with Authorisation Code Flow

  1. All the involved Actors trust the Authorisation Server
  2. User(Resource Owner) tells a Client(Application) to do something on his behalf
  3. Client redirects the User to an Authorisation Server, adding some parameters: redirect_uri, response_type=code, scope, client_id
  4. The Authorisation Server asks the User if he wishes to grant the Client access to some resource on his behalf (delegation) with specific permissions (scope).
  5. The User accepts the delegation request, so the Auth Server now sends an instruction to the User-Agent (Browser) to redirect to the URL of the Client. It also injects a code=xxxxx into this HTTP redirect instruction.
  6. The Client, having been activated by the User-Agent thanks to the HTTP redirect, now talks directly to the Authorisation Server (bypassing the User-Agent), sending client_id, client_secret and the code that was forwarded to it.
  7. The Authorisation Server returns to the Client (not the browser) a valid access_token and a refresh_token.

This is so articulated that it’s also called the OAuth2 dance!

Let’s underline a couple of points:

  • At step 3, we specify, among the other params, a redirect_uri. This is used to implement the indirect communication we anticipated when we introduced the User-Agent as one of the actors. It's a key piece of information if we want to allow the Authorisation Server to forward information to the Client without a direct network connection open between the two.
  • The scope mentioned at step 3 is the set of permissions the Client is asking for.
  • Remember that this is the flow you use when the Client is fully able to keep a secret. That matters at step 6, when the communication between the Client and the Authorisation Server avoids passing through the less secure User-Agent (which could sniff or tamper with it). This is also why it makes sense for the Client to add even more security by sending its client_secret, which is shared only between it and the Authorisation Server.
  • The refresh_token is used for subsequent automated calls the Client might need to make to the Authorisation Server. When the current access_token expires and the Client needs a new one, sending a valid refresh_token avoids having to ask the User again to confirm the delegation.
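To make the dance a little more concrete, here is a minimal sketch of steps 3 and 6 at the HTTP level, written with Java's built-in HttpClient. Every endpoint, client_id, secret and redirect_uri below is made up purely for illustration; a real Authorisation Server documents its own URLs and parameters.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class AuthCodeFlowSketch {

    // Step 3: the URL the Client redirects the User-Agent to (all values are hypothetical).
    static String authorizeUrl() {
        return "https://auth.example.com/authorize"
                + "?response_type=code"
                + "&client_id=my-client-1"
                + "&scope=" + URLEncoder.encode("scope1 scope2", StandardCharsets.UTF_8)
                + "&redirect_uri=" + URLEncoder.encode("https://app.example.com/callback", StandardCharsets.UTF_8);
    }

    // Step 6: the Client exchanges the code received on its redirect_uri for tokens,
    // talking directly to the Authorisation Server and proving itself with its client_secret.
    static String exchangeCode(String code) throws Exception {
        String form = "grant_type=authorization_code"
                + "&code=" + URLEncoder.encode(code, StandardCharsets.UTF_8)
                + "&redirect_uri=" + URLEncoder.encode("https://app.example.com/callback", StandardCharsets.UTF_8)
                + "&client_id=my-client-1"
                + "&client_secret=my-client-secret";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://auth.example.com/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();   // JSON carrying access_token and refresh_token (step 7)
    }
}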

OAuth2 Got a token, now what?

OAuth2 is a framework, remember. What does the framework tell me to do now?

Well, nothing. =P

It’s up to the Client developer.

She could (and often should):

  • check if the token is still valid
  • look up detailed information about who authorised this token
  • look up the permissions associated with that token
  • any other operation that makes sense before finally giving access to a resource

They are all valid, pretty obvious points, right?
Does the developer have to figure out on her own the best set of operations to perform next?
She definitely can. Otherwise, she can leverage another specification: OpenID Connect (OIDC). More on this later.
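One standard way to perform those checks without inventing anything yourself is OAuth2 Token Introspection (RFC 7662): the Client POSTs the token to the Authorisation Server's introspection endpoint and receives a JSON document with fields such as active, scope, client_id and exp, very much like the richer token shown later in this article. A minimal sketch, with a hypothetical endpoint and credentials:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of an RFC 7662 token introspection call; endpoint and credentials are made up.
public class IntrospectionSketch {
    static String introspect(String accessToken) throws Exception {
        String form = "token=" + URLEncoder.encode(accessToken, StandardCharsets.UTF_8);
        String basicAuth = Base64.getEncoder()
                .encodeToString("my-client-1:my-client-secret".getBytes(StandardCharsets.UTF_8));
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://auth.example.com/introspect"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .header("Authorization", "Basic " + basicAuth)   // the Client authenticates itself
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();   // e.g. {"active": true, "scope": "scope1 scope2", "exp": 1440538996}
    }
}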

OAuth2 - Implicit Grant Flow

It's the flow designed for Client applications that can't keep a secret. An obvious example is a client-side HTML application. But any binary application whose code is exposed to the public can also be manipulated to extract its secrets.
Couldn't we have re-used the Authorisation Code Flow?
Yes, but… what's the point of step 6 if the secret is not a secure secret anymore? We don't get any protection from that additional step!
So the Implicit Grant Flow is similar to the Authorisation Code Flow, but it skips that now-useless step.
It aims to obtain access_tokens directly, without the intermediate step of first obtaining a code that would be exchanged, together with a secret, for an access_token.

It uses response_type=token to specify which flow to use while contacting the Authorisation Server.
Note also that there is no refresh_token. This is because it's assumed that user sessions will be short (due to the less secure environment) and that, anyhow, the user will still be around to re-confirm his will to delegate (this was the main use case that led to the definition of refresh_tokens).

OAuth2 - Client Credential Grant Flow

What if we don't have a Resource Owner, or if he's indistinct from the Client software itself (a 1:1 relationship)?
Imagine a backend system that just wants to talk to another backend system. No Users involved.
The main characteristic of such an interaction is that it’s no longer interactive, since we no longer have any user that is asked to confirm his will to delegate something.
It also implicitly defines a more secure environment, where you don't have to worry about active users being able to read secrets.

It uses grant_type=client_credentials in the token request.

We are not detailing it here; just be aware that it exists and that, like the previous flow, it's a variation (a simplification, actually) of the full OAuth dance, which you are encouraged to use if your scenario allows it.

OAuth2 - Resource Owner Credentials Grant Flow (aka Password Flow)

Please raise your attention here, because you are about to be confused.

This is the scenario:
The Resource Owner has an account on the Authorisation Server. The Resource Owner gives his account details to the Client. The Client uses these details to authenticate to the Authorisation Server…

=O

If you have followed through the discussion you might be asking if I’m kidding you.
This is exactly the anti-pattern we tried to move away from at the beginning of our OAuth2 exploration!

How is it possible to find it listed here as possible suggested flow?

The answer is quite reasonable actually: it's a possible first stop for a migration from a legacy system.
And it's actually a little better than the shared-password antipattern:
the password is shared, but it is just a means to start the OAuth dance used to obtain tokens.

This allows OAuth2 to get its foot in the door when we don't have better alternatives.
It introduces the concept of access_tokens, and it can be used until the architecture is mature enough (or the environment changes) to allow a better and more secure flow for obtaining tokens.
Also, notice that now the tokens are the ad-hoc password that reaches the Protected Resource system, while in the fully shared password antipattern it was our actual password that had to be forwarded.

So, far from ideal, but at least it is justified by some criteria.

How to choose the best flow?

There are many decision flow diagrams on the internet. One of those that I like the most is this one:

OAuth2 Flows from https://auth0.com

It should help you remember the brief descriptions I have given you here and choose the easiest flow based on your environment.

OAuth2 Back to tokens - JWT

So, we are able to get tokens now. We have multiple ways to get them. We have not been told explicitly what to do with them, but with some extra effort and a bunch of additional calls to the Authorisation Server we can arrange something and obtain useful information.

Could things be better?

For example, we have assumed so far that our tokens might look like this:

{
   "access_token": "363tghjkiu6trfghjuytkyen",
   "token_type": "Bearer"
}

Could we have more information in it, so as to save us some round-trips to the Authorisation Server?

Something like the following would be better:

{
  "active": true,
  "scope": "scope1 scope2 scope3",
  "client_id": "my-client-1",
  "username": "paolo",
  "iss": "http://keycloak:8080/",
  "exp": 1440538996,
  "roles": ["admin", "people_manager"],
  "favourite_color": "maroon",
  ... : ...
}

We'd be able to directly access some information tied to the Resource Owner's delegation.

Luckily someone else had the same idea, and they came up with JWT - JSON Web Tokens.
JWT is a standard that defines the structure of JSON-based tokens representing a set of claims. Exactly what we were looking for!

Actually, the most important thing the JWT spec gives us is not the payload we have exemplified above, but the capability to trust the whole token without involving an Authorisation Server!

How is that even possible? The idea is not a new one: asymmetric signing (public key), defined in the context of JWT by the JOSE specs.

Let me refresh this for you:

In asymmetric signing, two keys are used to verify the validity of information.
These two keys are coupled, but one is secret, known only to the document creator, while the other is public.
The secret one is used to calculate a fingerprint of the document: a hash.
When the document is sent to its destination, the reader uses the public key, associated with the secret one, to verify whether the document and the fingerprint he has received are valid.
Digital signing algorithms tell us that the document is valid, according to the public key, only if it has been signed by the corresponding secret key.

The overall idea is: if our local verification passes, we can be sure that the message has been published by the owner of the secret key, so it’s implicitly trusted.

And back to our tokens use case:

We receive a token. Can we trust this token? We verify the token locally, without the need to contact the issuer. If and only if the verification based on the trusted public key passes, we confirm that the token is valid. No questions asked. If the token is valid according to the digital signature AND it is alive according to its declared lifespan, we can take that information as true and we don't need to ask the Authorisation Server for confirmation!
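Here is a minimal sketch of that local check for an RS256-signed JWT, using only JDK classes: no JOSE library, and no parsing of the claims or of the exp lifespan, which a real implementation must add on top.

import java.nio.charset.StandardCharsets;
import java.security.PublicKey;
import java.security.Signature;
import java.util.Base64;

// Sketch: verify the signature of a JWT of the form header.payload.signature,
// assuming it is RS256-signed and we already trust the issuer's public key.
public class JwtSignatureCheck {
    static boolean signatureIsValid(String jwt, PublicKey issuerKey) throws Exception {
        String[] parts = jwt.split("\\.");
        if (parts.length != 3) return false;
        byte[] signedData = (parts[0] + "." + parts[1]).getBytes(StandardCharsets.US_ASCII);
        byte[] signature = Base64.getUrlDecoder().decode(parts[2]);
        Signature verifier = Signature.getInstance("SHA256withRSA");   // RS256
        verifier.initVerify(issuerKey);
        verifier.update(signedData);
        return verifier.verify(signature);   // true only if signed by the matching private key
    }
}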

As you can imagine, since we put all this trust in the token, it might be wise not to emit tokens with an excessively long lifespan:
someone might have changed his delegation preferences on the Authorisation Server, and that information might not have reached the Client, which still has a valid, signed token it can base its decisions on.
Better to keep things a little more in sync by emitting tokens with a shorter lifespan, so outdated preferences don't risk being trusted for long periods.

OpenID Connect

I hope this section won’t disappoint you, but the article was already long and dense with information, so I’ll keep it short on purpose.

OAuth2 + JWT + JOSE ~= OpenID Connect

Once again: OAuth2 is a framework.
The OAuth2 framework is used in conjunction with the JWT spec, JOSE and other ideas we are not going to detail here, to create the OpenID Connect specification.

The idea you should take away is that, more often than not, you are probably interested in using and leveraging OpenID Connect, since it puts together the best of the approaches and ideas defined here.
You are, yes, leveraging OAuth2, but you are now within the much better defined bounds of OpenID Connect, which gives you richer tokens and support for Authentication, something that was never covered by plain OAuth2.

Some online services let you choose between OAuth2 and OpenID Connect. Why is that?
Well, when they mention OpenID Connect, you know you are using a standard: something that will behave the same way even if you switch implementations.
The OAuth2 option you are given is probably something very similar, potentially with some killer feature you might be interested in, but custom built on top of the more generic OAuth2 framework.
So be cautious with your choice.

Conclusion

If you are interested in this topic, or if this article has only confused you more, I suggest you check out OAuth 2 in Action by Justin Richer and Antonio Sanso.
On the other hand, if you want to test your fresh knowledge and try to apply it to an open source Authorisation Server, I definitely recommend playing with Keycloak, which is capable of everything we have described here and much more!


          Facebook will punish if you ask for like and share   
We all have seen admins of various pages begging for likes and shares, using various tricks and emotional dramas to get likes on their Facebook page and posts. Recently Facebook updated its algorithm, and Facebook says that if a page asks for likes and shares in its posts, the algorithm will crush its presence in the news feed of its fan base. As mentioned by Facebook Product Manager Chris Turitzin and a Facebook software engineer in a blog post, this measure was taken to punish admins who "deliberately try and game the news feed algorithm to get more distribution than they normally would." Pages that post copied and repeated content, like memes and images of cute animals, while asking […]
          Good Things About Bad Reviews   
You wake up on a Saturday morning, pour yourself a cup of coffee, and glance at your Amazon review page. And, lo and behold, there's a nice, shiny, new little nasty-gram for you on the page. What do you do?
(a) Cry
(b) Throw things
(c) Vow to stop writing forever
(d) Shout, "Woohoo! A bad review!" 
If you picked (d), I think you've got the right idea. Here are some reasons why.

First of all, you need bad reviews. You have to have bad reviews on your page to show prospective readers that the reviews on your site are real. There is no such thing as a book that is beloved by every single person who reads it. A review page that has only four- and five-star reviews sends the message that those reviews all came from the author's friends and not from legitimate readers. The negative reviews prove that strangers are really reading your book, and they give credibility to the positive reviews. I also suspect that the Amazon algorithms probably think the same way, and that a book with a mix of reviews will probably get more face-time on the site than one that only has positive reviews, but that's just my guess and I could be wrong.

Second, as authors we are lucky to have a beautiful system in place for receiving honest customer feedback--something every good business should have, in my opinion. Reviews are that system.

I know a lot of authors who don't even read their reviews. Personally, I read every single review I get. And if it's harsh or critical, I read it several times. Then, I have a fairly systematic approach for dealing with it.

First and foremost, if I'm truly hurt or offended by the review and this is clouding my ability to look at it objectively, I have a top secret (until now) way for getting over that. I go to the Amazon pages of some of my very favorite authors and read some of *their* one-star reviews. This tends to put things into perspective. Then, once the initial pain has subsided, I mentally place the negative review into one of three buckets:
(1) This reviewer is just mean, offers no constructive criticism, and might even perhaps have some kind of emotional issue causing him/her to lash out at strangers online
(2) This reviewer has some very good points which I want to keep in mind when writing my next book
(3) A lot of other people specifically liked the things this reviewer disliked, and I did those things on purpose, so I'm probably not going to change anything based on this review. Sorry, reviewer!
The bucket (1) reviews get ignored or laughed at a bit. They can be quite entertaining sometimes, even if they aren't very helpful. You know the ones I'm talking about.

The bucket (2) reviews get a lot of attention, because those are the ones I intend to learn from. A good bad reviewer can call my attention to something I wasn't even aware of which I don't want to repeat in future books. Thank you, good bad reviewer!!

The bucket (3) reviews are a bit tricky. If the negative points are one-off (i.e. of all the reviews on the novel, this person seems to be the only person who felt that way...) I tend to shrug and think, well, my book isn't for this person, and that's OK. However, if I start seeing a lot of people making the same points, I will *consider* the notion of changing something in future books. Because there has to be a balance between writing for one's self and writing books people want to read, if you're interested in selling them.

The bottom line is that a negative review can help you grow as an author if you use it. If you're Stephen King, you probably don't need to pay attention to your bad reviews, because readers already expect a certain signature style from you and that style sells. So that's going to be your style until the day you stop writing books. But if you're not Stephen King, you're lucky. You still have the opportunity and the freedom to develop your voice as an author. Reviews, good and bad, are a priceless tool for doing this.

Authors, do you read your reviews? How do you deal with the negative ones?





          Is JPEG2000 compression suitable for PDF print masters?   
And one more thing about the tests that were done: who uses Adobe's JPEG quality levels "Maximum" or 12 for efficiency and quality comparisons?

The noise algorithms used at those levels are well known to be counterproductive for critical motifs of this kind, and a catastrophe when the files are later recompressed to lower JPEG quality levels.


Best regards

Thomas


And if this helped you, help us so we can keep providing it.
http://www.hilfdirselbst.ch/info/
... PDF in prepress, 30 Jun 2017, 12:25 (Thomas Richard)
          More on Google's Supplemental Index (Yellow Band/Secondary Index)   
In case you missed it, there was a good debate ongoing in the numerous comments of a recent blog post by Mary McKnight. The debate closed in on a basic question - is there more than one class of index in Google? And if so, if you have a larger percentage of pages in the "secondary index", would it affect content discoverability? The answer is clearly yes - read on.

I've done a little more research and found that the idea of a yellow band (as coined by Mary), and secondary index (as suggested by me), is more commonly known as the "supplemental index".

"Hey, pages get added to the supplemental index using automatic algorithms. You can imagine a lot of useful criteria, including that we saw a url during the main crawl but didn't have a chance to crawl it when we first saw it. Think of this as icing on the cake. If there's an obscure search, we're willing to do extra work with this new experimental feature to turn up more results. The net outcome is more search results for people doing power searches." - GoogleGuy, Aug 27, 2003 (this is the first indication Google started experimenting with the supplemental index)

"As Google explains it, it's a question of priorities. Supplemental results have a secondary priority. So they're spidered less frequently and may well have less information held about them in the database. Google says that the PageRank is unaffected [by the supplemental index]. Currently there seem to be few supplemental results showing in typical keyword searches. That suggests to me it's better to do what it takes to get your web pages into the regular index and avoid the supplemental index." - Barry Welford (Supplemental Results - A Word to the Wise)

I noticed on a forum that one webmaster grappling with the supplemental index wrote: "With a casual inspection I could see that all these pages in the supplemental were the php based dynamic URLs. Google does not seem to index them and though they are linked to high pagerank pages, they can not get out of the supplemental. So the only way to reduce such instances is to rewrite your applications which generate the dynamic URLS and make them search engine friendly."

This is untrue. Blogsite is 100% dynamic and our customers average more than 90% of all their blogsite pages in the primary index; they achieve this by doing nothing special - they just blog. We believe our high rate of success is related to the architecture of our presentation layer (i.e., the way our platform generates HTML). Not many folks realize it, but the MyST platform (the foundation of Blogsite and Real Estate Blogsites) was designed for knowledge management and high search optimization.

Shimon Sandler offers a list of reasons why pages get shoved into the supplemental index:
  • You have little unique text on your webpages (maybe a lot of images, and little text),
  • Duplicate content,
  • Your Title and Description meta tags are all identical,
  • Your pages have similar header, sidebar, and footer sections,
  • Your pages are dynamically generated from a database,
  • Possibly most of your links are reciprocal links (not one-way incoming links),
  • Orphaned web pages, which are pages that no one links to, including yourself.
Many of these points suggest (although not conclusively) that architectural issues concerning your HTML affect your ability to avoid the supplemental index.
This seems to corroborate what we see with Blogsite. The best evidence and overview of the supplemental index can be found at SEO Adept.

"The supplemental index is not a good place for your pages to be, as pages in the supplemental index have almost no chance of ranking for good keywords." - Staying Out of Google's Supplemental Index

SEO Adept also offers these tips to help you get those pages out of the supplemental class:
  • Make sure that your pages have enough content. Extremely short blog posts and other very brief pages sometimes end up in the supplemental index.
  • Make sure that your pages have unique content, from each other and from other pages on the Internet.
  • Make sure that no one is duplicating your pages elsewhere on the Internet. You can run a search on some of the unique phrases in your page to see if other pages may be similar.
  • Try to acquire more and better links to your supplementally indexed pages.
  • Try to get keywords that people are searching for in the anchor text of links coming from authoritative, similarly themed pages.

These tips all make sense of course - nothing new here. What *is* new (to me anyway) is that the supplemental index is apparently quite real and avoiding it is an important success factor in terms of your online marketing strategy. Given this understanding, I'm going to continue to use the ratio of pages in the primary index to total pages in the index as a measure of index penetration success. This seems to be an excellent measure of blogging success because blogging already does a good job of addressing many of the [apparent] reasons that pages get supplementalized.
          Facebook says it will identify and demote links from users who post more than 50 times a day in News Feed (Kurt Wagner/Recode)   

Kurt Wagner / Recode:
Facebook says it will identify and demote links from users who post more than 50 times a day in News Feed  —  People who post 50-plus times per day are likely sharing spam or false news, Facebook says.  —  Facebook has a new way of identifying false news and spam in users' feeds …


          Tech News Today 1801: A Bunch of Bought Bots   

Tech News Today (MP3)

Facebook, Twitter, and other social media companies could face fines of up to 57 million dollars in Germany if they fail to remove illegal, racist, or slanderous comments within 24 hours of receiving report of the offense.

The Washington Post says Twitter is prototyping a new feature that would allow its users to actively flag tweets containing misleading information and fake news in an attempt to control the network effect that comes from bot tweeting, among other related problems. Twitter says they currently have no plans to launch this product.

ReCode reports that Facebook is changing its algorithm to reduce the reach of anyone who posts fifty times a day or more, regardless of the content of the links. Facebook VP Adam Mosseri says they've identified a small group of people, not bots, who post spammy links, which he says are likely to contain low quality content such as clickbait, sensationalism, and misinformation.

Plus, Project Fi might get supported by a non-Google phone, the Surface Mini that never was is shown off in leaked photos, and Sam Machkovech from Ars Technica talks about Blizzard's remaster of Starcraft and the fate of CastAR.

Hosts: Megan Morrone and Jason Howell

Guest: Sam Machkovech

Download or subscribe to this show at https://twit.tv/shows/tech-news-today.

Thanks to CacheFly for the bandwidth for this show.


          Comparison of secondary arcs for reclosing applications   
This paper presents a comparison of secondary electric arcs behavior for reclosing applications. The main objective of the paper is to show the differences in secondary arcs during simulations, laboratory tests and real life events. These comparisons can help engineers develop adaptive reclosing algorithms and improve power system stability. A software arc model, a laboratory test result and field events' harmonic contents were analyzed. Although the harmonics may differ in each method, they all provide enough information to be used in the reclosing algorithms.
          Aging feature extraction of oil-impregnated insulating paper using image texture analysis   
Under the long-term synergistic effect of multiple factors, especially thermal stress, insulating paper degrades and its insulation performance declines due to carbonization and degradation of cellulose. This paper presents an optical approach for aging feature extraction of the insulating paper, using one of the image processing methods called texture analysis. By conducting laboratory accelerated thermal aging tests, insulating paper samples at different aging conditions, evaluated by aging time, are prepared for both Nomex and Kraft. After taking optical microscopic images of insulating paper samples belonging to different aging groups, up to 14 texture features are extracted using the gray-level co-occurrence matrix (GLCM). With different feature selection methods applied, several of them are finally selected to represent the aging condition of the insulating paper. Numerical tests with both supervised and unsupervised algorithms, as well as a linear regression method, verify the validity of these features in characterizing the aging condition of the insulating paper.
          Surface charge inversion algorithm based on bilateral surface potential measurements of cone-type spacer   
To study the surface charge distribution of high-voltage direct-current (HVDC) spacers for gas-insulated lines (GIL), a surface charge inversion algorithm based on surface potential measurements is necessary. However, previous studies on inversion calculation of surface charge density only considered the surface potential of one side, neglecting the effect of residual surface charge on the other side of the spacer; this leads to an inaccurate inverted surface charge distribution. In this paper, the inversion algorithms used to determine surface charges over the past few decades are summarized, and their advantages and disadvantages are discussed. Building on these previous charge inversion algorithms, an improved surface charge inversion algorithm considering the surface potential of both sides of a cone-type model spacer was developed. The improved method was verified experimentally using a cone-type model spacer. Compared with that obtained by previous charge inversion algorithms, the charge distribution derived from the improved algorithm was closer to the situation determined from theoretical analysis. This work has practical relevance for the surface charge measurement of cone-type spacers and offers a new direction for surface charge analysis algorithms.
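As a rough illustration of the general idea (not the authors' algorithm), the sketch below solves a Tikhonov-regularized least-squares inversion that stacks potential measurements from both sides of the spacer; the influence matrices and measurement vectors are hypothetical placeholders:

    import numpy as np

    def invert_surface_charge(A_top, A_bottom, phi_top, phi_bottom, lam=1e-3):
        # Stack both sides so residual charge on the far surface is not neglected.
        A = np.vstack([A_top, A_bottom])
        phi = np.concatenate([phi_top, phi_bottom])
        # Tikhonov-regularized normal equations: (A^T A + lam*I) sigma = A^T phi.
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ phi)

    # Tiny synthetic check: recover a known charge vector from noisy potentials.
    rng = np.random.default_rng(0)
    A_top, A_bottom = rng.random((40, 20)), rng.random((40, 20))
    sigma_true = rng.normal(size=20)
    phi_top = A_top @ sigma_true + 1e-3 * rng.normal(size=40)
    phi_bottom = A_bottom @ sigma_true + 1e-3 * rng.normal(size=40)
    print(np.allclose(invert_surface_charge(A_top, A_bottom, phi_top, phi_bottom),
                      sigma_true, atol=0.1))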
          The June 25 Google Update: What You Should Do Now by @beaupedraza   

Were you impacted by the June 25 Google algorithm update? If so, take these steps to protect your website.

The post The June 25 Google Update: What You Should Do Now by @beaupedraza appeared first on Search Engine Journal.


          TransTech Staffing posted a blog post   
TransTech Staffing posted a blog post

4 Ways to Compare Your IT Salary

In most instances, job seekers who are capable of framing their technical experience and interpersonal skills as a high-value service receive the most job offers. And like any premium service, it deserves fair compensation. Rather than over- or underbidding, job seekers need to know how to compare IT salaries to recognize offers that align with their skills.

Researching Online Salary Websites

What is the most complete source for information technology salary data? Job boards, social media platforms, and human capital websites are all using their own dedicated data to build IT salary reports. Each major provider has its own advantages and disadvantages which need to be considered as you compare IT salaries.

PayScale – The PayScale market data comes straight from tech professionals. Real-time salary surveys administered through their website gather 150,000 new records each month. Job titles are standardized in a way that has both positive and negative outcomes. PayScale's algorithms determine which big pillar skillsets fall into which jobs (i.e. C++ Developer equals Software Engineer), but users need to be specific about skills, locations, and certifications to get more precise reports.

Salary.com – All of Salary.com's market data is aggregated from hundreds of employer-reported surveys. Each report is an averaged overview of local, industry, and association sources reporting actual tech employee compensation. Their sources are kept confidential (due to non-disclosure agreements), but they are considered an authority on IT salary comparisons and other industries.

LinkedIn Salary – With 500 million users, LinkedIn has a wealth of IT professionals to leverage and is encouraging users to report their own tech salary data with regular survey requests. LinkedIn strives to make their salary data user friendly. Breakdowns by industry, company size, education level, and location are available in easy-to-explore visuals. Though the volume of their data is still growing, the social network will be an increasingly critical resource as people compare IT salaries.

Exploring the Bureau of Labor Statistics' Data

The federal government does a solid job of gathering reliable statistics on national and local marketplaces. Though their figures are less nuanced when it comes to particular technologies, the Bureau of Labor Statistics (BLS) provides some peripheral information that is just as important as you compare IT salary information by state and city.

The BLS provides the median pay and entry-level education that most of the salary websites offer. It's a good orbital-level view of industry trends, though it does not account for specific technologies or certifications. What is provided are the job outlooks for specific roles, the number of jobs expected to be added to the market, and visual data comparing state and metro IT compensation.

Look at their data for an Application Developer as an example. Their interactive U.S. maps show the total Application Developers employed, their market dispersal, and the annual mean wages on both a state and city level. For example, the Chicago-Naperville-Arlington Heights metro area ranks among the top 10 metropolises nationwide for its combined employment, density, and annual mean wage. There is even a way to see what the top and bottom percentiles are based on experience. It's a great way to fill in the gaps in your knowledge with reputable data.

Measuring Your Own Experience

Your own work history and experience are the best predictors of what to expect from any future tech salary. It's your own blend of titles, technical capabilities, and former employers that influences the range of job offers you get. Appraise your selling points accurately, and the possible compensation for your technical skills will reach its peak. Ask these questions to gain some self-awareness into what will be your most rewarding job search:

Do your past employers appeal to one industry over others? If you've overseen cybersecurity for a healthcare provider or analyzed data for an oil and gas company, there will be more brand recognition among their peers.

Are your technical skills worth more to a certain type of industry? Some niche skills or cross-sections of talents will earn you far more in one sector than another.

Does your official title accurately convey your skill set? If not, find one that more clearly depicts your worth.

Working with a Recruiter

This combines all of the others into a single step. Recruiters research tech compensation rates from private and public sources to help candidates earn competitive compensation and to help companies recognize when their offer is below market value. They work with thousands of candidates in the local market and are well acquainted with the compensation that tech pros want.

Want to quickly learn if your tech salary is competitive for the Chicago market? Contact one of our recruiters. TransTech IT Staffing takes pride in knowing the local market inside and out. Let's take your career to the next level.

          How will IoT change your life?   



An IoT (Internet of Things) setup has devices connected to the internet catering to multiple use cases: monitoring assets; executing tasks and services to support day-to-day human needs; ensuring life and safety through alerts and responses; managing city infrastructure through command-and-control centers for emergency response; enabling efficient governance through process automation; provisioning healthcare; and enabling sustainable energy management, thereby addressing environmental conservation concerns.

A platform that caters to all of the above use cases, from devices and sensors to management functionality, qualifies as a smart city platform.

Cloud computing is a popular service with many characteristics and advantages. Essentially, cloud computing is a DIY (Do It Yourself) service in which a user or consumer subscribes to computing resources on demand, while the services are delivered entirely over the internet.

IoT and cloud computing go hand in hand. Though they are two different technologies, both are already part of our lives, and their pervasiveness qualifies them as the internet of the future. Cloud merged with IoT is foreseen as a new and disruptive paradigm.

Just as cloud is available as a service, whether infrastructure, platform, or software, IoT is seen as every(thing) as a service for the future, since it also fulfills the smart city initiative. The first and foremost requirement of any IoT platform for a smart city is on-demand self-service, which enables usage-based subscription to the computing resources (hardware) that manage and run the automation, platform functions, software features, and algorithms that form part of the city management infrastructure.

The characteristics of such an IoT-on-cloud scenario are:

·     Broad network access - to enable connectivity for any device, whether laptop, tablet, nano/micro/pico gateway, actuator, or sensor.
·     Resource pooling - for on-demand access to compute resources, such as assigning an identity to a device in the pool.
·     Rapid elasticity - to quickly adjust software features to meet elastic computing, storage, and networking demands.
·     Measured service - pay only for the resources and/or services used, based on duration, volume, or quantity of usage.

The advantage of any IoT cloud setup is that, from a service consumer's point of view, it involves no upfront CAPEX for building the entire infrastructure from the ground up. Rather, it is based on a subscribe-operate-scale-pay model. This gives stakeholders and decision makers instant access to an actual environment, which helps them gauge prospective investment and expenditure, while technology teams are geared up to anticipate which component of the IoT setup needs to be scaled rather than replicating the entire setup to meet growing demand.

Docker containers (which bundle a compute, storage, and software module with the runtime environment required to run that module of the overall software) and microservices (independent services with their own data persistence and software dependencies, which can run on their own or provide services to monolithic systems) are some of the features that help manage the scalability of an IoT platform on the cloud for the smart city use case. Individual modules and components within the IoT platform can be preconfigured as containers and microservices. Once there is traction on the platform, the respective container or microservice gets provisioned to handle the surge in data traffic, so each individual function of the platform becomes horizontally scalable. Hence, to address such ad hoc scalability requirements, only the individual module concerned needs to be scaled, unlike monolithic systems where the entire platform needs replication; this saves substantial resources and OPEX for stakeholders.

This platform architecture can be implemented on a cloud infrastructure reusing legacy hardware or over commodity computing infrastructure.

Any smart city deployment of an IoT platform demands a fail-safe, highly available setup. As a result, the computing infrastructure has to be clustered (grouping similar functional units/modules within the software system). With the surge in the number of clusters and containers for each functional module, managing such disparate clustered environments becomes a challenge. Technologies such as Kubernetes and Mesos address these challenges.

Mesos and Kubernetes enable efficient management, scaling, and optimization of the containers, microservices, and APIs that are exposed as PaaS or SaaS over cloud infrastructure, thereby fulfilling on-the-fly auto-scaling demands from service consumers.
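As a rough illustration of scaling a single module instead of the whole platform, here is a minimal sketch using the official Kubernetes Python client; the deployment name and namespace are hypothetical and not tied to any product mentioned above, and a valid kubeconfig is assumed:

    from kubernetes import client, config

    def scale_module(deployment: str, replicas: int, namespace: str = "iot-platform"):
        config.load_kube_config()  # or config.load_incluster_config() inside the cluster
        apps = client.AppsV1Api()
        # Scale only this one functional module, not the whole platform.
        apps.patch_namespaced_deployment_scale(
            name=deployment,
            namespace=namespace,
            body={"spec": {"replicas": replicas}},
        )

    # scale_module("telemetry-ingest", 6)  # e.g. handle a surge in sensor traffic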

The Pacific Controls Galaxy2021 platform, built using open source technologies, has adopted most of the technologies and best practices mentioned above. This forms a unique value proposition, enabling early adoption of the latest technology innovations in the open source world related to IoT or cloud computing. The Galaxy2021 platform is horizontally scalable and capable of managing the disparate IoT applications of various stakeholders. It can handle high-volume data communication originating at high frequency from the various sensors, devices, and gateways installed across smart city assets.

The Galaxy2021 Platform has been deployed and is available on different cloud infrastructures in public, private, and hybrid models, serving customers ranging from governments and utility companies to OEMs across the Middle East, US, and Oceania.




          ALGORITHM   
Blown-glass LED pendant lamp
          We are looking for a colleague for an Algorithm Developer position. | Tasks: Research and develop algorith...   
We are looking for a colleague for an Algorithm Developer position. | Tasks: Research and develop algorithms for highly automated driving and driver assistance • Concept development and implementation of sensor fusion and environmental modelling algorithms used in highly automated driving • Create mathematical models for describing the perceived environment • Optimize algorithms for embedded real-time usage • Discuss technical solutions as part of an international team • Analyze test results and measurements from the field • Work in an international environment • Gain an overview of specific, dedicated automated driving systems • Close collaboration in agile teams. | What we offer: Flexible work-time options, strong cooperation with international colleagues in the Automated Driving area, participation in different conferences, inspiring technologies, friendly working environment, various benefits and services, various health and sports opportunities, on-site parking, catering facilities | Requirements: BSc or MSc degree in computer science or comparable • Knowledge of tracking algorithms and/or machine learning techniques • Knowledge of sensor fusion algorithms • Strong interest in automated driving trends and systems • Knowledge of object-oriented programming • SW development skills in C++, Python/Matlab • Good coordination and communication skills • Ability to efficiently gain an overview of complex technical content • Ability to work independently • Willingness to travel • Fluent English, active usage of the language | More info and application here: www.profession.hu/allas/1034050
          We are looking for a colleague for a Data Scientist position. | Tasks: Interact with customers to underst...   
We are looking for a colleague for a Data Scientist position. | Tasks: Interact with customers to understand their requirements and identify emerging opportunities. • Take part in high- and detailed-level solution design to propose solutions and translate them into functional and technical specifications. • Convert large volumes of structured and unstructured data into actionable insights and business value using advanced analytical solutions. • Work independently and provide guidance to less experienced colleagues/employees. • Participate in projects, and closely work and collaborate effectively with onsite and offsite teams at different worldwide locations in Hungary/China/US while delivering and implementing solutions. • Continuously follow data science trends and related technology evolutions in order to develop the knowledge base within the team. | What we offer: Be a member of a dynamically growing site and an enthusiastic team. • Professional challenges and opportunities to work with prestigious multinational companies. • Competitive salary and further career opportunities. | Requirements: Bachelor's/Master's degree in Computer Science, Math, Applied Statistics or a related field. • At least 3 years of experience in modeling, segmentation, statistical analysis. • Demonstrated experience in Data Mining and Machine Learning; Deep Learning (TensorFlow) or Natural Language Processing is an advantage. • Strong programming skills using Python, R, SQL and experience in algorithms. • Experience working on big data and related tools (Hadoop, Spark) • Open to improving his/her skills and competencies and learning new techniques and methodologies. • Strong analytical and problem-solving skills to identify and resolve issues proactively • Ability to work and cooperate with onsite and offsite teams located in different countries (Hungary, China, US) and time zones. • Strong verbal and written English communication skills • Ability to handle strict deadlines and multiple tasks. | More info and application here: www.profession.hu/allas/1033284
          We are looking for a colleague for a Software Developer C#/.NET position. | Tasks: Understand client requ...   
We are looking for a colleague for a Software Developer C#/.NET position. | Tasks: Understand client requirements and propose technical solutions. • Take part in high- and detailed-level solution and software architecture design. • Develop new and existing applications while following the company's and/or the client's standards and best practices to keep the provided solution secure • Prepare and provide the relevant and related documentation. • Ensure the quality of the assigned development work, perform test activities and fix identified and/or assigned bugs. • Participate in the research of new technologies. • Work independently and provide guidance to less experienced colleagues/employees. • Participate in projects, and closely work and collaborate with onsite and offsite teams at different worldwide locations in Hungary/China/US while delivering and implementing solutions. | What we offer: Be a member of a dynamically growing site and an enthusiastic team. • Professional challenges and opportunities to work with prestigious multinational companies. • Competitive salary and further career opportunities. | Requirements: At least 4 years of experience as a developer in a similar position • Strong Computer Science fundamentals (algorithms, data structures) and understanding of software architecture and object-oriented design, including Design Patterns. • Deep knowledge of web applications, web services, the .NET framework and the C# language • Demonstrated experience in server-side programming (C#, ASP.NET). • Demonstrated experience in web development using HTML/HTML5, CSS and popular client-side JavaScript frameworks (jQuery/angular.js/React.js). • Solid experience in relational databases and SQL, preferably MSSQL • Open to improving his/her skills and competencies and learning new technologies. • Strong analytical and problem-solving skills to identify and resolve issues proactively • Ability to work and cooperate with onsite and offsite teams located in different countries (Hungary, China, US) and time zones. • Strong verbal and written English communication skills • Experience in test automation and unit testing is an advantage | More info and application here: www.profession.hu/allas/1033297
          We are looking for a colleague for a Research & Advanced Development SW Developer position. | Tasks: Shaping...   
We are looking for a colleague for a Research & Advanced Development SW Developer position. | Tasks: Shaping the future of the next generations of automotive steering systems, from low-level software to functional development • Software development for customer experiment projects, e.g. Highly Automated Driving, Steer-by-Wire • Creating new mathematical models, deriving high-level algorithms • SW requirements analysis, effort estimation, module design, implementation, control and functionality development, integration, testing and debugging • Active participation and responsibility in each phase of projects, from start to customer test drives up to level 3 releases. | What we offer: Flexible worktime options, benefits and services, childcare offers, medical services, employee discounts, various sports and health opportunities, on-site parking, catering facilities, access to local public transport, room for creativity, urban infrastructure | Requirements: BSc/MSc in computer science, automation, electronics and telecommunications, informatics, mathematics or comparable. • 3+ years of experience with embedded software development, preferably in the automotive industry, and strong C programming skills • Strong know-how in developing, verifying and testing MATLAB Simulink models and generating SW components based on the model • Experience with one or more of the fieldbus systems and toolchains (CAN, FlexRay) • An advantage: experience with real-time operating systems, development with AUTOSAR components, diagnostics, UML, DOORS, ClearQuest, ClearCase, Automotive SPICE, functional safety for road vehicles (ISO 26262) • Experience with driving electrical drives is also an asset • Advanced English or German • Good communication and excellent problem-solving skills | More info and application here: www.profession.hu/allas/1033401
          SuperFreakonomics Book Club: Ian Horsley Answers Your Questions About the Terrorist Algorithm   

In the SuperFreakonomics Virtual Book Club, we invite readers to ask questions of some of the researchers and other characters in our book. Last week, we opened up the questioning for "Ian Horsley," a banker who's been working with Steve Levitt to develop an algorithm to catch terrorists. His answers are below. Thanks to Ian and to all of you for the questions.

The post SuperFreakonomics Book Club: Ian Horsley Answers Your Questions About the Terrorist Algorithm appeared first on Freakonomics.


          AI Creates Art That Critics Can't Distinguish From Human-Created Work   

The greatest artists of our time are considered unique, but what if artificial intelligence can be taught to create art? And what if it turns out us humans actually prefer it? Researchers from Rutgers University, College of Charleston, and Facebook’s AI Research Lab have created an algorithm that allows AI to create art that is […]

The post AI Creates Art That Critics Can't Distinguish From Human-Created Work appeared first on Breaking News, Sports, Entertainment and more.


          AI Creates Rather Wonderful Art That Fools Critics It’s Not Human-Made   

The greatest artists of our time are considered unique, but what if artificial intelligence can be taught to create art? And what if it turns out us humans actually prefer it? Researchers from Rutgers University, College of Charleston, and Facebook’s AI Research Lab have created an algorithm that allows AI to create art that is […]

The post AI Creates Rather Wonderful Art That Fools Critics It’s Not Human-Made appeared first on Breaking News, Sports, Entertainment and more.


          Facebook alters news feed to crack down on spam links   
Facebook announced on Friday a new adjustment to its news feed algorithm that the company says will help crack down on spam links.Adam Mosseri, a Facebook vice president, wrote in a blog post that the effort would help “deprioritize” posts from...
          Facebook's hate speech screening rules tend to discriminate, protecting white men   

ProPublica has published a report on internal Facebook training documents for the algorithm that screens hate speech; some of the documents teach the system to protect white men from hate speech more than women and black children.

The training document now at issue poses the question of which group we should protect from hate speech, asking the system to choose one of three options: female drivers, black children, and white men. The system chooses the third option, white men.

ProPublica explains the reasoning behind this training slide: the policy protects people from attacks based on categories such as sex, gender identity, religion, race, sexual orientation, and serious disability, but it does not protect groups defined by occupation, social class, or political affiliation, which are treated as less central to a person's identity. In the example, the first two options contain the qualifiers "Drivers" and "Children", which narrow the groups into unprotected subsets, while the third option, "White Men", combines only protected categories and is therefore the group the algorithm clearly protects.
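As a rough illustration of the rule as ProPublica describes it (not Facebook's actual implementation; the category names and mapping are illustrative), the sketch below treats a group as protected only when every term in its label maps to a protected category:

    PROTECTED = {"sex", "gender identity", "religion", "race",
                 "sexual orientation", "serious disability"}

    def is_protected(group_terms):
        # group_terms maps each word in the group label to the category it names.
        return all(category in PROTECTED for category in group_terms.values())

    print(is_protected({"white": "race", "men": "sex"}))             # True
    print(is_protected({"female": "sex", "drivers": "occupation"}))  # False
    print(is_protected({"black": "race", "children": "age"}))        # False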

Source: Fortune


          Algorithms and data structures complexity - a cheat sheet (Algorithms and data structures complexity cheatsheet)   

Original site - http://bigocheatsheet.com/
Russian translation on Habr.
A cheat sheet for programmers.

          latest update of Google algorithm   
What is the current update to the Google algorithm?
          The Call To Adventure   
So.. Looks like I will be participating in Google Summer of Code this year. Officially it starts on the 23rd of May, but my thesis is in the way, so I will be starting about two weeks later.

What will I be doing you ask? Well, as some people know Krita on Mac OS X is not quite there yet. Some of the new cool functionality added to Krita 3.0 is forcefully omitted from the OS X release. Deep down in the depths of Krita painting we paint decorations using Qt's kindly provided QPainter class. This class allows us to make pretty lines and shapes very easily, and is perfectly suited to drawing all of the overlay functionality (such as grids, cursors, guides, etc.). What could possibly go wrong there? Well, even though we are grateful to have such easy rendering functionality, the backend of those functions haven't exactly kept up with the times.

In 2008 a new version of OpenGL came out (version 3.0) that threw out much of the old functionality and told programmers to do all of it themselves. You should upload your own data to the graphics card! You should define your own shading algorithms! And you should keep track of your own transformation matrix stack! Sounds like a lot of hassle, but the advantage is that we are not stuck in the rigidity of what OpenGL allows us to do. The problem is that OpenGL doesn't have a clue about what kind of application we are making, so it provides some general functions to us which may or may not suit our needs. And when these functions do not suit our needs, we are going to feel it in the performance.
Want to render a complicated model with many thousands of vertices? Well sorry, I only know how to redundantly upload all these vertices to the graphics card on every frame.
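To illustrate the contrast, here is a minimal PyOpenGL sketch of the two styles; it assumes an OpenGL context is already current (for example, one created by Qt), the grid vertex data is illustrative, and the shader/vertex-array setup needed by the core-profile path is omitted:

    import numpy as np
    from OpenGL.GL import (GL_ARRAY_BUFFER, GL_LINES, GL_STATIC_DRAW,
                           glBegin, glBindBuffer, glBufferData, glEnd,
                           glGenBuffers, glVertex2f)

    # Two crossing grid lines in normalized device coordinates (illustrative data).
    grid_lines = np.array([-1.0, 0.0, 1.0, 0.0,
                            0.0, -1.0, 0.0, 1.0], dtype=np.float32)

    def draw_grid_legacy():
        # Pre-3.0 "immediate mode": every vertex is re-sent to the GPU each frame.
        glBegin(GL_LINES)
        for i in range(0, len(grid_lines), 2):
            glVertex2f(grid_lines[i], grid_lines[i + 1])
        glEnd()

    def upload_grid_core():
        # Core-profile style: upload once into a vertex buffer object and draw
        # from GPU memory afterwards with shaders.
        vbo = glGenBuffers(1)
        glBindBuffer(GL_ARRAY_BUFFER, vbo)
        glBufferData(GL_ARRAY_BUFFER, grid_lines.nbytes, grid_lines, GL_STATIC_DRAW)
        return vbo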

So now we have reached the year 2016 and the new functionality of Krita makes thankful use of this new OpenGL version that allows us to cleverly upload data to the graphics card. But here comes the showstopper, because we still use that old and slow OpenGL version to draw our decorations. D'oh!
Luckily this isn't too much of a problem, the old version is still fast enough to draw a couple of decorations without breaking a sweat. Indeed, mixing both versions works quite a charm... at least... on Windows and Linux..

With the advent of OpenGL version 3.0 the notion of deprecation was introduced. Many of the features that were in use before 3.0 were now replaced with newer better ways to accomplish the same and were marked as deprecated. In order to stay compatible with both ways of doing things, generally OpenGL doesn't really care if you use deprecated functionality together with the new functionality. A program that uses this type of mix is therefore said to use the 'Compatibility Profile'. The compatibility profile allows the programmer to use older and newer versions interchangeably. On the other hand we find something called 'Core Profile (with forward compatibility)'. This core profile removes all deprecated functionality and prevents the programmer from using those functions.

As it happens, the Windows and Linux platforms support programs that were written using core profile or compatibility profile. In contrast, the Mac OS X platform currently only supports using OpenGL versions lower than 3.0 or using versions higher than 3.0 but only in core profile. This forms a problem for Krita, because if we pick a version lower than 3.0, then OS X users don't have access to our new performance features (which use the new OpenGL version). However, if we pick a version higher than 3.0  (as we currently do) then we are disallowed from using the QPainter class (which uses deprecated functionality) to draw our decorations. It appears we are at an impasse...

Well, here is where I come in. It will be my responsibility in this Google Summer of Code to modernise this QPainter class and make it play nicely with the rest of Krita. I will need to make sure that all deprecated functionality is purged from its implementation and that it makes efficient use of the new functionality OpenGL has provided us with.

If all goes to plan it means that OS X users will get to enjoy the new performance enhancements brought to Krita 3.0 whilst at the same time not losing all of their decorations.
          Pangu – simulating planetary close encounters   

The high performance of ESA’s new generation ‘Planetary and Asteroid Natural scene Generation Utility’ or Pangu software enables real-time testing of both landing algorithms and hardware.

‘Entry, descent and landing’ on a planetary body is an extremely risky move: decelerating from orbital velocities of multiple km per second down to zero, at just the right moment to put down softly on an unknown surface, while avoiding craters, boulders and other unpredictable hazards.

But Pangu can generate realistic images of planets and asteroids on a real-time basis, as if  approaching a landing site during an actual mission. This allows the testing of landing algorithms, or dedicated microprocessors or entire landing cameras or other hardware ‘in the loop’ – plugged directly into the simulation – or run thousands of simulations one after the other on a ‘Monte Carlo’ basis, to test all eventualities.

Seen here is a Pangu recreation of the Mars Curiosity rover's approach to Mars, using original telemetry, and then a view of the Martian moon Phobos.

This is followed by another recreation of the Japanese Hayabusa probe's encounter with the rubble-strewn near-Earth asteroid Itokawa, and finally a telemetry-based recreation of the field of view of the New Horizons mission as it performed its rapid flyby of Pluto.

This new generation of Pangu was developed for ESA by the University of Dundee in Scotland.


          How to encrypt string using AES Algorithm with secret key in C#   

Hi All,

I want to encrypt and decrypt a string using the AES algorithm with a secret key.

The secret key is any string value, not bytes.

Once encrypted, it should return a string, not bytes.

For example

string secretKey = "psdasina6asdasd4$asds";

Thanks,

Aravind
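
Not an authoritative answer, but here is a minimal sketch of the usual approach, shown in Python with the third-party cryptography package rather than C#: derive an AES key from the passphrase, encrypt with an authenticated mode, and base64-encode the result so a string goes in and a string comes out. The same pattern maps onto .NET's Rfc2898DeriveBytes and AesGcm classes.

    import base64, os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def _derive_key(passphrase: str, salt: bytes) -> bytes:
        # Turn the passphrase string into a 256-bit AES key.
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=200_000)
        return kdf.derive(passphrase.encode("utf-8"))

    def encrypt(plaintext: str, passphrase: str) -> str:
        salt, nonce = os.urandom(16), os.urandom(12)
        key = _derive_key(passphrase, salt)
        ct = AESGCM(key).encrypt(nonce, plaintext.encode("utf-8"), None)
        return base64.b64encode(salt + nonce + ct).decode("ascii")  # string out

    def decrypt(token: str, passphrase: str) -> str:
        raw = base64.b64decode(token)
        salt, nonce, ct = raw[:16], raw[16:28], raw[28:]
        key = _derive_key(passphrase, salt)
        return AESGCM(key).decrypt(nonce, ct, None).decode("utf-8")

    secret_key = "psdasina6asdasd4$asds"  # passphrase from the question
    token = encrypt("hello world", secret_key)
    print(token, decrypt(token, secret_key))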


          Save On Airfare By Tracking Flight Prices   

There seems to be an article every week pondering The Best day/time/season for buying airfare. We all want to get the best deal, so it would be awesome to decode the algorithm and find the ideal time to purchase flights. I understand the motivation.

The reality, however, is that airline revenue management is incredibly complex. There is no "best" time, just general trends, and even those trends vary by route, carrier, class of service, competition, season, day of the week, flight loads, and myriad other things.

But there are ways to filter through the noise enough to get the best price on your flights.


          In the Trump era, what could be more normal than accusing Victor Hugo of racism?   

An algorithm has no sense of irony. And Google has only a sense of opportunity. On June 30, the search engine decided to pay tribute to Victor Hugo. It has now been 155 years since the writer published Les Misérables with Albert Lacroix, a Belgian publisher, all the more remarkable for having also published Les Chants de Maldoror. And now Hugo stands accused of racism: "Je suis tombé par terre" ("I have fallen to the ground"), indeed?


Rodin: Victor Hugo.
Victor Hugo by Rodin - Anna Fox, CC BY 2.0

 

 

We are in for a rude awakening, because the search engine's tribute was already causing some confusion. Several media outlets report that the doodle should be linked to the Discours sur les caves de Lille (Speech on the cellars of Lille), and quote this passage: "Well then, put yourselves out for a few hours, come with us, you unbelievers! and we will let you see with your own eyes, touch with your own hands the wounds, the bleeding wounds of this Christ called the people!"

 

Except that the famous speech dates from March 1851, not June 1850... Of course, it already contains the themes that would run through Les Misérables and shape the whole story of Cosette and the others. So be it.

 

The Doodle does not, in fact, directly target the major work, reissued by Folio Classique in a monstrous edition of more than 1,300 pages. In reality, what the search engine presents is a sort of quick biography, reproduced below.

 

 

 

As it happens, a petition launched by a high school student from Martinique is circulating on the internet. It takes the form of a letter addressed to the rector of her academy and to the Minister of National Education, Jean-Michel Blanquer. She focuses in particular on one of Victor Hugo's texts, the Discours sur l'Afrique (Speech on Africa), delivered on May 18, 1879. Excerpts:

 

What would Africa be without the whites? Nothing; a block of sand; night; paralysis; lunar landscapes. Africa exists only because the white man has touched it. [...]

This savage Africa has only two aspects: populated, it is barbarism; deserted, it is savagery [...].

In the nineteenth century, the white man made a man of the black man; in the twentieth century, Europe will make a world of Africa.

 

 

The text is well known, but for the young woman it is an occasion to denounce Victor Hugo's racism and colonialist attitude, hardly qualities usually attributed to him.

 

The student adds: "These words are shocking, and in the 21st century we owe it to ourselves not to bury them as if nothing had happened. I know I am asking a lot of you, and that none of this falls within your authority. So if it is not possible for you to remove Victor Hugo from our books entirely, I ask that you require French teachers to stop presenting him to us as a perfect man without any flaws."

 

 

 

The fact is that the Discours sur l'Afrique was delivered by Victor Hugo on May 18, 1879, at a commemorative banquet honoring the abolition of slavery; the decree dated from 1848. Further on, one could read:

 

To remake a new Africa, to render the old Africa amenable to civilization, that is the problem. Europe will solve it. Go, Peoples! Seize this land. Take it. From whom? From no one. Take this land from God. God gives the earth to men; God offers Africa to Europe. Take it. Where kings would bring war, bring concord. Take it, not for the cannon, but for the plough; not for the sabre, but for commerce; not for battle, but for industry; not for conquest, but for fraternity. (full text available here)


 

One ends up wondering whether this reading of the text, and the quotations extracted from it, not only distort its meaning but, above all, misrepresent the context in which it was written, and therefore Hugo's speech itself. All the more so if one sets this text alongside another, also by Hugo and not lacking in irony, dated precisely from the abolition, in 1848:

 

The proclamation of the abolition of slavery was made in Guadeloupe with great solemnity. Captain Layrle, governor of the colony, read the Assembly's decree from a platform raised in the middle of the public square and surrounded by an immense crowd. It was under the most beautiful sun in the world. At the moment the governor proclaimed the equality of the white race, the mulatto race, and the black race, there were only three men on the platform, representing, so to speak, three races: a white man, the governor; a mulatto holding his parasol; and a negro carrying his hat.

 

 

That the poet had a colonialist vision is obvious; that the petition goes astray is another matter. Hugo did not imagine the colonization that history would actually bring; he was a bearer of values tied to progress and human emancipation. In an era of fake news, the petition is all the more dangerous.

 

But perhaps the writing of the famous Dakar speech, delivered by Nicolas Sarkozy in July 2007, fanned the flames and contributed to the confusion. The theme developed by Hugo was openly taken up by Henri Guaino, who wrote the president's speeches. But quite apart from the century and more separating them, the men are very different, and so are their words.

 

"The tragedy of Africa is that the African man has not fully entered into History. The African peasant, who for millennia has lived with the seasons, whose ideal of life is to be in harmony with nature, knows only the eternal renewal of time, marked by the endless repetition of the same gestures and the same words," Nicolas Sarkozy declaimed.

 

Between Hugo's vision of the future, ever present and steeped in hope, and the arrogant speech of a paternalistic Sarkozy, lacking in both humility and contrition, what a pity to get them so mixed up.

 

Les Misérables, Victor Hugo, Gallimard Folio Classique — 9782072730672 — €13.90


          Talent Acquisition Executive - Decibel Insight - Boston, MA   
Our technology reveals exactly how users behave on websites, and our groundbreaking machine learning algorithms surface the nature of their experiences - be...
From Decibel Insight - Mon, 15 May 2017 18:25:34 GMT - View all Boston, MA jobs
          Client Services Associate, Japanese Speaker - Quid - San Francisco, CA   
Quid algorithms reveal patterns in large, unstructured datasets and then generate beautiful, actionable visualizations....
From Quid - Thu, 18 May 2017 06:21:37 GMT - View all San Francisco, CA jobs
          mksh R45 released   

The MirBSD Korn Shell R45 has been released today, and R44 has been named the new stable/bugfix-only series. (That’s version 45.1, not 0.45, dear Homebrew/MacOSX packagers.)

Packagers rejoice: the -DMKSH_GCC55009 dance is no longer needed, and even the run-time check for integer division is gone. Why? Because I realised one cannot use signed integers in C, at all, and rewrote the mksh(1) arithmetics code to use unsigned integers only. Special thanks to the people from musl libc and, to some lesser amount, Natureshadow for providing me with ideas what algorithms to replace some functionality with (signed shell arithmetic is, of course, still usable, it is just emulated using unsigned C integers now).
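To illustrate the idea (in Python rather than the shell's C source), the sketch below does the arithmetic on 32-bit unsigned values, where overflow is well-defined wraparound, and only reinterprets the result as signed at the end; the word size and example values are illustrative:

    MASK = 0xFFFFFFFF  # 32-bit word, matching mksh's arithmetic width

    def to_signed32(u):
        # Reinterpret an unsigned 32-bit value as two's-complement signed.
        return u - (1 << 32) if u & 0x80000000 else u

    def shell_add(a, b):
        # Wraparound addition on unsigned words; no signed-overflow UB anywhere.
        return to_signed32(((a & MASK) + (b & MASK)) & MASK)

    print(shell_add(2147483647, 1))  # -2147483648, i.e. INT_MAX + 1 wraps cleanly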

The following entertainment…

	tg@blau:~ $ echo foo >/bar\ baz
	/bin/mksh: can't create /bar baz: Permission denied
	1|tg@blau:~ $ doch
	tg@blau:~ $ cat /bar\ baz
	foo
 

… was provided by Tonnerre Lombard; like Swedish, German has got a number of words that cannot be expressed in English so I feel not up to the task of explaining this to people who don’t know the German word “doch”, just rest assured it calls the last input line (be careful, this is literally a line, so don’t use backslash-newline sequences) using sudo(8).


          Multi-Modal Mean-Fields via Cardinality-Based Clamping   
Mean Field inference is central to statistical physics. It has attracted much interest in the Computer Vision community as an efficient way to solve problems expressible in terms of large Conditional Random Fields. However, since it models the posterior probability distribution as a product of marginal probabilities, it may fail to properly account for important dependencies between variables. We therefore replace the fully factorized distribution of Mean Field by a weighted mixture of such distributions, which similarly minimizes the KL-Divergence to the true posterior. By introducing two new ideas, namely conditioning on groups of variables instead of single ones, and using a parameter of the conditional random field potentials, which we identify with the temperature in the sense of statistical physics, to select such groups, we can perform this minimization efficiently. Our extension of the clamping method proposed in previous works allows us both to produce a more descriptive approximation of the true posterior and, inspired by the diverse-MAP paradigm, to fit a mixture of Mean Field approximations. We demonstrate that this positively impacts real-world algorithms that initially relied on mean fields.
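For context, the fully factorized Mean Field baseline that the paper extends can be sketched as follows for a binary pairwise model; the energy parameterization, update schedule, and random problem instance here are illustrative, not the authors' implementation:

    import numpy as np

    def mean_field_binary(unary, W, iters=100):
        # Naive mean field for binary x_i in {0,1} with
        # energy E(x) = -sum_i unary_i * x_i - sum_{i<j} W_ij * x_i * x_j.
        q = np.full(len(unary), 0.5)          # factorized marginals q_i = P(x_i = 1)
        for _ in range(iters):
            field = unary + W @ q             # expected local field under q
            q = 1.0 / (1.0 + np.exp(-field))  # coordinate-wise closed-form update
        return q

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(5, 5))
    W = (W + W.T) / 2.0                       # symmetric couplings
    np.fill_diagonal(W, 0.0)                  # no self-interaction
    print(mean_field_binary(rng.normal(size=5), W))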
          They Know Me So Well   
'We know you better than you know yourself' declared Amazon in their latest email to me. Well, that's a bold claim, even for Amazon, so I thought I'd open this one and have a look...
  It seems the 'me' they know so well is some kind of alter ego of mine, with a lively interest in 'joggers' (they appear to be some kind of misshapen trousers) and 'combat work pants' (ditto), a chap who might well be tempted by the prospect of owning 140 jam/pickle jars, fancies owning a Hornby OO train set, and is keen to buy, er, women's dresses. Sometimes I wonder if these algorithm things are quite all they're cracked up to be.
          Elementary Functions: Algorithms and Implementation   

          Comment on Kenrazy – Ile Kitu by play poker for money no deposit   
Any pupil of poker historical past will inform you - this is a powerful query to answer. The Income Tax Act, 1961, Abroad Change Administration Act (FEMA) 1999, Anti Money Laundering Regulation, Data Know-how Act, 2000, Indian Enjoying Act, and so forth would collectively govern the approved obligation of on-line poker web pages in India. The software is dependable, functional and has greater than a a hundred features and customization options making 888 a superb option for multitabling. Related as in the previous instance, if two or more gamers have straight in a single hand, the winner is taken into account a participant with the best card within the straight. Chips arrived in good condition although some of the chips has some traces / smudges on the printing. Second, by no means-especially in a game based on variants-underestimate your opponent. Beneath we'll outline the foundations and payouts related to Final Texas Maintain ‘Em. I began playing poker because I felt I good in all probability make some money and stayed at it as a result of I cherished the sport. Enduring heavy swings in his early days, he figured that the only option to make poker worthwhile was to present it all his time. The range of limits also caters for novices, with cash games starting at $0.01/0.02. In addition to Texas Holdem poker, many other poker variants are provided. Is he the perfect participant at present, after all not but when talking corridor of fame and the most effective general poker player on this planet over time, there could be no different pick. With 2014 almost over, we at PokerTube have determined to try some attainable trends and changes for poker community player site visitors in 2015. When gamers are ahead more often than not as they wager into the pot, they are going to be successful poker players. In case you're just looking to go time or refine your abilities, the free gaming websites such as these on Fb are the best way to go. We value the safety of our gamers taking part in at To guarantee a completely secure, secured as well as a snug gameplay, through the use of SSLv3/TLSv1 encryption algorithms, environment friendly collusion detecting methods and real random quantity era (RNG) we ensure that our gamers are taking part in safely in our website. We encourage you to attempt to play different on line casino games, including the entertaining and enjoyable 3 card poker recreation, however first study the game, follow with play money and then begin enjoying with real money at small amounts to get more expertise and confidence. He has achieved eight bracelets of World Sequence of Poker and made to the ultimate desk thirty-5 instances. Ivey is among the few poker gamers to achieve the biggest cash video games in each reside and online play, whilst notching victories in the world's largest tournaments seemingly at will. Final year, they joined the Chico poker network, and have grown this part of their brand to turn into the 2nd largest US pleasant poker site. And this is because it provides builders a narrower scope over which to provide their poker apps. I am speaking about curiosity from poker websites, training sites, coaching presents, staking provides, interview requests, joint ventures, you identify it. PokerLauncher is India's leading platform for online poker offers bringing curated provides to gamers. EBay determines this value via a machine-discovered model of the product's sale prices inside the last ninety days. 
I like playing at totally different poker websites and since the entire sites I discussed above take this deposit option, it permits me to use my bankroll in any respect three of the sites from the same ewallet. In his New York Occasions obituary, Chip Reese was referred to as the best cash game poker participant of all time. Completely different poker sites supply different sizes of freerolls, prizes in cash, or merchandise. A gaggle, named the Public Curiosity Litigation, filed a case in opposition to assorted gaming organizations along with on-line poker india the Mahalaxmi Cultural Affiliation, Madras City Membership (India) Pvt Ltd, Madras Darkhorse Farm & Land Growth Pvt Ltd, and Madras Sakthi Recreation Centre. The actual query is when the authorized poker and on line casino betting websites will begin to hit the net. There is a very thin line between a cyber crime and a recreation of enjoyable and talent and a slight mistake or negligence can make the poker exercise a punishable offence in India. The gamers who are still in the hand enter into a 3rd spherical of betting, at the end of which the bets are collected and positioned in the pot. To me, an expert poker participant is solely somebody who earns their total living taking part in poker, both offline or on-line. On this on-line poker variant, a participant has to make use of two out of 4 gap playing cards and three from the board to make a excessive hand or a low hand combination. Handheld devices like Android telephones/tablets/phablets, and other handheld gadgets like iPhones/iPads are fashionable gadgets used to play Poker video games. So the player with worst Teen Patti sequence in accordance with the a standard Teen Patti game beats the player with one of the best Teen Patti sequence. Two years later, the Calcutta High Courtroom asked the police to refrain from harassing golf gear offering poker to its patrons as a play poker on-line india results of poker won't be categorized as enjoying in the state. You might be questioning in the event you log onto an online poker website in the early hours of the day for instance that there is probably not many players also logged in and playing. Wait till you actually obtain our poker software program program that is made In India, create a star poker account and get to the deposit half. So now that you're aware of the Bonus Pai Gow Poker guidelines, give on-line Bonus Pai Gow Poker a attempt, and have fun playing poker online at Wild Jack! Of all the poker rooms you may choose, is without doubt one of the most superb, namely because it is the official poker web site of the well-known Borgata Hotel On line casino & Spa. The authorities are mainly involved with football betting networks - individuals inserting bets reside in Baht, utilizing Thai bank accounts to maneuver cash round. The game lasted over 200 hours and was watched by thousands of people across 151 international locations on the Twitch community. Merely open the web site from your browser to take pleasure in no-problem no obtain poker video games with practical casino sounds and exciting graphics, the game and its poker odds has no difference than enjoying Texas holdem in a real casino or the poker rooms on-line. How they play: Fish love taking part in ridiculous palms more than they love calling bets for no good purpose. The platform conducts tournaments on a daily basis, partaking plenty of gamers with attractive presents and whopping prize money. 
Launched in Oct 2013, the Amator is the results of detailed authorized research by Vipin Chaudhary, a passionate poker participant and lawyer by career. To help you know which internet sites to avoid we maintain a listing of unsafe or disreputable sites. We each logged in to our accounts at Silver Oak Casino and began playing 25 cent Joker Poker. After taking part in it in backrooms and all kinds of shady locations, in 1967, four Texas highway-gamblers Crandell Addington, Roscoe Weiser, Doyle Brunson, and Amarillo Slim moved to Las Vegas the place poker had already been authorized for 36-years. Stay tuned at PokerNews as extra news develops within the Indian playing marketplace. Opening a Precise Cash account with us takes simply moments - merely hit Play for Precise Money, fill out the online type and make your preliminary deposit to acquire your 200% Welcome Bonus. Additionally they have a greater fame within the poker group, whereas I've heard quite a lot of stories about bwin treating poker players very badly. As probably the most nicely-recognized Indian poker player, and if all the things goes well this week, a pleasant chunk of money will probably be extracted back to the Motherland, the place it's Diwali, the one time of yr when gambling is untabooed and everybody gets drunk and loses their cash playing Teen Patti, poker's loud, crass cousin, largely to a bunch of loud, crass cousins. Battle the federal government's ban on your favorite game, and earn again your title as the Governor of Poker! Of course enjoying poker on a world degree must be guided and performed with International Tips. Sandholm also explained that the AI did not study from mimicking human poker players and analyzing historical information, however from game theory. He also offered Webb $three million for partial rights to three-card poker - which Webb agreed too. It implies that the algorithm could have a lot better implications for solving issues in the real world. Interestingly, in these the place gambling is taken into account as a vice, many players still play online poker and are usually not prosecuted vigorously in a way one would anticipate. At The Great Grind I write about poker technique, ideas, and all the other elements concerned in beating the game. Now as the match progresses you possibly can fluctuate this quantity up or down slightly, primarily based in your stack dimension. If the Dealer doesn't qualify then the Play Guess is returned to the player and the Ante guess is paid at even cash 1:1. Poker770 is a highly regarded poker rooms amongst players looking for for a good no deposit poker bonus. The difficulty with making deposit's to poker accounts from India using a Skrill account includes the way the account's set up. For those who join a Skrill account and also you say during join that you won't be using your card to make poker deposits, the card will not work at playing websites. I entered a match on the Seminole Arduous Rock Lodge and Casino in Hollywood, Florida, about 20 minutes from my residence in South Florida. Trump Plaza had a partnership with Betfair that allowed Betfair to function an online casino in NJ. Play three-card poker online and uncover the fun of quick play and easy fun! I started taking part in online poker from the final one yr and am planning to extend the ratio in direction of tournaments going ahead. One pair is a poker hand that comprises two playing cards of the identical rank, plus three unpaired cards. 
An internet poker website's software is arguably crucial consider choosing a room if you're going to be taking part in quite a bit. Poker is a fairly simple game that you just solely have to do a minimal to succeed - studying the abilities, cultivating the flexibility to be stoic, and being stage-headed on a regular basis. Courts in India have not but obtained a possibility to verify these arguments, although two terse Excessive Court docket orders have indicated that the result of poker-related litigation is also constructive. With a inhabitants of over a billion folks, it could well solely be a superb thing for players to have loads of choice whereby poker site to play at. In actuality, a analysis suggests that four out of 5 poker gamers in the US use such medicine. Chris Moneymaker as an newbie poker participant received the 2003 World Series of Poker Main Occasion as a digital unknown. Born in Finland, and inspired by his fellow nation man Patrik Antonius, Ilari has made a number of stay appearances cashing in huge, proving to the world that he is not your common online grinder. This newest edition of Governor of Poker is Youda Video games' first foray into the world of multiplayer poker. For those who're looking for a relaxing recreation experience in a bus or in bathroom, this card sport is an ideal choice. He performed forward of his time and everyone else performed to catch up. Poker has the identical connotation as martial arts, it is a whole lot of completely different video games as for martial arts it's a lot of different kinds but when it game to poker Doyle was capable of play all of the games and win. As an alternative, the court prevented the issue (paywall), saying that the original case did not really confer with playing Rummy for cash. This on line casino app is appropriate with iOS devices (iPhone 3+ mobiles that run on iOS4 or past), iPads on iOS 4 or beyond and virtually every Android +four cellular machine amongst different telephones. Many poker players enjoy the flexibility of cash games compared to tournaments the place you might be usually locked in for a good period of time. Phrases & Situations Apply to all 888 Poker bonus codes and offers, for current phrases and situations click the banner advert. At some poker websites, the watch for a low restrict single desk SNG can be less than a minute during peak occasions. You could make sure that each desk you play at is beatable - and be ready to move seats or sport when you end up sat with no ‘tender spots'. The location presently affords four poker variants- No Limit Texas Hold'em, Pot Restrict Omaha, Omaha Hello/Lo and Loopy Pineapple at numerous stakes. Most forms of poker represented including: Chinese, OFC, Pineapple, Stud, Razz, Stud08, 2-7 TD, 2-7 SD, 5C Draw, Badeucy, Badacey, HORSE, Seller's Choice, 7game-12game combine, and extra. Bonomo states that the alleged rapist was released from his sponsorship contract, however this person in all probability didn't give up poker. As at all times in poker there are numerous options that the Hero may have taken to play this hand just a little better. KhelPlay brings you a platform the place you'll be able to play Poker games online and On-line Poker video games of your alternative. But the story doesn't finish right here, we being coolest online Indian poker website, redeem your free chips as real cash. Improve your game with PokerTracker four, the business leading evaluation and HUD software for poker players. 
http://www.vietnamhat.com.vn/index.php/en/component/k2/itemlist/user/873371
          TMBA381: A Conversation with Cal Newport   
http://www.tropicalmba.com/calnewport/ If you are anything like Dan and Ian, then you have likely spent years geeking out on the writings of this week's guest. Cal Newport is a computer science professor, and he has been operating a blog on his website since 2007. He has also been writing books since he was in college, and his most recent book is called Deep Work, which is about the benefits and practical steps to getting more done in the internet age. This interview covers a wide range of subjects, including distributed algorithms at the extremes, why Cal uses walking to enhance his productivity, and why creativity is more workmanlike than most people realize.
          Blog: Understanding how pull algorithms work   

An interactive article from Why Not Games programmer and designer Nikhil Murthy showing how different kinds of pull algorithms work. ...


          Semtech To Acquire AptoVision   

Semtech Corporation, a supplier of analog and mixed-signal semiconductors and algorithms, has announced the signing of a definitive agreement to acquire …

The post Semtech To Acquire AptoVision appeared first on Sound & Communications.


          Cloudflare launches free SSL   
Matthew Prince, CEO and co-founder of Cloudflare
SSL certificates will be issued free of charge to all CloudFlare customers.

As our grandmothers used to say: when the alms are too generous, even the saint gets suspicious!







In a statement to the press, CloudFlare CEO and co-founder Matthew Prince said he was making the free SSL service available to all interested customers.

He explained that he decided to do this because he saw it as his mission: to help build a better internet, and one of the most important things he could do was enable Universal SSL for all customers, paying and non-paying alike.

He said: "Even though this hurts revenue in the short term, it is the right thing to do. Having strong encryption may not seem important for a small blog, but it is critical to advancing an encrypted-by-default future for the internet.

"Cada byte, porém aparentemente banais, que flui criptografados através da Internet faz com que seja mais difícil para aqueles que desejam interceptar, acelerador ou censurar a web.

"A internet é um sistema de crenças. No CloudFlare, temos orgulho de estarmos contribuindo para o sistema de crenças. E, depois de ter provado que SSL Universal é possível em nossa escala, esperamos muitas outras organizações nos siga e libere o SSL por todos os seus clientes e sem nenhum custo adicional. "

Ele reconheceu que o maior problema é o uso de navegadores antigos que não suportam a assinatura Elliptic Curve Digital Algorithm ( ECDSA ), no entanto, mais de 80% dos pedidos vêm de navegadores modernos (menos de seis anos de idade), e ele disse que percentual está crescendo rapidamente.

Ele disse também que espera que o SSL Universal incentive as pessoas a atualizarem seus navegadores e sistemas operacionais. "Às vezes, o progresso exige sacrificar alguma compatibilidade com versões anteriores", disse ele. "A boa notícia é que nenhum dos atuais clientes free do CloudFlare tinham anteriormente qualquer versão do SSL.

Matthew Príncipe disse ainda que este movimento irá dobrar o número de usuários de sites criptografados: "Ontem, havia cerca de dois milhões de sites ativos na internet com criptografia e até o final do dia de hoje, nós vamos ter que dobrou, declarou ele nessa quarta-feira dia 1º de outubro.

"Para um site que não tem atualmente SSL, será o padrão para o nosso modo SSL flexível, o que significa que o tráfego de navegadores para CloudFlare será criptografado, mas o tráfego de CloudFlare para servidor de origem não será criptografado. Para ser criptografado o proprietário do site precisará instalar um certificado em seu servidor web para que possamos criptografar o tráfego para a origem. Depois de instalar o certificado no servidor web e o modo SSL completo é possível criptografar o tráfego de origem e fornecer maior nível de segurança”.

As with browsers, there are still challenges with CPU load and IPv4 exhaustion.
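
To make the "install a certificate on your web server" step mentioned above concrete, here is a minimal Python sketch that generates a private key and a self-signed origin certificate with the third-party cryptography package (recent versions). The hostname and file names are placeholders; CloudFlare's own origin certificates and the exact Full SSL setup are outside the scope of this illustration.

    import datetime

    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Generate the origin server's private key (2048-bit RSA).
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Build a self-signed certificate for the placeholder origin hostname.
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"www.example.com")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(u"www.example.com")]), critical=False)
        .sign(key, hashes.SHA256())
    )

    # Write out the key and certificate; the web server is then pointed at these
    # files so that traffic between CloudFlare and the origin can be encrypted.
    with open("origin.key", "wb") as f:
        f.write(key.private_bytes(
            encoding=serialization.Encoding.PEM,
            format=serialization.PrivateFormat.TraditionalOpenSSL,
            encryption_algorithm=serialization.NoEncryption(),
        ))
    with open("origin.crt", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))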

Asked whether other providers will follow his example, he said: "We don't believe other providers will take the same path, at least not until some pressure is put on them."

"However, we need more providers with advanced security. We like what CloudFlare is doing, and they are leading the way in putting user security first. Our hope is that enough pressure can be applied to communication service providers to shame them into following suit."

Yesterday, the following posts on the subject appeared in the LinkedIn group on digital certification:

Cloudflare launches free SSL

Sergio Leal, VP of Research and Development at ittru, Top contributor

In addition to operating a first-rate CDN for free, they are now offering SSL certificates at no charge.

How will the market respond?
http://secutiryguru.blogspot.com.br/2014/09/cloudflare-lanca-ssl-gratuito.html



Eder Alvares P. Souza

Senior security consultant and co-founder at e-Safer Consultoria em Tecnologia da Informação

Sérgio, as I understand it, this will be a certificate provisioned by CloudFlare and installed on their servers that serve the customer's domain. So the customer won't generate a key pair and receive a certificate; instead, the CloudFlare team will generate one and make it available for the customer. Moreover, the method to be used will encrypt the connection between the site visitor and the CloudFlare server, while there will be no encryption between the CloudFlare server and the customer's server. Because of that, the certificate used will probably be a simple domain-validation one, and they won't even validate the organization.

At least that's how I understood it.

Regards,
Eder Souza



Regina Tupinambá
CEO of Insania Publicidade and author of the Certificação Digital blog

This is unbelievable!! We are living through a flood of vulnerabilities and attacks like never before, and there is still room in the market for certificates with nothing more than domain validation??




Eder Alvares P. Souza
Senior security consultant and co-founder at e-Safer Consultoria em Tecnologia da Informação

Regina, we know how much a secure, well-done validation costs. So, at zero cost, the only option is a certificate like this, and it's the same practice adopted by hosting companies, when they don't just buy a DV SAN certificate and put several customers on the same certificate. All to cut costs as much as possible, with security taking a back seat.

Regards,

Eder Souza




Regina Tupinambá
CEO of Insania Publicidade and author of the Certificação Digital blog

Eder, the problem is that not all customers will understand that what matters in SSL is not just the encryption. They will believe that Cloudflare is really providing a security feature. How generous of them...

Identification within a chain of trust is fundamental to a secure internet, which the Cloudflare CEO said yesterday was his mission. Without that chain of trust, who guarantees that the Bradesco site really belongs to Bradesco?

An identical site can be encrypted by a hacker, and the information goes straight to the cybercriminals. SSL with domain validation alone is worth nothing! And I think Prince, the CEO of Cloudflare, has no idea of the risk to which he is exposing his company.

It's that easy: hackers don't need to break into servers; they just put up a cloned site with encryption, send out some spam and lure the gullible into the trap! Much simpler than hacking the site, don't you think?



Sergio Leal

VP of Research and Development at ittru

Top contributor

Friends,

This subject is quite controversial; great discussion.
I have always supported the idea that for every kind of need there will be a solution of the right size. So there is room for both organization-validated (OV) and domain-validated (DV) certificates.
With Google's decision to favor sites served over HTTPS, demand for DV had every reason to take off.
In Cloudflare's case, the DV validation is automatic, since you already had to point your DNS at them.
In any case, we are talking about operations that don't require that much security, given that Cloudflare already performs a man-in-the-middle.

Regards,
Sergio


          A new Comey "tape" theory   
I confess it: This post offers a conspiracy theory. Or rather, two related theories.

Unlike Alex Jones, I don't mind admitting that my ideas are in a germinal phase, and that they may soon prove misguided or foolish. I present these theories to you because I'd appreciate your criticisms: You're all wrong, Cannon, and here's why...

All day long, the talking heads on teevee have focused on the alleged Trump "tape" of Jim Comey. Trump has promised to show his cards (as it were) "in the very near future," and he told reporters that they would be "disappointed." Nobody knows what he meant by that word. Would we be disappointed in Comey? Or disappointed to learn that no recordings exist?

As readers know, I lean toward the view that recordings do exist. We know that Donnie has surreptitiously recorded individuals in the past, and we've seen the photo of Trump in the oval office with a digital voice recorder on his desk.

There is also the not-inconsiderable fact that Trump just volunteered to testify under oath. All of a sudden, a man who often seems to be imitating the stars of those "guilty dog" videos is acting like a gambler with an ace up his sleeve, and one or two more aces secreted in his pockets.

Why on earth is Donnie behaving in this fashion? Axios can offer only a couple of hoary political axioms: "The best way to defend is to attack. If you're explaining, you're losing."
The widely held view in Republican circles, according to Axios' Jonathan Swan, is that Trump's aggressiveness undercuts the notion that there are tapes.
That's a counterintuitive conclusion. As far as I'm concerned, a display of confidence indicates that the president does have recordings.

Let's take another look at the wording of the tweet that started it all:
"James Comey better hope that there are no 'tapes' of our conversations before he starts leaking to the press."
Conversations -- plural. At no point does Trump say that he made the recordings.

Mike Rogers runs the NSA, and Rogers proved in his testimony that he is Trump's man. In this context, it may be worth mentioning Rachel Maddow's observation that Rogers made a strange visit to Trump Tower while Obama was still president. I've been saying for more than a year that there is a pro-Trump faction within our own intelligence services.

Yes, it is certainly true that Trump frequently promises evidence which he never produces, as this list demonstrates. Nevertheless, the wording of the "tape" tweet leads me to believe that Trump may actually have something on the former FBI head. As weird as Trump is, he would not threaten a man unless he had something to threaten him with. You can't use a hallucination to intimidate an adversary.

The "tape" is but one of two Comey mysteries that have bedeviled us in recent days. We must also account for...the THING.
As Comey describes it in the statement he prepared for Thursday’s Senate hearing, Trump called him on the morning of April 11 and brought up a matter he’d raised before: What was Comey doing to sell the public on the idea that Trump wasn’t under investigation by the FBI? When Comey replied that Trump should take it up with the leadership at the Justice Department, the president said he would do so, then continued: “Because I have been very loyal to you, very loyal; we had that thing you know.”
Comey claims that he has no idea what all this talk of loyalty signifies. He says that he cannot identify the "thing."

Conventional thinkers would argue that the "thing" was simply the dinner they shared, and that Trump believes that he showed loyalty when he let Comey keep his job. That comforting scenario just doesn't sit right, at least not with me. This is Trump we're talking about. Trump always has something up his sleeve other than his elbow.

All day yesterday, I asked myself: Is there a single narrative which would explain both the "tape" mystery and the "thing" mystery? By midnight, I had cobbled together two different theories.

Theory 1: Comey really does have a secret. The secret could involve adultery (yawn), financial problems, or an uncharacteristic lapse into unethical behavior. The "tape" could be a recording of a telephone conversation during which this secret was discussed. The conversation may have been recorded by the NSA or by GCHQ -- perhaps even the Russians.

Trump showed "loyalty" when he agreed to keep Comey's "thing" secret.

Comey may feel protected, for now. He knows that if the NSA intercepted the conversation, revelation would be illegal. If the GCHQ intercepted the conversation, revelation could cause an international uproar.

Ah, but what about the idea of a Russian intercept? Comey may want to bait Trump into admitting that the Russians spied on his behalf. To prove that Trump colludes with the Russians, Comey may be willing to undergo a certain amount of public humiliation. Sometimes a warrior must fall on his sword.

If Theory 1 is correct, did Comey lie in his testimony before Congress? Not necessarily. He can claim that he left out part of his narrative in open hearings in order to avoid a conflict with Mueller's probe.

Theory 2: Comey is about to be framed. (This theory is both more fun and more frightening.) Let us posit that Trump is using Putin as his model. The question then becomes: WWVD? What Would Vladimir Do? Let's ask it another way: In the past, how has Putin operated against his foes?

We know that his enemies have been arrested for possession of child pornography, which was almost certainly planted.
Old-style kompromat featured doctored photographs, planted drugs, grainy videos of liaisons with prostitutes hired by the K.G.B., and a wide range of other primitive entrapment techniques.

Today, however, kompromat has become allied with the more sophisticated tricks of cybermischief-making, where Russia has proved its prowess in the Baltic States, Georgia and Ukraine. American intelligence agencies also believe that Russia used hacked data to hurt Hillary Clinton and promote Donald J. Trump in the U.S. presidential election, according to senior officials in the Obama administration.
Also see here:
Another tactic of choice involves sex tapes. In 2010, videos of Russian opposition journalists and politicians who had been filmed separately having sex with the same young Russian woman were leaked online. Last year, an opposition political party was damaged when a tape emerged of a married party leader having sex with an aide. Putin has been involved in such operations for years: In 1999, when he was the head of the FSB (the post-Soviet successor to the KGB), Putin reportedly helped then-President Boris Yeltsin to discredit and dismiss powerful prosecutor Yuri Skuratov, who had threatened to reveal which Russian officials were siphoning money to foreign bank accounts. When Yeltsin could not persuade the parliament to fire Skuratov, a video of the prosecutor — or at least a man who resembled him — having sex with prostitutes was aired on television. This all may sound like something out of “The Americans,” but it’s politics as usual in Russia.

Still, some clumsy attempts have backfired: In 2012, a media outlet published a picture of Kremlin opponent Alexei Navalny allegedly posing with exiled oligarch Boris Berezovsky, a Putin nemesis; the caption darkly suggested that forces outside Russia were funding opposition efforts. Navalny then produced the original photo, in which he was actually standing with a different man, and Russians were soon gleefully creating their own doctored images online of Navalny with individuals such as Arnold Schwarzenegger, Adolf Hitler and an extraterrestrial.

Kompromat is beautifully flexible. If a story isn’t playing well or if there is too much credible pushback, the perpetrators simply move on without apology or correction. The story disappears abruptly, leaving only confusion or unease in the minds of the audience.
Trump could have been planting the seed for a deception operation when he made those seemingly-bizarre references to "tapes," and "that thing" and the "loyalty" which he has allegedly shown toward Comey.

Jim Comey may have no idea as to what is about to hit him.

Modern technology makes it possible to create a fake "tape" of Comey saying certain incriminating things to Trump, things that the former FBI Director did not actually say. In fact, the technology has existed for more than a decade -- maybe even two decades, as this 2003 Science Daily article proves.

In 2017, we have a consumer-level app called Lyrebird:
Today, a Canadian AI startup named Lyrebird unveiled its first product: a set of algorithms the company claims can clone anyone’s voice by listening to just a single minute of sample audio.

A few years ago this would have been impossible, but the analytic prowess of machine learning has proven to be a perfect fit for the idiosyncrasies of human speech. Using artificial intelligence, companies like Google have been able to create incredibly life-like synthesized voices, while Adobe has unveiled its own prototype software called Project VoCo that can edit human speech like Photoshop tweaks digital images.

But while Project VoCo requires at least 20 minutes of sample audio before it can mimic a voice, Lyrebird cuts this requirement down to just 60 seconds. The results certainly aren’t indistinguishable from human speech, but they’re impressive all the same, and will no doubt improve over time.
Although Lyrebird recordings are not completely convincing, it is fair to presume that the intelligence services of the United States, Britain and Russia possess software that is far more advanced. The above-cited 2003 article indicates that our spooks had already reached an impressive level of sophistication fifteen years ago.

My critics will say that Theory 2 ascribes more cleverness to Trump than some would consider possible. My response: Never underestimate your foe.
          Applications of the spin networks and spin foam models in quantum gravity   

by: Jacek Puchta
Abstract:
The Spin Foam models are a path-integral picture of the Loop Quantum Gravity approach to the quantisation of the gravitational field. This PhD thesis presents a study of four issues in Spin Foam models. The first problem addressed is the question of the class of 2-complexes that ensures Spin Foam models are compatible with the kinematic sector of Loop Quantum Gravity. A framework of diagrammatic representation of spin foams was developed while researching this issue. This diagrammatic representation is called Operator Spin-network Diagrams (OSDs). OSDs allow one to express a spin foam as a collection of graphs connected by certain relations. Each graph captures the local structure of one of the spin foam vertices, i.e. nodes of a graph correspond to edges and links of a graph correspond to faces incident to a spin foam vertex. The relations between graphs in OSDs represent the way in which edges and faces connect vertices. It is proven that for each OSD there is an unambiguous way to construct a 2-complex with cells labelled by a spin foam coloring, so that one can calculate the spin foam transition amplitude. A clear procedure to glue OSDs along their boundaries was developed; such gluing is the equivalent of composing quantum processes. All possible OSDs are characterised in terms of gluings of basic diagrams, each representing zero or one interaction vertex. The proposed answer to the first question is that the appropriate class of 2-complexes for Spin Foam models is given by all the 2-complexes that can be obtained from OSDs. The OSD framework was applied to find a solution of the so-called boundary problem: to find all spin foams whose boundary is given by certain initial and final states of Loop Quantum Gravity. An algorithm that finds the series of all OSDs with a given fixed boundary is presented. The series is ordered by the number of internal edges of the corresponding spin foam. The algorithm is tested by applying it to the Dipole Cosmology model (introduced in 2010 by E. Bianchi, C. Rovelli and F. Vidotto). All the diagrams contributing to the Dipole Cosmology amplitude that have the minimal number of internal edges are found, and the contribution to the transition amplitude coming from these diagrams is studied. It turns out that at this order of the expansion all the diagrams except one give amplitudes that are exponentially suppressed in the semiclassical limit, so their presence does not spoil the result of the authors of the Dipole Cosmology model. The third issue addressed in this thesis was the divergent amplitudes in Spin Foam models caused by bubbles in spin foam 2-complexes (i.e. subcomplexes forming closed surfaces). Within the framework of 2-complexes it is relatively hard to find the bubble part of a spin foam, whereas the framework of OSDs provides a simple procedure that unambiguously identifies the bubble subdiagram. A notion of the rank of a bubble is introduced. The rank counts the number of elementary bubbles that the considered bubble consists of. A method to calculate the rank for each given OSD is presented. Several simple cases of diagrams containing bubbles, illustrating the algorithms, are presented and studied. The fourth question posed and answered within this thesis is related to a detailed study of one particular case of a spin foam bubble, called the melonic bubble. The melonic bubble is the spin foam analogue of the self-energy renormalization in Quantum Field Theory. 
Recent research had led to the conclusion that, at first order, the self-energy correction is proportional to some operator T; however, the operator T was not known. In this thesis the operator is studied in the semiclassical limit. After some elaborate calculations, the exact form of the leading order of T is found: for fixed eigenvalues of the area operators it is proportional to the identity operator, with a proportionality constant that depends on those eigenvalues.
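
Schematically, writing j_1, ..., j_n as stand-in labels for the fixed area eigenvalues (the thesis gives the precise statement and the form of the constant), that last result can be restated as:

    T \,\big|_{\,j_1,\dots,j_n\ \mathrm{fixed}} \;\approx\; c(j_1,\dots,j_n)\,\mathbb{1}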
          Sr. Java Developer   
Our client located in El Segundo, CA has a long-term contract opportunity available for a Sr. Java Developer.

Desired experience:

• At least 7 years of development experience with Java/J2EE
• 3+ years of experience in developing Java-based web services utilizing J2EE and various frameworks, including Hibernate and Spring
• 3+ years of experience with jQuery/JavaScript frameworks, web services (SOAP/REST) and the Spring framework
• Experience with search engine platforms/SEO and algorithms is highly preferred
• At least 4 years of experience in project life cycle activities on development and maintenance projects
• Solid understanding of caching as well as software configuration management concepts and methodologies
• Must possess excellent communication skills, with an emphasis on verbal and written communication
• Ability to work in a team in a diverse/multiple stakeholder environment
• Excellent analytical abilities
• Bachelor's degree or foreign equivalent from an accredited institution is highly preferred
          Build Engineer   
This Build Engineer will:
- Understand various Encryption algorithms, managing certificates and Java Key stores.
- Code Release and Code merge and other release level activities using subversion
- Strong working knowledge of Unix, Python, Ant, Maven and other build and scripting tools
- Proficient in Continuous Integration builds using Hudson/Jenkins
- Good understanding of programming languages like Java, .NET, etc.
- Involved in writing Perl and shell scripts for compilation and deployment process and experienced in writing ANT scripts for making all the files local to the server.
- Familiar with HTTP, HTTPS, SFTP and FTP protocols.
- Work with the team of Build engineers to ensure improvements are made to the build processes.
- Foster building of reusable components which can be used across the projects.
- Good understanding of Jira as Defect tracking, Release management tools.
- Perform POCs of various tools and processes to improve Build processes. We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Data Scientist   
Specializes in data science, analytics and architecture. Strong experience/knowledge of framing and conducting complex analyses and experiments using large volumes of complex (not always well-structured, highly variable) data. Ability to source, scrub, and join varied data sets from public, commercial, and proprietary sources and review relevant academic and industry research to identify useful algorithms, techniques, libraries, etc. Assists in efforts to centralize data collection and develop an analytics platform that drives data science and analytics capabilities. Deep domain experience in Apache Hadoop, data analysis, machine learning and scientific programming. Understands how to integrate multiple systems and data sets. Able to link and mash up distinctive data sets to discover new insights. Responsibilities include designing and developing statistical procedures and algorithms around data sources, recommending and building models for various data studies, data discovery and predictive analytics tasks, implementing any software required for accessing and handling data appropriately, working with developers to integrate and preprocess data for inputs into models, and recommending tools and libraries for data science that are appropriate for the project.

Required Qualifications:
• 5-10 years of platform software development experience
• 3-5 years of experience with, understanding and knowledge of the Hadoop ecosystem and building analytic jobs in MapReduce, Pig, Hive, etc.
• 5 years of experience in SAS, R, Perl, Python, Java, or other languages appropriate for large scale analysis of numerical and textual data
• Experience developing static and interactive data visualizations
• Strong knowledge of technical design and architecture principles
• Creating large scale data processing systems
• Driving design and code review process
• Ability to develop and program databases, query databases and perform statistical analysis
• Working with large scale warehouses and databases, sound knowledge of tuning and query processing
• Excellent understanding of the entire development process, including specification, documentation, quality assurance, debugging practices and source control systems
• Ability to understand business issues as they impact the software development project
• Solves complex, critical problems related to significant and unique issues
• Ability to delve into large data sets to identify useful trends in business and develop methods to leverage that knowledge
• Strong skills in predictive analytics, conceptual modeling, planning, statistics, visualization capabilities, identification of best data sources, hypothesis testing and data analysis
• Familiar with disciplines such as natural language processing (the interactions between computers and humans) and machine learning (using computers to improve as well as develop algorithms)
• Writing data extraction, transformation, munging, etc. algorithms
• Developing end-to-end data flow from data consumption and organization to making it available via dashboards and/or APIs
• Bachelor's degree in software engineering, computer science, information systems or equivalent
• 5 years of related experience; 10 years of overall experience
• Ability to perform activities, tasks and responsibilities described in the Position Description above
• Demonstrated track record of architecting and delivering solutions with enterprise customers
• Excellent people and communication skills
• Processing complex, large scale data sets used for modeling, data mining, and research
• Designing and implementing statistical data quality procedures for new data sources
• Understanding the principles of experimental testing and design, including population selection and sampling
• Performing statistical analyses in tools such as SAS, SPSS, R or Weka
• Visualizing and reporting data findings creatively to provide insights to the organization
• Agile Methodology Experience
• Masters Degree or PhD
We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Build Engineer -   
- Understand various Encryption algorithms, managing certificates and Java Key stores.
- Code Release and Code merge and other release level activities using subversion
- Strong working knowledge of Unix, Python, Ant, Maven and other build and scripting tools
- Proficient in Continuous Integration builds using Hudson/Jenkins
- Good understanding of programming languages like Java, .NET, etc.
- Involved in writing Perl and shell scripts for compilation and deployment process and experienced in writing ANT scripts for making all the files local to the server.
- Familiar with HTTP, HTTPS, SFTP and FTP protocols.
- Work with the team of Build engineers to ensure improvements are made to the build processes.
- Foster building of reusable components which can be used across the projects.
- Good understanding of Jira as Defect tracking, Release management tools.
- Perform POCs of various tools and processes to improve Build processes. We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Cloud Security Engineer -   
Our client provides cyber security solutions and network, defence and forensic services.

Cloud Security Engineer

Location: Bethesda, MD

Full-time

Salary: 110K (some flexibility)


Looking for a Cloud Security Engineer who can provide hands-on technical engineering and ownership of the growing cloud security program, across multiple providers. You will work closely with our R&D group as well as our Threat Research Team to help build secure and robust systems responsible for serving all customers.

Responsibilities will include secure design and architecture of complex web services and developer tools, extending our system and network incident detection and response capabilities into the cloud, performing risk based security assessment and reviews of current and future services, and building security tools for securing and assessing cloud instances.

Focus Areas

* Develop, implement and operate controls to secure cloud-based systems
* Utilize cloud-based APIs when appropriate to write network/system level tools for securing cloud environments (a brief sketch of this kind of tooling follows this list)
* Recognize, adopt, utilize and teach best practices in cloud security engineering
* Participate in efforts to promote security throughout the project and build good working relationships within the team and with others in the organization
* Participate in efforts that tailor the company's security policies and standards for use in cloud environments
* Define, assess, and communicate security risk to product owners
* Develop reference architectures and proof of concept implementations of cloud security environments
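
As a small, hypothetical illustration of the kind of cloud-API tooling mentioned in the focus areas above, the sketch below uses boto3 (AWS's Python SDK) to flag security-group rules that allow inbound traffic from anywhere. It is not part of this job description; the region, credentials and any remediation policy are assumptions.

    import boto3

    def world_open_security_groups(region="us-east-1"):
        """Return (group id, from port, to port) tuples for rules open to 0.0.0.0/0."""
        ec2 = boto3.client("ec2", region_name=region)  # assumes AWS credentials are already configured
        findings = []
        for sg in ec2.describe_security_groups()["SecurityGroups"]:
            for perm in sg.get("IpPermissions", []):
                for ip_range in perm.get("IpRanges", []):
                    if ip_range.get("CidrIp") == "0.0.0.0/0":
                        findings.append((sg["GroupId"], perm.get("FromPort"), perm.get("ToPort")))
        return findings

    if __name__ == "__main__":
        for group_id, from_port, to_port in world_open_security_groups():
            print(f"{group_id}: ports {from_port}-{to_port} open to the internet")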

REQUIRED QUALIFICATIONS:

* Demonstrated experience rationalizing, implementing, operating and maintaining security controls in cloud and hybrid cloud environments
* 3-5 years of experience with security engineering: secure development, cryptography, network security, security operations, systems security, policy, and incident response
* Experience with Amazon Web Services (AWS) security, with a special focus on building highly resilient, multi-region infrastructures
* Strong understanding of the AWS services catalog and architecture
* Experience with Shell, C, Perl, Python and developing API clients


Skills:

Cloud, AWS, Shell, C, Perl, Python, API clients, security, cryptography, network security, security operations, systems security, policy, and incident response

Preferred Qualifications:

* Experience with Linux operating system development and network protocols.
* Strong knowledge of data structures, algorithms, and designing for performance, scalability, and availability
* Internet and operating system security fundamentals
* Experience with Splunk, Hadoop a plus
* Fundamentals of private cloud solutions
* Sharp analytical abilities and proven design skills
* Strong sense of ownership, urgency, and drive
* Experience with web-based applications and/or web services-based applications, especially at massive scale

A BSCS (or equivalent) is required; an MSCS is preferred.


We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Linkpost | 9.22.2013   
• Major US security company warns over NSA link to encryption formula – RSA says customers using a random-number algorithm developed with the help of the NSA should switch to a stronger feature in its product. Also NSA Sends Letter to Its ‘Extended’ Family to Reassure Them That They Will ‘Weather’ This ‘Storm’ • Police
          21 media outlets are contributing to the news app xMinutes.   
Grand app coalition: data specialist Marco Maas is involving 21 private and public-service media outlets in the development of his news assistant xMinutes. "Tagesschau" and Spiegel Online, "Berliner Morgenpost" and Medienholding Nord, as well as "t3n", "Apotheken Umschau" and others are supplying content free of charge. Starting in December, xMinutes will try to deliver it to 1,000 test users on their phones at the right time by analyzing extensive data about their usage behavior, as Maas already explained in a turi2.tv interview. Google is funding the app project with several hundred thousand euros from the Digital News Initiative. Maas plans to confer with all cooperating media houses every month in order to improve xMinutes. There is no date yet for when the app will be ready for market. Participating in the algorithm-based delivery of news are, nationally: "Tagesschau", Spiegel Online, Deutsche Welle, rbb|24, BR24 and dpa infocom; regionally: "Berliner Morgenpost", "Neue Osnabrücker Zeitung", "Hamburger Morgenpost", "Schwäbische Zeitung", "Mannheimer Morgen", the mh:n group ("Flensburger Tageblatt", "Schweriner Volkszeitung" and others) and infranken.de; special interest: t3n, "Apotheken Umschau", "Deutsche Apothekerzeitung" and Piqd. Maas is looking for further partners. presseportal.de, xminutes.net



          Lead Security Software Engineer   
Lead Security Software Engineer

Security - San Diego (Sorrento Valley), CA - Full Time

A Lead Security Software Engineer participates in the research and development of security related technologies and implementations. This research and development will help in the creation of a large product suite that enables content protection and security for video delivered via satellite, cable, and the Internet. The Lead Security Software Engineer collaborates with his/her teammates to deliver high-performing, scalable, high-quality products. The engineer should enjoy working through the software development life cycle. A successful engineer will be proactive, interactive, creative, and flexible. The engineer will need to learn and understand the entire product suite as well as gain deep technical knowledge of particular solutions in the group he/she joins. We are a global company and appreciate people with global awareness and knowledge (languages other than English are a bonus).

Essential Duties & Responsibilities:

• Assist development and QA teams with security related implementations and questions
• Develop security related libraries for the development teams to use
• Design security related protocols, secure storage mechanisms, authentication mechanisms, etc. for various products
• Research new devices, chipsets and/or operating systems for security capabilities and weaknesses
• Design and develop software for securing and managing premium video content in various environments
• Participate in and lead discussions dealing with architectures, specifications, requirements, testing and design reviews
• Implement your designs, write code, write and perform unit tests, integrate into our distributed video security system and follow deliverables through the product design/development life cycle
• Develop new algorithms and software; analyze, review, and re-architect current designs in order to create new capabilities as well as improve performance, efficiency, and sustainability
• Estimate and plan development tasks, improve development processes and tools to meet corporate targets
• Help train new development engineers in the secure development life cycle (SDL)
• Assist in analyzing possible security breaches and design countermeasures
• Participate in our innovation process to increase the company's patent portfolio

This position reports to the Director of Security within the CTO team.

Required Qualifications:

• 7 or more years software engineering work experience
• 5 or more years C/C++ or Java or Objective-C design and coding experience (more than 1 language is a big plus)
• Working knowledge of cryptographic paradigms such as PKI, encryption, authentication, key exchange algorithms, etc.
• Understanding of software obfuscation and white-box cryptography and related commercial applications
• Significant programming experience using the following:
  o Multi-threading
  o Network programming using TCP, UDP, etc.
  o Client/server distributed architecture
• Experience with Secure Development Lifecycle (SDL) is required
• Knowledge of multimedia chipset security features and concepts such as Trusted Execution Environment (TEE) and TrustZone is highly desirable
• Familiarity with tools such as HP Fortify, penetration and fuzz testers is a plus
• Management experience is a bonus
          Algorithm Design Development Engineer - GPS0001893 - Milford, MI   
**Role Summary**
+ Work with Global Technical Specialist and others in defining algorithms for next generation engine controls and diagnostics.
+ Develop calibration...
          Wipe 17.09   

Wipe is an easy and powerful tool to clear user browsing history, clean index.dat files, remove cookies, cache and logs, delete temporary internet files and autocomplete search history, and erase any other tracks that a user leaves behind after using a PC. It includes a DoD-style wiping algorithm to make deleted tracks unrecoverable. The tool also lets you review the tracks that your computer keeps against you. With monthly updates the program gains support for many new applications released on the Web, so you can be sure that your system is always clean and safe.
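
As a rough illustration of what an overwrite-before-delete pass looks like, here is a minimal Python sketch; it is not Wipe's actual implementation, and this kind of multi-pass overwriting is of limited value on SSDs and journaling filesystems.

    import os
    import secrets

    def wipe_file(path, passes=3):
        """Overwrite a file several times with random data, then delete it."""
        size = os.path.getsize(path)
        with open(path, "r+b", buffering=0) as f:
            for _ in range(passes):
                f.seek(0)
                f.write(secrets.token_bytes(size))
                f.flush()
                os.fsync(f.fileno())  # push each pass to disk before starting the next
        os.remove(path)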

Copyright Betanews, Inc. 2017


          Global Head Mounted Display Market Industry Analysis and Opportunity Assessment   

The display optic projects a virtual environment in front of the wearer's eye. A Head Mounted Display can show either a reflected projected view or a see-through view.

Albany, NY -- (SBWIRE) -- 06/30/2017 -- This Future Market Insights report examines the 'Global Head Mounted Display' (HMD) market for the period 2014–2020. The primary objective of the report is to offer updates on the advancements, growth, opportunities in semiconductor and electronics market, which have given rise to Head Mounted Display market.

Request Sample Report: http://www.mrrse.com/sample/240

The Head Mounted Display market is an aggregation of two product types: the Helmet Mounted Display and the Eye Wear Display. The 'helmet mounted display' is mounted around the entire head and is most applicable for high-end aviation, defense and military usage. The 'eye wear display', on the other hand, is a lightweight, goggle-like device that the user wears over the eyes. The display optic projects a virtual environment in front of the wearer's eye. A Head Mounted Display can show either a reflected projected view or a see-through view (allowing a real-world view along with superimposed computer-generated images). 

The Head Mounted Display report starts with an overview & evolution of the Head Mounted Display System. Head Mounted Display market is segmented into two broad product types namely 'helmet mounted display' and 'eye wear display '.

In the next section, FMI covers the Head Mounted Display market performance in terms of Global Head Mounted Display units shipped and revenue split. This section additionally includes FMI's analysis of the key trends, drivers and restraints from the supply, demand and economy side, which are influencing the Head Mounted Display market. Impact analysis of key growth drivers and restraints, based on the weighted average model is included in the Head Mounted Display report to better equip and arm clients with crystal clear decision-making insights. Head Mounted Display market by components includes various components such as Micro-Display, Goggle, Head Tracker, Camera, Connectivity, Combined Mirror, Control Unit, Helmet, Battery, and Accessories.

The primary focus of the following section is to analyse the Head Mounted Display market by adoption among applications; the applications covered under the scope of the report are Defense, Aviation, & Military, Industrial Sector, Augmented & Virtual Reality, Research & Development, Healthcare, Video Gaming & Entertainment, Training & Simulation and Others. A detailed analysis has been provided for every application in terms of market size.

As highlighted earlier, Head Mounted Display is an aggregation of various components such as Micro-Display, Goggle, Head Tracker, Camera, Connectivity, Combined Mirror, Control Unit, Helmet, Battery, Accessories. All these sub-segments are included in this section to make the study more comprehensive.

The next section of the report highlights Head Mounted Display adoption by regions. It provides a market outlook for 2014 - 2020 and sets the forecast within the context of the Head Mounted Display market, including helmet mounted display and eye wear display at regional levels. This study discusses the key regional trends contributing to growth of the Head Mounted Display market on a worldwide basis, as well as analyses the degree at which global drivers are influencing this market in each region. Key regions assessed in this report include North America, Latin America, Western Europe, Eastern Europe, Asia Pacific excluding Japan, Japan as a separate region, Middle East and North Africa.

All the above sections, by products, by components, by application and by regions, evaluate the present scenario and the growth prospects of the Head Mounted Display market for the period 2014 - 2020. We have considered 2014 as the base year and provided data for the trailing 12 months.

Send An Enquiry: http://www.mrrse.com/enquiry/240

To calculate the HMD market size, we have considered revenue generated from the sale of Head Mounted Displays. The forecast presented in the report assesses the total revenue by both Value and Volume across the Head Mounted Display market. In order to offer an accurate forecast, we started by sizing the current market, which forms the basis of how the Head Mounted Display market will develop in the future. Given the characteristics of the market, we triangulated the outcome of two different types of analysis, based on supply side and demand side. However, forecasting the market in terms of applications uptake and regions is more a matter of quantifying expectations and identifying opportunities rather than rationalising them after the forecast has been completed.

In addition, it is imperative to note that in an ever-fluctuating global economy, we not only conduct forecasts in terms of CAGR, but also analyse on the basis of key parameters such as year-on-year (Y-o-Y) growth to understand the predictability of the market and to identify the right opportunities across the Head Mounted Display market.
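
For reference, the two growth measures mentioned above are computed roughly as follows; this is a simple Python illustration with made-up numbers, not figures from the report.

    def cagr(start_value, end_value, years):
        """Compound annual growth rate between a start and an end value."""
        return (end_value / start_value) ** (1.0 / years) - 1.0

    def yoy_growth(values):
        """Year-on-year growth rate for each pair of consecutive annual values."""
        return [(current - previous) / previous for previous, current in zip(values, values[1:])]

    # Hypothetical example: a market growing from 100 to 160 over four years.
    print(round(cagr(100, 160, 4), 4))            # ~0.1247, i.e. roughly 12.5% per year
    print(yoy_growth([100, 115, 130, 148, 160]))  # the year-by-year view of the same period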

As previously highlighted, the Head Mounted Display market is split into a number of sub categories. All the Head Mounted Display sub-categories in terms of regions and applications are analysed in terms of Basis Point Share to understand individual segments' relative contributions to market growth. This detailed level of information is important for the identification of various key trends of the Head Mounted Display market.

Also, another key feature of this report is the analysis of all key Head Mounted Display segments, sub-segments, regional adoption and verticals revenue forecast in terms of absolute dollar. This is traditionally overlooked while forecasting the market. However, absolute dollar opportunity is critical in assessing the level of opportunity that a provider can look to achieve, as well as to identify potential resources from a sales and delivery perspective in the Head Mounted Display market.

Furthermore, to understand key growth segments in terms of growth & adoption of Head Mounted Display and regions, Future Market Insights developed the Head Mounted Display Market Attractiveness Index. The resulting index should help providers identify real market opportunities. 

Browse Full Report With TOC: http://www.mrrse.com/head-mounted-display-market

In the final section of the report, Head Mounted Display Competitive landscape is included to provide report audiences with a Dashboard view, based on categories of provider of helmet and eye wear glass, and presence in different applications. Key companies covered in the report are Head Mounted Display Component Providers, System Integrators, Consulting and Outsourcing Firms, and Head Mounted Display manufacturers. This section is primarily designed to provide clients with an objective & detailed comparative assessment of key providers specific to the type of Head Mounted Display (Helmet or Eyewear). Report audiences can gain application-specific vendor insights to identify and evaluate key competitors based on in-depth assessment of capabilities and success in the Head Mounted Display marketplace. Detailed profiles of the providers are also included in the scope of the report to evaluate their long-term and short-term strategies, key offerings and recent developments in the Head Mounted Display space. Key competitors covered are Google Corporation, Sony Corporation, Kopin Corporation, Oculus VR, eMagin Corporation, Seiko Epson Corporation, Rockwell Collins, Inc. Thales Visionix, Inc., Recon Instruments, and Sensics Corporation.

Related Report: Smart Mining Market: Global Industry Analysis and Opportunity Assessment 2015 - 2020

Smart Camera Market: Global Industry Analysis and Opportunity Assessment 2015 - 2020

Internet of Everything (IoE) Market: Global Industry Analysis and Opportunity Assessment 2014 - 2020

About MRRSE
Market Research Reports Search Engine (MRRSE) is an industry-leading database of market intelligence reports. MRRSE is driven by a stellar team of research experts and advisors trained to offer objective advice. Our sophisticated search algorithm returns results based on the report title, geographical region, publisher, or other keywords. 

MRRSE partners exclusively with leading global publishers to provide clients single-point access to top-of-the-line market research. MRRSE's repository is updated every day to keep its clients ahead of the next new trend in market research, be it competitive intelligence, product or service trends or strategic consulting.

Contact
State Tower 
90, State Street 
Suite 700 
Albany, NY - 12207 
United States
Telephone: +1-518-730-0559 
Email: sales@mrrse.com
Google+: https://plus.google.com/u/0/109558601025749677847/posts
Linked in: https://www.linkedin.com/company/mrrse
Twitter: https://twitter.com/MRRSEmrrse

For more information on this press release visit: http://www.sbwire.com/press-releases/global-head-mounted-display-market-industry-analysis-and-opportunity-assessment-827266.htm

Media Relations Contact

Bharat
Assistant Manager
MRRSE
Telephone: 518-621-2074
Email: Click to Email Bharat
Web: http://www.mrrse.com/


          Global Video on Demand (VoD) Market Industry Analysis and Opportunity Assessment   

The primary objective of the report is to offer updates on the advancements in ICT and embedded systems that have given rise to a futuristic technology.

Albany, NY -- (SBWIRE) -- 06/30/2017 -- This Future Market Insights report examines the 'Video on demand market for the period 2015–2020. The primary objective of the report is to offer updates on the advancements in ICT and embedded systems that have given rise to a futuristic technology: the VoD services, which is significantly transforming consumer content watching experience.

Request For Sample Report: http://www.mrrse.com/sample/226

Video on demand (VoD) services allow TV programs, news, movies and sports events to be delivered directly to a set-top box, PC, IPTV device or mobile phone from satellite TV, internet, cable and telephone companies, whenever the customer requests them. Video on Demand solutions allow digital video subscribers to select programming of their choice from a library of content, to watch when they want for up to 24 hours, with the power to pause, rewind, stop, and start at any time. Video-on-Demand is changing the way individuals watch television, and the internet is becoming an alternative way to deliver VoD services.

In the next section, FMI covers the video on demand market performance in terms of the global Video on Demand revenue split, since this is central to the growth of the Video on Demand market. This section additionally includes FMI's analyses of the key trends, drivers and restraints from the supply, demand and economy side, which are influencing the video on demand market. Impact analysis of key growth drivers and restraints, based on the weighted average model, is included in the video on demand report to better equip and arm clients with crystal clear decision-making insights.

As highlighted earlier, Video on Demand is an aggregation of pay TV services (including analog cable TV, digital cable TV, IPTV and satellite TV), transactional based services and subscription based services. All these sub-segments are included in this section to make the study more comprehensive.

The next section of the report highlights video on demand adoption by regions. It provides a market outlook for 2015–2020 and sets the forecast within the context of the Global Video on Demand ecosystem, including VoD services, to build a complete picture at regional levels. This study discusses the key regional trends contributing to growth of the Video on Demand market on a worldwide basis, as well as analyses the degree at which global drivers are influencing this market in each region. Key regions assessed in this report include North America, Latin America, Western Europe, Eastern Europe, Asia Pacific excluding Japan (APEJ), Japan as a separate region, Middle East and Africa.

All the above sections, by services or by regions, evaluate the present scenario and the growth prospects of the Global Video on Demand market for the period 2015–2020. We have considered 2014 as the base year and provided data for the trailing 12 months.

Send An Enquiry: http://www.mrrse.com/enquiry/226

To calculate the video on demand market size, we have considered revenue generated from the sale of video on demand solutions and adoption of services. The forecast presented here assesses the total revenue by Value across the Video on demand market. In order to offer an accurate forecast, we started by sizing the current market, which forms the basis of how the Video on demand market will develop in the future. Given the characteristics of the market, we triangulated the outcome of three different types of analyses, based on supply side, consumer spending and economic envelope. However, forecasting the market in terms of various video on demand services and regions is more a matter of quantifying expectations and identifying opportunities rather than rationalising them after the forecast has been completed.

In addition, it is imperative to note that in an ever-fluctuating global economy, we not only conduct forecasts in terms of CAGR, but also analyse on the basis of key parameters such as year-on-year (Y-o-Y) growth to understand the predictability of the market and to identify the right opportunities across the Global Video on Demand market.

As previously highlighted, the Video on demand market is split into a number of sub categories. All the Video on Demand sub-categories in terms of services and regions are analysed in terms of Basis Point Share (BPS) to understand individual segments' relative contributions to market growth. This detailed level of information is important for the identification of various key trends of the Global Video on Demand market.

Also, another key feature of this report is the analysis of all key Video on demand segments, sub-segments and regional adoption and revenue forecast in terms of absolute dollar. This is traditionally overlooked while forecasting the market. However, absolute dollar opportunity is critical in assessing the level of opportunity that a provider can look to achieve, as well as to identify potential resources from a sales and delivery perspective in the Video on demand market.

Furthermore, to understand key growth segments in terms of growth & adoption of Video on Demand services across regions, Future Market Insights developed the Video on demand Market Attractiveness Index. The resulting index should help providers identify real market opportunities.

In the final section of the report, Video on Demand Competitive landscape is included to provide report audiences with a Dashboard view, based on categories of provider in the value chain, presence in the Video on Demand services portfolio and key differentiators. Key categories of providers covered in the report are Video on Demand services and solutions providers. This section is primarily designed to provide clients with an objective & detailed comparative assessment of key providers specific to a market segment in the video on demand value chain. Report audiences can gain segment-specific vendor insights to identify and evaluate key competitors based on in-depth assessment of capabilities and success in the Video on Demand marketplace. Detailed profiles of the providers are also included in the scope of the report to evaluate their long-term and short-term strategies, key offerings and recent developments in the Video on demand space. Key competitors covered are Accenture plc., Alcatel-Lucent, Motorola Solutions Inc., Cisco Systems Inc., SeaChange International, Netflix Inc., Amazon.com Inc., ZTE Corporation, Vubiquity Inc. and British Sky Broadcasting Ltd.

Browse Full Report With TOC: http://www.mrrse.com/video-on-demand-market

Key Segments Covered

By VoD Services
Pay TV Services
Analog Cable TV
Digital Cable TV
IPTV
Satellite TV
Transactional Based Services
Subscription Based Services

Key Regions/Countries Covered

North America
Latin America
Western Europe
Eastern Europe
Asia-Pacific Excluding Japan (APEJ)
Middle East & Africa

Key Companies

Accenture plc.
Alcatel-Lucent
Motorola Solutions Inc.
Cisco Systems Inc.
SeaChange International
Netflix Inc.
Amazon.com Inc.
ZTE Corporation
Vubiquity Inc.
British Sky Broadcasting Ltd.

Related Report:Flat Panel Display Market: Global Industry Analysis and Opportunity Assessment 2014 - 2020

Consumer Electronics Market: Global Industry Analysis and Opportunity Assessment 2015 - 2020

Head Mounted Display Market: Global Industry Analysis and Opportunity Assessment 2014 - 2020

About MRRSE
Market Research Reports Search Engine (MRRSE) is an industry-leading database of market intelligence reports. MRRSE is driven by a stellar team of research experts and advisors trained to offer objective advice. Our sophisticated search algorithm returns results based on the report title, geographical region, publisher, or other keywords. 

MRRSE partners exclusively with leading global publishers to provide clients single-point access to top-of-the-line market research. MRRSE's repository is updated every day to keep its clients ahead of the next new trend in market research, be it competitive intelligence, product or service trends or strategic consulting.

Contact
State Tower 
90, State Street 
Suite 700 
Albany, NY - 12207 
United States
Telephone: +1-518-730-0559 
Email: sales@mrrse.com
Google+: https://plus.google.com/u/0/109558601025749677847/posts
Linked in: https://www.linkedin.com/company/mrrse
Twitter: https://twitter.com/MRRSEmrrse

For more information on this press release visit: http://www.sbwire.com/press-releases/global-video-on-demand-vod-market-industry-analysis-and-opportunity-assessment-827259.htm

Media Relations Contact

Bharat
Assistant Manager
MRRSE
Telephone: 518-621-2074
Email: Click to Email Bharat
Web: http://www.mrrse.com/


          Global Distributed Control Systems Market Industry Analysis, Trends and Forecast   

It provides an advanced solution for the complicated process automation systems to make them more efficient and reliable.

Albany, NY -- (SBWIRE) -- 06/30/2017 -- A distributed control system (DCS) is a computerized control system used to control production lines in industry. It provides an advanced solution for complicated process automation systems to make them more efficient and reliable. Growing use of DCS in process automation helps improve process quality, reduce downtime and life-cycle costs, and generate higher savings, thus encouraging companies to invest in newer distributed control systems. An increase in manufacturing activities in Asia Pacific and Middle East countries is expected to contribute to the growing demand for distributed control system (DCS) solutions across different end-use industries. Increasing investments in the oil and gas, chemicals, petrochemicals, power generation, and water and waste-water industries, among others, are driving the demand for distributed control systems in regional markets. Industrialization is accelerating worldwide as markets revive and projects deferred during the economic downturn are relaunched.

Request For Sample Report: http://www.mrrse.com/sample/576

Increasing demand for oil and gas is creating the need for technologically advanced control systems for monitoring and controlling different activities in order to increase production efficiency. Also, these systems can help in reducing the risk to human life as DCS solutions can be implemented in hazardous environments. 

The growing demand for power is encouraging governments to set up new power stations or upgrade the existing ones for capacity expansion. Use of DCS solutions for monitoring different activities in power generation stations will help in enhancing the power generation speed and avoid human errors occurring during the process. Also, there is an increasing demand for distributed control systems in food processing industries due to the rapidly growing population. 

The global distributed control systems market is segmented based on components into DCS hardware, DCS software, and DCS services. The software segment is the largest in the global DCS market, as most existing distributed control systems require system upgrades. DCS hardware is expected to become the fastest growing segment during the forecast period from 2012 to 2018 due to the increasing number of greenfield projects in the Asia Pacific region. 

Most distributed control systems in North America and Europe were installed during the 1980s and are now reaching the end of their reasonable lifecycle, thus generating the need to upgrade or replace the systems. The opportunities in retrofitting currently outnumber the opportunities for new construction activities. In North America alone, there are around 5 million buildings that can be retrofitted, thus displaying the potential for distributed control systems in retrofit projects.  

The study on global distributed control systems analyzes the market based on major component types, applications of DCS in end-user industries, and major geographies. The geographies analyzed under this report include North America, Europe, Asia Pacific, and Rest of the World. The report provides complete analysis of the factors responsible for driving and restraining the global DCS market and discusses the potential growth opportunities. 

Market shares and analysis of the leading players in the distributed control systems market are presented in the research study. ABB Ltd remained the market leader in 2011, followed by Siemens and Honeywell. Other important players in the global distributed control systems market include Honeywell International, Yokogawa, Emerson, and Invensys Plc. The worldwide DCS market is highly consolidated, with the top five players accounting for around three-fourths of the total market share. Key players are continuously upgrading their products to keep up with the global market and match the increasing demand for sophisticated automated systems. 

The global market for distributed control systems is segmented as follows:

Distributed Control Systems 

By component type

DCS hardware
DCS software
DCS services
By end user industry

Browse full Report With TOC: http://www.mrrse.com/distributed-control-systems-market

Oil and gas industry
Chemicals industry
Power industry
Metal and mining industry
Pharmaceutical industry
Water and waste water treatment industry
Pulp and paper industry
Other process industries

By geography

North America
Europe
Asia Pacific 
Rest of the world

Related Report: IT Software and Services Market: Russian Industry Analysis and Opportunity Assessment 2014 - 2020

Supervisory Control and Data Acquisition (SCADA) Market - Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2016 – 2024

IT Software and Service Market: Poland Industry Analysis and Opportunity Assessment 2014 - 2020

About MRRSE
Market Research Reports Search Engine (MRRSE) is an industry-leading database of market intelligence reports. MRRSE is driven by a stellar team of research experts and advisors trained to offer objective advice. Our sophisticated search algorithm returns results based on the report title, geographical region, publisher, or other keywords. 

MRRSE partners exclusively with leading global publishers to provide clients single-point access to top-of-the-line market research. MRRSE's repository is updated every day to keep its clients ahead of the next new trend in market research, be it competitive intelligence, product or service trends or strategic consulting.

Contact
State Tower 
90, State Street 
Suite 700 
Albany, NY - 12207 
United States
Telephone: +1-518-730-0559 
Email: sales@mrrse.com
Google+: https://plus.google.com/u/0/109558601025749677847/posts
Linked in: https://www.linkedin.com/company/mrrse
Twitter: https://twitter.com/MRRSEmrrse

For more information on this press release visit: http://www.sbwire.com/press-releases/global-distributed-control-systems-market-industry-analysis-trends-and-forecast-827258.htm

Media Relations Contact

Bharat
Assistant Manager
MRRSE
Telephone: 518-621-2074
Email: Click to Email Bharat
Web: http://www.mrrse.com/


          Non-Alcoholic Drinks Market Global Industry Analysis, Trends and Forecast   

Changing customer needs and introduction of new flavors and product variants are the major factors driving demand for non-alcoholic drinks.

Albany, NY -- (SBWIRE) -- 06/30/2017 -- Non-alcoholic drinks refer to beverages that have less than 0.5% alcoholic content by volume. Non-alcoholic beer and wine fall under this category. Non-alcoholic beverages are also known as 'virgin drinks.' Soft drinks, juices, ready-to-drink tea and coffee, bottled water, and energy drinks are the most-consumed non-alcoholic drinks globally. Changing customer needs and the introduction of new flavors and product variants are the major factors driving demand for non-alcoholic drinks. With a large number of upcoming businesses in the power, construction, and automotive sectors in developing countries, per-capita income is expected to increase over the forecast period, thereby raising disposable income across these regions. Increasing disposable income in emerging economies is expected to have a high impact on demand for non-alcoholic drinks in the long run.

Request For Sample Report:  http://www.mrrse.com/sample/748

North America is the largest market for non-alcoholic drinks globally, closely followed by Asia Pacific. The U.S. is one of the major markets for non-alcoholic drinks in North America. However, in recent times, owing to increasing health awareness, the demand for non-alcoholic drinks has decreased considerably, especially among the younger population. Due to this, the non-alcoholic drinks market in North America is expected to show stable growth throughout the forecast period. Apart from this, increasing awareness about obesity is also one of the major factors restraining the demand for non-alcoholic drinks in North America. The major manufacturers in the non-alcoholic drinks market have introduced zero-sugar and diet drinks that meet consumers' demand for reduced calories and cater to changing consumer requirements.

Asia Pacific is also one of the fastest growing markets for non-alcoholic drinks. Rapidly changing lifestyle and increasing disposable income are some of the major factors fueling the demand for non-alcoholic drinks in Asia Pacific. Emerging economies such as India, China and Singapore among others are some of the major markets for non-alcoholic drinks in Asia Pacific. Owing to these factors, Asia Pacific is expected to be one of the largest markets for non-alcoholic drinks in the long run.

Apart from this, owing to increasing health awareness among consumers in Europe, the demand for non-alcoholic drinks is expected to decrease considerably over the forecast period. However, with the introduction of diet and zero-sugar drinks, customer perception is expected to change over the forecast period, fueling the demand for non-alcoholic drinks in Europe. Apart from this, the demand for non-alcoholic drinks in the Rest of the World is also expected to increase considerably over the forecast period. Brazil, Argentina, Chile, and Saudi Arabia, among others, are some of the major markets for non-alcoholic drinks in this region.

A.G. Barr, plc. (U.K.), Dr. Pepper Snapple Group, Inc. (U.S.), Dydo Drinco, Inc. (Japan), Attitude Drinks, Inc. (U.S.), LiveWire Ergogenics, Inc. (U.S.), Calcol, Inc. (U.S.), Danone (France), Nestle S.A. (Switzerland), PepsiCo, Inc. (U.S.) and The Coca-Cola Company (U.S.) are some of the major players operating in the non-alcoholic drinks market.

This report has been segmented by product and geography, and it includes the drivers, restraints, and opportunities (DROs), Porter's Five Forces analysis, and supply chain of the non-alcoholic drinks market. The study highlights current market trends and provides forecasts from 2014 to 2020. Average selling prices (ASP) across all product segments and packaging sizes are also covered within the scope of research. We have featured the current market scenario for the non-alcoholic drinks market and identified future trends that will impact demand for non-alcoholic drinks during the forecast period.

By product, the market has been segmented into soft drinks, bottled water, tea and coffee, juice, and dairy drinks. By geography, the market has been segmented into North America, Europe, Asia-Pacific, and RoW. The study also covers major countries such as the U.S., Canada, the U.K., Italy, France, Poland, Germany, Netherlands, Hungary, India, China, Japan, Australia, Brazil, and the Middle East. The report provides the current market size and anticipates its status over the forecast period.

The report also analyzes macro-economic factors driving and inhibiting growth in the non-alcoholic drinks market. Porter's Five Forces analysis offers insights into the market competition across its value chain. The market attractiveness analysis provided in the report highlights key-investing areas in this industry. The report will help manufacturers, suppliers, and distributors to understand the present and future trends in this market and formulate strategies accordingly.

Send An Enquiry: http://www.mrrse.com/enquiry/748

The report segments the global non-alcoholic drinks market as:

Non-Alcoholic Drinks Market, by Product

Soft Drinks
Bottled Water
Tea and Coffee
Juice
Dairy Drinks
Others
Non-Alcoholic Drinks Market, by Geography:

Browse Full Report With TOC: http://www.mrrse.com/non-alcoholic-drinks-market

North America
U.S.
Canada
Rest of North America
Europe
Italy
France
Poland
U.K.
Germany
Netherlands
Hungary
Rest of Europe
Asia Pacific
India
China
Japan
Australia
Rest of Asia Pacific
Rest of the World
Brazil
Middle East
Others

Related Report: Global Market Study on UHT Milk: Asia Pacific to Witness Highest Growth by 2019

Flavored and Functional Water Market - Global Industry Analysis, Size, Share, Growth, Trends and Forecast, 2013 - 2019

North America Milk Market - Scenario, Industry Analysis, Size, Share, Growth, Trends, and Forecast, 2013 - 2019

About MRRSE
Market Research Reports Search Engine (MRRSE) is an industry-leading database of market intelligence reports. MRRSE is driven by a stellar team of research experts and advisors trained to offer objective advice. Our sophisticated search algorithm returns results based on the report title, geographical region, publisher, or other keywords. 

MRRSE partners exclusively with leading global publishers to provide clients single-point access to top-of-the-line market research. MRRSE's repository is updated every day to keep its clients ahead of the next new trend in market research, be it competitive intelligence, product or service trends or strategic consulting.

Contact
State Tower 
90, State Street 
Suite 700 
Albany, NY - 12207 
United States
Telephone: +1-518-730-0559 
Email: sales@mrrse.com
Google+: https://plus.google.com/u/0/109558601025749677847/posts
Linked in: https://www.linkedin.com/company/mrrse
Twitter: https://twitter.com/MRRSEmrrse

For more information on this press release visit: http://www.sbwire.com/press-releases/non-alcoholic-drinks-market-global-industry-analysis-trends-and-forecast-827256.htm

Media Relations Contact

Bharat
Assistant Manager
MRRSE
Telephone: 518-621-2074
Email: Click to Email Bharat
Web: http://www.mrrse.com/


          Global Bottled Water Market Industry Analysis, Trends and Forecast   

In terms of value, the market is expected to register a CAGR of 6.6% during the forecast period (2016–2024).

Albany, NY -- (SBWIRE) -- 06/30/2017 -- Transparency Market Research (TMR) offers an 8-year forecast for the global bottled water market between 2016 and 2024. In terms of value, the market is expected to register a CAGR of 6.6% during the forecast period (2016–2024). The main objective of the report is to offer insights on the advancements in the bottled water market. The study demonstrates market dynamics that are expected to influence the current environment and future status of the global bottled water market over the forecast period. The report aims to offer updates on trends, drivers, restraints, value forecasts, and opportunities for manufacturers operating in the global bottled water market.
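
To put the headline figure in perspective, a 6.6% CAGR compounds the base-year market value eight times between 2016 and 2024. The short Python sketch below shows the arithmetic; the base-year value used here is hypothetical, and only the 6.6% rate and the 2016–2024 window come from the release.

```python
# Hypothetical base-year value; only the 6.6% CAGR and the 2016-2024 window come from the release.
cagr = 0.066
years = 2024 - 2016          # eight compounding periods

base_value = 200.0           # hypothetical 2016 market size, US$ Bn
end_value = base_value * (1 + cagr) ** years
print(f"Projected 2024 value at 6.6% CAGR: US$ {end_value:.1f} Bn")

# CAGR can also be recovered from the two endpoint values:
implied_cagr = (end_value / base_value) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")
```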

Request For Sample Report: http://www.mrrse.com/sample/704

Global Bottled Water Market: Drivers and Restraints

Factors such as increasing health consciousness, hygiene awareness, lack of well-developed public water infrastructure and demand for functional bottled water are expected to fuel revenue and volume growth of the global bottled water market. Bottled water manufacturers are introducing new products with health benefits and new flavours, which is resulting in several product launches in the bottled water market. New products offering functional benefits, better taste and convenience are preferred by consumers. Increasing disposable income, consumer preference for bottled water over aerated drinks and rising demand for functional and flavoured water are expected to further fuel demand for bottled water across the world. Globally, the growth of the PET bottles sector has led to widespread supply of bottled water through a wide network of organized markets as well as grocery and club stores. These factors are expected to bolster growth of the bottled water market in the near future. 

A section of the report discusses how the overall competition in the market is steadily increasing. It discusses various factors shaping the internal as well as external competition in the market. Overall internal competition in the bottled water market is observed to be comparatively high owing to a large number of major providers of bottled water products and increasing number of small domestic players in the market. The global bottled water industry is facing external competition from producers & distributors, which are adopting forward and backward integration strategies, and developing their own facilities to produce bottled water. Various barriers to entry in the industry are analyzed and rated on the basis of their impact on the competition level in the market. 

Global Bottled Water Market: Competitive Landscape

In the final section of the report, a competitive landscape has been included to provide report audiences with a dashboard view. Key categories of providers covered in the report are bottled water suppliers, manufacturers, and a list of major retailers and raw material suppliers. Detailed profiles of the providers are also included in the scope of the report to evaluate their long- and short-term strategies, key offerings, and recent developments in the bottled water space. Key players in the global bottled water market report include bottled water suppliers and manufacturers such as Nestle Waters, Groupe Danone, PepsiCo Inc., The Coca-Cola Company, Mountain Valley Spring Company, LLC, Suntory Beverage & Food Ltd, Unicer - Bebidas SA, Grupo Vichy Catalan, Icelandic Water Holdings ehf., and CG Roxane, LLC.

Send An Enquiry: http://www.mrrse.com/enquiry/704

Global Bottled Water Market: Segmentation 

The report analyses the market share of the global bottled water market by each packaging type segment, including PET bottles, glass bottles, and others (foodservice, vending). It also analyses the market share of the global bottled water market by each distribution channel and product type. A section of the report highlights bottled water demand, region-wise. It provides a market outlook for 2016–2024 and sets the forecast within the context of the bottled water ecosystem, including strategic developments, latest regulations, and new product offerings in the global bottled water market. This study discusses key regional trends contributing to growth of the global bottled water market, as well as analyzes the degree to which drivers are influencing the market in each region. Key regions assessed in this report include North America, Latin America, Europe, Asia Pacific (APAC), and Middle East & Africa (MEA).

Key Segments Covered

By Product Type
Still Bottled Water
Carbonated Bottled Water
Flavored Bottled Water
Functional Bottled Water
By Packaging
PET Bottles
Glass Bottles
Others
By Distribution Channel
Super/Hypermarket
Convenience/Drug Stores
Grocery Stores/Club Stores
Others (Foodservice/Vending)

On the basis of product type, the global bottled water market is segmented into still bottled water, carbonated bottled water, flavoured bottled water and functional bottled water. Demand for bottled water has grown significantly owing to increasing consumption of bottled water over other non-alcoholic beverages. Improving hygienic conditions, health consciousness and living standards, coupled with increasing consumer preference for functional bottled water, are expected to drive revenue growth of the global bottled water market. 

On the basis of distribution channel, the global bottled water market is segmented into super/hypermarket, convenience/drug stores, grocery stores/club stores, and others (foodservice/vending). A detailed analysis has been provided for every segment in terms of market size analysis for bottled water across the globe. 

In addition, it is imperative to note that in an ever-fluctuating global economy, we not only conduct forecasts in terms of CAGR but also analyze the market on the basis of key parameters, such as Year-on-Year (Y-o-Y) growth, to understand the predictability of the market and identify the right opportunities.
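
The contrast drawn here between CAGR and Year-on-Year growth is that a CAGR smooths growth into one rate across the whole forecast window, while Y-o-Y growth tracks each year individually and so exposes uneven years. A minimal sketch with hypothetical annual values:

```python
# Hypothetical annual market values (US$ Bn); used only to contrast Y-o-Y growth with CAGR.
values = [100.0, 104.0, 110.2, 113.5, 121.4]

# Year-on-Year growth: change relative to the immediately preceding year.
yoy = [(curr / prev) - 1 for prev, curr in zip(values, values[1:])]

# CAGR: a single smoothed rate across the whole period.
periods = len(values) - 1
cagr = (values[-1] / values[0]) ** (1 / periods) - 1

print("Y-o-Y growth:", [f"{g:.1%}" for g in yoy])
print(f"CAGR over {periods} years: {cagr:.1%}")
```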

Another key feature of this report is the analysis of all key segments in terms of absolute dollar. This is usually overlooked while forecasting the market. However, absolute dollar opportunity is critical in assessing the level of opportunity that a provider can look to achieve, as well as to identify potential resources from a sales and delivery perspective in the global bottled water market.

Key Regions/Countries Covered

North America
U.S.
Canada
Latin America
Brazil
Argentina
Rest of Latin America
Europe
EU5
Benelux
Russia
Rest of Europe
Asia Pacific
China
India
Japan
ASEAN
Australia and New Zealand
Rest of APAC
Middle East & Africa
GCC
North Africa
South Africa
Rest of MEA

Browse Full Report With TOC: http://www.mrrse.com/bottled-water-market

Related Report : UHT Milk Market - Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2013 - 2019

Flavored and Functional Water Market - Global Industry Analysis, Size, Share, Growth, Trends and Forecast, 2013 - 2019

Bottled Water Market - Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2016 - 2024

About MRRSE
Market Research Reports Search Engine (MRRSE) is an industry-leading database of market intelligence reports. MRRSE is driven by a stellar team of research experts and advisors trained to offer objective advice. Our sophisticated search algorithm returns results based on the report title, geographical region, publisher, or other keywords. 

MRRSE partners exclusively with leading global publishers to provide clients single-point access to top-of-the-line market research. MRRSE's repository is updated every day to keep its clients ahead of the next new trend in market research, be it competitive intelligence, product or service trends or strategic consulting.

Contact
State Tower 
90, State Street 
Suite 700 
Albany, NY - 12207 
United States
Telephone: +1-518-730-0559 
Email: sales@mrrse.com
Google+: https://plus.google.com/u/0/109558601025749677847/posts
Linked in: https://www.linkedin.com/company/mrrse
Twitter: https://twitter.com/MRRSEmrrse

For more information on this press release visit: http://www.sbwire.com/press-releases/global-bottled-water-market-industry-analysis-trends-and-forecast-827252.htm

Media Relations Contact

Bharat
Assistant Manager
MRRSE
Telephone: 518-621-2074
Email: Click to Email Bharat
Web: http://www.mrrse.com/earphones-headphones-market


          The U.S. Market Study on Beauty Devices: At-Home Devices to Witness Highest Growth   

Skin is one of the most complex organs of the body that regulates body temperature and acts as a shield to protect the body from ultraviolet rays, bacteria, and viruses.

Albany, NY -- (SBWIRE) -- 06/30/2017 -- Skin is one of the most complex organs of the body that regulates body temperature and acts as a shield to protect the body from ultraviolet rays, bacteria, and viruses. Skin diseases and conditions are treated with the help of various medicines, creams, and therapies. In order to look younger and reduce skin aging, various anti-aging products and devices are used to revitalize and tighten the skin. Personal care products such as beauty devices and beauty products help to care for the skin. In addition, reduction of cellulite can be achieved with the help of medication, exercise, alteration in diet, and by using beauty devices. However, beauty devices have more potential to rectify dermal conditions with greater efficiency, markedly improving the appearance of the skin. In addition, beauty devices provide immediate results, especially when it comes to removing wrinkles and blemishes. 

Request For Sample Report: http://www.mrrse.com/sample/413

North America dominates the global market for beauty devices due to a large number of people in the aging population and availability of technologically-advanced devices. The U.S. is the largest market for beauty devices in North America owing to increased awareness about the potential applications of beauty devices in the country. Similarly, in Canada growing geriatric population and increasing awareness about the potential applications of beauty devices in the treatment of skin and hair problems have fuelled the demand for anti-aging and hair growth booster devices.

In recent times, there has been increased use of beauty devices due to the rise in the geriatric population. The increasing prevalence of skin diseases, the harmful effects of ultraviolet radiation and the rising prevalence of obesity resulting in cellulite accumulation are fuelling the growth of the market. The geriatric population is more inclined to invest in beauty devices such as anti-aging and rejuvenation devices to reduce the signs of aging. On the other hand, complications such as prolonged erythema, contact dermatitis, and superficial bacterial and fungal infections associated with beauty devices inhibit the growth of the U.S. beauty devices market. In addition, the wide availability of easy-to-use beauty products is also a major concern for the U.S. beauty devices market. Beauty device manufacturing companies are developing new and innovative products to meet the rising demand for advanced skin care and hair care solutions. In addition, growing product innovation and rising consumer inclination towards at-home beauty devices are some of the major trends in the U.S. beauty devices market. 

This report provides an in-depth analysis and estimation of the U.S. beauty devices market for the period 2014 - 2020, considering 2013 as the base year for calculation. Moreover, data pertaining to current market dynamics including market drivers, restraints, trends, and recent developments have been provided in the report. The U.S. beauty devices market is categorized on the basis of the usage area of beauty devices and the type of beauty devices. Based on usage area of beauty devices, the market comprises salons, spas, at home, and others. On the basis of device type, the market is categorized as hair removal devices, hair growth devices, cleansing devices, acne removal devices, oxygen and steamer devices, rejuvenation devices, intense pulsed light devices, derma roller, cellulite reduction devices, and others.

Browse Full Report With TOC: http://www.mrrse.com/us-beauty-devices-market

Some of the major players in the U.S. beauty devices market are L'Oreal Group, Nu Skin Enterprises, Inc., Home Skinovations Ltd., PhotoMedex, Inc., TRIA Beauty, Inc., Koninklijke Philips N.V., Syneron Medical Ltd., Cynosure, Inc. and Procter & Gamble Company. These key market players have been profiled on the basis of attributes such as company overview, recent developments, growth strategies, sustainability, and financial overview.

Related Reports:

High Content Screening (HCS) Market: Global Industry Analysis and Opportunity Assessment 2015 - 2025

Gamma Knife Market: Global Industry Analysis and Opportunity Assessment 2015 - 2025

Wound Closure Products (Sutures, Surgical Staples, Wound Closure Strips, Adhesives and Tissue Sealants and Hemostats) Market - Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2015 - 2023

About MRRSE
Market Research Reports Search Engine (MRRSE) is an industry-leading database of market intelligence reports. MRRSE is driven by a stellar team of research experts and advisors trained to offer objective advice. Our sophisticated search algorithm returns results based on the report title, geographical region, publisher, or other keywords. 

MRRSE partners exclusively with leading global publishers to provide clients single-point access to top-of-the-line market research. MRRSE's repository is updated every day to keep its clients ahead of the next new trend in market research, be it competitive intelligence, product or service trends or strategic consulting.

Contact
State Tower 
90, State Street 
Suite 700 
Albany, NY - 12207 
United States
Telephone: +1-518-730-0559 
Email: sales@mrrse.com
Google+: https://plus.google.com/u/0/109558601025749677847/posts
Linked in: https://www.linkedin.com/company/mrrse
Twitter: https://twitter.com/MRRSEmrrse

For more information on this press release visit: http://www.sbwire.com/press-releases/the-us-market-study-on-beauty-devices-at-home-devices-to-witness-highest-growth-827223.htm

Media Relations Contact

Bharat
Assistant Manager
MRRSE
Telephone: 518-621-2074
Email: Click to Email Bharat
Web: http://www.mrrse.com/


          Latin America Market Study on Capnography Equipment: Brazil to Witness Highest Growth   

Capnography is a technique used to monitor the partial pressure of carbon dioxide (CO2) in the respiratory gases.

Albany, NY -- (SBWIRE) -- 06/30/2017 -- This report on the capnography equipment market studies the current and future scenario of the Latin America (LATAM) market. Capnography is a technique used to monitor the partial pressure of carbon dioxide (CO2) in the respiratory gases. The technology is also used for the diagnosis of respiratory diseases. Stakeholders for this report include the companies engaged in manufacturing, distribution and marketing of capnography equipment and new entrants, who are planning to invest in the capnography equipment market. 

Request For Sample Report: http://www.mrrse.com/sample/382

An executive summary section of this report comprises market snapshot which summarizes market analysis about various segments of the capnography equipment market. It also provides data analysis and information about the LATAM capnography equipment market with respect to segments based on the products, capnograph types, end-use, hospital rooms and applications. 

Based on products, the capnography equipment market has been segmented into two major categories: capnographs and disposables. The capnographs segment has been further segmented into three categories: mainstream capnographs, sidestream capnographs, and microstream capnographs. Based on end-use, the market has been categorized into three segments: hospitals, ambulatory, and others. The hospitals segment has been further differentiated on the basis of hospital rooms into five segments: operating room, intensive care units, emergency rooms, post-anesthesia care unit and general care floor. Furthermore, based on applications, the capnography equipment market has been segmented into procedural sedation, anesthetics, diagnosis and monitoring of patients, and others. The market size and forecast for all the segments are estimated in terms of USD million for the period 2014 to 2021. The report also provides the compound annual growth rate (CAGR %) for each market segment for the forecast period from 2015 to 2021, considering 2014 as the base year. The volume for capnograph types is also estimated in terms of number of units for the period 2011 to 2021.

Send An Enquiry: http://www.mrrse.com/enquiry/382

On the basis of countries, the capnography equipment market has been categorized into five segments: Brazil, Mexico, Argentina, Colombia and Rest of the LATAM. The market size and forecast for each of these countries has been provided for the period 2014 to 2021, along with their respective CAGRs for the forecast period 2014 to 2021, considering 2014 as the base year. 

A detailed qualitative analysis of factors responsible for driving and restraining the growth of the market and future opportunities has been provided in the market overview section. Porter's five forces analysis is also explained in this section to understand the attractiveness of the capnography equipment market in Latin America, considering five different parameters that affect the sustainability of the companies. These parameters (bargaining power of buyers, bargaining power of suppliers, threat of new entrants, threat of substitutes and competitive rivalry) are explained in detail. All these factors will help market players take strategic decisions to expand their market share and strengthen their positions in the global capnography equipment market. The competitive scenario is analyzed through a heat map of major market players in the competitive landscape section of the report. The presence of the major market players across different product types is analyzed in this section of the report.

A list of recommendations has been provided for new entrants as well as existing market players to assist them in taking strategic initiatives to establish a strong presence in the market. The report also profiles major players in the capnography equipment market based on various attributes such as company overview, financial overview, product portfolio, business strategies, and recent developments. Major players profiled in this report include Draegerwerk AG & Co. KGaA, Masimo Corporation, Medtronic, Inc., Nihon Kohden Corporation, Nonin Medical, Inc., Philips Healthcare, Smiths Medical, and Welch Allyn, Inc.

Browse Full Report With TOC: http://www.mrrse.com/latin-america-capnography-equipment-market

The capnography equipment market in Latin America has been segmented into the following:

LATAM Capnography Equipment Market, by Products  

Capnographs
Mainstream Capnography
Sidestream Capnography
Microstream Capnography 
Disposables
 
LATAM Capnography Equipment Market, by End-users

Hospitals
Operating Room
Intensive Care Units
Emergency Rooms
Post-anesthesia Care Unit
General Care Floor
Ambulatory
Others
 
LATAM Capnography Equipment Market, by Applications

Procedural Sedation
Anesthetics
Diagnosis and Monitoring of Patients
Others

LATAM Capnography Equipment Market, by Countries 

Brazil
Mexico
Argentina
Colombia
Rest of LATAM

Related Reports:

Mining Equipment Market - Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2012 - 2018

Global Biochar Market - Industry Analysis, Market Size, Share, Growth, Trends and Forecast 2014 - 2020

Latin America Drilling Fluids Waste Management Market (By Offshore and Onshore Application): Industry Analysis, Size, Share, Growth, Trends and Forecast, 2014 - 2020

About MRRSE
Market Research Reports Search Engine (MRRSE) is an industry-leading database of market intelligence reports. MRRSE is driven by a stellar team of research experts and advisors trained to offer objective advice. Our sophisticated search algorithm returns results based on the report title, geographical region, publisher, or other keywords. 

MRRSE partners exclusively with leading global publishers to provide clients single-point access to top-of-the-line market research. MRRSE's repository is updated every day to keep its clients ahead of the next new trend in market research, be it competitive intelligence, product or service trends or strategic consulting.

Contact
State Tower 
90, State Street 
Suite 700 
Albany, NY - 12207 
United States
Telephone: +1-518-730-0559 
Email: sales@mrrse.com
Google+: https://plus.google.com/u/0/109558601025749677847/posts
Linked in: https://www.linkedin.com/company/mrrse
Twitter: https://twitter.com/MRRSEmrrse

For more information on this press release visit: http://www.sbwire.com/press-releases/latin-america-market-study-on-capnography-equipment-brazil-to-witness-highest-growth-827213.htm

Media Relations Contact

Bharat
Assistant Manager
MRRSE
Telephone: 518-621-2074
Email: Click to Email Bharat
Web: http://www.mrrse.com/


          Global Market Study on Digital Signature Software: BFSI Industry Segment Projected to Register High Growth Rates During 2017-2025   

The report starts with an executive summary that depicts the pertinent market numbers of the digital signature software market and the CAGR for the forecast period 2017 – 2025. The executive summary also gives the 2025 market value share by component, by industry and by end user in the digital signature software market.

Albany, NY -- (SBWIRE) -- 06/30/2017 -- Persistence Market Research presents a comprehensive report on the digital signature software market titled 'Digital Signature Software Market: Global Industry Analysis & Forecast, 2017–2025'. The report starts with an executive summary that depicts the pertinent market numbers of the digital signature software market and the CAGR for the forecast period 2017 – 2025. The executive summary also gives the 2025 market value share by component, by industry and by end user in the digital signature software market. In another part of the executive summary, a region-wise analysis of the global digital signature software market is presented, showing the region-wise values of the market in 2017 and 2025 as well as the CAGR of the different assessed regions during the forecast period.

Request For Sample Report: http://www.mrrse.com/sample/3111

The executive summary also contains a concise list of drivers, restraints and opportunities in the global digital signature software market along with a list of important market players operating in this market. After the executive summary, a section of the report is devoted to the market overview that comprises the definition of the global digital signature software market explaining what this market is all about and the scope of the report as per the given market definition.

Market Taxonomy

By Component

Software
Services

By End User

Consumer
Enterprises

By Industry

BFSI
Defense
Government
Retail and Consumer Goods
Healthcare
Education
IT and Telecom
Others

By Region

North America
Latin America
Europe
Asia Pacific
Middle East and Africa

The next section is devoted to the market dynamics of the global digital signature software market. This section explores in detail the drivers, restraints and opportunities in the global digital signature software market and explains the factors encouraging as well as hampering the growth of this market. Various market opportunities are also discussed to give the report audience in-depth knowledge about the latest offerings in the global digital signature software market. Subsequent sections of the report discuss the global digital signature software market analysis and forecast by component, end user, industry and region. These sections of the report provide important information such as Basis Point Share analysis, year-on-year growth comparison, absolute dollar opportunity and market attractiveness analysis. Region-wise trends developing in the digital signature software market are also presented for each region studied in detail in this report.
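
Basis Point Share (BPS) analysis, as referenced above, expresses each segment's share of the overall market in basis points (one basis point is a hundredth of a percentage point) and tracks how that share shifts between two years. The sketch below illustrates the arithmetic with hypothetical segment values; it does not reproduce the report's data.

```python
# Hypothetical segment revenues (US$ Mn) for two years; illustrative only.
year_2017 = {"Software": 420.0, "Services": 280.0}
year_2025 = {"Software": 910.0, "Services": 890.0}

def share_in_bps(segment_values):
    """Each segment's share of the total, expressed in basis points (1% = 100 bps)."""
    total = sum(segment_values.values())
    return {name: value / total * 10_000 for name, value in segment_values.items()}

bps_2017 = share_in_bps(year_2017)
bps_2025 = share_in_bps(year_2025)

for segment in year_2017:
    change = bps_2025[segment] - bps_2017[segment]
    print(f"{segment}: {bps_2017[segment]:.0f} bps -> {bps_2025[segment]:.0f} bps "
          f"({change:+.0f} bps)")
```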

The last section of the report comprises the competition landscape that studies and profiles in detail the key market players operating in the global digital signature software market. This competition landscape gives a dashboard view of the key companies operating in the global digital signature software market along with their important information and broad strategies adopted to stay as leaders in the digital signature software market. This section also presents the digital signature software market evolution and the key developments that have shaped the market till the present day. Also, there is an important section on the recent deals/contracts that have taken place as far as leading market players operating in the global digital signature software market are concerned. Each leading company is also profiled individually and important information about the company such as company details, company description, product portfolio along with key developments concerning the company and strategic analysis is presented. This competition landscape is one of the most important sections of the report as it imparts a deep understanding of the leading companies operating in the global digital signature software market.

Browse Full Report With TOC: http://www.mrrse.com/digital-signature-software-market

Research Methodology

Overall market size has been analyzed through historical data, primary responses, and public domain data. Revenue of companies in the digital signature software market has been benchmarked to ascertain the market size for the base year. Macroeconomic indicators such as GDP and industry growth have been considered to forecast the market size over the period of assessment. The historical growth trend of end-use industries, market participants' performance, as well as the present macro-economic outlook has been taken into consideration for estimating the overall market trend forecast. Data acquired through primary and secondary research is then validated using the triangulation method and is extensively scrutinized using advanced tools to garner quantitative and qualitative insights into the global digital signature software market.
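
Triangulation, as described here, reconciles market-size estimates obtained through independent routes (for example primary responses, company revenue benchmarking and macroeconomic indicators) into a single validated figure. One simple way to picture it is a weighted reconciliation of independent estimates, sketched below with hypothetical numbers and weights; the report does not disclose its actual weighting scheme.

```python
# Hypothetical market-size estimates (US$ Mn) from independent approaches; illustrative only.
estimates = {
    "primary_responses": 1_180.0,
    "company_revenue_benchmark": 1_240.0,
    "macro_indicator_model": 1_090.0,
}

# Assumed weights for reconciling the approaches (not from the report).
weights = {"primary_responses": 0.5, "company_revenue_benchmark": 0.3, "macro_indicator_model": 0.2}

triangulated = sum(estimates[k] * weights[k] for k in estimates)
spread = max(estimates.values()) - min(estimates.values())

print(f"Triangulated market size: US$ {triangulated:.0f} Mn")
print(f"Spread across approaches: US$ {spread:.0f} Mn"
      " (a wide spread would prompt further validation)")
```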

Related Reports:

Global Market Study on Membrane Technology in Pharmaceutical, Biopharma and Life Sciences: North America to Witness Highest Growth by 2019

Digital Transformation Market: MENA Industry Analysis and Opportunity Assessment 2014 - 2020

Physical Security Market (Hardware, Software and Services) - Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2013 - 2019

About MRRSE
Market Research Reports Search Engine (MRRSE) is an industry-leading database of market intelligence reports. MRRSE is driven by a stellar team of research experts and advisors trained to offer objective advice. Our sophisticated search algorithm returns results based on the report title, geographical region, publisher, or other keywords. 

MRRSE partners exclusively with leading global publishers to provide clients single-point access to top-of-the-line market research. MRRSE's repository is updated every day to keep its clients ahead of the next new trend in market research, be it competitive intelligence, product or service trends or strategic consulting.

Contact
State Tower
90, State Street
Suite 700
Albany, NY - 12207
United States Telephone: +1-518-730-0559
Email: sales@mrrse.com
Website: http://www.mrrse.com/

For more information on this press release visit: http://www.sbwire.com/press-releases/global-market-study-on-digital-signature-software-bfsi-industry-segment-projected-to-register-high-growth-rates-during-2017-2025-827134.htm

Media Relations Contact

Vishant
Assistant Manager
MRRSE
Telephone: 518-730-0559
Email: Click to Email Vishant
Web: http://www.mrrse.com/digital-signature-software-market


          Study on Composites Market - Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2017 - 2025   

A composite is a multiphase material exhibiting significant properties of both constituent phases. It consists of a continuous phase, called the matrix, and a dispersed phase, the reinforcement.

Albany, NY -- (SBWIRE) -- 06/30/2017 -- A composite is a multiphase material exhibiting significant properties of both constituent phases. It consists of a continuous phase, called the matrix, and a dispersed phase, the reinforcement. It is commercially available in different product types such as polymer matrix composite, metal matrix composite, and ceramic matrix composite. In terms of application, automotive & transportation and aerospace & defense held the majority share in the global composites market in 2016. Rising demand from these end-user industries is anticipated to fuel the composites market during the forecast period.

Request For Sample Report: http://www.mrrse.com/sample/3112

This study analyzes, estimates, and forecasts the global composites market in terms of volume (Kilo tons) and revenue (US$ Mn) from 2016 to 2025. The report also analyzes several driving and restraining factors and their impact on the market during the forecast period.

Global Composite Market: Applications

The report provides a detailed view of the composites market based on applications. Key applications included in the report are automotive & transportation, construction, aerospace & defense, electrical & electronics, marine & oil & gas, wind energy, and other (consumer goods, etc.). In terms of technology, the market is segmented into: pultrusion process, layup process, filament winding, compression molding, injection molding, resin transfer molding, and others. In terms of product type, the market is segmented into: polymer matrix composite, metal matrix composite, and ceramic matrix composite. Furthermore, the report segments the market based on key geographies such as North America, Europe, Asia Pacific, Latin America, and Middle East and Africa. It also provides market volume and revenue for each application, technology and product type under every regional segment. The composites market is further analyzed into major countries of each region.

Based on applications, technologies, product types and countries, the report analyzes the attractiveness of each segment with the help of an attractiveness tool. The study includes value chain analysis, which provides a better understanding of key players in the supply chain (from raw material manufacturers to end-users). Additionally, the study analyzes market competition and industry players using Porter's five forces analysis.

Global Composite Market: Research Methodologies

Primary research represents the bulk of our research efforts, supplemented by extensive secondary research. We reviewed key players' product literature, annual reports, press releases, and relevant documents for competitive analysis and market understanding. Secondary research includes a search of recent trade, technical writing, internet sources, and statistical data from government websites, trade associations, and agencies. This has proven to be the most reliable, effective, and successful approach for obtaining precise market data, capturing industry participants' insights, and recognizing business opportunities.

Secondary research sources that are typically referred to include company websites, annual reports, financial reports, broker reports, investor presentations, SEC filings, internal and external proprietary databases, and relevant patent and regulatory databases. Other sources include national government documents, statistical databases, market reports, news articles, and press releases and webcasts specific to the companies operating in the market. Secondary sources referred to for the study of the composites market include Reinforced Plastics Magazine, European Plastics Council, Compositesone, Composites World Magazine, etc., and company presentations.

Browse Full Report With TOC: http://www.mrrse.com/composites-market

Key Players Mentioned in this Report are:

The report includes an overview of the market share of key companies in the global composites market. Key players profiled in the composites study include Hexcel Corporation, TPI Composites, Inc, Owens Corning, Teijin Limited, Faurecia, Performance Composites Inc., Enduro Composites, Inc., Toray Industries, APPLIED POLERAMIC INC., Hexagon Composites, KINECO, Creative Composites Ltd., HITCO Carbon Composites, Inc., The Quadrant Group of Companies, Kangde Xin Composite Material Group Co., Ltd., BGF Industries, Inc., FACC AG, Premium Aerotec, Fokker Aerostructures, COTESA GmbH, PLASAN CARBON COMPOSITES, Wethje Carbon Composites, VELLO NORDIC AS, Fiberdur GmbH & Co. KG, Akiet B.V., and FILL GESELLSCHAFT M.B.H.

Related Reports:

Construction Chemicals Market (Asphalt Additives, Concrete Admixtures, Adhesives, Sealants and Protective Coatings) - Global Industry Analysis, Size, Share, Growth, Trends and Forecast, 2014 - 2020

Toluene Diisocyanate Market for Flexible Foam, Rigid Foam, Coatings, Adhesives & Sealants, and Elastomers - Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2015 - 2023

Cell Culture Market (Consumables: Media, Sera, Reagents; and Instruments: Culture Systems, Incubators, Bioreactors, Pipetting Instruments, Roller Bottle Equipment, Biosafety Cabinets, Cryostorage Equipment, Others): Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2014 - 2022

About MRRSE
Market Research Reports Search Engine (MRRSE) is an industry-leading database of market intelligence reports. MRRSE is driven by a stellar team of research experts and advisors trained to offer objective advice. Our sophisticated search algorithm returns results based on the report title, geographical region, publisher, or other keywords. 

MRRSE partners exclusively with leading global publishers to provide clients single-point access to top-of-the-line market research. MRRSE's repository is updated every day to keep its clients ahead of the next new trend in market research, be it competitive intelligence, product or service trends or strategic consulting.

Contact
State Tower
90, State Street
Suite 700
Albany, NY - 12207
United States Telephone: +1-518-730-0559
Email: sales@mrrse.com
Website: http://www.mrrse.com/

For more information on this press release visit: http://www.sbwire.com/press-releases/study-on-composites-market-global-industry-analysis-size-share-growth-trends-and-forecast-2017-2025-827133.htm

Media Relations Contact

Vishant
Assistant Manager
MRRSE
Telephone: 518-730-0559
Email: Click to Email Vishant
Web: http://www.mrrse.com/composites-market


          Hydrophilic Coatings Market - Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2017 - 2025   

Hydrophilic coatings are applied on a variety of substrates in order to reduce the coefficient of friction. These coatings can be used for medical as well as non-medical applications.

Albany, NY -- (SBWIRE) -- 06/30/2017 -- Hydrophilic coatings are applied to a variety of substrates in order to reduce the coefficient of friction. These coatings can be used for medical as well as non-medical applications. Medical devices such as catheters, guidewires, syringes, needles and intravascular devices are coated with hydrophilic coatings in order to make the surface of these devices lubricious. Hydrophilic coating materials are also used to impart self-cleaning and anti-fogging properties to flat glass for applications in the building, automotive and aerospace industries. Increasing expenditure on better medical & healthcare facilities and growth in the construction industry are anticipated to provide lucrative opportunities to the hydrophilic coatings market during the forecast period.

Request For Sample Report: http://www.mrrse.com/sample/3113

Global Hydrophilic Coatings Market: Scope of the Report

This report analyzes and forecasts the market for hydrophilic coatings at the global and regional level. The market has been forecast based on revenue (US$ Mn) from 2017 to 2025, considering 2016 as the base year. The study includes drivers and restraints of the global hydrophilic coatings market. It also covers impact of these drivers and restraints on demand for hydrophilic coatings during the forecast period. The report also highlights opportunities in the hydrophilic coatings market at the global and regional level.

The report includes detailed value chain analysis, which provides a comprehensive view of the global hydrophilic coatings market. Porter's Five Forces model for the hydrophilic coatings market has also been included to help understand the competitive landscape in the market. The study encompasses market attractiveness analysis, wherein end-users are benchmarked based on their market size, growth rate, and general attractiveness.

Global Hydrophilic Coatings Market: Segmentation

The study provides a decisive view of the global hydrophilic coatings market by segmenting it in terms of substrates such as polymers, metals & metal alloys, and glass & other ceramics, and applications such as automotive, aerospace, medical devices, optical and others (building, etc.). These segments have been analyzed based on present and future trends. Regional segmentation includes current and forecast demand for hydrophilic coatings in North America, Europe, Asia Pacific, Latin America, and Middle East & Africa.

The report provides the actual market size of hydrophilic coatings for 2016 and estimated market size for 2017 with forecast for the next eight years. The global market size of hydrophilic coatings has been provided in terms of revenue. Market revenue is given in US$ Mn. Market numbers have been estimated based on key end-users of hydrophilic coatings. Market size and forecast for numerous end-users have been provided in terms of global, regional, and country level markets.

Global Hydrophilic Coatings Market: Research Methodology

In order to compile the research report, we conducted in-depth interviews and discussions with a number of key industry participants and opinion leaders. Primary research represented the bulk of research efforts, supplemented by extensive secondary research. We reviewed key players' product literature, annual reports, press releases, and relevant documents for competitive analysis and market understanding. Secondary research includes a search of recent trade, technical writing, Internet sources, and statistical data from government websites, trade associations, and agencies. This has proven to be the most reliable, effective, and successful approach for obtaining precise market data, capturing industry participants' insights, and recognizing business opportunities.

Secondary research sources that are typically referred to include, but are not limited to, company websites, annual reports, financial reports, broker reports, investor presentations, SEC filings, Plastemart magazine, TPE magazine, internal and external proprietary databases, and relevant patent and regulatory databases such as ICIS, Hoover's, OneSource, Factiva and Bloomberg, as well as national government documents, statistical databases, trade journals, market reports, news articles, press releases, and webcasts specific to companies operating in the market.

We conduct primary interviews on an ongoing basis with industry participants and commentators to validate data and analysis. These help validate and strengthen secondary research findings. These also help develop the analysis team's expertise and market understanding.

Browse Full Report With TOC: http://www.mrrse.com/hydrophilic-coatings-market

Global Hydrophilic Coatings Market: Competitive Dynamics

The report comprises profiles of major companies operating in the global hydrophilic coatings market. Key players in the hydrophilic coatings market are Surmodics, Inc., Royal DSM N.V., Hydromer, Inc., AdvanSource Biomaterials Corp., Covalon Technologies Ltd., BioCoat, Inc., and Harland Medical Devices. Market players have been profiled in terms of attributes such as company overview, financial overview, business strategies, and recent developments.

Related Reports:

Expanded Perlite Market for Construction Products, Fillers, Horticultural Aggregates, Filtration & Process Aids and Other Applications - Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2015 - 2023

Lysine and Other Amino Acids (Methionine, Threonine & Tryptophan) Market by Application (Animal Feed, Food & Dietary Supplements, Pharmaceuticals), by Livestock (Swine, Poultry, Others) - Global Industry Analysis, Size, Share, Growth, Trends and Forecast, 2012 - 2018

Propane Market by Application (Residential, Commercial, Chemical & Refinery, Industrial, Transportation and Agriculture) - Global Industry Analysis, Size, Share, Growth Trends and Forecast 2014 - 2022

About MRRSE
Market Research Reports Search Engine (MRRSE) is an industry-leading database of market intelligence reports. MRRSE is driven by a stellar team of research experts and advisors trained to offer objective advice. Our sophisticated search algorithm returns results based on the report title, geographical region, publisher, or other keywords. 

MRRSE partners exclusively with leading global publishers to provide clients single-point access to top-of-the-line market research. MRRSE's repository is updated every day to keep its clients ahead of the next new trend in market research, be it competitive intelligence, product or service trends or strategic consulting.

Contact
State Tower
90, State Street
Suite 700
Albany, NY - 12207
United States Telephone: +1-518-730-0559
Email: sales@mrrse.com
Website: http://www.mrrse.com/

For more information on this press release visit: http://www.sbwire.com/press-releases/hydrophilic-coatings-market-global-industry-analysis-size-share-growth-trends-and-forecast-2017-2025-827130.htm

Media Relations Contact

Vishant
Assistant Manager
MRRSE
Telephone: 518-730-0559
Email: Click to Email Vishant
Web: http://www.mrrse.com/hydrophilic-coatings-market


          Global Market Study on Cyber Security: Services Segment Projected to Be the Most Attractive Segment by Component During 2017 - 2025   

The report includes an extensive analysis of key industry drivers, restraints, market trends and market structure. The market study provides a comprehensive assessment of key stakeholder strategies and imperatives for succeeding in the business.

Albany, NY -- (SBWIRE) -- 06/30/2017 -- "Cyber Security Market: Global Industry Analysis and Forecast 2017 – 2025," a new report by Persistence Market Research, offers insights into the various factors driving the need for cyber security in five regions across the globe. The report includes an extensive analysis of key industry drivers, restraints, market trends and market structure. The market study provides a comprehensive assessment of key stakeholder strategies and imperatives for succeeding in the business. The report segregates the market based on component type, technology type and verticals using cyber security across different regions globally. Impact analysis of key growth drivers and restraints based on the weighted average model is included in this report to facilitate clients with crystal clear decision-making insights.

Request For Sample Report: http://www.mrrse.com/sample/3114

While analyzing the data, the analysts have considered not only historical trends but also statistical analysis and government support analysis. A GDP analysis of the top countries has been included in this report. The report quantifies the market value and market volume share of the various segments of the global cyber security market across the studied regional markets, thereby providing a comprehensive analysis of the global cyber security market at all levels.

An in-depth assessment of capabilities and detailed profiles of key competitors are included in the scope of the report

This report on the global cyber security market presents a competitive landscape to provide clients with a dashboard view based on categories of providers in the value chain, presence in the cyber security portfolio and key differentiators. This section is primarily designed to provide clients with an objective and detailed comparative assessment of key providers specific to a market segment in the cyber security supply chain as well as potential players. Report audiences can gain segment-specific vendor insights to identify and evaluate key competitors based on an in-depth assessment of capabilities and success in the marketplace. Detailed profiles of providers are also included in the scope of the report to evaluate their long-term and short-term strategies, key offerings and recent developments in the global cyber security market.

Research Methodology

This report evaluates the present scenario and the growth prospects of the global cyber security market across various regions globally for the period 2017 – 2025. The analysts have considered 2016 as the base year and have provided data for the trailing 12 months. In order to offer an accurate forecast, the analysts have started by sizing the current market, which forms the basis for how the global cyber security market will grow in the future. Given the characteristics of the market, the analysts have triangulated the outcome of different types of analyses based on the technology trends. In addition, the report not only presents forecasts in terms of CAGR but also analyzes the market on the basis of key parameters such as year-on-year (Y-o-Y) growth to understand the predictability of the market and to identify the right opportunities across the market. As previously highlighted, the global cyber security market is split into a number of segments. All segments are analyzed in terms of basis point share to understand each individual segment's relative contribution to market growth. This detailed level of information is important for the identification of various key trends governing the global cyber security market.
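To make the growth metrics mentioned above concrete, the short Python sketch below illustrates how CAGR, year-on-year (Y-o-Y) growth, and basis point share are typically computed; the revenue figures are purely hypothetical and are not taken from the report.

```python
# Illustrative only: hypothetical revenue figures, not data from the report.

def cagr(start_value, end_value, years):
    """Compound annual growth rate over `years` periods."""
    return (end_value / start_value) ** (1 / years) - 1

def yoy_growth(series):
    """Year-on-year growth rates for a list of annual values."""
    return [curr / prev - 1 for prev, curr in zip(series, series[1:])]

def basis_point_share(segment_value, total_value):
    """Segment contribution expressed in basis points (1 bps = 0.01%)."""
    return segment_value / total_value * 10_000

# Hypothetical market revenue (US$ Bn) for 2016 through 2020.
market = [120.0, 131.0, 144.0, 160.0, 178.0]

print(f"CAGR 2016-2020: {cagr(market[0], market[-1], len(market) - 1):.1%}")
print("Y-o-Y growth:", [f"{g:.1%}" for g in yoy_growth(market)])
print(f"Hypothetical services segment share: {basis_point_share(80.0, market[-1]):,.0f} bps")
```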

Browse Full Report With TOC: http://www.mrrse.com/cyber-security-market

Another key feature of this report is the analysis of all market segments in terms of absolute dollar opportunity. This is traditionally overlooked while forecasting the market. However, absolute dollar opportunity is critical in assessing the level of opportunity that a provider can look to achieve, as well as in identifying potential resources from a sales and delivery perspective. The yearly change in the inflation rate has not been considered while forecasting market numbers. A top-down approach has been used to assess market numbers for each category, while a bottom-up approach has been used to counter-validate these numbers.
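As a rough illustration of the two ideas in this paragraph, the sketch below (with hypothetical figures) computes absolute dollar opportunity as the incremental revenue between two years and counter-validates a top-down total against a bottom-up sum of segment estimates.

```python
# Hypothetical figures for illustration only.

# Absolute dollar opportunity: incremental revenue between the base year
# and the end of the forecast period, rather than a growth percentage.
revenue_2017_usd_bn = 95.0
revenue_2025_usd_bn = 210.0
absolute_dollar_opportunity = revenue_2025_usd_bn - revenue_2017_usd_bn
print(f"Absolute $ opportunity 2017-2025: US$ {absolute_dollar_opportunity:.1f} Bn")

# Top-down: size the total market first, then split it by assumed segment shares.
top_down_total = 210.0
segment_shares = {"services": 0.45, "hardware": 0.30, "software": 0.25}
top_down_segments = {seg: top_down_total * share for seg, share in segment_shares.items()}

# Bottom-up: size each segment independently, then sum.
bottom_up_segments = {"services": 93.0, "hardware": 64.0, "software": 52.0}
bottom_up_total = sum(bottom_up_segments.values())

# Counter-validation: the two totals should agree within a small tolerance.
deviation = abs(top_down_total - bottom_up_total) / top_down_total
print(f"Top-down vs bottom-up deviation: {deviation:.1%}")
```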

Related Reports:

Facial Recognition Market (By Technology Type - 2D Facial Recognition, 3D Facial Recognition and Facial Analytics; By End-use Industry - Government and Utilities; Military; Homeland Security; BFSI; Retail; Others (Digital Signage, Automotive, Web Applications, and Mobile Applications)) - Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2015 - 2022

Physical Security Market (Hardware, Software and Services) - Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2013 - 2019

Software Defined Networking (SDN) Market - Global Industry Analysis, Size, Share, Growth, Trends, and Forecast 2012 - 2018

About MRRSE
Market Research Reports Search Engine (MRRSE) is an industry-leading database of market intelligence reports. MRRSE is driven by a stellar team of research experts and advisors trained to offer objective advice. Our sophisticated search algorithm returns results based on the report title, geographical region, publisher, or other keywords. 

MRRSE partners exclusively with leading global publishers to provide clients single-point access to top-of-the-line market research. MRRSE's repository is updated every day to keep its clients ahead of the next new trend in market research, be it competitive intelligence, product or service trends or strategic consulting.

Contact
State Tower
90, State Street
Suite 700
Albany, NY - 12207
United States
Telephone: +1-518-730-0559
Email: sales@mrrse.com
Website: http://www.mrrse.com/

For more information on this press release visit: http://www.sbwire.com/press-releases/global-market-study-on-cyber-security-services-segment-projected-to-be-the-most-attractive-segment-by-component-during-2017-2025-827128.htm

Media Relations Contact

Vishant
Assistant Manager
MRRSE
Telephone: 518-730-0559
Email: Click to Email Vishant
Web: http://www.mrrse.com/cyber-security-market


          Automotive Sheet Metal Components Market - Global Industry Analysis, Market Size, Share, Growth, Trends, and Forecast 2017 - 2025   

Sheet metals are metals that are formed or developed by industrial processes into flat, thin pieces. They are a refined form of metal that can be cut and bent into various shapes and sizes as per the required specifications.

Albany, NY -- (SBWIRE) -- 06/30/2017 -- This market research study analyzes the automotive sheet metal components market on global basis and provides estimates in terms of revenue (US$ Bn) from 2016 to 2025. It describes the market dynamics affecting the industry and analyzes their impact through the forecast period. Moreover, it highlights the significant opportunities for market growth in the next eight years.

Sheet metals are metals that are formed or developed by industrial processes into flat, thin pieces. They are a refined form of metal that can be cut and bent into various shapes and sizes as per the required specifications. The thickness of sheet metals can vary significantly depending on the type of application.

Request For Sample Report: http://www.mrrse.com/sample/3115

Extremely thin sheet metals are called leaf or foil, and sheet metals thicker than 6 mm are called plates. Sheet metals are available as coiled strips or flat pieces. The coiled strips are formed by processing coils of metal through a roll slitter. The most commonly used sheet metals range from 30 to 7 gauge. Gauge differs between ferrous metals, which are iron based, and non-ferrous metals such as aluminum or copper. Numerous metals can be used to form sheet metals, such as titanium, steel, tin, nickel, brass, aluminum, gold, platinum, silver and copper. However, steel and aluminum are the two major types of metals used as automotive sheet metals.

Global Automotive Sheet Metal Components Market: Segmentation

The market is segmented on the basis of geography into North America, Europe, Asia Pacific, Middle East and Africa (MEA), and Latin America. These segments have been estimated in terms of revenue (US$ Bn). In addition, the report has been segmented based on material, which includes steel and aluminum. By application, the market is categorized into interior, drivetrain, engine, exterior and chassis. For a better understanding of the automotive sheet metal components market, the study comprises a market attractiveness analysis, where the material segments are benchmarked based on their market scope, growth rate and market attractiveness. Competitive rivalry among key players to acquire a higher share of the market is projected to be high in the coming years.

Global Automotive Sheet Metal Components Market: Competitive Landscape

The global automotive sheet metal components market is fragmented, with a few medium and large companies. Entry into this market is not restricted, as there is no monopoly and the market has huge scope and opportunity. However, setting up manufacturing units for automotive sheet metal components requires huge capital and resources, which is not feasible for most small and medium sized companies. Increasing private equity investments and mergers and acquisitions of companies in the automotive sector have greatly influenced the automotive sheet metal components market. Significant growth in the automotive sector, coupled with economic reforms in major developing countries, has bolstered the growth of this market. Asia Pacific, Middle East and Africa, and Latin America are key markets for the future and are expected to provide huge opportunities to global automotive sheet metal components manufacturers because of the increasing production and usage of passenger vehicles in these regions.

Browse Full Report With TOC: http://www.mrrse.com/automotive-sheet-metal-components-market

The report also provides company market share analysis of the various industry participants. Acquisition is the main strategy being widely followed by leading market players. In case of an acquisition, the acquirer takes advantage of existing synergies. As a result, both companies are expected to emerge more profitable and stronger than before. Key players in the global automotive sheet metal components market have been profiled and their company overview, financial overview, business strategies and recent developments have been covered in the report. Major market participants profiled in this report include: Novelis Inc., Aleris International Inc., Mayville Engineering Company, Inc., O'Neal Manufacturing Services, General Stamping and Metal Works, Larsen Manufacturing, LLC, Amada Co. Ltd., Paul Craemer GmbH, Frank Dudley Ltd., and Omax Autos Ltd.

Related Reports:

Head-up Display Market (Type: Combiner Projected HUDs, Windshield Projected HUDs; Applications: Aviation, Automotive, Others (Sports, Gaming, etc.); Automotive: Premium Cars, Sports Cars, Mid-segment Cars) - Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2015 - 2022

Agriculture and Farm Machinery Market (Product: Farm Tractors, Harvesting Machinery, Plowing and Cultivation Machinery, Planting and Fertilizing Machinery, Haying Machinery, Other Agricultural Machinery and Parts and Attachments) - Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2015 - 2022

Automatic Number Plate Recognition (ANPR) Market (Security and Surveillance, Vehicle Parking, Traffic Management, Toll Enforcement) - Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2015 - 2023

About MRRSE
Market Research Reports Search Engine (MRRSE) is an industry-leading database of market intelligence reports. MRRSE is driven by a stellar team of research experts and advisors trained to offer objective advice. Our sophisticated search algorithm returns results based on the report title, geographical region, publisher, or other keywords. 

MRRSE partners exclusively with leading global publishers to provide clients single-point access to top-of-the-line market research. MRRSE's repository is updated every day to keep its clients ahead of the next new trend in market research, be it competitive intelligence, product or service trends or strategic consulting.

Contact
State Tower
90, State Street
Suite 700
Albany, NY - 12207
United States
Telephone: +1-518-730-0559
Email: sales@mrrse.com
Website: http://www.mrrse.com/

For more information on this press release visit: http://www.sbwire.com/press-releases/automotive-sheet-metal-components-market-global-industry-analysis-market-size-share-growth-trends-and-forecast-2017-2025-827125.htm

Media Relations Contact

Vishant
Assistant Manager
MRRSE
Telephone: 518-730-0559
Email: Click to Email Vishant
Web: http://www.mrrse.com/automotive-sheet-metal-components-market


          Smart Rings Market - Global Industry Analysis, Trend, Size, Share and Forecast 2017 - 2025   

Smart rings are electronic wearable devices worn on the finger. They work through finger movement and gesture control. Smart rings are equipped with sensors and chips that enable them to perform various functions such as receiving call, message and email alerts, making contactless payments, measuring blood pressure, pulse rate and heart rate, and providing biometric solutions by monitoring access controls, among others.

Albany, NY -- (SBWIRE) -- 06/30/2017 -- This report covers the analysis and forecast of the smart rings market on a global and regional level. The study provides historic data of 2016 along with forecast for the period between 2017 and 2025 based on volume (Thousand Units) and revenue (US$ Mn).

Smart rings are electronic wearable devices worn on the finger. They work through finger movement and gesture control. Smart rings are equipped with sensors and chips that enable them to perform various functions such as receiving call, message and email alerts, making contactless payments, measuring blood pressure, pulse rate and heart rate, and providing biometric solutions by monitoring access controls, among others.

Request For Sample Report: http://www.mrrse.com/sample/3116

Global Smart Rings Market: Scope of the Report

The study provides a decisive view of the smart rings market by segmenting it based on type of operating system, technology, applications and regional demand. The type of operating system, technology and applications segments have been analyzed based on current trends and future potential. The market has been estimated from 2017 to 2025 in terms of volume (Thousand Units) and revenue (US$ Mn). Regional segmentation includes the current and forecast demand for North America, Europe, Asia Pacific, Latin America, and the Middle East and Africa. These have been further sub-segmented into countries and regions with relevance to the market. The segmentation also includes demand for individual applications in all regions.

The study covers the drivers and restraints governing the dynamics of the market along with their impact on demand during the forecast period. Additionally, the report includes potential opportunities in the smart rings market on the global and regional level. The study encompasses market attractiveness analysis, wherein applications have been benchmarked based on their market size, growth rate, and general attractiveness for future growth.

The market has been forecast based on constant currency rates. Prices of smart rings vary in each region and are a result of the demand-supply scenario in that region. Hence, the same volume-to-revenue ratio does not hold for each individual region. Individual pricing of smart rings for each application has been taken into account while estimating and forecasting market revenue on a global basis. The regional average price has been considered while breaking down the market into segments in each region.

The report provides the size of the smart rings market in 2016 and the forecast for the next nine years, up to 2025. The size of the global smart rings market is provided in terms of both volume and revenue. Market volume is defined in thousand units, while market revenue for regions is in US$ Mn. The market size and forecast for each product segment are provided in the context of global and regional markets. Numbers provided in this report are derived based on demand generated from different applications.

Global Smart Rings Market: Research Methodologies

Market estimates for this study have been based on volume, with revenue being derived through regional pricing trends. The price for commonly utilized grades of smart rings in each application has been considered, and customized product pricing has not been included. Demand for smart rings has been derived by analyzing the global and regional demand for smart rings in each application. The global smart rings market has been analyzed based on expected demand. Market data for each segment is based on volume and corresponding revenues. Prices considered for calculation of revenue are average regional prices obtained through primary quotes from numerous regional suppliers, distributors, and direct selling regional producers based on manufacturers' feedback. Forecasts have been based on the expected demand from smart rings. We have used the top-down approach to estimate the global smart rings market, split into regions. The type of operating system, technology and applications split of the market has been derived using a top-down approach for each regional market separately, with the global type of operating systems, technology and applications segment split being an integration of regional estimates. Companies were considered for the market share analysis based on their product portfolio, revenue, and manufacturing capacity. In the absence of specific data related to the sales of smart rings of several privately held companies, calculated assumptions have been made in view of the company's product portfolio and regional presence along with the demand for products in its portfolio.
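The paragraph above describes deriving revenue from volume and average regional prices; the sketch below shows a minimal version of that calculation with hypothetical volumes and prices (volume in thousand units, revenue in US$ Mn), not figures from the report.

```python
# Hypothetical volumes and average regional prices, for illustration only.

volume_thousand_units = {          # shipments per region, thousand units
    "North America": 850,
    "Europe": 620,
    "Asia Pacific": 1100,
}
avg_regional_price_usd = {         # average selling price per unit, US$
    "North America": 180.0,
    "Europe": 165.0,
    "Asia Pacific": 120.0,
}

# Revenue (US$ Mn) = volume (units) x average regional price, per region.
revenue_usd_mn = {
    region: volume_thousand_units[region] * 1_000 * avg_regional_price_usd[region] / 1e6
    for region in volume_thousand_units
}

for region, revenue in revenue_usd_mn.items():
    print(f"{region}: US$ {revenue:,.1f} Mn")
print(f"Global total: US$ {sum(revenue_usd_mn.values()):,.1f} Mn")
```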

Browse Full Report With TOC: http://www.mrrse.com/smart-rings-market

Key Players Mentioned in the Report are:

The report covers a detailed competitive outlook that includes market share and company profiles of key players operating in the global market. Some of the leading market players in the smart rings market are McLear Ltd. (U.K.), Logbar Inc. (Japan), Moodmetric (Finland), Shanxi Jakcom Technology Co., Ltd. (China) and Ringly Inc. (U.S.), among others.

Related Reports:

Smartwatches Market [By Price Range - High-end Smartwatches, Mid-end Smartwatches, and Low-end Smartwatches; By Operating System - Android Wear, Watch OS, and Others] - Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2014 - 2020

Video On Demand (VoD) Market: Global Industry Analysis and Opportunity Assessment 2014 - 2020

Freezer and Beverage & Wine Coolers Market (By Product Type - Ice-cream Freezers, Chest Freezers, Upright Freezers, Beverage Coolers and Wine Coolers; By Capacity - 500 & above litres, 300 to 500 litres, 200 to 300 litres and 200 & below litres; By Door Type - 4 Door & above, 3 Door, 2 Door and 1 Door) - SEA Industry Analysis, Size, Share, Growth, Trends and Forecast 2014 - 2022

About MRRSE
Market Research Reports Search Engine (MRRSE) is an industry-leading database of market intelligence reports. MRRSE is driven by a stellar team of research experts and advisors trained to offer objective advice. Our sophisticated search algorithm returns results based on the report title, geographical region, publisher, or other keywords. 

MRRSE partners exclusively with leading global publishers to provide clients single-point access to top-of-the-line market research. MRRSE's repository is updated every day to keep its clients ahead of the next new trend in market research, be it competitive intelligence, product or service trends or strategic consulting.

Contact
State Tower
90, State Street
Suite 700
Albany, NY - 12207
United States
Telephone: +1-518-730-0559
Email: sales@mrrse.com
Website: http://www.mrrse.com/

For more information on this press release visit: http://www.sbwire.com/press-releases/smart-rings-market-global-industry-analysis-trend-size-share-and-forecast-2017-2025-827122.htm

Media Relations Contact

Vishant
Assistant Manager
MRRSE
Telephone: 518-730-0559
Email: Click to Email Vishant
Web: http://www.mrrse.com/smart-rings-market


          Automotive Filters Market - Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2017 - 2025   

Filters have become an integral part of automotive engines in the current scenario. Durability, reliability, and ease of operation are the key factors on which the quality of a vehicle depends. Automotive filters help in maintaining the quality and service life of a vehicle.

Albany, NY -- (SBWIRE) -- 06/30/2017 -- Filters have become an integral part of automotive engines in the current scenario. Durability, reliability, and ease of operation are the key factors on which the quality of a vehicle depends. Automotive filters help in maintaining the quality and service life of a vehicle. They enhance efficiency and help customers breathe cleaner air. The regulations pertaining to environmental safety and emission standards set by regulatory bodies are likely to become tougher during the forecast period. This in turn will mandate automobile manufacturers to use automotive filters in the cars they produce, thereby encouraging the growth of the market during the forecast period.

Request For Sample Report: http://www.mrrse.com/sample/3117

Global Automotive Filters Market: Segmentation

Based on filter type, the global automotive filters market has been segmented into air filters, fuel filters, hydraulic filters, oil filters, and others. By vehicle type, the global automotive filters market can be segmented into passenger vehicles, commercial vehicles, and others. By end-use industry, the global automotive filters market has been classified into OEMs and aftermarket. The OEM segment leads the market in terms of revenue. The rising demand for passenger cars across the world has mandated car manufacturers to use various types of filters in their cars in order to adhere to various vehicular norms, which subsequently led to an increase in the demand for filters in this segment. The aftermarket segment is expected to grow at a steady rate during the forecast period 2017 - 2025. A high replacement rate and the low cost of aftermarket parts are some of the reasons for the high growth of the segment. Some aftermarket filters are also functionally stronger than OEM filters. These factors are likely to boost the growth of the aftermarket end-use segment.

As sterner regulations are likely to shape the automobile market, companies operating in the global automotive filters market are anticipated to invest heavily in research and development to come up with advanced automotive filters. This in turn will propel the overall market for automotive filters. Moreover, adherence to these norms is compulsory for automobile manufacturers. Therefore, the use of filters in automobiles is likely to increase during the forecast period in order to meet these regulatory norms.

Global Automotive Filters Market: Drivers and Restraints

This report on the global automotive filters market highlights the current scenario of the market and states the expected growth of the global market during the forecast period. Various political, social, economic and technological factors that are likely to impact the demand for automotive filters have been analyzed to provide an exhaustive study of the global market drivers, restraints and opportunities, i.e. the market dynamics under the purview of the report. Further, the key players operating in the automotive filters market have been profiled thoroughly and competitively across the five geographic regions, and their competitive landscape includes their recent developments related to automotive filters and the distinguishable business strategies adopted by them. To further analyze their market positioning, a SWOT analysis has been provided for each of the players. In addition, the report includes a market attractiveness analysis of the segmentation by filter type, offering a deep insight into the major filter usage areas in vehicles. Thus, the global automotive filters market report provides an extensive study of the market along with a forecast of the market in terms of revenue (US$ Billion) and volume (million units) for the period 2017 – 2025.

Browse Full Report With TOC: http://www.mrrse.com/automotive-filters-market

Key Players Mentioned in this Report are:

Some of the major players in the Automotive Filters market are: Sogefi SpA (Italy), MAHLE GmbH (Germany), MANN+HUMMEL GmbH (Germany), A.L. Filter (Israel), Robert Bosch GmbH (Germany), Donaldson Company, Inc. (U.S.), North American Filter Corporation (U.S.), Fildex Filters Canada Corporation (Canada), K&N Engineering, Inc. (U.S.), Filtrak BrandT GmbH (Germany), Luman Automotive Systems Pvt. Ltd. (India), ALCO Filters Ltd. (Cyprus), and Siam Filter Products Ltd., Part. (Thailand).

Related Reports:-

Vehicle Cameras (Affordable, Mid-range and high-end Vehicle Cameras) Market - Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2014 - 2020

Automotive Telematics Market: Asia Pacific Industry Analysis and Opportunity Assessment 2014 - 2020

Dual Clutch Transmission (DCT) Market: Global Industry Analysis and Opportunity Assessment 2014 - 2020

About MRRSE
Market Research Reports Search Engine (MRRSE) is an industry-leading database of market intelligence reports. MRRSE is driven by a stellar team of research experts and advisors trained to offer objective advice. Our sophisticated search algorithm returns results based on the report title, geographical region, publisher, or other keywords. 

MRRSE partners exclusively with leading global publishers to provide clients single-point access to top-of-the-line market research. MRRSE's repository is updated every day to keep its clients ahead of the next new trend in market research, be it competitive intelligence, product or service trends or strategic consulting.

Contact
State Tower
90, State Street
Suite 700
Albany, NY - 12207
United States
Telephone: +1-518-730-0559
Email: sales@mrrse.com
Website: http://www.mrrse.com/

For more information on this press release visit: http://www.sbwire.com/press-releases/automotive-filters-market-global-industry-analysis-size-share-growth-trends-and-forecast-2017-2025-827119.htm

Media Relations Contact

Vishant
Assistant Manager
MRRSE
Telephone: 518-730-0559
Email: Click to Email Vishant
Web: http://www.mrrse.com/automotive-filters-market


          Hotels Market - Global Industry Analysis,Trends and Forecast   

The booming travel and tourism industry is one of the major factors boosting the growth of the hotel industry.

Albany, NY -- (SBWIRE) -- 06/30/2017 -- The hotel industry is one of the major segments of the hospitality sector. Hotel facilities vary in function, size, and cost. The facilities provided by hotels include swimming pools, childcare, conference facilities, business centers, and social function services, among others. The booming travel and tourism industry is one of the major factors boosting the growth of the hotel industry. Apart from this, aggressive branding strategies adopted by major players are also expected to drive the growth of the hotels market. Hotels are classified, based on the services they offer, into 1 Star, 2 Star, 3 Star, 4 Star, 5 Star, and unrated. 1 Star hotels include tourist hotels; 2 Star hotels include standard hotels; 3 Star hotels include comfort hotels; 4 Star hotels include first class hotels; and 5 Star hotels include luxury hotels.

Request For Sample Report : http://www.mrrse.com/sample/280

The report has been segmented by type and region; it highlights current market trends and provides a forecast for the period from 2015 to 2021. We have also covered the current market scenario for hotels and highlighted the future trends that are likely to have an impact on the demand for hotels. The report lists the number of hotels located across major countries in each of the regions. The report also analyzes the macroeconomic factors influencing and inhibiting the growth of the global hotels market.

The 3 Star segment held the largest market share in the global hotels market. Increasing domestic tourism, coupled with demand for a luxurious lifestyle, is one of the major factors fueling the demand in the 3 Star hotels segment. However, the unrated segment is expected to be the fastest growing segment, owing to increasing demand for budget hotels. The 5 Star hotels segment also has huge growth potential. An increasing number of business travelers, coupled with demand for better service, is one of the major factors fueling the demand in this segment.

North America is the largest as well as the fastest growing market for hotels globally. The U.S. held the largest market share in the North America hotels market and has the largest number of budget hotels globally. Moreover, the booming travel and tourism industry is also expected to have a positive impact on the hotels market. In addition, segmented offerings from the major players are also driving the hotels market in the U.S. India, China, Singapore and South Korea, among others, are some of the major markets for hotels in Asia Pacific. Singapore, with its increasing number of business travelers, is the fastest growing market in the Asia Pacific region. The 4 Star segment held the largest market share in the Singapore hotels market. In addition, Brazil, Saudi Arabia and UAE are some of the major markets fueling the demand for hotels in the rest of the world.

The report also provides an understanding of the value (USD billion) of the hotels market. The study provides a forecast for 2015-2021 and highlights current and future market trends. The report also provides an understanding of brand shares in the hotels market across the various countries covered in the scope of research.

By Geography, the market has been segmented into North America, Europe, Asia Pacific and rest of the world. The countries included in the North America region are U.S., Canada and rest of North America. Europe includes the U.K., Germany, France and rest of Europe. India, China, South Korea and Singapore are some of the major countries covered within the scope of Asia Pacific. Rest of the world region includes Brazil, UAE and Saudi Arabia among others.

Browse Full Report With TOC : http://www.mrrse.com/hotels-market

The report also provides number of hotels present across each of the countries across segments. The key players operating in global hotels market are Hilton Worldwide Holdings Inc., Marriott International Inc., InterContinental Hotels Group Plc, Starwood Hotels and Resorts Worldwide, Inc., Accor Group, Indian Hotels Co Ltd., ITC Ltd., Jumeirah International LLC, Atlantis The Palm Limited, and Four Seasons Holdings Inc. among others.

Global Hotels Market: By Type

1 Star
2 Star
3 Star
4 Star
5 Star
Unrated
Global Hotels Market: By Geography

North America
U.S.
Canada
Rest of North America
Europe
U.K.
Germany
France
Rest of Europe
Asia Pacific
India
China
South Korea
Singapore
Rest of Asia Pacific
Rest of the World
Brazil
UAE
Saudi Arabia
Others
About Us

Market Research Reports Search Engine (MRRSE) is an industry-leading database of market intelligence reports. MRRSE is driven by a stellar team of research experts and advisors trained to offer objective advice. Our sophisticated search algorithm returns results based on the report title, geographical region, publisher, or other keywords. 

MRRSE partners exclusively with leading global publishers to provide clients single-point access to top-of-the-line market research. MRRSE's repository is updated every day to keep its clients ahead of the next new trend in market research, be it competitive intelligence, product or service trends or strategic consulting.  

Contact

State Tower 
90, State Street 
Suite 700 
Albany, NY - 12207 
United States

Telephone: +1-518-730-0559 
Email: sales@mrrse.com

Google+: https://plus.google.com/u/0/109558601025749677847/posts

Linked in: https://www.linkedin.com/company/mrrse

Twitter: https://twitter.com/MRRSEmrrse

For more information on this press release visit: http://www.sbwire.com/press-releases/hotels-market-global-industry-analysistrends-and-forecast-826758.htm

Media Relations Contact

Bharat
Assistant Manager
MRRSE
Telephone: 1-518-621-2074
Email: Click to Email Bharat
Web: http://www.mrrse.com/


          POU Water Purifiers Market MENA Industry Analysis and Opportunity Assessment   

This Future Market Insights report examines the 'POU Water Purifier Market' in the Middle East and North Africa region for the period 2014–2020.

Albany, NY -- (SBWIRE) -- 06/29/2017 -- This Future Market Insights report examines the 'POU Water Purifier Market' in the Middle East and North Africa region for the period 2014–2020. The primary objective of the report is to offer key insights about the water purifier market in MENA to current market participants or new entrants across the value chain.

Request For Sample Report: http://www.mrrse.com/sample/271

The report includes a study of the three key technologies of water purification, i.e. Reverse Osmosis (RO), Ultraviolet (UV) and media filtration (gravity). It offers an in-depth analysis of market size, forecast and the key trends followed in all three segments. The report starts with an overview of the parent market, i.e. the water treatment industry in MENA, and the part the POU water purifier industry plays in it. It also offers useful insights about the global POU water purifier market and the role the MENA market is poised to play.

The next section of the report includes FMI's analysis of the key trends, drivers and restraints influencing the target market from the supply-side, demand-side and economic perspectives. An impact analysis of key growth drivers and restraints, based on a weighted average model, is included in the report to equip clients with clear decision-making insights. As highlighted before, water purifiers are based on Reverse Osmosis (RO), Ultra Violet (UV) and media-based filtration technologies. Reverse osmosis is estimated to contribute a noteworthy proportion of revenue in the MENA water purifiers market. However, in price-sensitive regions, the media-based segment is expected to witness robust growth during the forecast period.
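The weighted average model mentioned here is essentially a scored ranking of drivers and restraints. The sketch below shows one plausible form of that calculation with hypothetical scores and weights, since the report does not publish its exact inputs.

```python
# Hypothetical impact scores (1-5) and weights (summing to 1.0), for illustration only.

drivers = {
    "Poor municipal water quality": (5, 0.40),
    "Rising health awareness":      (4, 0.35),
    "Falling purifier prices":      (3, 0.25),
}
restraints = {
    "High upfront cost":            (4, 0.60),
    "Bottled water substitutes":    (3, 0.40),
}

def weighted_average(factors):
    """Weighted average impact score for a dict of {factor: (score, weight)}."""
    return sum(score * weight for score, weight in factors.values())

print(f"Driver impact:    {weighted_average(drivers):.2f} / 5")
print(f"Restraint impact: {weighted_average(restraints):.2f} / 5")
```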

The next section highlights the POU water purifier market by region. It provides a market outlook for 2014-2020 and sets the forecast within the context of the water purifier market, including the three technologies, to build out a complete picture at the regional level. This study discusses the key regional trends contributing to the growth of the water purifier market in MENA and analyzes the degree to which key drivers are influencing the water purifiers market in each region of MENA. For this report, the regions assessed are the Kingdom of Saudi Arabia, United Arab Emirates, Turkey, Israel, Egypt, Algeria and the rest of MENA.

To calculate the revenue generated from POU water purifiers, the report considered the total volume sales of water purifiers along with the average selling price, as well as the revenue generated from the water purifier segments of major players in the market. When forecasting the market, the starting point is sizing the current market, which forms the basis for how the market will develop in the future. Given the characteristics of the market, we triangulated the outcomes of three different types of analysis based on the supply side, consumer spending, and the economic envelope. However, forecasting the market in terms of various water purifier technologies and regions is more a matter of quantifying expectations and identifying opportunities than of rationalizing them after the forecast has been completed.
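As a rough sketch of the triangulation step described above, the snippet below derives a supply-side estimate from unit sales and average selling price and reconciles it with two other independent estimates; all numbers are hypothetical placeholders, not figures from the report.

```python
# Hypothetical figures for illustration only.

# Supply-side estimate: unit sales x average selling price.
units_sold = 2_400_000            # POU purifier units sold in a year
average_selling_price = 210.0     # US$ per unit
supply_side_estimate = units_sold * average_selling_price / 1e6   # US$ Mn

# Independent estimates from the other two analytical lenses.
consumer_spending_estimate = 515.0   # US$ Mn, demand-side view
economic_envelope_estimate = 498.0   # US$ Mn, macroeconomic view

# Triangulation: reconcile the three views (a simple average here).
estimates = [supply_side_estimate, consumer_spending_estimate, economic_envelope_estimate]
triangulated_market_size = sum(estimates) / len(estimates)

print(f"Supply-side estimate:  US$ {supply_side_estimate:,.0f} Mn")
print(f"Triangulated estimate: US$ {triangulated_market_size:,.0f} Mn")
```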

Another key feature of the report is the analysis of the three key water purifier technologies and the regions in terms of absolute dollar opportunity. This is traditionally overlooked when analysts forecast the market. However, absolute dollar opportunity is critical in assessing the level of opportunity that a provider can look to achieve, as well as in identifying potential resources from both the sales and delivery perspectives.

Further, to understand the key growth segments in terms of technology and region, FMI has developed the MENA water purifier market attractiveness index. The resulting index should help providers identify real market opportunities.

In the final section of the report, the MENA water purifier market competitive landscape is included to provide the report audience with a dashboard view based on categories of providers in the value chain, their presence in the water purifier market and their key differentiators. The key categories of providers covered in the report are manufacturers and major distributors. This section is primarily designed to provide clients with an objective and detailed comparative assessment of key providers specific to a market segment in the POU water purifier value chain. Report audiences gain segment- and function-specific vendor insights to identify and evaluate key competitors based on an in-depth assessment of capabilities and success in the POU water purifier marketplace. Detailed profiles of the providers are also included in the scope of the study to evaluate their long-term and short-term strategies, key offerings and recent developments in the market. Key competitors covered are Eureka Forbes, PureIt, Strauss Water, Panasonic, LG and others.

Browse Full Report With TOC: http://www.mrrse.com/mena-pou-water-purifiers-market

In this study, we analyze the MENA Water Purifier Market during 2012-2020. We focus on:

Market size and forecast, 2012-2020
Key drivers and developments in POU Water Purifier Market
Key Trends and Developments of MENA Water Purifier Market technologies such as RO, UV and Media

Key Drivers and developments in particular regions such as KSA, UAE, Turkey, Israel, Egypt, Algeria and Others

Key Geographies Covered
Middle East and North Africa

Other Key Topics

MENA- Water Market, MENA- Wastewater Treatment Equipment Market
Examples of key Companies Covered
Strauss Water, Water Life, LG, Panasonic, Eureka Forbes

About MRRSE
Market Research Reports Search Engine (MRRSE) is an industry-leading database of market intelligence reports. MRRSE is driven by a stellar team of research experts and advisors trained to offer objective advice. Our sophisticated search algorithm returns results based on the report title, geographical region, publisher, or other keywords. 

MRRSE partners exclusively with leading global publishers to provide clients single-point access to top-of-the-line market research. MRRSE's repository is updated every day to keep its clients ahead of the next new trend in market research, be it competitive intelligence, product or service trends or strategic consulting.                                                 

Contact
State Tower 
90, State Street 
Suite 700 
Albany, NY - 12207 
United States
Telephone: +1-518-730-0559 
Email: sales@mrrse.com
Google+: https://plus.google.com/u/0/109558601025749677847/posts
Linked in: https://www.linkedin.com/company/mrrse
Twitter: https://twitter.com/MRRSEmrrse

For more information on this press release visit: http://www.sbwire.com/press-releases/pou-water-purifiers-market-mena-industry-analysis-and-opportunity-assessment-826752.htm

Media Relations Contact

Bharat
Assistant Manager
MRRSE
Telephone: 518-621-2074
Email: Click to Email Bharat
Web: http://www.mrrse.com/


          Global Market Study on Sports Equipment Ball Sports to Be the Largest Segment   

The global sports equipment market consists of equipment for ball sports, adventure sports, fitness, golf, winter sports.

Albany, NY -- (SBWIRE) -- 06/29/2017 -- The global sports equipment market has witnessed considerable growth over the last few years. Increasing participation in sports activities, growing consumer awareness about health and fitness, and the emergence of e-commerce are some of the key drivers impelling growth of the sports equipment market. However, the increasing availability of counterfeit products and rising prices of sports equipment are restraining the growth of this market to some extent. The global sports equipment market consists of equipment for ball sports, adventure sports, fitness, golf, winter sports, and other sports, including archery, billiards, bowling, wheel sports, pogo sticks, and indoor games. Among the various sports equipment segments, ball sports hold the largest market share. Increasing media coverage of global sports events such as the Olympic Games, Commonwealth Games, and FIFA World Cup encourages the youth to take part in various sports. The sports equipment industry is swiftly embracing new technologies and adapting its products in order to keep pace with rapidly changing global trends.

Request For Sample Report: http://www.mrrse.com/sample/163

North America holds the largest market share for sports equipment, followed by Europe and Asia Pacific. Developed markets in the U.S. and European countries dominate the sports equipment market. The U.S. and Canada are the largest markets for sports equipment in North America. Demand for sports equipment in the developing economies of Asia Pacific is expected to show high growth during the forecast period. During 2010–2013, the sports equipment market in Asia Pacific experienced a greater growth rate than that of other regions, including North America and Europe. Countries such as Japan, China, India, and Australia are witnessing rapid economic growth. This, in turn, is expected to further drive the sports equipment market in Asia Pacific.

The report covers an in-depth sports equipment market analysis, by product segment (ball sports, adventure sports, fitness equipment, golf equipment, winter sports, and other sports equipment) and by region (North America, Europe, Asia-Pacific, and Rest of the World), for the period 2010 to 2020. Moreover, the current sports equipment market dynamics, including drivers, restraints, opportunities, and recent developments, have been covered in the report. The competitive landscape section of the report includes each company's revenue and the number of product segments it offers. Company profiles include attributes such as company overview, products and services, financial performance, and recent developments. Some of the major players in the sports equipment market are Amer Sports, Adidas AG, Callaway Golf Company, PUMA SE, Cabela's Incorporated, GLOBERIDE, Inc., MIZUNO Corporation, Nike Inc., Jarden Corporation, and YONEX Co., Ltd.

Browse Full Report With TOC: http://www.mrrse.com/sports-equipments-market

Key Points Covered in the Report

Market segmentation on the basis of product
Geographic segmentation
North America
Europe
Asia Pacific
RoW

Market size and forecast of the various segments and geographies for the period of 2010 to 2020
Company profiles of the leading companies operating in the market
Porter's five forces analysis of the market

About MRRSE
Market Research Reports Search Engine (MRRSE) is an industry-leading database of market intelligence reports. MRRSE is driven by a stellar team of research experts and advisors trained to offer objective advice. Our sophisticated search algorithm returns results based on the report title, geographical region, publisher, or other keywords. 

MRRSE partners exclusively with leading global publishers to provide clients single-point access to top-of-the-line market research. MRRSE's repository is updated every day to keep its clients ahead of the next new trend in market research, be it competitive intelligence, product or service trends or strategic consulting.

Contact
State Tower 
90, State Street 
Suite 700 
Albany, NY - 12207 
United States
Telephone: +1-518-730-0559 
Email: sales@mrrse.com
Google+: https://plus.google.com/u/0/109558601025749677847/posts
Linked in: https://www.linkedin.com/company/mrrse
Twitter: https://twitter.com/MRRSEmrrse