Comment on The Open Source – SELRS Update by SELRS at the Lake – SELRS Update | Growing Food Security in Alberta        
[...] & Awareness – implementing open source technology to build more effective online communications and project [...]
          Comment on The Open Source – SELRS Update by Rene Michalak        
'Occupy' as a business model: The emerging open-source civilisation
          (Bazaarmodel - To Heal - Teal) Dream - Semco Style
Those who make dreams come true get noticed

- Aldowa (A Semco Style Company)

'Leading Wisely is a podcast series by Ricardo Semler about the search for wisdom in organizations. In discussions with business leaders such as Zappos' Tony Hsieh, Basecamp's Jason Fried and David Heinemeier Hansson and with other experts on the topic such as Frederic Laloux, he challenges assumptions and explores how we can change the way we live and work.'

- Killing the dinosaur business model, 2017

'In France, almost every couple of days a company or public sector organization is entering corporate liberation. How about the US? Here is one more example of how a company’s leader decided to liberate his company.

Ricardo Semler’s book served as an inspiration. Add to that a deep conviction and a lot of common sense.'

- Freedom Inc, '..corporate liberation.' June 6, 2017

Context (Leaders)

'..I believe in responsibility but not in pyramidal hierarchy .. the negative value of structure. Structure creates hierarchy, and hierarchy creates constraint..'

(Bazaarmodel - To Heal - Teal) - 'Your physical .. cultural .. soul heredity..'

(To Heal - Teal - Bazaarmodel) - Striving for wholeness '..We have let our busy egos trump the quiet voice of our soul; many cultures often celebrate the mind and neglect the body..'

'I, too, have a pet little evil, to which in more passionate moments I am apt to attribute all the others. This evil is the neglect of thinking. And when I say thinking I mean real thinking, independent thinking, hard thinking.'

- Learning How to Think (Economics - '..acts of choice.' (‘..imagination of alternatives..’))

(Open Source) - '..“open innovation.” Companies such as AstraZeneca, Lilly, GSK, Janssen, Merck, Pfizer, Sanofi, TransCelerate, and others..'

(Haptopraxeology) - '..the senses were the windows of the soul and that reason had a divine right to feed upon fact..'

(To Heal) - Overview of Focus Levels ' areas of greater free will choice.'

' rethink and to rebuild a culture where there are open channels between feeling and understanding..'

(Praxeology) - '..his or her subjective values .. to explain all economic phenomena as the results of what people do..'

'Reinventing Organizations: ..radically more soulful, purposeful and powerful ways to structure and run .. organizations.'

(To Heal)(Management innovation) - '..Teal Organizations to start healing the world..'

(Haptopraxeology) - Students of Civilization

          '..FORGET fuel-powered jet engines .. looking into hybrid planes .. plasma engine..'
'FORGET fuel-powered jet engines. We’re on the verge of having aircraft that can fly from the ground up to the edge of space using air and electricity alone.


Berkant Göksel at the Technical University of Berlin and his team now want to fit plasma engines to planes. “We want to develop a system that can operate above an altitude of 30 kilometres where standard jet engines cannot go,” he says. These could even take passengers to the edge of the atmosphere and beyond.


Göksel is hoping for a breakthrough in compact fusion reactors to power his system..

In the meantime, he is looking into hybrid planes, in which his plasma engine would be combined with pulse detonation combustion engines or rockets to save on fuel.'

- Sandrine Ceurstemont, Plasma jet engines that could take you from the ground to space, May 17, 2017


(Fusion Power) - LPP Focus Fusion 1; '..FF-1 results are right now far ahead..'

(The Electric Universe) - SAFIRE as Astrophysical Laboratory | EU2016

'Open source is now mainstream..'

The New Fusion Race - Part 4 - Fusion Race: Who is Ahead, April 6, 2017

          '..committed to 100 percent clean energy by the year 2050.'
'Leaders from the City of Portland and Multnomah County have committed to 100 percent clean energy by the year 2050.

In an announcement earlier this week, authorities said that their goal was to meet the community's electricity needs with renewables by the year 2035 and to move all remaining energy sources to renewable ones by 2050.


Multnomah County is the most populous county in Oregon. Its Chair, Deborah Kafoury, welcomed the news. "This is a pledge to our children's future,'' she said. "100 percent renewables means a future with cleaner air, a stable climate and more jobs and economic opportunity.''

Portland is among a number of U.S. cities looking to embrace renewables. Over the weekend Chicago's Mayor, Rahm Emanuel, announced that city buildings there were to be powered by 100 percent renewable energy by 2025.'

- Anmar Frangoul, Portland commits to 100 percent renewable energy by 2050, April 12, 2017


(Fusion Power) - LPP Focus Fusion 1; '..FF-1 results are right now far ahead..'

(Fusion Power) - '..LPP so far has two out of the three necessary ingredients for successful breakeven..'

' Ban Internal Combustion Engines by 2030'

The nuclear retreat - '..the global transition to sustainable 100 percent renewable energy.' - ' Europe by 2050.'

(To Heal) - '..the forces and forms of nature -- clouds, mountains, waves -- in cities of the future.'

(In The Electric Universe) Open Source Infrastructure, beginning of the Enterprise Nervous System (ENS)

          Vision of Humanity - Global Peace
'Economists on Peace aims to stimulate global discussion and shared learning on economic aspects of peace and conflict leading to appropriate action for peace, security and the world economy.'

- An Economists for Peace and Security editorial collaboration

'The need to understand what works in peacebuilding, how to measure its impact and cost-effectiveness is essential to long-term efforts to prevent violence and build peace. Yet, there is much we collectively do not know about peacebuilding, what works and doesn’t work, let alone what activities broadly define it. At a time when the international community’s resources to international development and aid are under strain due to tightened national budgets and stress from humanitarian action, the need to understand and invest in the most cost-effective ways to build long term peace is more crucial than ever.'

- Report: Measuring Peace Building Cost-Effectiveness

Context (Haptopraxeology) - '..We have lost three centuries as a result of ignoring our scholars!'

Global Peace Index

The Book of Peace

Four Ways Peace Research Made an Impact in Nuevo León, Mexico

Gold, Peace, and Prosperity

In The Electric Universe a Future of Peace and Love

'..tell your boss you think the company has a love deficit.' - Hamel

(To Heal)(Reinventing Organizations) - '..about what's happening in the space of organizations going Teal.'

(In The Electric Universe) Open Source Infrastructure, beginning of the Enterprise Nervous System (ENS)

          'Open source is now mainstream..'
'Open source is now mainstream. More and more developers, organizations, and enterprises are understanding the benefits of an open source strategy and getting involved. In fact, The Linux Foundation is on track to reach 1,000 participating organizations in 2017 and aims to bring even more voices into open source technology projects ranging from embedded and automotive to blockchain and cloud.'

- Mike Woster, Global Enterprises Join The Linux Foundation to Accelerate Open Source Development Across Diverse Industries, March 30, 2017


(Open Source) - '..“open innovation.” Companies such as AstraZeneca, Lilly, GSK, Janssen, Merck, Pfizer, Sanofi, TransCelerate, and others..'

'..Microsoft, is shifting over to open source for its development.'

          Call for developers: Quanta Plus and KDEWebDev        

KDE Project:

Time is passing by. Sometimes I'm amazed that it was more than 5 years ago that I wrote my first KDE application, and that soon after I joined the Quanta Plus project. A few months later Quanta Plus became part of the KDE releases, I think with version 3.1.
Probably many of you know that I worked full time on Quanta in the past years, thanks to Eric Laffoon and many other supporters who made this possible. But things have changed, and I can no longer spend all my time on this beloved project. I'm not abandoning it; I just realized that alone it would take too much time to get a release out in time for the KDE 4.x series. Therefore I call for help: I'd like to ask the community (existing developers, users with some C++ knowledge, or developers looking for a challenging project in the open source world) to come join us. Help make Quanta4 a reality and make many users happy throughout the world. Don't be afraid of the size of the project; one of the goals of Quanta4 is to have modular code, built up as KDevPlatform (KDevelop) plugins.

There are other projects inside the KDEWebDev module that need help, some even maintainers:

- Kommander: just take a look and you will be amazed by the number of Kommander scripts uploaded by users. Help us have a good Kommander for KDE4 as well!
The executor is already ported, but we have lots of new ideas waiting to be implemented.

- KFileReplace: useful search and replace tool, unfortunately without a current maintainer. It works, but needs some love.

- KImageMapEditor: don't leave web developers without a KDE image map editor!

Of course our priority would be Quanta Plus and Kommander, but if you are interested in either of the above, just contact us on our developer list.

          Tools: SpiderFoot – Open Source Intelligence Automation Tool (OSINT)        

Soapmaking in the planning
My idea of a perfect summer includes a lot of R&D (research and development) in the lab. Earlier this summer, before the humidity got out of hand, I got my act together and created my first batch of soap, ever. It was the exact same formula that Open Source Soap used for creating all my fragrance 3-in-1 soap bars. I decided on an unscented soap for my first batch, because I really wanted to see and experience the soap in its pure form, and also to avoid a painful loss of precious fragrant materials in case I screwed up.
Pouring my first batch of soap ever
The process is a tad tedious and time-consuming, requiring one to be precise with the temperatures and extra cautious with the lye's caustic properties. It was a rather humid day when I made it, so I realized pretty fast that it is very uncomfortable to work with goggles and gloves when the air is so slippery and moist; there is also the feeling that the air would carry the caustic fumes far too easily into my system. No harm was done, but I am now convinced that winter is the best time for this kind of production (or R&D, for that matter).
Cured soap
I've used stainless steel loaf pans as molds. I made the mistake of not putting in any liners (I didn't want the bars to have wrinkles at the bottom). It turned out to be near impossible to get the soap out after the 24-hour hardening period, but I managed to do it anyway.
Soap slicing
I'm very pleased with the result as far as the soap's consistency and properties (lathering, moisturizing) go, albeit with its messy look. I know that if it had been possible to take it out of the mold easily the bars would have been beautiful, so next time I'm going to use a different pouring procedure and probably a different mold; most likely reused 1L milk cartons. The bars will be a different size than they were under Schuyler's hands (he used 2L juice cartons, and then cut them in the middle to create long rectangles). Mine will be more on the squarish side.
Post-Soapmaking Mess
I ended up with a lot of soap shavings, from which I can make a liquid soap or just use for hand washing clothes etc.
Post-Soapmaking Mess - Cleanup
Cleanup time!
(Which is super easy, by the way - especially with my designated sink and stainless steel surfaces - yay!).


I am now waiting for drier weather to proceed with more experiments. In the meantime, I'm creating oil infusions of herbs that could be incorporated into the soap, from wild herbs that grow here, for example Varthemia and Sage. Having appropriate space makes all the difference: I have room for large-mouthed jars that can sit around for months if needed and still not take up much of my ongoing workspace. It is so refreshing to have a studio built especially for the purpose I need it for. I can't even begin to tell you how thrilled I am about that and all the possibilities of what I can do next.
          2007 in Retrospect        

As I did in 2006, here's my review of 2007. For some strange reason, I decided to make some New Year Resolutions in 2006. How did I do? I said I'd do more unit testing - and I did, but there's always room for more unit testing. I said I'd do more open source. Well, I released Fusebox 5.1 and Fusebox 5.5 as well as my Scripting project and a cfcUnit facade for CFEclipse so I think I did alright there. I also said I'd do more Flex and write some Apollo (now AIR) applications. I didn't do so well on those two! I think I'll revert to my usual practice of not making resolutions this year...

2007 was certainly a year of great change for me, leaving Adobe in April (a hot thread with 62 comments!) to become a freelance consultant, focusing on ColdFusion and application architecture. I also worked part-time on a startup through the Summer but consulting has been my main focus and continues to be my total business as we move into 2008.

2007 also saw me getting much more involved with the ColdFusion community, rejoining all the mailing lists that I hadn't had time to read with my role at Adobe, becoming an Adobe Community Expert for ColdFusion and then taking over as manager of the Bay Area ColdFusion User Group.

I also got to speak at a lot of conferences in 2007:

I also attended the Adobe Community Summit which was excellent!

ColdFusion frameworks were also very busy in 2007:

Adobe was extremely busy too:

  • Apollo (AIR) hit labs in March
  • The Scorpio prerelease tour (Ben came to BACFUG in April) with the ColdFusion 8 Public Beta in May and the full release in July
  • Creative Suite 3
  • Flex began its journey to open source
  • The Flex 3 and AIR Beta releases
  • Adobe Share

I had a number of rants:

Other good stuff from 2007:

          Rating Runeta named the most popular CMS of 2017        
Rating Runeta (Рейтинг Рунета) has published its latest ranking of content management systems. As before, the results are presented as an overall top list, three main categories (open source, studio-built, and commercial CMS), and breakdowns by subject area and by the types of sites built on them.
          UML Modeller        
Umbrello UML Modeller
"Umbrello UML Modeller is a Unified Modelling Language diagram programme for KDE. UML allows you to create diagrams of software and other systems in a standard format."
"ArgoUML is the leading open source UML modeling tool and includes support for all standard UML 1.4 diagrams."

          Links from Open Source Musician Podcast 53        
Here are some links from Open Source Musician Podcast episode 53: - Mixbus

Harrison Mixbus: The $79 Virtual Analog Console, Now on Both Mac and Linux

Sonic Talk Podcast


WalkThrough/Dev/JackSession - Jack Audio Connection Kit - Trac

ardour - the digital audio workstation

Ardour 3.0 alpha 4 released

Paul Davis

Mixxx - Free Digital DJ Software

DSSI - API for audio processing plugins

Leigh Dyer: woo, tangent | lsd's rants about games, music, linux, and technology


PiTiVi, a free and open source video editor for Linux

Open Source Musician Podcast: Podcast editing using Ardour

Saffire PRO 40 Audio Interface - Free Firewire Audio Drivers

Diffusion (acoustics) - Wikipedia


Podcast OUT.
          Feeling Fuzzy        
"SimMetrics is an open source extensible library of similarity and distance metrics, e.g. Levenshtein distance, L2 distance, cosine similarity, Jaccard similarity, etc. SimMetrics provides a library of float-based similarity measures between string data as well as the typical unnormalised metric output."
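To make the first of those metrics concrete, here is a minimal dynamic-programming Levenshtein distance, sketched in Rust (the language used elsewhere in this collection); it illustrates the metric itself and is not code from SimMetrics, which is a Java library.

```rust
// Minimal dynamic-programming Levenshtein distance: the number of
// single-character insertions, deletions, and substitutions needed to
// turn one string into the other.
fn levenshtein(a: &str, b: &str) -> usize {
    let a: Vec<char> = a.chars().collect();
    let b: Vec<char> = b.chars().collect();
    // prev[j] = distance between the first i chars of a and first j of b.
    let mut prev: Vec<usize> = (0..=b.len()).collect();
    for (i, &ca) in a.iter().enumerate() {
        let mut curr = vec![i + 1];
        for (j, &cb) in b.iter().enumerate() {
            let cost = if ca == cb { 0 } else { 1 };
            let best = (prev[j] + cost) // substitution (or match)
                .min(prev[j + 1] + 1)   // deletion
                .min(curr[j] + 1);      // insertion
            curr.push(best);
        }
        prev = curr;
    }
    prev[b.len()]
}

fn main() {
    // "kitten" -> "sitting": substitute k/s, substitute e/i, insert g.
    assert_eq!(levenshtein("kitten", "sitting"), 3);
}
```

Normalising this count by the longer string's length gives the kind of float-based similarity score the quote describes.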

Python: difflib
"This module provides classes and functions for comparing sequences. It can be used for example, for comparing files, and can produce difference information in various formats..."

SSIS: Fuzzy Lookup Transformation
"The Fuzzy Lookup transformation performs data cleaning tasks such as standardizing data, correcting data, and providing missing values."

          Major General Hoyer Visits OSIX at FSU        
Tue, 2012-02-28

Fairmont State University Interim President Maria Rose greets Maj. Gen. James A. Hoyer, adjutant general for the West Virginia Army and Air National Guard, who visited Fairmont State University on Friday, Feb. 24, to tour the Open Source Intelligence Exchange (OSIX).

          AIR, Flex 3, BlazeDS, and a new Adobe Open Source site!        
Check out Flex 3 SDK and BlazeDS releases on Adobe’s new Open Source site! Flex Builder 3 is live too! But wait, there’s more! AIR 1.0 is also live: Woo hoo!
          360 Flex in Atlanta        
I’m at 360 Flex this week in Atlanta! I’ll be presenting at 10am on Tuesday with an in-depth look at the open source Flex 3 SDK! Drop on by if you’re around at the conference!
          CMSday – Open Source in the service of your digital strategy        

On June 17, CMSday was held in Paris, a day devoted to open source content management solutions. This year two major themes were set: one technical, with talks focused on CMSs and their specific features, and the other around the content itself and the ways it is displayed. Many participants were on hand to offer...

This article, CMSday – Open Source in the service of your digital strategy (CMSday – l’Open Source au service de votre stratégie digitale), first appeared on Blogoergosum.

          Finally: Notepad++ Plugin Manager x64        
The 64-bit plugin manager for Notepad++ has finally arrived on Windows!

Did you find yourself without plugins when you mistakenly installed the 64-bit version of the best open source text editor for Windows?

Were you dismayed that the plugin manager is no longer included in the installer?

Image showing that the plugin manager is currently not included in the x64 version of Notepad++
In version 7.4.2 the plugin manager is still missing

No longer sure how to indent your JSON and XML, decode URL encodings, or quickly compare two files?
Well, here is how to restore the plugin manager!

          Rustris Postmortem: Making Tetris with ggez        

You know what game I love? Tetris. An easy game to learn but difficult to master, it has been released on every platform under the sun and still has a strong following after more than 30 years. For my next Rust-based game project, I decided to make my own version of Tetris with basic gameplay features, a menu screen, and game-ending states. After a month of working on it sporadically, I managed to get the project into a state I'm okay with releasing into the wild (the source code is available on GitHub or GitLab). Here are some of my thoughts on the challenges faced and choices made during development.


The main goal of this project was to learn how to create Tetris in Rust. It was not to learn how to make a game engine in Rust. If I had wanted to do that, I would have started with something like SDL2 and worked off of that.

Making Good Games Easily

After spending some time earlier playing around with the Lua-based love2d game framework, picking a Rust framework influenced by love2d seemed like the obvious path to take. ggez fits the bill here, providing easy ways to draw to the screen, play audio, access the filesystem, render text, handle input and deal with timing. It doesn't aim to provide every feature one could want in a game engine, such as entity-component systems or math functions; that functionality can be provided by existing crates. Instead, the focus is on being easy to get up and running, and on being productive quickly without having to think about lower-level operations. Just like the name of the framework says!

Personally, I found ggez quite pleasant and easy to work with. Much of ggez revolves around the EventHandler trait. This trait contains required callbacks that must be implemented (update(), draw()) as well as a bunch of optional input-related callbacks. From there, a developer has free rein to do whatever they please.

As an example, this is what my GameEndState struct looks like:

pub struct GameEndState {
    request_replay: bool,
    request_menu: bool,
    request_quit: bool,
    options: Vec<Option>,
    current_selection: usize,

    game_end_text: graphics::Text,
    final_score_text: graphics::Text,
    final_line_text: graphics::Text,
    final_level_text: graphics::Text,
}
The input system in Rustris is state-based instead of event-based, so for smaller states such as the game over screen, I store a bool for each potential input response from a user[^1]. In the state's update() method, I check whether any of these switches have flipped and act accordingly. Options were abstracted into an Option struct, so I store a vector of those as well as the currently selected option. Finally, when the GameEndState is created, so are any graphics::Text objects that are required to display some information on screen. These are stored in the state and drawn to the buffer every frame.
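To make the polling concrete, here is a stripped-down sketch of that idea (simplified names mirroring the post; this is not ggez's API, and the real struct also holds the options and graphics::Text fields): input callbacks flip bools, and update() polls and clears them once per frame.

```rust
// Sketch of state-based input: callbacks set flags, update() polls them.
#[derive(Default)]
struct GameEndState {
    request_replay: bool,
    request_menu: bool,
    request_quit: bool,
}

impl GameEndState {
    // Would be called from a key-press callback in the real game.
    fn on_select(&mut self, choice: &str) {
        match choice {
            "replay" => self.request_replay = true,
            "menu" => self.request_menu = true,
            _ => self.request_quit = true,
        }
    }

    // Check whether any switch has flipped, clearing it so the request
    // fires only once; the result would then drive a state change.
    fn update(&mut self) -> Option<&'static str> {
        if std::mem::take(&mut self.request_replay) {
            Some("replay")
        } else if std::mem::take(&mut self.request_menu) {
            Some("menu")
        } else if std::mem::take(&mut self.request_quit) {
            Some("quit")
        } else {
            None
        }
    }
}

fn main() {
    let mut state = GameEndState::default();
    state.on_select("menu");
    assert_eq!(state.update(), Some("menu"));
    assert_eq!(state.update(), None); // the switch was cleared
}
```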

The simplicity of ggez's API allowed me to just focus on making the game. Each state contained what it needed to do its job, and that's it. Whenever I found myself repeating code across states, I'd lift that code into a module shared between states. For example, my Option struct, which is used across multiple states (MenuState, GameEndState), lives in a shared module. It feels simple and clean, as well as easy to extend later on.

Bumps in the Road

Once work on the main ‘play field’ state was finished and plans for the menu state began, I had noticed a couple of limitations:

  1. I had created a Transition enum listing potential transitions between states within a state manager. I wanted both update() and draw() to return said Transition[^2] so that a state's update() method could request state changes, but the ggez EventHandler trait is hardcoded to return an empty tuple.

  2. What if I wanted to create an Assets object that held all my assets that lived at the top-level of my object hierarchy? This way, a reference to that object can be passed to any active states that may need access to an image, or a sound.

  3. If Transition was defined in my code, how would EventHandler even know about it? This is when I first jumped into the ggez event module source code in an attempt to make the return type generic. This ended in a rabbit hole filled with Boxes and, eventually, failure.

Thankfully, ggez is written in a clean and modular way - no spaghetti here. I was able to simply make my own copy of the event module and make my needed changes. I think ideally there would be a way to define return types for EventHandler from the framework user's end, but for now, this will do. The problem of passing Assets seems like a harder one to solve. I don't believe Rust currently has the option to call functions with a variable number of arguments or optional arguments. For now, editing the event module will work as a solution.

Other stuff

Another issue I had actually has to do with one of ggez's dependencies: rodio, created by master crate creator tomaka. Currently, it doesn't have a way to stop audio that is being played. There is a PR ready to be merged that implements this, and then ggez simply needs to offer a high-level interface to stop audio[^3]. This isn't a huge deal, but it will be nice to have this functionality when it is finally implemented.

The design for both the Assets struct and Transitions enum were shamelessly inspired by the Rust-based Amethyst game engine, another excellent project. It has tons of great ideas and I’m excited to start playing around with it in the future on larger projects.

I implemented a state manager inspired by the "Pushdown Automata" discussed in the "State" chapter of the excellent Game Programming Patterns. I believe that Amethyst uses a similar pattern, which makes sense, given that my Transition enum, used to transition between game states, was inspired by Amethyst's Trans enum. With a state stack, all one needs to do to add a 'pause' state or 'inventory' screen is push that state onto the stack when needed and pop it off when done.
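As a sketch of that pushdown pattern (illustrative names only, not Amethyst's or ggez's actual API), a state stack plus a Transition enum can look like this; the demo states immediately pause and unpause just to exercise the stack.

```rust
// Pushdown state manager sketch: states return a Transition, and the
// manager pushes or pops accordingly. A paused PlayState stays on the
// stack, frozen, until the PauseState above it pops off.
enum Transition {
    None,
    Push(Box<dyn State>),
    Pop,
}

trait State {
    fn name(&self) -> &'static str;
    fn update(&mut self) -> Transition;
}

struct PlayState;
impl State for PlayState {
    fn name(&self) -> &'static str { "play" }
    // For the demo, the play state immediately requests a pause screen.
    fn update(&mut self) -> Transition { Transition::Push(Box::new(PauseState)) }
}

struct PauseState;
impl State for PauseState {
    fn name(&self) -> &'static str { "pause" }
    // For the demo, the pause state immediately unpauses.
    fn update(&mut self) -> Transition { Transition::Pop }
}

struct StateStack {
    stack: Vec<Box<dyn State>>,
}

impl StateStack {
    fn top_name(&self) -> Option<&'static str> {
        self.stack.last().map(|s| s.name())
    }

    // Only the top state runs each frame.
    fn update(&mut self) {
        let transition = match self.stack.last_mut() {
            Some(top) => top.update(),
            None => return,
        };
        match transition {
            Transition::None => {}
            Transition::Push(state) => self.stack.push(state),
            Transition::Pop => { self.stack.pop(); }
        }
    }
}

fn main() {
    let mut manager = StateStack { stack: vec![Box::new(PlayState) as Box<dyn State>] };
    manager.update(); // play pushes pause
    assert_eq!(manager.top_name(), Some("pause"));
    manager.update(); // pause pops itself
    assert_eq!(manager.top_name(), Some("play"));
}
```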

I’m not an artist nor a musician, so I had to rely on open source and freely licensed assets. All the audio and sound effects were found online, and I made some of the assets myself, such as the black hole graphic on the menu and the background of the play field. It's not the best-looking or best-sounding game in the world, but for an educational, non-commercial project, I think it did the job.

Final Thoughts

Tetris is a fun game to write in Rust. I don’t think that my code is some kind of incredible work of art, a standard that other code should be judged against (if anything, it’s probably the opposite) but I had lots of fun and I learned quite a bit about making games, making games in Rust and Rust itself. That was my goal, so mission accomplished! You can check out the source code on GitHub or GitLab. I don’t have access to a Windows or Mac right now so I unfortunately cannot produce any binaries to distribute this at the moment.

ggez is a great framework to use for small 2D games. I was able to talk with Icefox, one of the library authors, at the Game Development in Rust meetup in Toronto on July 7th, 2017. He's a nice and smart guy, and after watching him talk about the future of ggez during his presentation of the framework, I have 100% confidence that it will continue to grow into a great framework for making good games easily.

[^1]: For states with more input, such as my PlayState, I threw all the potential state bools into their own struct; it makes the code easier to read, in my opinion.

[^2]: Wrapped in ggez's custom Result type, GameResult.

[^3]: Yes. Just "simply" implement it. You know, easy!

          Playtime with Phaser        

For the past few days, I’ve been playing around with Phaser, a game engine written in JavaScript. I decided that I’d like to experiment with game development for the next month. It has been a career path I’ve considered on and off throughout my life, but I need to put the idea to the test: do I actually want to make games, or do I just find the thought of doing it appealing? Let’s find out!

Struggle of Inaction

A lot of new game developers (including myself) seem to fall into a trap. Do you want to make a game, or do you want to learn how to program games through engines? My problem was that I was obsessed with the idea that I had to start out learning how it all worked from the lowest level. “If I use an engine,” I would think to myself, “and it limited my grand ambitious game, whatever would I do?!” I would then attempt to learn OpenGL, get frustrated trying to tie everything together into something that resembles a game and give up.

The fear of using an engine without fully understanding how games work was a huge concern, although I don’t know why. I don’t understand how my car works but that doesn’t stop me from using it. I think I was partially making excuses to justify the Not Invented Here Syndrome I was experiencing. If I really want to make games, an engine is probably the best place to start. When I run into issues due to gaps in my knowledge, I’ll just fill those gaps in as I need to.

Select your Engines

With a renewed sense of resolve, I now had to choose a language and engine to work in. I decided that my first project would be a simple platforming-style game so heavy engines like Unreal or CryEngine seemed a bit over-the-top. Unity is another very popular engine with indie and hobbyist developers, and has the ability to export your game to a variety of platforms. In the end, I felt forced into using their GUI app to get stuff done. I’m just not a big fan of using mice!

It had been a few years since I last looked at JavaScript game engines. Back then, audio on the web wasn’t well-supported. The Web Audio API is now here but still doesn’t have full browser support (looking at you, IE). Regardless, I feel like JavaScript speed increases, audio API improvements and the growth of the JS game development community has made this a good time to try out writing a game in JavaScript.

There are quite a few mature JS game engines around, such as MelonJS, ImpactJS, and Phaser. I ended up choosing Phaser for a few reasons:

  1. The Website looked good (Hey, what can I say, this actually matters to me)
  2. Excellent examples site and documentation. These two were fairly important in my final decision. The examples site contains many (you guessed it) examples of different ways to implement everything from animation via spritesheets, using the camera, music and more. I’m a learn-by-example kind of person, so this was perfect for me. The documentation is pretty great, and the source code itself is well commented and easy to understand.
  3. Price & license. ImpactJS costs $99USD for a license. While you receive the source code with your download, you can’t contribute upstream from what I understand. Both MelonJS and Phaser are open source under the MIT license. I’d prefer to use an open source engine if I could, as I’d like to contribute any bug fixes I may come across with the engine itself.
  4. Built-in support for levels created with Tiled, a tile-based map editor. This wasn’t a huge deal but it is nice to have.

Both Phaser and MelonJS seemed equally capable. I found more questions asked about Phaser on StackOverflow (287 vs. MelonJS's 47 as of this post). While StackOverflow question counts aren't the greatest metric to judge an engine by, I just wanted to make a quick choice and this shallow justification did the job.

Initial thoughts & issues

Now that I had my engine chosen, it was time to get started. This is where I encountered my first roadblock: I had no art. I needed some tilesets and character sprites to start off with. Luckily, there are plenty of freely licensed artwork packs available for those of us who are not artistically inclined. One such pack, Platformer Art Complete is a huge pack of tiles, items, character sprites and more made by an awesome dude named Kenney.

After creating a small, simple level with a few platforms, I loaded it into the engine and noticed that some platform tiles were not the ones I had selected in Tiled. The spritesheet included in the pack I was using had some excess art on the right margin. Spritesheets work by providing the size of each frame in the sheet and the number of rows and columns of frames. So if Tile N is on row 3, column 3 of the sheet, and each frame is 50 pixels square, then grabbing the subsection of the image at x=100, y=100 should get us the desired frame. However, while Tiled was smart enough to ignore the excess art on the spritesheet, there is no way Phaser could possibly know about it; it just knows how big each frame is and the number of frames per row and column. I found a version of Kenney's platformer base pack that was made to work with Tiled, and after switching to it, my problems went away. When I eventually create my own art, I'll be able to control how the spritesheet is laid out. Until then, be mindful of your art!
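The frame arithmetic is engine-agnostic; here it is as a hypothetical helper sketched in Rust (the language used elsewhere in this collection), not anything from Phaser's API. The point is that the column count is baked into the lookup, so extra margin art that changes the effective column count shifts every computed frame.

```rust
// Hypothetical helper: compute the pixel offset of a frame in a
// spritesheet from its 0-based index, the number of columns, and the
// (square) frame size in pixels.
fn frame_offset(index: usize, columns: usize, frame_size: usize) -> (usize, usize) {
    let row = index / columns;
    let col = index % columns;
    (col * frame_size, row * frame_size) // (x, y) in pixels
}

fn main() {
    // Row 3, column 3 (1-based) on a 10-column sheet of 50 px frames
    // is index 22 (0-based), landing at x = 100, y = 100.
    assert_eq!(frame_offset(22, 10, 50), (100, 100));
}
```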

After ensuring the level was being rendered correctly, I added a simple character with basic move left/right controls. So far, so good. I decided to test out the camera functions by creating a level larger than the 1280x720 canvas/game viewport. For some reason, this caused my player sprite to get caught on invisible walls. I did some research and found this thread with someone else describing a similar issue. Luckily, a Phaser developer chimed in stating that he had found the cause and fixed it in the dev branch. Sure enough, using the dev version of Phaser fixed it.

So far, I’m pretty happy with Phaser. Just getting things happening on a screen and reacting to input is pretty awesome. Hopefully I’ll have something to show off in my next post!

          WebVTT: The Middle        

It’s been a while since my last post about WebVTT. A lot has been going on since that last post in the beginning of March. The OSD team gave two presentations about WebVTT to Mozilla Toronto: one during a class visit to MoTo and another during MoTo’s Open Web Open Mic event to some of the Toronto open source community. Both events were a lot of fun! I was surprised by how excited some people seemed to get when they saw WebVTT. When you work on a project, it can be hard to take a step back and see the impact your project has on others, especially when you can get so invested in said project. Good job team!

I haven’t been able to spend as much time on WebVTT as I would have liked over the past month. After the push to get things working for the first Mozilla Toronto demo, I had an avalanche of final assignments fall onto me. Now that they are finally behind me, I’ve been able to get back into the swing of things.

Review Issues

While I’ve been busy with school, rillian has been working on the patch. Thanks a lot, rillian! There have been several iterations of the patch since my last post. As always, you can see the latest updates on the Bugzilla bug. It’s been reviewed quite extensively by bz and Ms2ger, who brought up a few issues.

One of the issues had to do with nested templates. We were using nsTArrays like this: nsTArray<nsRefPtr<TextTrackCue>>, which reads as ‘an array of nsRefPtrs which refer to TextTrackCue objects.’ The >> at the very end was being lexed as the right-shift operator by some older, pre-C++11 compilers. So we just added some whitespace to separate the closing brackets: nsTArray< nsRefPtr<TextTrackCue> >. Easy fix!

In the review, it was suggested that we convert nsAString out parameters in class getters to DOMString, a relatively new class in the Mozilla codebase. I went ahead and did this, but it was not an easy fix. In fact, it was pretty messy, and involved extracting an nsStringBuffer from an nsAString object and using that to set the value of the DOMString out parameter.

The code used to set getters went from this:

void GetKind(nsAString& aKind) const
{
  aKind = mKind;
}

to this:

void GetKind(DOMString& aKind) const
{
  aKind.SetStringBuffer(nsStringBuffer::FromString(mKind), mKind.Length());
}

Yikes. That’s what I call messy. To be fair, this is the only way to set the value of a DOMString from an nsString, the type used internally to store strings.

It didn’t get accepted. In retrospect, it would have been easier to adjust the internal members of the TextTrack* objects to use a DOMString rather than an nsString. Once I confirm with others that this is the best way to go about the fix, I’ll probably end up doing just that.

The first fix ended up making its way into the patch on Bugzilla. The pull request on GitHub is still open due to the DOMString issue. If you’re interested in following that particular issue, you can check it out here.


Something I’ve wanted to start adding to the WebVTT DOM implementation for a while now is exception handling. Firefox’s C++ code doesn’t use C++ exceptions; I’m talking about JavaScript exceptions. For instance, if one tries to use the removeCue(cue) method on a TextTrack JavaScript object whose list of cues doesn’t contain cue, a NotFoundError exception should be thrown.

In order to do this, we first need to edit the WebIDL file which defines the DOM interface. We must add a [Throws] declaration in front of every method which throws an exception. For example, in TextTrack.webidl:

[Throws]
void addCue(TextTrackCue cue);
[Throws]
void removeCue(TextTrackCue cue);

By using some crazy code-generation wizardry, this creates the necessary DOM bindings to use exceptions. More specifically, the generated binding passes an ErrorResult& as the last argument to our method. We can use this object to throw exceptions.

Let’s take a look at TextTrack::RemoveCue and how we can throw an exception if the cue passed as an argument is not already in the internal TextTrackList.

In our header file:

void RemoveCue(TextTrackCue& aCue,
               ErrorResult& aRv);

In the implementation file:

void
TextTrack::RemoveCue(TextTrackCue& aCue,
                     ErrorResult& aRv)
{
  // If the given cue is not currently listed in the
  // method's TextTrack object's text track's text
  // track list of cues, then throw a NotFoundError
  // exception and abort these steps.
  if (!DoesContainCue(aCue)) {
    // Throw through the ErrorResult the bindings passed us.
    aRv.Throw(NS_ERROR_DOM_NOT_FOUND_ERR);
    return;
  }

  // Remove cue from the method's TextTrack object's text track's text track
  // list of cues.
}

The generated bindings make it very easy to throw an exception. Now that an ErrorResult& is being passed to our method, we simply call Throw on the object and pass the error. One could search on MXR for the particular error macro/constant they need; luckily, bz told me which to use in #content.

Initially, I was a bit confused when trying to implement exceptions. While looking through MXR to see how other classes implemented them, I’d often see ErrorResult initialized as a local variable within a method. It turns out that pattern is only used by methods that call (and handle failures from) throwing methods; methods that throw do so through the ErrorResult& argument passed in by the generated binding methods. Oh, you Mozilla engineers. You guys/gals are magic.

That particular pull request is still open at the time of writing. You can check out that pull request here.


I’m really not sure if this release can be considered a 1.0 release. The code still hasn’t landed in Firefox (although it seems close). There are still things to do! Some of those things include: maintaining and using TextTrack.activeCueList, finishing the implementation of exceptions on the rest of the TextTrack* classes, and fixing all of the issues raised in the reviews done by bz and Ms2ger. Concurrently, I believe we are at the point where we can start using the MochiTests written by Marcuus and Jordan to bring other issues to light.

A big issue is the ever-shifting nature of the WebVTT spec. It seems like every week the spec changes - and not always for the better. For instance, there is now a list of rules associated with every TextTrack that defines how its text tracks should be rendered. Also, TextTrackCue is now an abstract class, and we are supposed to derive a new class from it: WebVTTCue. Not all of these changes make sense to me (rules for updating cues? Why?), but I guess that is part of the age-old rift between spec writers and implementers.

Between the spec being in flux and the other items still on my to-do list, there is still quite a bit to be done before I’d be comfortable slapping the ol’ 1.0 tag on. Now that school is officially complete forever, I should have much more time to spend on WebVTT.

The Middle

Speaking of school being officially done forever, this is my last official ‘for school’ post on my blog. This isn’t the end, though - not by a long shot. I’ll be continuing my work on WebVTT as a volunteer. I’m excited to see what happens with the project in the coming months!

Thanks to Humph for teaching the OSD classes to begin with. If you are a student at Seneca in CPA or BSD, do yourself a favour and take these classes. The OSD classes are the best professional options offered at Seneca, in my opinion. I’d also like to thank my OSD teammates for a crazy 8 months. We’re still alive! Success!

          [urls/news] The Global Grid: China's HPC Opportunity        
Thursday, November 11, 2004
Dateline: China
For this posting, I'm using an annotated urls format.  Let's begin.
The Global Grid
Grid computing.  HPC (high-performance computing).  Lots of trade press coverage.  Lots of academic papers.  Generally, this is a GREAT convergence.  The same didn't hold with AI (artificial intelligence), but the coverage of grid computing is much more pervasive.  Also, it's an area where I believe that systems integrators (SIs) in China can play with the globals.  It's new enough that there are no clear leaders.  Okay, maybe IBM is a clear leader, but it's certainly not an established market.
It's also a market where Chinese SIs can leverage work done for domestic applications for Western clients.  This is NOT true in areas such as banking applications; the apps used in China are very different from the apps used in the States.  Fundamentally different systems.  But a lot of grid work is more about infrastructure and custom development.  There's also a lot of open source in the grid sphere.
I've selected some of the best papers and sites for review.  This is certainly not meant to be comprehensive, but simply follow the links for more info.
One last note:  Clicking on any of the following links will likely lead you to an abstract and possibly to some personal commentary not included in this posting.  You may also find related links found by other Furl users.
The "Bible" of the grid world.  The home page will lead to many other relevant papers and reports.  See also The Anatomy of the Grid (PDF).
Hottest journal issue in town!!  Papers may be downloaded for free.  See also Grid computing: Conceptual flyover for developers.
One of the better conferences; covers applications and provides links to several excellent papers and presentations.
Well, the link has been replaced.  Try to get hold of this paper; it WAS available for free.  SOA meets the grid.  The lead author, Liang-Jie Zhang, is a researcher at the IBM T.J. Watson Research Center and chair of the IEEE Computer Society Technical Steering Committee (technical community) for Services Computing.  Contact him at .  Ask for his related papers, too.
Several excellent papers; recent conference.  Middleware:  Yes, middleware is the key to SI opportunities.
Conference held earlier this month!!  See who is doing what in China.
Want a competitive edge in the grid space?  This is it!!
NOTE:  A search for "grid computing" in my Furl archive yields 164 hits (and most are publicly searchable).  See .
Other News
Outsourcing & Offshoring:
I don't agree with this, but it's worth reading, especially considering the source.  I agree that China shouldn't try to be a clone of India, but the arguments in support of the domestic market don't consider margins.
I'll be writing a column for the AlwaysOn Network about the disconnect between China's foreign policy initiatives and the realities of the IT sector.  Suffice it to say that SIs in China should NOT chase after the EU.  Again, do NOT confuse foreign policy with corporate policy!!
More of the same.  Read my comments about Romania by clicking the link ...
Google is coming to China, too.  Think MS Research in Beijing.
Another great move by IBM; they're clearly leading the pack.
This article is a bit confusing.  I suspect that TCS is simply copying the IGS China strategy.  But it's worth noting that they're moving beyond servicing their American clients with a presence in China.
Yes, yes and yes.  Expect a lot more of this.  I wouldn't be surprised to see China's SIs forced to move a bit lower on the U.S. SI food chain for partnerships.  Move up the chain by thinking verticals!!
No need to click; it's all about security.
No, not really a new model; more about a new certification!!  Just what the world needs ...
Enterprise Software:
The title says it all.
Maybe the "P" in "LAMP" should stand for "Plone"?
A strategy for USERS, i.e., SIs in China.
Marketing & Management:
Product Management 101, courtesy of the Harvard Business School.
Spread this throughout your organization ... and then ramp up with some paid tools.
SCM (supply chain management) meets marketing, but with a general management and strategy slant.
G2 planning strategies.  A wee bit mathematical, but still fairly easy to follow.
Expect the next original posting in two or three weeks; my next column for the AlwaysOn Network will be sent to this list.  Off to HK/SZ/ZH/GZ next week.
David Scott Lewis
President & Principal Analyst
IT E-Strategies, Inc.
Menlo Park, CA & Qingdao, China (current blog postings optimized for MSIE6.x) (access to blog content archives in China) (current blog postings for viewing in other browsers and for access to blog content archives in the US & ROW) (AvantGo channel)
To automatically subscribe click on .

          [news/commentary] Building ISV Relationships: Targeting SMEs - Part I        
Thursday, September 16, 2004
Dateline: China
New column on the AlwaysOn Network.  It's on the potential downside of offshoring (the downside for the States, that is).  For the next five days, see ; the permanent link is at .  It drew the ire of a lot of readers and a lot of views (I'm projecting nearly 500 in less than one day).  The article that was the basis for my column is getting a lot of attention in the States.  Worth reading.
Building ISV Relationships: Targeting SMEs -- Part I
First, a bit of commentary.  One thing all smart SIs (systems integrators) do is develop partnerships and alliances with ISVs (independent software vendors, i.e., software publishers/software companies in a broad sense).  Of course, it's difficult to be the 1,000th entrant in the game and expect to get any traction/assistance from your ISV partner.
SIs in China ALWAYS use the approach of offering localization services and OFTEN offer to help push an ISV's product within the domestic market in China.  Frankly, this is what the (usually American) ISV wants, too.  Does this strategy work?  Well, sometimes.  However, even in the case of high profile alliances such as some of those Microsoft has in China (and I won't name names to protect the innocent), it's really nothing more than window dressing.  Everything looks good on paper, but the reality is something quite different.
Regardless, this does NOT address the need and desire for SIs in China to build their market in the States.  And when this issue becomes center stage, ISVs frequently respond with something bordering on contempt.  Some ISVs are getting clued that their American channel partners absolutely need partners in China and other low(er)-cost development areas in order to win bids.  Let's face it, it's all about closing deals.  And if an ISV's competitors have channel partners which can put together winning bids, perhaps in part (and perhaps in LARGE part) due to an offshoring component with their channel partner's SI partner(s) in China, then the ISV with an indirect link to China has a competitive advantage.  I don't view this as a sufficient condition to winning bids, but it's increasingly a necessary condition.
Clued ISVs want their American channel partners to have an offshoring option, but this requires that their channel partners have relationships with SIs in a country such as China.  But ISVs tend to focus their channel development efforts on their American partners and might develop a couple/few relationships in China, but usually NOT tied to their channel development efforts in the States.  Goofy and shortsighted, to say the least.
But how can SIs in China get traction with American ISVs, especially since they're almost always late to the game (in other words, the American ISV already has a well-developed channel)?  The answer (or, at least one answer):  Focus on servicing the needs of SMEs (small and medium enterprises, which is also referred to as "SMBs" -- small and medium businesses).
There's another reason this makes sense:  Most of the SIs in China are already focused on servicing SMEs/SMBs in China.  It might be nice to bag a large SOE (state-owned enterprise), but the reality is that most firms in China, especially the burgeoning number of privately-held firms, are SMEs by definition.  Hence, the experiences gained by SIs in China are already within the same market, although I'd be the first person to warn that company size and even similar domains do not necessarily equate to directly transferable skills.  Fact is, things in China are often quite different from the way they are in the States, especially in a "hot" ITO (IT outsourcing) market like financial services.  More about this in a forthcoming posting.  Bottom line:  Give serious thought to targeting the SMB/SME market in the States.  (Part II of this commentary might be a while in coming.)
IT Tidbits
Lots of tidbits this week.
Controlling project costs.  My favorites:  Scope creep, not understanding project financing, "big-bang" projects, overtesting (although I'm not sure I agree with this one), poor estimating.  Good stuff, with recommended solutions.  See .
Challenges for China's SIs.  Adapted from a Forrester report.  For starters, how about:  improving account management (are there really any account managers in China, or at least any who can manage accounts with U.S. clients?), moving away from technology-centric messages that often alienate business buyers (better yet, moving away from messages in Chinglish), investing in vertical-specific skills (how many times have I said this?) and becoming more multicultural organizations (yes, and let's start with learning English!).  See .
"Yee Haw" as an outsourcing option.  Forget India.  Forget China.  Forget the Philippines.  Let's go to Arkansas!!  See .
American start-ups go offshore.  Try Corio (is Corio really a start-up?), CollabNet, Aarohl, Infinera, and many others.  See .  Another good article with a BPO spin in Venture Capital Journal, .
Offshoring's mixed results.  "Vietnam and Myanmar were also in demand ..."  Really?  See .
Looking for SI partners?  Kennedy ranks the largest firms.  As I've said in the past, I like their reports.  (No, I don't get a cut.)  Satyam and TCS didn't make the grade, though.  See .
Another challenge to conventional outsourcing and offshoring "wisdom."   "Services-driven development models, such as the one at work in India, broaden the global competitive playing field.  As a result, new pressures are brought to bear on hiring and real wages in the developed world - pressures that are not inconsequential in shaping the jobless recoveries unfolding in high-cost wealthy nations.  For those in the developed world, successful services- and manufacturing-based development models in heavily populated countries such as India and China - pose the toughest question of all: what about us?"  For more, see .
Forget the Golden Triangle.  How about China + India vs. the world (or, sans the world)?  "Newspaper headlines portray China as the world's manufacturing base for low-cost goods, like clothing and shoes, and India as the global IT monopoly-to-be.  Unfortunately, media outside Asia have failed to acknowledge the growing partnership between the two giants."  "Given the complementary nature of their economies and the size of their markets (nearly 2.2 billion people in total), the nascent cooperation between the two holds the potential to dramatically alter the world trade balance.  A perusal of the Shanghai technology corridor reveals a hint of the countries' industrial interconnectedness.  Walk through one of the main complexes in Shanghai's Pudong Software Park, and you will see a prominently displayed sign for Infosys, one of India's most respected IT firms.  The same complex also holds Satyam, the first of India's software service companies to set up offices in Shanghai.  Nearby are the headquarters of the largest software services company in Asia, Tata Consultancy Services (TCS), which currently runs an outsourcing center for GE in the town of Hangzhou.  TCS is owned by the Tatas, one of India's most prominent business families.  Across the river is NIIT, the principal software training center in India's private sector.  NIIT, operating in China since 1998, now runs an extensive two-year course in 25 provinces, training around 20,000 students to be software professionals.  There is widespread speculation that Wipro, India's only giant IT firm without a presence in the city, will establish a Shanghai office very soon.  It is no surprise that Indian software companies are setting up in China. They, like everyone else, sense great opportunity in one of the largest, fastest-growing economies in the world."  (Bold is my emphasis.)  All true, and they even forget MphasiS.  See one of my must-read sources, YaleGlobal .
The partnering wave of the future.  I've talked about this many times in previous postings.  This time CTG dances with Polaris Software.  See .
CMMi:  The key to success.  A little simplistic and uses incorrect definitions, but still worth reading.  See .
How about Microsoft vs. China in an AO "Grudge Match"?  See a lengthy article in CFO titled, "Does Microsoft need China?"; link at .  China: The champion of open source!!
Business creativity 101.  "A new book from Wharton School Publishing, The Power of Impossible Thinking by Jerry Wind and Colin Crook prompts you to rethink your mental models and transform them to help you achieve new levels of creativity. In this book, the authors give a set of guidelines on how to see differently."  Examples:  Listen to the radicals; embark on journeys of discovery; look across disciplines.  See .
The innovator's battle plan.  "Great firms can be undone by disruptors who analyze and exploit an incumbent's strengths and motivations.  From Clayton Christensen's new book Seeing What's Next."  GREAT stuff (although John Dvorak won't like it).  What about asymmetric warfare theories applied to the realm of corporate innovation and creativity?  Just a thought ...  See .
Your next competitors?  Have you thought about Senegal, Uganda, Kenya, Sri Lanka and Bangladesh, especially in the BPO space?  See .
Message to product companies: go sell services!!  Interesting take from a VMI perspective.  See .
Don't know much about bloggin'?  Good take on the various types of corporate blogs.  See .
Urls as web services?  You have to read it to get it.  Might be a bit too much for the uninitiated ...  See .
Joel is back and blogging!!  Joel takes on Jakob Nielsen in "it's not just usability."  See .
How about open source software for HPC?  See .  Geek alert, geek alert!!
Saving the best for last: a piece on Woz.  See .
TTFN.  Expect a urls update before I go back to the States.

          [news] "2004 State of Application Development"        
Friday, August 13, 2004
Dateline: China
Special issues of journals and magazines are often quite good -- if you're into the subject matter.  But the current issue of VARBusiness is absolutely SUPERB!!  EVERY SYSTEMS INTEGRATOR SHOULD READ IT ASAP -- STOP WHAT YOU'RE DOING AND READ THIS ISSUE!!  (Or, at the very least, read the excerpts which follow.)  See .  They even have the survey results for 36 questions ranging from change in project scope to preferred verticals.  In this posting, I'm going to comment on excerpts from this issue.  My comments are in blue.  Bolded excerpted items are MY emphasis.
The lead article and cover story is titled, "The App-Dev Revolution."  "Of the solution providers we surveyed, 72 percent say they currently develop custom applications or tailor packaged software for their customers. Nearly half (45 percent) of their 2003 revenues came from these app-dev projects, and nearly two-thirds of them expect the app-dev portion of total revenue to increase during the next 12 months."  I view this as good news for China's SIs; from what I've observed, many SIs in China would be a good fit for SIs in the U.S. looking for partners to help lower their development costs.  "By necessity, today's solution providers are becoming nimbler in the software work they do, designing and developing targeted projects like those that solve regulatory compliance demands, such as HIPAA, or crafting wireless applications that let doctors and nurses stay connected while they roam hospital halls."  Have a niche; don't try to be everything to everyone.  "Nine in 10 of survey respondents said their average app-dev projects are completed in less than a year now, with the smallest companies (those with less than $1 million in revenue) finishing up in the quickest time, three months, on average."  Need for speed.  "The need to get the job done faster for quick ROI might explain the growing popularity of Microsoft's .Net framework and tools.  In our survey, 53 percent of VARs said they had developed a .Net application in the past 12 months, and 66 percent of them expect to do so in the coming 12 months."  My Microsoft build-to-their-stack strategy.  "Some of the hottest project areas they report this year include application integration, which 69 percent of VARs with between $10 million or more in revenue pinned as their busiest area.  Other top development projects center around e-commerce applications, CRM, business-intelligence solutions, enterprisewide portals and ERP, ..."  How many times have I said this?    
"At the same time, VARs in significant numbers are tapping open-source tools and exploiting Web services and XML to help cut down on expensive software-integration work; in effect, acknowledging that application development needs to be more cost-conscious and, thus, take advantage of open standards and reusable components.  Our survey found that 32 percent of VARs had developed applications on Linux in the past six months, while 46 percent of them said they plan to do so in the next six months.  The other open-source technologies they are using today run the gamut from databases and development tools to application servers."  I guess there's really an open source strategy.  I come down hard on open source for one simple reason:  I believe that SIs in China could get more sub-contracting business from a build-to-a-stack strategy.  And building to the open source stack isn't building to a stack at all!!  "As a business, it has many points of entry and areas of specialization.  Our survey participants first arrived in the world of app dev in a variety of ways, from bidding on app-dev projects (45 percent) to partnering with more experienced developers and VARs (28 percent) to hiring more development personnel (31 percent)."  For SIs in China, simply responding to end-user RFQs is kind of silly.  Better to partner on a sub-contracting basis.  "According to our State of Application Development survey, health care (36 percent), retail (31 percent) and manufacturing (30 percent) ranked as the most popular vertical industries for which respondents are building custom applications.  Broken down further, among VARs with less than $1 million in total sales, retail scored highest, while health care topped the list of midrange to large solution providers."  Because of regulatory issues, I'm not so keen on health care.  I'd go with manufacturing followed by retail.  My $ .02.  
"When it comes to partnering with the major platform vendors, Microsoft comes out the hands-on winner among ISVs and other development shops.  A whopping 76 percent of developers in our survey favored the Microsoft camp.  Their level of devotion was evenly divided among small, midsize and large VARs who partner with Microsoft to develop and deliver their application solutions.  By contrast, the next closest vendor is IBM, with whom one in four VARs said they partner.  Perhaps unsurprisingly, the IBM percentages were higher among the large VAR category (those with sales of $10 million or more), with 42 percent of their partners coming from that corporate demographic.  Only 16 percent of smaller VARs partner with IBM, according to the survey.  The same goes for Oracle: One-quarter of survey respondents reported partnering with the Redwood Shores, Calif.-based company, with 47 percent of them falling in the large VAR category.  On the deployment side, half of the developers surveyed picked Windows Server 2003/.Net as the primary platform to deliver their applications, while IBM's WebSphere application server was the choice for 7 percent of respondents.  BEA's WebLogic grabbed 4 percent, and Oracle's 9i application server 3 percent of those VARs who said they use these app servers as their primary deployment vehicle."  Microsoft, Microsoft, Microsoft.  Need I say more?  See .
The next article is on open source.  "Want a world-class database with all the bells and whistles for a fraction of what IBM or Oracle want?  There's MySQL.  How about a compelling alternative to WebSphere or WebLogic?  Think JBoss.  These are, obviously, the best-known examples of the second generation of open-source software companies following in the footsteps of Apache, Linux and other software initiatives, but there are far more alternatives than these.  Consider Zope, a content-management system downloaded tens of thousands of times per month free of charge, according to Zope CEO Rob Page.  Some believe Zope and applications built with Zope are better than the commercial alternative they threaten to put out of business, Documentum.  Zope is also often used to help build additional open-source applications.  One such example is Plone, an open-source information-management system.  What began as a fledgling movement at the end of the past decade and later became known as building around the "LAMP stack" (LAMP is an acronym that stands for Linux, Apache, MySQL and PHP or Perl) has exploded to virtually all categories of software.  That includes security, where SpamAssassin is battling spam and Symantec, too.  Popular?  Well, it has now become an Apache Software Foundation official project.  The use of open source is so widespread that the percentage of solution providers who say they partner with MySQL nearly equals the percentage who say they partner with Oracle: 23 percent to 25 percent, respectively."  There are plenty of choices for those SIs willing to play the open source game.  See .
"It's all about integration" follows.  "There are many reasons for the surge in application-development projects (the recent slowdown in software spending notwithstanding).  For one, many projects that were put on hold when the downturn hit a few years ago are now back in play.  That includes enterprise-portal projects, supply-chain automation efforts, various e-commerce endeavors and the integration of disparate business systems."  Choose carefully, however.  Balance this data with other data.  Right now, I see a lot more play with portals and EAI.  "Indeed, the need for quality and timely information is a key driver of investments in application-integration initiatives and the implementation of database and business-intelligence software and portals.  A healthy majority of solution providers say application integration is a key component of the IT solutions they are deploying for customers.  According to our application-development survey, 60 percent say their projects involved integrating disparate applications and systems during the past 12 months."  "Some customers are moving beyond enterprise-application integration to more standards-based services-oriented architectures (SOAs).  SOAs are a key building block that CIOs are looking to build across their enterprises."  Anyone who regularly reads any one of my three IT-related blogs knows that I'm gung-ho on SOAs.  "Even if your customers are not looking for an SOA, integrating different systems is clearly the order of the day.  To wit, even those partners that say enterprise portals or e-business applications account for the bulk of their business note that the integration component is key."  Yes, integration, integration, integration.  I'll be saying this next year, too.  And the year after ...  "Another way to stay on top of the competition is to participate in beta programs."  Absolutely true -- and a good strategy, too.  See .
The next article is on utility computing versus packaged software.  Again, if you read what I write, you know that I'm also gung-ho on utility computing.  "According to VARBusiness' survey of application developers, more than 66 percent of the applications created currently reside with the customer, while 22 percent of applications deployed are hosted by the VAR.  And a little more than 12 percent of applications developed are being hosted by a third party.  Where services have made their biggest inroads as an alternative to software is in applications that help companies manage their customer and sales information."  The article goes on to state that apps that are not mission-critical have the best chance in the utility computing space.  Time will tell.  Take note, however, that these are often the apps that will most likely be outsourced to partners in China.  "Simply creating services from scratch and then shopping them around isn't the only way to break into this area.  NewView Consulting is expanding its services business by starting with the client and working backward.  The Porter, Ind.-based security consultant takes whatever technology clients have and develops services for them based on need."  And focus on services businesses and .NET, too.  "Most application developers agree that services revenue will continue to climb for the next year or two before it plateaus, resulting in a 50-50 or 60-40 services-to-software mix for the typical developer.  The reason for this is that while applications such as CRM are ideally suited to services-based delivery, there are still plenty of other applications that companies would prefer to keep in-house and that are often dependent on the whims of a particular company."  Still, such a split shows a phenomenal rise in the importance of utility computing offerings.  See .
Next up:  Microsoft wants you!!  (Replace the image of Uncle Sam with the image of Bill Gates!!)  Actually, the article isn't specifically about Microsoft.  "Microsoft is rounding up as many partners as it can and is bolstering them with support to increase software sales.  The attitude is: Here's our platform; go write and prosper.  IBM's strategy, meanwhile, is strikingly different.  While it, too, has created relationships with tens of thousands of ISVs over recent years,  IBM prefers to handpick a relatively select group, numbering approximately 1,000, and develop a hand-holding sales and marketing approach with them in a follow-through, go-to-market strategy."  Both are viable strategies, but NOT both at the same time!!  "To be sure, the results of VARBusiness' 2004 State of Application Development survey indicates that Microsoft's strategy makes it the No. 1 go-to platform vendor among the 472 application developers participating in the survey.  In fact, more than seven out of 10 (76 percent) said they were partnering with Microsoft to deliver custom applications for their clients.  That number is nearly three times the percentage of application developers (26 percent) who said they were working with IBM ..."  Percentages as follows:  Microsoft, 76%; IBM, 26%; Oracle, 25%; MySQL, 23%; Red Hat, 17%; Sun, 16%; Novell, 11%; BEA, 9%.  I said BOTH, NOT ALL.  Think Microsoft and IBM.  However, a Java strategy could be BOTH a Sun AND IBM strategy (and even a BEA strategy).  See .
There was another article I liked called, "How to Team With A Vendor," although it's not part of the app-dev special section per se.  This posting is too long, so I'll either save it for later or now note that it has been urled.  See .  Also a kind of funny article on turning an Xbox into a Linux PC.  See .  See also .
Quick note:  I'll be in SH and HZ most of next week, so I may not publish again until the week of the 23rd.
David Scott Lewis
President & Principal Analyst
IT E-Strategies, Inc.
Menlo Park, CA & Qingdao, China (current blog postings optimized for MSIE6.x) (access to blog content archives in China) (current blog postings for viewing in other browsers and for access to blog content archives in the US & ROW) (AvantGo channel)
To automatically subscribe click on .

          Open Source at The Large Hadron Collider and Data Gravity        
I am delighted to announce a new Open Source cybergrant awarded to the Caltech team developing the ANSE project at the Large Hadron Collider. The project team, led by Caltech Professor Harvey Newman, will be further developing the world’s fastest data forwarding network with OpenDaylight. The LHC experiment is a collaboration of the world’s top universities […]
          The First Open Source Project to Win the Interop Grand Prize…        
… is none other than…  (drum roll, please!) … our one-year-old baby, OpenDaylight! My heartfelt congratulations go to the OpenDaylight committers and contributors, the open source collaborators who have poured their hearts and souls into this wonderful project. This is indeed a remarkable event, considering the skepticism surrounding its start just about one year […]
          In Search of The First Transaction        
At the height of an eventful week – Cloud and IoT developments, Open Source Think Tank,  Linux Foundation Summit – I learned about the fate of my fellow alumnus, an upperclassman as it were, the brilliant open source developer and crypto genius known for the first transaction on Bitcoin. Hal Finney is a Caltech graduate who went […]
          Open Source is just the other side, the wild side!        
March is a rather event-laden month for Open Source and Open Standards in networking: the 89th IETF, EclipseCon 2014, RSA 2014, the Open Networking Summit, the IEEE International Conference on Cloud (where I’ll be talking about the role of Open Source as we morph the Cloud down to Fog computing) and my favorite, the one […]
          My Top 7 Predictions for Open Source in 2014        
My 2014 predictions are finally complete.  If Open Source equals collaboration or credibility, 2013 has been nothing short of spectacular.  As an eternal optimist, I believe 2014 will be even better: Big data’s biggest play will be in meatspace, not cyberspace.  There is just so much data we produce and give away, great opportunity for […]
          The Age of Open Source Video Codecs        
The first time I met Jim Barton (DVR pioneer and TiVo co-founder) I was a young man looking at the hottest company in Silicon Valley in the day: SGI, the place where Michael Jackson and Steven Spielberg just arrived to visit, the same building in Mountain View as it were, that same week in late […]
          UNIX / Linux Systems Administrator (M/F) - ACENSI - Lille        
As part of the growth of the Lille office, and to meet the needs of one of our clients, we are looking for a systems and network administrator. Your responsibilities will be: administration and operation of open source (Red Hat) systems; deployment of new Linux and Windows platforms and keeping them in operational condition; tracking and resolving Level 2/3 incidents across all server issues. ...
          Comment on Open Source Flying by mosleybond5257        
I’d seen the Gmail-Hosted one some weeks ago. Great [another] job by JraNil!
          GnomeSword 2.2        

Merry Christmas from the GnomeSword development team!

Release 2.2 (STABLE) of GnomeSword Bible study software is available. GnomeSword builds on the support of Crosswire Bible Society's cross-platform open source tools for scriptural study, to provide a high-quality study environment for those working within the GNOME desktop.

This stable release is the culmination of the extended 2.1.x unstable/development period.

Check it out here
          Apple makes Swift available as open source        

Get stuff done with Nitro

Version: 1.5

Nitro makes task management super easy and awesome. It's super fast and simple, and it works offline, without an internet connection. Nitro also packs Dropbox and Ubuntu One sync.

Nitro has a bunch of awesome features including:

- Dropbox and Ubuntu One Sync
- Magic Sort
- Smart Lists
- Search
- Themes
- Translations
- Retina Support
- Keyboard Shortcuts and more!

If you like it, share Nitro with your friends and give it a 5 star rating! If you find a bug or you don't like it, let us know and we'll get it fixed.

Nitro is also free open source software.

batch image converter and resizer

Version: 0.4.9

Converseen is an open source project written in C++ with the powerful Qt4 libraries. Thanks to the Magick++ image libraries it supports more than 100 image formats. You can convert and resize an unlimited number of images to any of the most popular formats: DPX, EXR, GIF, JPEG, JPEG-2000, PDF, PhotoCD, PNG, Postscript, SVG, and TIFF.

With Converseen you can save time because it allows you to process many images with one mouse click!

Converseen is very simple: it features a clean user interface without confusing options.
          Most interesting links of February ’12        
Recommended Readings List of open source projects at Twitter including e.g. their scala_school – Lessons in the Fundamentals of Scala and effectivescala – Twitter’s Effective Scala Guide M. Fowler & P. Sadalage: Introduction into NoSQL and Polyglot Persistence (pdf, 11 slides) – what RDBMS offer and why it sometimes isn’t enough, what the different NoSQL […]
          AJAX Chat By Blueimp        
If you’re looking for a nice open source web chat for your site, I suggest you take a look at AJAX Chat by Blueimp. This free and fully customizable chat client can easily be integrated in a number of common forum systems such as phpBB, MyBB, PunBB, SMF, vBulletin and other PHP community software. Other […]
          Anologue – Open Source Chat Application        
Anologue is like comments, meets im, meets irc, meets your favorite paste app, meets instant coffee. With anologue you can quickly and easily engage in an anonymous (or not) linear dialogue with any number of people. It works by simply clicking on a “new room” link and the application instantly creates a chat room […]
          What is Node Express?        
Node Express is a minimal, open source and flexible node.js web app framework designed to make developing websites, web apps and APIs much easier. Single page applications in particular are easy to develop with node express. This post outlines the node express features, creating a simple REST API and interacting with a Mongo database to get the data [...]

          Amazing Online Roulette Systems        
Free Open Source Roulette System And how to find it? I know you all are looking for the best roulette system, but instead of finding it you always get something acceptable only for a short period of...

Winning Roulette System Tested by People like You. For more information visit the official Money Maker Machine website!
          VLC Player Coming To Android In "A Matter Of Weeks"        

The incredibly popular VLC Player is finally coming to Android after months of hard work by the open source project's developers. Originally a desktop media player for Linux, Windows, and Mac, this versatile player will bring many new video-playing features to our beloved OS, including support for a wide variety of formats such as DivX and Dolby TrueHD. The lead developer on the project, Jean-Baptiste Kempf, has confirmed that it will hit the Android Market in "just a few weeks", which means that Android will finally follow iOS and get its own port of this software (thanks, Mikeyy).

Read More

VLC Player Coming To Android In "A Matter Of Weeks" was written by the awesome team at Android Police.

          Global Big Data Infrastructure Market Growth, Drivers, Trends, Demand, Share, Opportunities and Analysis to 2020        

Global Big Data Infrastructure Market 2016-2020, has been prepared based on an in-depth market analysis with inputs from industry experts. The report covers the market landscape and its growth prospects over the coming years. The report also includes a discussion of the key vendors operating in this market.

Pune, Maharashtra -- (SBWIRE) -- 02/09/2017 -- The Global Big Data Infrastructure Market Research Report covers the present scenario and the growth prospects of the Global Big Data Infrastructure Industry for 2017-2021. The report has been prepared based on an in-depth market analysis with inputs from industry experts, and covers the market landscape, its growth prospects over the coming years, and a discussion of the key vendors operating in this market.

Big data refers to a wide range of hardware, software, and services required for processing and analyzing enterprise data that is too large for traditional data processing tools to manage. In this report, we have included big data infrastructure, which includes mainly hardware and embedded software. These data are generated from various sources such as mobile devices, digital repositories, and enterprise applications, and their size ranges from terabytes to exabytes. Big data solutions have a wide range of applications such as analysis of conversations in social networking websites, fraud management in the financial services sector, and disease diagnosis in the healthcare sector.

Report analysts forecast the Global Big Data Infrastructure market to grow at a CAGR of 33.15% during the period 2017-2021.

Browse more detailed information about the Global Big Data Infrastructure Report at:  

The Global Big Data Infrastructure Market Report is a meticulous investigation of current scenario of the global market, which covers several market dynamics. The Global Big Data Infrastructure market research report is a resource, which provides current as well as upcoming technical and financial details of the industry to 2021.

To calculate the market size, the report considers the revenue generated from the sales of Global Big Data Infrastructure globally.

Key Vendors of Global Big Data Infrastructure Market:
- Dell
- HP
- Fusion-io
- NetApp
- Cisco


Other prominent vendors
- Intel
- Oracle
- Teradata

And many more……


Get a PDF Sample of Global Big Data Infrastructure Research Report at:  

Global Big Data Infrastructure market report provides key statistics on the market status of the Global Big Data Infrastructure manufacturers and is a valuable source of guidance and direction for companies and individuals interested in the Global Big Data Infrastructure industry.

Global Big Data Infrastructure Driver:
- Benefits associated with big data
- For a full, detailed list, view our report

Global Big Data Infrastructure Challenge:
- Complexity in transformation of procured data to useful data
- For a full, detailed list, view our report

Global Big Data Infrastructure Trend:
- Increasing presence of open source big data technology platforms
- For a full, detailed list, view our report

Purchase report @  


Geographical Segmentation of Global Big Data Infrastructure Market:
· Global Big Data Infrastructure in Americas
· Global Big Data Infrastructure in APAC
· Global Big Data Infrastructure in EMEA


The Global Big Data Infrastructure report also presents the vendor landscape and a corresponding detailed analysis of the major vendors operating in the market. Global Big Data Infrastructure report analyses the market potential for each geographical region based on the growth rate, macroeconomic parameters, consumer buying patterns, and market demand and supply scenarios.

Have any query? ask our expert @

Key questions answered in Global Big Data Infrastructure market report:
- What are the key trends in Global Big Data Infrastructure market?
- What are the Growth Restraints of this market?
- What will the market size & growth be in 2020?
- Who are the key manufacturers in this market space?
- What are the Global Big Data Infrastructure market opportunities, market risks and market overview?
- How has the market's revenue developed in previous years, and how will it develop in the coming years?

Get Discount on Global Big Data Infrastructure Research Report at:

The report then estimates 2017-2021 market development trends of Global Big Data Infrastructure market. Analysis of upstream raw materials, downstream demand, and current market dynamics is also carried out. In the end, the report makes some important proposals for a new project of Global Big Data Infrastructure market before evaluating its feasibility.

And continued….

About Absolute Report:

Absolute Reports is an upscale platform to help key personnel in the business world in strategizing and taking visionary decisions based on facts and figures derived from in depth market research. We are one of the top report resellers in the market, dedicated towards bringing you an ingenious concoction of data parameters.

For more information on this press release visit:

Media Relations Contact

Ameya Pingaley
Absolute Reports
Telephone: +14085209750
Email: Click to Email Ameya Pingaley

          SharpDX, a new managed .Net DirectX API available        
If you have followed my previous work on a new .NET API for Direct3D 11, I proposed this solution to the SlimDX team for the v2 of their framework, joined their team around one month ago, and was actively working to widen the coverage of the DirectX API. I have been able to extend the coverage to almost the whole API, developing Direct2D samples as well as XAudio2 and XAPO samples using it. But due to some incompatible directions that the SlimDX team wanted to follow, I have decided to also release my work under a separate project called SharpDX. Now, you may wonder why I'm releasing this new API under a separate project from SlimDX?

Well, I have been working really hard on this since the beginning of September, and I explained why in my previous post about Direct3D 11. I have checked in lots of code under the v2 branch of SlimDX, while having lots of discussions with the team (mostly Josh, who is largely responsible for v2) on their devel mailing list. The reason I'm leaving the SlimDX team is that it was in fact not clear to me that I was not enrolled as part of the decision-making for the v2 directions, although I was bringing a whole solution (by "whole", I mean a large proof of concept, not something robust or finished). At some point, Josh told me that Promit, Mike and himself, co-founders of SlimDX, were the technical leaders of this project and would have the last word on the direction as well as on decisions about the v2 API.

Unfortunately, I was not expecting to work on such terms with them, considering that I had already built 100% of the engineering prototype for the next API. Over the last few days, we had lots of small technical discussions, but for some of them I clearly didn't agree with the decisions that were taken, whatever arguments I tried to give. This is a bit of a disappointment for me, but well, that's the life of open source projects. This is their project and they have other plans for it. So, I have decided to release the project on my own as SharpDX, although you will see that the code is currently exactly the same as the v2 branch of SlimDX (of course, because until yesterday, I was working on the SlimDX v2 branch).

But things are going to change for both projects: SlimDX is taking the robust route (which I agree with), but with some decisions that I don't agree with (in terms of implementation and direction). And, as weird as it may sound, SharpDX is not intended to compete with SlimDX v2: they clearly have a different scope (supporting, for example, Direct3D 9, which I don't really care about), a different target, a different view on exposing the API, and a large existing community already on SlimDX. So SharpDX is primarily intended for my own work on demomaking. Nothing more. I'm releasing it because SlimDX v2 is not going to be available soon, even as an alpha version. On my side, I consider the current state of the SharpDX API (although far from as clean as it should be) usable, and I'm going to use it myself while improving the generator and parser to make the code safer and more robust.

So, I did lots of work to bring new APIs into this system, including:
  • Direct3D 10
  • Direct3D 10.1
  • Direct3D 11
  • Direct2D 1
  • DirectWrite
  • DXGI
  • DXGI 1.1
  • D3DCompiler
  • DirectSound
  • XAudio2
  • XAPO
And I have also been working on some nice samples, for example using Direct2D and Direct3D 10, including the usage of the tessellate Direct2D API, in order to see how well it works compared to the gluTessellation methods that are most commonly used. You will find that the code to do such a thing is extremely simple in SharpDX:
using System;
using System.Drawing;
using SharpDX.Direct2D1;
using SharpDX.Samples;

namespace TessellateApp
{
    /// <summary>
    /// Direct2D1 Tessellate Demo.
    /// </summary>
    public class Program : Direct2D1DemoApp, TessellationSink
    {
        EllipseGeometry Ellipse { get; set; }
        PathGeometry TesselatedGeometry { get; set; }
        GeometrySink GeometrySink { get; set; }

        protected override void Initialize(DemoConfiguration demoConfiguration)
        {
            // Create an ellipse
            Ellipse = new EllipseGeometry(Factory2D,
                new Ellipse(new PointF(demoConfiguration.Width/2, demoConfiguration.Height/2),
                            demoConfiguration.Width/2 - 100,
                            demoConfiguration.Height/2 - 100));

            // Populate a PathGeometry from Ellipse tessellation
            TesselatedGeometry = new PathGeometry(Factory2D);
            GeometrySink = TesselatedGeometry.Open();

            // Force RoundLineJoin otherwise the tessellated output looks buggy at line joins
            GeometrySink.SetSegmentFlags(PathSegment.ForceRoundLineJoin);

            // Tessellate the ellipse to our TessellationSink
            Ellipse.Tessellate(1, this);

            // Close the GeometrySink
            GeometrySink.Close();
        }

        protected override void Draw(DemoTime time)
        {
            // Draw the tessellated geometry
            RenderTarget2D.DrawGeometry(TesselatedGeometry, SceneColorBrush, 1, null);
        }

        void TessellationSink.AddTriangles(Triangle[] triangles)
        {
            // Add tessellated triangles to the opened GeometrySink
            foreach (var triangle in triangles)
            {
                GeometrySink.BeginFigure(triangle.Point1, FigureBegin.Filled);
                GeometrySink.AddLine(triangle.Point2);
                GeometrySink.AddLine(triangle.Point3);
                GeometrySink.EndFigure(FigureEnd.Closed);
            }
        }

        void TessellationSink.Close()
        {
        }

        static void Main(string[] args)
        {
            Program program = new Program();
            program.Run(new DemoConfiguration("SharpDX Direct2D1 Tessellate Demo"));
        }
    }
}
This simple example produces the following output:

which is pretty cool considering the amount of code (although the Direct3D 10 and D2D initialization part would add a fair amount more). I found this to be much simpler than the gluTessellation API.

You will find also some other samples, like the XAudio2 ones, generating a synthesized sound with the usage of the reverb, and even some custom XAPO sound processors!

You can grab those samples from the SharpDX code repository (there is a working solution with all the samples I have been developing so far, including the MiniTris sample from SlimDX).
          Democoding, tools coding and coding scattering        
Not many posts here for a while... So I'm going to just recap some of the coding work I have done so far... you will notice that it's going in lots of directions, depending on opportunities and ideas, sometimes not related to democoding at all... not really ideal when you want to release something! ;)

So, here are some directions I have been working so far...

C# and XNA

I tried to work more with C# and XNA... looking for an opportunity to code a demo in C#... I even started a post about it a few months ago, but left it in a draft state. XNA is really great, but I had some bad experiences with it... I was able to use it without requiring a full install, but while playing with model loading, I hit a weird bug called the black model bug. Anyway, I might come back to C# for DirectX stuff... SlimDX, for example, is really helpful for that.

A 4k/64k softsynth

I have coded a synth dedicated to 4k/64k coding. Right now, though, I only have the VST and GUI fully working under Renoise... but not yet the asm 4k player! ;)

The main idea was to build an FM8/DX7-like synth, with exactly the same output quality (excluding some fancy stuff like the arpeggiator...). The synth was developed in C# using vstnet, but it must be considered more a prototype in this language... because the asm code generated by the JIT is not really good when it comes to floating-point calculation... Anyway, it was really good to develop on this platform, being able to prototype the whole thing in a few days (and of course, many more days to add rich GUI interaction!).

I still have to add a sound library file manager and the importer for DX7 patches... Yes, you have read it right... my main concern is to provide as many ready-to-use patches as possible for ulrick (our musician at FRequency)... Decoding the DX7 patch format is well known around the net... but the more complex part was to make it decode like the FM8 does... and that was tricky... Right now, all the transform functions are in an Excel spreadsheet, but I have to code them in C# now!

You may wonder why I developed the synth in C# if the main target is to code the player in x86 asm? Well, for practical reasons: I needed to quickly experiment with the versatility of the sounds of this synth, and I'm much more familiar with .NET WinForms for easily building complex GUIs. That said, I have done the whole synth with the 4k limitation in mind... especially regarding data representation and the complexity of the player routine.

For example, for the 4k mode of this synth, waveforms are strictly restricted to only one: sin! No noise, no sawtooth, no square... what? A synth without those waveforms?... but yeah... When I looked back at the DX7 synth implementation, I realized that it uses only a pure "sin"... but with the complex FM routing mechanism + the feedback on the operators, the DX7 is able to produce a large variety of sounds ranging from strings, bells and bass... to drumkits, and so on...
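To illustrate the point (this is only a sketch with made-up names, not the synth's actual code), even a two-operator FM voice with a sine-only modulator and operator feedback already covers a surprising timbral range:

```cpp
#include <cmath>
#include <vector>

// Minimal two-operator FM voice: a sine modulator (with self-feedback)
// phase-modulates a sine carrier, DX7-style. All names and parameters
// are illustrative.
std::vector<float> renderFmVoice(float carrierHz, float ratio, float modIndex,
                                 float feedback, int samples, float sampleRate) {
    std::vector<float> out(samples);
    const double twoPi = 6.283185307179586;
    double carPhase = 0.0, modPhase = 0.0, prevMod = 0.0;
    for (int i = 0; i < samples; ++i) {
        // Modulator: a pure sine, plus a fraction of its own previous output
        double mod = std::sin(modPhase + feedback * prevMod);
        prevMod = mod;
        // Carrier: a pure sine whose phase is modulated by the modulator
        out[i] = static_cast<float>(std::sin(carPhase + modIndex * mod));
        carPhase += twoPi * carrierHz / sampleRate;
        modPhase += twoPi * carrierHz * ratio / sampleRate;
    }
    return out;
}
```

Raising modIndex adds sidebands and brightens the tone, while feedback pushes the modulator's spectrum toward a saw-like shape, which is how sine-only patches can still sound like strings, bass or bells.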

I also did a couple of effects, mainly a versatile variable delay line to implement Chorus/Flanger/Reverb.

So basically, I should end up with a synth with two modes :
- 4k mode : only 6 oscillators per instrument, only sin oscillators, simple ADSR envelope, full FM8-like routing for operators, fixed key scaling/velocity scaling/envelope scaling. Effects per instrument/global, with at minimum a delay line + optional filters. And last but not least, polyphony: that's probably the thing I miss the most in 4k synths nowadays...
- 64k mode : up to 8 oscillators per instrument, all FM8 oscillators+filters+WaveShaping+RingModulation operators, 64-step FM8-like envelopes, dynamic key scaling/velocity scaling/envelope scaling. More effects, with better quality, 2 effect lines (parallel + serial) per instrument. An additional effects channel to route instruments to the same effects chain. Modulation matrix.

The 4k mode is in fact restricting the use of the 64k mode, more at the GUI level. I'm currently targeting only the 4k mode, while designing the synth to make it ready to support 64k mode features.

What's next? Well, finish the C# part (file manager and DX7 import) and start the x86 asm player... I just hope to be under 700 compressed bytes for the 4k player (while the 64k mode will be written in C++, with an easier limitation of around 5KB of compressed code)... but hey, until it's coded... it's pure speculation!... And as you can see, the journey is far from finished! ;)

Context modeling Compression update

During this summer, I came back to the compression experiment I did last year... Its current status is somewhat stalled... The compressor is quite good, sometimes better than crinkler for 4k... but the prototype of the decompressor (not working, not tested...) is more than 100 bytes larger than crinkler's... So in the end, I know that I would be off by 30 to 100 bytes compared to crinkler... and this is not motivating me to finish the decompressor and get it really running.

The basic idea was to take the standard context modeling approach from Matt Mahoney (also known as PAQ compression; Matt did a fantastic job with his research, open source all the way), using a dynamic neural network with an order of 8 (8-byte context history), with the same mask selection approach as crinkler + some new context filtering at the bit level... In the end, the decompressor uses the FPU to decode the whole thing... as it needs ln2() and pow2() functions... So during the summer, I thought about using another logistic activation function to get rid of the FPU: the standard sigmoid used in the neural network with a base of 2 is 1/(1+2^-x), so I found something similar with y = (x / (1 + |x|) + 1) / 2 from David Elliott (some references here). I didn't have a computer at the time to test it, so I spent a few days doing some math optimization on it, while calculating the logit function (the inverse of this logistic function).

I came back home very excited to test this method... but I was really disappointed... the function hurt the compression ratio by about 20%; in the end, completely useless!
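For reference, the two squashing functions being compared are simply:

```cpp
#include <cmath>

// The base-2 logistic used by the PAQ-style mixer (needs pow2/ln2 on the
// FPU), versus David Elliott's rational approximation (needs only an
// absolute value and one division).
double logistic2(double x) { return 1.0 / (1.0 + std::pow(2.0, -x)); }
double elliott(double x)   { return (x / (1.0 + std::fabs(x)) + 1.0) / 2.0; }
```

Both map to (0,1) and agree at 0, but the rational version approaches 0 and 1 only polynomially fast (elliott(4) = 0.9 versus logistic2(4) ≈ 0.94), so the model's bit probabilities end up systematically less confident, which is consistent with the compression loss described above.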

If by next year I'm not able to release anything from this... I will make all this work open source, at least for educational purposes... someone will certainly be cleverer than me on this and tweak the code size down!

SlimDx DirectX wrapper's like in C++

Recall that for the ergon intro, I worked with a very thin layer around DirectX to wrap enums/interfaces/structures/functions. I did that around D3D10, a bit of D3D11, and a bit of D3D9 (which was the one I used for ergon). The goal was to achieve a DirectX C#-like interface in C++. While the code was written almost entirely manually, I was wondering if I couldn't generate it directly from the DirectX header files...

So for the last few days, I have been working on this a bit... I'm using boost::wave as the preprocessor library... and I have to admit that the C++ guys from boost lost their minds with templates... It's amazing how they made something simple so complex with templates... I wanted to use this in a C++/CLI managed .NET extension to ease my development in C#, but I ended up with a template error at the link stage... an incredible error, with a line full of concatenated templates, even freezing Visual Studio when I wanted to see the errors in the error list!

Templates are really nice when they are not used too intensively... but when everything in your code is templatized, it becomes very hard to use a library fluently, and it's sometimes impossible to understand a template error when that error is more than 100 lines full of cascading template types!

Anyway, I was able to plug boost::wave into a native dll and call it from a C# library... the next step is to see how much I can extract from the DirectX header files to build a form of IDL (Interface Definition Language). If I cannot get something relevant in the next week, I might postpone this task until I have nothing more important to do! The good thing is, for example with the D3D11 headers, you can see that those files were auto-generated from a mysterious... d3d11.idl file... used internally at Microsoft (although it would have been easier to get this file directly!)... so it means that the whole header is quite easy to parse, as the syntax is quite systematic.

Ok, this is probably not linked to intros... or probably only to 64k... and I'm not sure I will be able to finish it (much like rmasm)... And this kind of work is keeping me away from directly working with DirectX, experimenting with rendering techniques and so on... Well, I have to admit that for the past few years I have been more attracted to building tools to enhance coding productivity (not necessarily only mine)... I don't like doing too many things manually... so every time there is an opportunity to automate a process, I can't refrain from making it automatic! :D

AsmHighlighter and NShader next update

Following my weakness for tools, I need to make some updates to AsmHighlighter and NShader: add some missing keywords, patch a bug, support the new VS2010 version... whatever... When you release this kind of open source project, well, you have to maintain it, even if you don't use it much... because other people are using it and are asking for improvements... that's the other side of the picture...

So because I have to maintain those two projects, and they logically share more than 95% of the same code, I have decided to merge them into a single one... which will be available soon on CodePlex as well. That will be easier to maintain, leaving only one project to update.

The main features people are asking for are the ability to add keywords easily and to map file extensions to the syntax highlighting system... So I'm going to generalize the design of the two projects to make them more configurable... hopefully this will cover the main feature requests...

An application for Windows Phone 7... meh?

Yep... I have to admit that I'm really excited by the upcoming Windows Phone 7 Metro interface... I'm quite fed up with my iPhone's look and feel... and because the development environment is so easy with C#, I have decided to write an application for it. I'm starting with a chromatic tuner for guitar/piano/violin, etc., and it's working quite well, even if I have only been able to test it under the emulator. While developing this application, I have learned some cool things about pitch detection algorithms and so on...
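For the curious, the core of a naive pitch detector is just time-domain autocorrelation: find the lag at which the signal best matches a shifted copy of itself. This is a minimal sketch (in Python for brevity, not the C# code of the actual app), without the windowing and octave-error handling a real tuner needs.

```python
import math

def detect_pitch(samples, sample_rate, fmin=50.0, fmax=1000.0):
    """Estimate the fundamental frequency of a mono signal by picking
    the autocorrelation peak inside the plausible lag range."""
    lo = int(sample_rate / fmax)          # smallest lag to test
    hi = int(sample_rate / fmin)          # largest lag to test
    best_lag, best_score = lo, float("-inf")
    for lag in range(lo, min(hi, len(samples) - 1)):
        score = sum(samples[i] * samples[i + lag]
                    for i in range(len(samples) - lag))
        if score > best_score:
            best_score, best_lag = score, lag
    return sample_rate / best_lag

# A 440 Hz sine sampled at 8 kHz: the estimate lands near A4
# (quantized by the integer lag, so not exactly 440).
rate = 8000
tone = [math.sin(2 * math.pi * 440 * t / rate) for t in range(2048)]
print(round(detect_pitch(tone, rate)))
```

Real instruments have strong harmonics, which is exactly where the naive version breaks down and the more interesting algorithms begin.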

I hope to finish the application around September, and to test it on real hardware when WP7 is officially launched... before putting the application on the Windows Marketplace.

If this works well, I will consider developing other applications, like porting the softsynth I wrote in C# to this platform... We will see... and definitely, this last part is completely unrelated to democoding!

What's next?

Well, I have to prioritize my work for the next months:
  1. Merge AsmHighlighter and NShader into a single project.
  2. Play for a week with the DirectX headers to see if I can extract some IDL-like information
  3. Finish the 4k mode of the softsynth... and develop the x86 asm player
  4. Finish the WP7 application
I also still have an article to write about the making of ergon; there's not much to say about it, but it could be interesting to write those things down...

I also need to work on some new DirectX effects... I have played a bit with hardware instancing and compute shaders (including a raymarcher with global illumination for a 4k procedural compo that didn't make it to BP2010, because the results were not impressive enough, and too slow to compute)... I would really like to explore SSAO with plain polygons... but I haven't taken the time for that... so yep, practicing more graphics coding should be at the top of my list... instead of all those time-consuming and - sometimes useful - tools!
          NShader 1.1, hlsl, glsl, cg syntax coloring for Visual Studio 2008 & 2010        
I have recently released NShader 1.1, which adds support for Visual Studio 2010 as well as bug fixes for hlsl/glsl syntax highlighting.

While this plugin is quite cool for adding basic syntax highlighting of shader languages, it lacks intellisense/completion/error markers to improve the editor experience. I didn't have time to add such functionality in this release, as I don't really have much time dedicated to this project... and well, I have so much to learn from actually practicing shader languages a lot more that I'm fine with basic syntax highlighting! ;) Is it a huge task to add intellisense? It depends, but concretely, I would need to implement a full grammar/lexer/parser for each shading language in order to provide reliable intellisense. Of course, a very basic intellisense would be feasible without this, but I would rather not ship an annoying/unreliable intellisense popup.

I did some research on existing lexers for shading languages, and surprisingly, this is not something you can find easily. For hlsl, for example, afaik there is no BNF grammar published by Microsoft, so if you want to do it yourself, you need to go through the whole HLSL reference documentation and compile a BNF yourself... and that's something I can't afford in my spare time. One could argue that there is some starter code available on the net (O3D from Google has an ANTLR parser/lexer, and there is a relatively simpler one from Christian Schladetsch); agreed, but it still takes time to patch them, add support for SM5.0, handle preprocessor directives correctly... and so on. After that, I would need to integrate it through the language service API, which is not the worst part. Anyway, if someone is motivated to help me on this, we could come up with something. We will also see whether IntelliShade manages to resurrect in an open source form... a joint venture would be interesting.
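To give an idea of the gap between highlighting and intellisense: keyword highlighting only needs the first, trivial layer of such a grammar, the lexer. The sketch below is a hypothetical, heavily reduced HLSL-ish tokenizer (a handful of token classes and keywords, invented for illustration); a real one would need the full keyword set, preprocessor handling, and a parser on top.

```python
import re

# A token spec covering just enough HLSL-ish lexemes to illustrate the
# first layer an intellisense engine needs; real HLSL has far more.
TOKEN_SPEC = [
    ("COMMENT", r"//[^\n]*"),
    ("NUMBER",  r"\d+(?:\.\d+)?f?"),
    ("IDENT",   r"[A-Za-z_]\w*"),
    ("PUNCT",   r"[{}()\[\];,:=+\-*/.<>]"),
    ("WS",      r"\s+"),
]
KEYWORDS = {"float4", "struct", "return", "register", "cbuffer"}

MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(src):
    """Yield (kind, text) pairs, reclassifying known idents as keywords."""
    for m in MASTER.finditer(src):
        kind = m.lastgroup
        if kind in ("WS", "COMMENT"):
            continue
        text = m.group()
        if kind == "IDENT" and text in KEYWORDS:
            kind = "KEYWORD"
        yield kind, text

src = "float4 main() : SV_Target { return 0.5f; }"
print(list(tokenize(src)))
```

The hard part is everything after this: a grammar that can recover from the half-typed, unbalanced code an editor sees constantly.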

So, what's my feedback on migrating a VS2008 language service to VS2010? Well, it was pretty straightforward! I followed the SDK instructions on "Migrating a Legacy Language Service", but it was not fully working as expected. In fact, the only remaining problem was that the VSIX VS2010 installer didn't register the NShader language service automatically; I had to manually add the pkgdef file (containing the registry updates for the language service) to the vsix archive. While working on the migration to VS2010, I had a look at the new extensibility framework and was surprised to see how much easier it is to implement against in VS2010. I didn't take the time to migrate NShader to this new framework, although it seems pretty easy... another nice thing is that they provide a compatibility layer for legacy language services, so I didn't have to bother with the new API. But if I had to write a new plugin for VS, I would definitely use the new API, even though it only works with VS2010+ versions...

One small recurring disappointment: Visual Studio still does not allow plugins in the Express editions. From a commercial point of view, I understand this restriction, but for the thousands (millions, maybe?) of people using an Express edition, this is a huge missing feature. I'm sure that allowing community plugins in the Express editions would actually do a lot for Visual Studio adoption.

My next post should be about the making of Ergon at BP2010. I have a couple of things to share about it, but I'm feeling quite lazy about writing that post right now... it's on the way, though! ;)
          Baidu's Political Censorship is Protected by First Amendment, but Raises Broader Issues        

Baidu, the operator of China’s most popular search engine, has won the dismissal of a United States lawsuit brought by pro-democracy activists who claimed that the company violated their civil rights by preventing their writings from appearing in search results. In the most thorough and persuasive opinion on the issue of search engine bias to date, a federal court ruled that the First Amendment protects the editorial judgments of search engines, even when they censor political speech. This post will introduce the debate over search engine bias and the First Amendment, analyze the recent decision in Zhang v. Baidu, and discuss the implications of the case for both online speech and search engines.

Search Engine Bias and the First Amendment

When users enter a query into a search engine, the search engine returns results ranked and arranged by an algorithm. The complicated algorithms that power search engines are designed by engineers and modified over time. These algorithms, which are proprietary and unique to each search engine, favor certain websites and types of content over others. This is known as “search engine bias.”

The question of whether search engine results constitute speech protected by the First Amendment is particularly important in the context of search engine bias, and has been the subject of considerable academic debate. Several prominent scholars (including Eric Goldman, Eugene Volokh, and Stuart M. Benjamin) have argued that the First Amendment encompasses results generated by search engines, thus largely immunizing the operators of search engines from liability for how they rank websites in search results. Others (primarily Tim Wu) have maintained that because search engine results are generated automatically by algorithm, they should not be granted the full protection of the First Amendment.

Until now, only two federal courts had addressed this issue. See Langdon v. Google, 474 F. Supp. 2d 622 (D. Del. 2007); Kinderstart v. Google, 2007 WL 831806 (N.D. Cal. 2007). In dismissing claims against Google, Microsoft, and Yahoo brought by private plaintiffs dissatisfied with how their websites ranked in search results, both courts concluded after limited analysis that search engine results are protected under the First Amendment.

Baidu in Court

In May 2011, eight Chinese-American activists who described themselves as “promoters of democracy in China” filed a complaint against Baidu in the United States District Court for the Southern District of New York. The plaintiffs, who are residents of New York, alleged that Baidu had violated their First Amendment and equal protection rights by “censoring and blocking” the pro-democracy content they had published online from its search results, purportedly at the behest of the People’s Republic of China. While the plaintiffs’ content appeared in results generated by Google, Yahoo, and Bing, it was allegedly “banned from any search performed on … Baidu.”

Baidu responded by filing a motion for judgment on the pleadings. Baidu argued that the plaintiffs’ suit should be dismissed based on the longstanding principle that the First Amendment “prohibits the government from compelling persons to speak or publish others’ speech.” Baidu also accused the plaintiffs of bringing a meritless lawsuit “for the purpose of drawing attention to their views.”

Last month, United States District Judge Jesse M. Furman concluded in a thoughtful decision that the results returned by Baidu's search engine constituted speech protected by the First Amendment, dismissing the plaintiffs' lawsuit in its entirety.

Judge Furman began his analysis with a discussion of Miami Herald Publishing Co. v. Tornillo, a 1974 decision in which the Supreme Court held that a Florida statute requiring newspapers to provide political candidates with a right of reply to editorials critical of them violated the First Amendment. By requiring newspapers to grant the messages of political candidates access to their pages, the Florida law imposed an impermissible content-based burden on newspapers' speech. Moreover, the statute would have had the effect of deterring newspapers from running editorials critical of political candidates. In both respects, the statute was an unconstitutional interference with newspapers' First Amendment right to exercise "editorial control and judgment."

The court then cited Hurley v. Irish-American Gay, Lesbian & Bisexual Group of Boston, which extended the Tornillo principle beyond the context of the press. In that case, the Supreme Court ruled that Massachusetts could not require organizers of a private St. Patrick’s Day parade to include among marchers a group of openly gay, lesbian, and bisexual individuals. This was true even though parade organizers did not create the floats themselves and did not have clear guidelines on who and what groups were allowed to march in the parade. Once again, the Court held that requiring private citizens to impart a message they did not wish to convey would “violate[] the fundamental rule of protection under the First Amendment . . . that a speaker has the autonomy to choose the content of his own message.”

These decisions taken together, according to the court, established four propositions critical to its analysis. First, the government "may not interfere with the editorial judgments of private speakers on issues of public concern." Second, this rule applies not only to the press, but to private companies and individuals. Third, First Amendment protections apply "whether or not a speaker articulates, or even has, a coherent or precise message, and whether or not the speaker generated the underlying content in the first place." And finally, that the government has noble intentions (such as promoting "press responsibility" or preventing hurtful speech) is of no consequence. Disapproval of a speaker's message, regardless of how justified the disapproval may be, does not legitimize attempts by the government to compel the speaker to alter the message by including one more acceptable to others.

In light of these principles, the court reasoned that “there is a strong argument to be made that the First Amendment fully immunizes search-engine results from most, if not all, kinds of civil liability and government regulation.” In retrieving relevant information from the “vast universe of data on the Internet” and presenting it in a way that is helpful to users, search engines make editorial judgments about what information to include in search results and how and where to display it. The court could not find any meaningful distinction between these judgments and those of a newspaper editor deciding which wire-service stories to run and where to place them, a travel guidebook writer selecting which tourist attractions to mention and how to display them, or a political blog choosing which stories it will link to and how prominently they will be featured.

Judge Furman made clear that the fact that search-engine results are produced algorithmically had no bearing on the court’s analysis. Because search algorithms are written by human beings, “‘they ‘inherently incorporate the search engine company engineers’ judgments about what materials users are most likely to find responsive to their queries.’” When search engines return results, ordering them from first to last, “they are engaging in fully protected First Amendment expression,” the court concluded.

The court declined to see any irony in holding that the democratic ideal of free speech protects Baidu’s decision to disfavor speech promoting democracy. “[T]he First Amendment protects Baidu’s right to advocate for systems of government other than democracy (in China or elsewhere) just as surely as it protects Plaintiffs’ rights to advocate for democracy.”

Implications for Online Speech and Search Engines

As the amount of content on the Internet grows exponentially, search engines play an increasingly important role in helping users navigate an overwhelming expanse of data - Google alone processes 100 billion search queries each month. As such, there is a definite public interest in shielding search engines from civil liability and government regulation. The decision in Zhang v. Baidu promotes strong constitutional protections for some of the Internet's most heavily relied-upon intermediaries, making it clear that search engines cannot be compelled to include in their results the speech of others. Though not addressed in this case, these protections complement those guaranteed to search engines by Section 230 of the Communications Decency Act. CDA § 230(c)(1) immunizes search engines from most kinds of tort liability for publishing third-party content, while CDA § 230(c)(2) protects their decisions to remove it.

If search engines were subject to civil liability in the United States for the ways in which they display and rank content in search results, individuals would have the power to alter or censor those results via the federal courts. In addition to the obvious financial consequences of civil liability for search engine operators (the plaintiffs in Zhang v. Baidu sought more than $16 million in damages), such a course could result in significant compliance burdens. To better understand how this might play out, one must look no further than this order by a French court requiring Google to remove from search results at the request of a British executive certain images which had been deemed to violate his right of privacy in a United Kingdom lawsuit. The court seemed to take the position that Google’s argument that the First Amendment protected its search results was inconsistent with the “neutral and passive role of a host,” as required to claim the protection of French intermediary law. Marie-Andree Weiss did an excellent write-up on this controversial decision for the Digital Media Law Project.

Though it has been rightfully heralded for reaching the conclusion that operators of search engines are exercising their First Amendment rights when deciding which websites to display in what order, the decision in Zhang v. Baidu has serious and potentially negative practical consequences for online speakers. Search engines play a critical role in helping online speech be discovered. Allowing search engines to prevent certain types of content from being indexed in search results could mean that some online speech will be nearly impossible to find without a direct link to where it exists online. A tremendous amount of power over what online speech can be easily located now rests in an ever-dwindling number of private entities. Proposals for a publicly-controlled, open source search engine belonging to “The People” have yet to gain traction.

Attorneys for the plaintiffs in Zhang v. Baidu have announced plans to appeal the decision to the U.S. Court of Appeals for the Second Circuit. Should the Second Circuit adopt the line of reasoning laid out so clearly by the district court, plaintiffs across the country considering bringing a lawsuit over search engine bias would be hard-pressed to overcome the First Amendment hurdles put in place by this likely influential precedent.

Natalie Nicol earned her J.D. from University of California, Hastings College of the Law. During law school, she worked as an intern at the Digital Media Law Project, the Electronic Frontier Foundation, and the First Amendment Project.

(Image courtesy of Flickr user simone.brunozzi pursuant to a Creative Commons CC BY-SA 2.0 license.)

          What Is CodeIgniter?        
CodeIgniter is an open source PHP framework that uses the MVC (Model, View, Controller) pattern for building websites in PHP. The CodeIgniter framework, commonly called CI, is already widely used by web developers. The latest version of CodeIgniter at the time of writing is 2.1.3; you can download CI here.

Before discussing what the MVC pattern is, let's first understand what a framework is. A framework is a collection of procedures, functions, and classes, built for a specific purpose and ready to use, so that developers do not need to write certain functions from scratch. A framework makes a web developer's work easier and more efficient.

The first question that usually comes up when you encounter frameworks is: "Why should we use a framework?" The reasons are as follows:

  1. A framework speeds up and simplifies building a web application
  2. You have more freedom to extend the site than with a CMS
  3. A framework already provides commonly used facilities, so a programmer does not need to build them from scratch (e.g. pagination, error handling, validation, etc.)
  4. In general, a framework makes maintenance and further development of a site easier, because the code is organized in a regular structure, provided the site follows the framework's conventions.
So now you know why you should use a framework. Next, let's discuss what MVC means. MVC, or Model-View-Controller, is a very popular concept in web development; it separates application development into three main components: the user interface design, the data manipulation, and the application control. Here is an explanation of the three components:
  1. The View handles the application's presentation; it usually contains HTML templates managed by the controller. The view receives data and displays it to the user.
  2. The Model interacts directly with the database to manipulate data (insert, update, delete, read, etc.). This part also handles validation requested by the controller.
  3. The Controller manages the relationship between the model and the view; it receives requests and data from the user and decides what the application will process.
Using the MVC concept, a web application can be developed according to each developer's strengths: the model and controller are handled by programmers, while the view is handled by a designer, making development more efficient and effective.
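CodeIgniter itself is PHP, but the division of responsibilities is language-agnostic. This is a minimal illustrative sketch in Python (the class and method names are invented for the example, not CodeIgniter's API):

```python
class UserModel:
    """Model: the only layer that touches the data store."""
    def __init__(self):
        self._rows = {1: "Alice", 2: "Budi"}   # stand-in for a database
    def find(self, user_id):
        return self._rows.get(user_id)

class UserView:
    """View: turns data into output markup; no data access here."""
    def render(self, name):
        return f"<h1>Profile</h1><p>{name}</p>" if name else "<p>Not found</p>"

class UserController:
    """Controller: receives the request, asks the model, picks the view."""
    def __init__(self, model, view):
        self.model, self.view = model, view
    def profile(self, user_id):
        return self.view.render(self.model.find(user_id))

app = UserController(UserModel(), UserView())
print(app.profile(1))   # the controller mediates between model and view
```

The point of the separation is visible even at this scale: the view can be restyled, or the model swapped for a real database layer, without touching the other components.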

Advantages of CI over other PHP frameworks:
  1. Fast performance
  2. Minimal configuration
  3. A large CI community
  4. Very complete documentation

          BITCOIN : What is it? How does this work?        
Okay, so maybe you have been wondering for a while what this bitcoin everyone keeps talking about is, how it works and, more importantly, how you can get some. So calm down, peeps, because you are about to find out.

Basically, Bitcoin is a peer-to-peer payment system and digital currency introduced as open source software in 2009 by the pseudonymous developer Satoshi Nakamoto.
It is a cryptocurrency, so-called because it uses cryptography for security. Users send payments by broadcasting digitally signed messages to the network. Transactions are verified, timestamped, and recorded by specialized computers into a shared public transaction history database called the block chain. The operators of these computers, known as miners, are rewarded with transaction fees and newly minted bitcoins.
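The essential idea of the block chain is that each block commits to the hash of the previous one, so history cannot be rewritten without redoing every later block. This toy sketch shows only that chaining; real Bitcoin adds Merkle trees, proof-of-work, and network consensus on top.

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """Build a block whose hash covers its transactions AND the previous
    block's hash (toy model: no Merkle tree, no proof-of-work)."""
    block = {
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block(["coinbase -> miner: 50 BTC"], "0" * 64)
b1 = make_block(["alice -> bob: 1 BTC"], genesis["hash"])
print(b1["prev_hash"] == genesis["hash"])   # blocks are chained by hash
```

Changing a single transaction in the first block would change its hash, invalidating the `prev_hash` stored in every block after it; that is what makes tampering with the shared history detectable.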

Don't be surprised, but Bitcoin's exchange rate is very high: 1 BTC (bitcoin) is currently worth 952.5 USD. Isn't that super crazy? If you don't believe it, feel free to check the exchange rates here yourself.

That was the very basic knowledge about Bitcoin. But how exactly does it work? This is the question that causes confusion. Here's a quick explanation.

Well, you can watch this video explaining what Bitcoin is and how it works, or read the summary after the video.

1. Nmap

Nmap (“Network Mapper”) is a free and open source utility for network discovery and security auditing. Nmap uses raw IP packets in novel ways to determine what hosts are available on the network, what services (application name and version) those hosts are offering, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics.


2. Wireshark

Wireshark is a network protocol analyzer. It lets you capture and interactively browse the traffic running on a computer network.

3. Metasploit Community Edition

Metasploit Community Edition simplifies network discovery and vulnerability verification for specific exploits, increasing the effectiveness of vulnerability scanners. This helps prioritize remediation and eliminate false positives, providing true security risk intelligence.


4. Nikto

Nikto is an Open Source (GPL) web server scanner which performs comprehensive tests against web servers for multiple items, including over 6400 potentially dangerous files/CGIs, checks for outdated versions of over 1200 servers, and version specific problems on over 270 servers. It also checks for server configuration items such as the presence of multiple index files, HTTP server options, and will attempt to identify installed web servers and software.

5. John the Ripper

John the Ripper is a fast password cracker, currently available for many flavors of Unix, Windows, DOS, BeOS, and OpenVMS. Its primary purpose is to detect weak Unix passwords. Besides several crypt(3) password hash types most commonly found on various Unix systems, supported out of the box are Windows LM hashes, plus lots of other hashes and ciphers in the community-enhanced version.


6. Ettercap

Ettercap is a comprehensive suite for man in the middle attacks. It features sniffing of live connections, content filtering on the fly and many other interesting tricks. It supports active and passive dissection of many protocols and includes many features for network and host analysis.

7. NexPose Community Edition

The Nexpose Community Edition is a free, single-user vulnerability management solution. Nexpose Community Edition is powered by the same scan engine as Nexpose Enterprise and offers many of the same features.


8. Ncat

Ncat is a feature-packed networking utility which reads and writes data across networks from the command line. Ncat was written for the Nmap Project as a much-improved reimplementation of the venerable Netcat. It uses both TCP and UDP for communication and is designed to be a reliable back-end tool to instantly provide network connectivity to other applications and users. Ncat will not only work with IPv4 and IPv6 but provides the user with a virtually limitless number of potential uses.


9. Kismet

Kismet is an 802.11 layer2 wireless network detector, sniffer, and intrusion detection system. Kismet will work with any wireless card which supports raw monitoring (rfmon) mode, and (with appropriate hardware) can sniff 802.11b, 802.11a, 802.11g, and 802.11n traffic. Kismet also supports plugins which allow sniffing other media such as DECT.


10. w3af

w3af is a Web Application Attack and Audit Framework. The project’s goal is to create a framework to find and exploit web application vulnerabilities that is easy to use and extend.

           The Big 2014 MOOC Re-Boot        

The big MOOC platforms are undergoing a large-scale re-boot at the beginning of this new year. If 2012 was "the Year of the MOOC" and 2013 the "Year of the Deflation of MOOC Hype," then 2014 may well be the "Year of the MOOC's Second Chance." Here I focus on new efforts at Udacity and Coursera that are designed both to improve the learning experience and to generate revenues.


Udacity, which performed its famous "pivot" in mid-2013 and labeled its first efforts "a lousy product," turned to revenues from corporate training. Since that time, Sebastian Thrun has continued to offer Udacity MOOCs free to the general public, but has emphasized the need for auxiliary services - mentoring and tutoring - available for a price. The new, re-booted Udacity website puts these coaching services front and center:

Learning is a collaborative process, and we're here to provide you with guidance every step of the way. We'll help you select the right class, navigate challenging content, and improve your projects and code.

Given the major emphasis on mentoring and tutoring in my account of online learning in Education 2.0, this is hardly a surprise. Most learners, at least those with little prior academic experience and success, and lacking well developed self-directed learning habits, are unable to get much value from MOOC-based learning unless aided by mentors and tutors.

The mentors help them focus on why they are learning, what they need to learn to move forward with their lives and achieve their aims - and even how to formulate some basic life goals.

The tutors then help them focus on how to learn: how to overcome misunderstandings, how to motivate themselves through those course segments where the learning curve steepens - in addition to how to solve this or that problem or remember how to define this or that concept.

Both are necessary for most learners - not just those from disadvantaged communities. Kids who take to academic work and thrive without this hand-holding are outliers. It will be very interesting to see how the new MOOC on "Preparing for Uni" on the FutureLearn platform fares. Can we bootstrap MOOC-based learning through MOOC-based learning? Or will it require personal intervention from real humans?

A question for another post: can MOOC platforms - or at least the non-profit ones - figure out a way of providing personal mentoring and coaching through some combination of crowd-sourcing and what Clay Shirky calls the cognitive surplus? If Yahoo Answers can elicit dozens of answers to each of thousands of questions daily, Wikipedia can elicit encyclopedia articles, edits, and additions on every conceivable topic, and open source enterprises can call out the collective talent of software engineers, then why can't either the MOOC platforms or some auxiliary enterprise (like all those wonderful add-ons to Twitter) figure out how to source online (or even offline) mentoring and tutoring for MOOC learners?


Coursera's new front-line product is its Specializations program. The platform organizes course sequences that collectively build a skill with current workplace demand; the Specializations page showcases ten of these programs. Coursera, edX, and other MOOC platforms have already offered course sequences - most dramatically, entire foundation-year MBA course sequences from top business schools like Wharton. What is new with the Specializations is (1) the specific skill-with-workplace-demand promise, and (2) the price tag for each course in the sequence. Specialization courses can still be taken for free, but only those enrolled in the signature verification tracks can complete the final projects and earn the certificate for the Specialization.

So for now, Coursera, like Udacity, continues to offer its MOOCs in a cost-free version, but pins its hopes for revenue generation on add-ons.

Like all disruptive technologies, MOOCs start with one set of core images and expectations forged by their founders, and gravitate toward other images and expectations in the inevitable back and forth of 'social construction'. It took the telephone a few decades to become what we have long been familiar with - some of the early adopters thought it would be a device for listening to classical music! The MOOCs will settle in, and as always, public uses and private ventures and their revenue streams will be the key determinant of what they ultimately become for us.

          MOOCs and the Manufactured Crisis of Higher Education        
Over at Slate, authors Christian and Calvin Exoo argue that MOOCs represent the entry of large, dominant corporations into the Education space. They paint Coursera, edX and Udacity as emerging giants - controlling education as Big Pharma and Big Oil now dominate their industries. 

The big MOOC firms claim to be responding to the problem of restricted access to higher education. But, the authors claim, the U.S. has no such problem - the MOOC firms are 'manufacturing' the problem through their press releases to blind us to reality. A huge percentage of kids, the authors counter, gain access to some form of higher education. The real problem, the authors state, lies in retention: kids drop out because they are unprepared for college and can't afford tuition. The solution they propose is adding lots of remedial services and extra financial aid around the margins for disadvantaged kids. These two steps could get a huge percentage through college. MOOCs provide neither financial aid nor remediation, and hence offer nothing for the "real" crisis.

Leaving aside edX's non-profit status and its decisive move to open source, and Udacity's apparent exit from the education market to concentrate on corporate training - which leaves only Coursera standing as a large corporate entity in the MOOC education space - this analysis still seems wrong-headed.

My response is that it is the authors, and not the MOOC firms, who are manufacturing a crisis.

The real crisis is not that too many kids fail to complete college - but rather that too many kids get forced into college because no other pathways to dignified work exist in our society. If we didn't make college a more or less compulsory job qualification - even for jobs that make no use of college related knowledge or skills - we wouldn't need to push these kids through college in the first place. Then they would not need all of these extra-ordinary measures to graduate.

Suppose we do get these kids through college. They will still be facing both a poor and contracting job market - especially for that fragment that needed even more remedial services and extra tuition help. And those kids would also still be burdened with huge debts, which will cripple them financially for life. These are the real crises, and the authors' solutions completely ignore them.

The real challenge is to envision and implement alternative pathways to the workplace. The promise of MOOCs lies in their providing one element in the educational mix aligned with these new pathways. The authors propose to push these kids through a college education. But the kids and their families only seek college education because it has been made into a compulsory qualification. The kids and their families, however, are being sold a bill of goods. There will be no college-level jobs waiting at the end of the compulsory college education. And the kids will end up deep in debt.

The alternative I envision gets these young people more rapidly into workplaces - as apprentices, interns, or small entrepreneurs - and provides the educational backup they need to progress in their work paths without taking on debt. In such arrangements, MOOCs will find one of their most important educational roles.

          EdX and the Convergence of MOOC Learning Management Systems (LMS).         

The past week has witnessed an important development in the convergence of open course management systems for MOOCs. 

In September 2012 Stanford released Class2Go - built on top of Stanford's Courseware platform - as open source software.  

Jane Manning, Class2Go product manager, explained that the idea started with a six-member team in Stanford's computer-science department. The team built Class2Go using code from Stanford's Courseware course-hosting platform, a similar platform from the nonprofit Khan Academy, and software for integrated online classroom forums hosted by Piazza.

At the same time, Google released its open source Course Builder system. Google explained that 
Course Builder open source project is an experimental early step for us in the world of online education. It is a snapshot of an approach we found useful and an indication of our future direction. . . . edX shares in the open source vision for online learning platforms, and Google and the edX team are in discussions about open standards and technology sharing for course platforms. 

In June 2013 edX released its own open source MOOC management system.  

At about the same time, Stanford announced that it would be closing Class2Go and partnering with edX for further development of open source MOOC management tools. 

According to Stanford's announcement, open source online learning platforms such as edX will allow universities to develop their own delivery methods, partner with other universities and institutions as they choose, collect data and control branding of their educational material.
While Stanford and its professors will continue to use several providers of online courses, including Coursera and Venture Lab, the university will stop developing its own platform, Class2Go. Instead, aspects of Class2Go will be incorporated into the program developed at edX, a nonprofit launched by Harvard and MIT last year. The resulting software code will become available, or open source, on June 1.
In Stanford's news release, edX president Anant Agarwal predicted that the edX platform will now become the "Linux of learning."  
Now edX has partnered with Google to form, a platform to enable any school, organization or individual to author and manage their own MOOCs. 

As Steve Kolowich reports in the Chronicle
The new site,, will provide tools and a platform that “will allow any academic institution, business, and individual to create and host online courses,” says a blog post by Dan Clancy, a research director at Google. In an interview, Anant Agarwal, president of edX, referred to the site as a “YouTube for courses.” 
The resulting open source system will by 2014 enable anyone, anywhere, to develop MOOCs free from dependence upon commercial learning management systems like Blackboard.   

EdX won't be all things for all people, but it promises to be both the Linux and the YouTube for massive online courses.

          AI – What Chief Compliance Officers Care About        


Arguably, there are more financial institutions located in the New York metropolitan area than anywhere else on the planet, so it was only fitting for a conference on AI, Technology Innovation & Compliance to be held in NYC – at the storied Princeton Club, no less. A few weeks ago I had the pleasure of speaking at this one-day conference, and found the attendees’ receptivity to artificial intelligence (AI), and creativity in applying it, to be inspiring and energizing. Here’s what I learned.

CCOs Want AI Choices

As you might expect, the Chief Compliance Officers (CCOs) attending the AI conference were extremely interested in applying artificial intelligence to their business, whether in the form of machine learning models, natural language processing or robotic process automation – or all three. These CCOs already had a good understanding of AI in the context of compliance, knowing that:

  • Sets of rules alone will not find “unknown unknowns”
  • They should take a risk-based approach in determining where and how to divert resources to AI-based methods in order to find the big breakthroughs.

All understood the importance of data, and that getting the data you need to feed the AI system is job number one; otherwise, it’s “garbage in, garbage out.” I also discussed how to provide governance around a single source of data, the importance of regular updating, and how to ensure permissible use and quality.

AI Should Explain Itself

Explainable AI (XAI) is a big topic of interest to me, and among the CCOs at the conference there was an appreciation that AI needs to be explainable, particularly in the context of compliance with GDPR. The audience also recognized that their organizations need to layer in the right governance processes around model development, deployment, and monitoring - key steps on the journey toward XAI. I reviewed the current state of the art in Explainable AI methods, and where that road leads: toward AI that is more "grey box" than black box.

Ethics and Safety Matter

In pretty much every AI conversation I have, ethics are the subject of lively discussion. The New York AI conference was no exception. The panel members and I talked about how a given AI system is not inherently ‘ethical’; it learns from the inputs it’s given. The modelers who build an AI system need to withhold sensitive data fields, and those same modelers need to examine whether inadvertent biases are derived from the inputs during training of the machine learning model.
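One concrete way to check for such inadvertent bias - a minimal sketch of my own, not a FICO method; the feature names and the 0.5 threshold are invented for illustration - is to screen candidate model inputs for strong correlation with a sensitive attribute before training, flagging likely proxy variables:

```python
import statistics

def correlation(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    denom = (sum((x - mx) ** 2 for x in xs) *
             sum((y - my) ** 2 for y in ys)) ** 0.5
    return cov / denom if denom else 0.0

def flag_proxy_features(features, sensitive, threshold=0.5):
    """Return names of inputs that correlate strongly with a sensitive
    attribute and so may act as proxies for it."""
    return [name for name, values in features.items()
            if abs(correlation(values, sensitive)) >= threshold]

# Toy data: 'zip_risk' tracks the sensitive attribute almost perfectly,
# while 'tenure' does not.
sensitive = [0, 0, 1, 1, 0, 1, 0, 1]
features = {
    "zip_risk": [0.1, 0.2, 0.9, 0.8, 0.1, 0.95, 0.15, 0.85],
    "tenure":   [5, 2, 4, 7, 1, 3, 6, 2],
}
print(flag_proxy_features(features, sensitive))  # → ['zip_risk']
```

In practice such a screen is only a first pass: proxies can also arise from combinations of features that no single-variable correlation will catch, which is one reason ongoing model monitoring matters.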

Here, I was glad to be able to share some of the organizational learning FICO has accumulated over decades of work in developing analytic models for the FICO® Score, our fraud, anti-money laundering (AML) products and many others.

AI safety was another hot topic. I shared that although models will make mistakes and a risk-based approach is needed, machines are often better than humans at decision-making - autopilots on airplanes, for example. Humans need to be there to step in when something changes, such as a shift in the environment or in the character of the data, to the degree that the AI system may no longer make an optimal decision.

In the end, an AI system works with the data on which it was trained, and it is trained to find patterns in that data, but the model itself is not necessarily curious; it remains constrained by the algorithm's design, the way the problem was posed, and the data it trains on.

Open Source Is Risky

Finally, the panel and I talked about AI software and development practices, including the risks of open source software and open source development platforms. I indicated that I am not a fan of open source, as it often leads to scientists using algorithms incorrectly, or relying on someone else’s implementation. Building an AI implementation from scratch, or from an open source development platform, gives data scientists more hands-on control over the quality of the algorithms, assumptions, and ultimately the AI model’s success in use.

I am honored to have been invited to participate in Compliance Week’s AI Innovation in Compliance conference. Catch me at my upcoming speaking events in the next month: The University of Edinburgh Credit Scoring and Credit Control XV Conference on August 30-September 1, and the Naval Air Systems Command Data Challenge Summit.

In between speaking gigs I’m leading FICO’s 100-strong analytics and AI development team, and commenting on Twitter @ScottZoldi. Follow me, thanks!

The post AI – What Chief Compliance Officers Care About appeared first on FICO.

          BANG BANG: Dave McDougall        
[BANG BANG is our week-long look back at 20!!, or "Twenty-bang-bang," or 2011, with contributions from all over aiming to cover all sorts of enthusiasms from film to music to words and beyond.]

Selected 2011 discoveries, briefly noted and across various media by Dave McDougall.


Homeland —— the characters on this show run deep; their history and demons are as much a driver as the twists of plot. Which certainly helps Claire Danes and Mandy Patinkin and Damian Lewis and Morena Baccarin act their asses off. Allegiances don't shift as much as they are gradually revealed; even though the audience isn't only in the headspace of Danes' rebellious CIA agent, everything is filtered through the line between the watchers and the suspects, and the further into each world we're given access, the more complicated the line between terrorist and hero. This isn't a war of ideas as much as a war between wounded people who've sided with ideas, and those wounds are what drive both the terrorists and those trying to stop them. This week's showstopping season finale toyed with heavy political and personal dénouement and teased an even greater moral complexity to come. If there's a better show on television right now, I'd like to see it. 

The Color Wheel (Alex Ross Perry, 2011) —— A masterpiece, a perfect screwball comedy, and a vicious, misanthropic, prickly little thing. What Ignatiy said, and then some.

And two other filmic masterpieces-to-be-named-later that also tackle communication (and shared histories) between men and women, on which I'll have more to say in the Mubi year-end roundup.


Governments toppled, not by social media but by people going to the streets to battle for their due. But the dynamics of open source protest and new media communication flows were a big part of why this was the year that kicked off an #ArabSpring, an indignado movement, a global coalition of #Occupy protests. It's not just coordination of protests but the ability for knowledge flows to reveal the silent political preferences of a people, and to rally supporters to the cause. None of these movements were created by the emergence of social media -- all grew out of previous organization by activists on the ground, over years and decades -- but it's hard to deny that these movements could only coalesce through communication, and that new forms of one-to-many communication smooth the friction of reaching out to wide audiences. 


As the 2008 financial crisis has shifted to become a crisis of solvency and liquidity in the Eurozone, the economic intelligence of the left-ish political blogotwittersphere rises almost as fast as events shift; but the key insight is that, unlike the people-powered movements and revolutions mentioned above, the fate of all of our economic lives still hangs in the balance of deals to be cut in back rooms by power brokers. Which, as those same movements will attest, is the opposite of democracy. If the revolutions of Egypt or Libya or Tunisia (or Syria or Bahrain or Yemen, if you're looking for revolutions-in-the-making) were best revealed by the participants themselves in 140 characters (or 140-character updates, compiled), then the stories of our economic dilemmas have been best told by those savvy enough to get to the bottom of capital flows and reveal these inner workings via blogs, articles, and interviews, whose links were embedded in 140-character updates themselves. Information, in all its forms -- pictures, videos, charts, analysis, stories from the front lines -- moves and flickers and flows just the way frames do in the cinema. For me, these were a few of the sources that made the leap to essential in 2011, from the MENA uprisings to the Econopocalypse and the social movements pushing back:


Among all the books and blogs and analysis, an epic cornerstone of how to even begin to think of how we got here — David Graeber's Debt: The First 5000 Years. 

David McDougall is a writer, filmmaker, and media strategist based in London and Los Angeles. He's got blogs and films and words in various places, some of them on the internet. He twitters here.

          Seeed’s RePhone modular phone hits Kickstarter goal in two days        

Shenzhen, China-based Seeed has nailed its Kickstarter goal for its RePhone open source modular smartphone in just two days. The smartphone kit consists of 11 individual modules, which contain various parts of the phone, such as the display, GPS radio, and cellular radio. The company started the RePhone Kickstarter campaign on September 22 and reached its […]

          Is the Vert.x episode spotlighting an open source weakness?        
With all my Sun years advocating open source and my following closely of the Hudson/Jenkins drama from within Oracle some two years ago, I’ve been tracking the recent vert.x issue with quite some detachment (I’m no longer at Oracle and I’m not involved in any way in this technology) but also with a lot of … Continue reading "Is the Vert.x episode spotlighting an open source weakness?"
          How to write a standalone program?        
Several years ago I wrote a program to run on Windows machines. It needed to run directly from a CD, so no installer could be used. At the time the main solutions were two: Visual Basic or Director. I mainly like to have full control of my programs, so I preferred Visual Basic.

A few months ago a new customer asked me to produce a CD to go with a book. Which development tool should I use this time? For the last few years I have mostly been developing web applications using PHP and JSP on Linux, so which tool could I use for this? I took a look at the old Visual Basic platform first. The old Visual Basic is dead, having evolved into VB.NET, and I had never used Director before, so why use it now? So I gave VB.NET a look. It seems very interesting and powerful... but there is a LITTLE problem: to run the program, the virtual machine needs to be installed on the computer, and that is not acceptable for this type of application.

So, no solution? Of course there is. For several months I have been looking into Mono, an open source reimplementation of the .NET virtual machine. With Mono I can create a standalone program using the mkbundle utility shipped with it. The program can run wherever the Mono runtime is supported (almost everywhere) and can be turned into a standalone executable for the target you need - in my case, only Windows. mkbundle generates a C program that embeds all the libraries needed to start the Mono runtime, with no need to install anything on the computer.

I hear you asking: and the data used by the application? In the past you could store the data inside the program as a resource, or use an MDB file. Obviously that solution is not portable, so I ran some tests using SQLite to manage the data instead. This solution is highly portable because SQLite is available everywhere. But does it work? Sure - I will post some screenshots taken from Windows and Linux.
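The SQLite idea above is language-neutral. Here is a minimal sketch of such a single-file data layer - written in Python, whose standard library bundles SQLite, rather than the post's VB.NET on Mono, and with an invented table purely for illustration:

```python
import sqlite3

# A single-file SQLite database travels with the application, so the same
# data layer works on Windows or Linux (and can be opened read-only from
# a CD-ROM image). Here we use an in-memory database for the demo.
conn = sqlite3.connect(":memory:")  # in practice: a file path like "data.db"
conn.execute("CREATE TABLE chapters (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO chapters (title) VALUES (?)",
                 [("Introduction",), ("Getting Started",)])
conn.commit()

# Read the rows back in insertion order.
titles = [row[0] for row in
          conn.execute("SELECT title FROM chapters ORDER BY id")]
print(titles)  # → ['Introduction', 'Getting Started']
conn.close()
```

On the CD build, the same pattern would open a read-only database file shipped alongside the mkbundle-generated executable instead of an in-memory database.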
          DNN Hangout - August 2015 - Understanding the DNN Platform Source Code        

Understanding the DNN Platform Source Code

Every now and again, we’ll throw in a special episode of DNN Hangout, where we talk about something that we believe will be extra beneficial to the community. Sometimes this will have one or more special guests attached to it, sometimes it won’t. In this special episode, Joe Brinkman and I walk through the source code of the DNN platform. It’s grown so much over the years that we realize just how intimidating it can be to a newcomer. After watching this hangout, that intimidation should be at a minimum.

Want to Be on the Show?

We are always looking for new people to be featured on the show. You don’t have to be an “expert” in anything. Just be prepared to chat with us about anything interesting about DNN, no matter how big or small.

Please let me know in the comments or via email if you’d like to be on DNN Hangout.

Next Episode

In our next hangout, we’ll be speaking with a return guest. I heard about and unfortunately had to miss his session at last year’s DNNCon. He showed everyone his tips and tricks to putting together a socially engaging DNN site, and he used DNN-Connect as an example. Phillip Becker will be joining us and filling us in on the things he learned.

Join the Hangout

Site of the Month

We’re always looking for sites to feature in our Site of the Month segment. Please let me know if you’d like for me to do a quick segment on one of your sites.

Understanding the DNN Platform Source Code

Show Notes

          Know Your ABB’s: Always Be Branding!        

A filtered view of yourself

When people look at you, they see your brand, whether it's intentional or not.  It’s a lot like looking at things through a camera lens.  It’s a similar thing when you look at any successful business.  Whether you see its sign on the side of the road, in an ad on a website or billboard, or you visit its booth at a conference – you know where you are and who you’re interacting with.  When you walk into any In-N-Out location or visit their website, you can feel their brand all around you.  You’ll never look at one and confuse it with Burger King or McDonald’s.  When you think about your personal brand, you need to attempt to achieve this as best you can.  Be the CMO of your brand!

What You Wear

There are a few notable people that we’d all recognize who were masters of their personal brand.  It wasn’t always because they were trying to do anything related to branding – sometimes it was purely functional, and personal branding was the happy accident.  Steve Jobs is probably the most well-known, with his black turtlenecks, loose-fitting jeans, and New Balance tennis shoes.  Mark Zuckerberg wears the same grey shirt nearly every day.  Johnny Cash became known as “the man in black” for the very reason that the phrase suggests.  And Dean Kamen (he invented the Segway) was always seen wearing jeans with his denim shirts.  These people embodied what was to become their personal brand, every day.

When I took over as president of a little known user group in Orlando, it had maybe 20 members, 9 of which you might see together at any given moment.  I took this responsibility seriously.  I wanted to grow the user group as fast and as large as I could.  I quickly learned and employed numerous techniques to do this.  One of the primary things I did was go to any other user group event in my area, and when I did, I always wore the same shirt, jeans, and matching shoes.  The shirt had the user group logo on it – large and pronounced.  Whenever you saw me at one of these events, you knew what I was there to represent.  You were aware of the user group I was leading. 

This was only one of many things I did, but it worked incredibly well.  We ended up having hundreds of members on our mailing list, a live-streamed monthly meeting, advertising in the other user groups, regular members that would drive as far as 3 hours to a weeknight meeting, and a full board before I handed the reins to the next leader.

How You Look and How You Sound

This is more related to the ways you’re represented when you’re not directly representing yourself.  This includes your personal website, social media profiles/bios, and so on.  Every surface – laptop, tablet, slide background, and even the Moleskine notebook you might be carrying – is an opportunity to showcase your personal brand to the people around you.  Don’t waste the opportunity.

I don’t represent the user group any longer, but I do represent an e-commerce company right now.  When you see me speak at events or visit clients, there’s no mistaking the brand I am projecting to you, and you won’t forget me.

Will Strohl speaking at an open source conference

If you have a personal website, you should make every effort for it to represent who you are.  It should also use photography, content, and colors that reinforce the personal brand you’re trying to build.  Your emails should all reflect the same branding you’ve settled on, and so on.

When you decide to have an avatar, follow the LinkedIn best practices, and make sure it’s the same avatar on all of your public profiles.  Also do the same with profile backgrounds and bios, if possible.  Keep them for long periods of time.  Every time you change your bio or avatar, people end up having to spend time getting to know your brand again.

If you have a laptop or tablet that will be sitting in front of potential clients or opportunities, use a service like SkinIt to brand it.  It’s fun, it’s cheap, and it’s super easy.  (Also, it comes off easily.)  If you don’t do this, you’re wasting an incredibly effective way to spread and reinforce your brand.  Like trailers in front of a movie, this is a simple method to connect with your captive audience.

You’ll undoubtedly meet people that could help spread your brand for you.  You’ll need to introduce yourself.  Like a business with their tag line, you should have an easy-to-remember phrase or short sentence that describes who you are and what you do.  If you don’t, recommending you will be much more difficult.  There should be nothing cocky about your introduction, but you really should think of something memorable – but please leave out ninja and guru (even expert is pushing it).  When you say your introduction, do it with confidence and always keep eye contact. 

Another part of your brand often goes overlooked: your brand should embody and project positivity.  It should help connect others.  Sure, a negative or controversial post will get a lot of likes, but it’s simply catering to the lowest common denominator of human nature.  In the end, it won’t inspire people, and it won’t reflect the best parts of you.  Your social currency is very valuable.  Every word you say costs you.  Spend it wisely.

Just to sum this up, I’m not here to suggest that you should wear the same clothing every day.  However, you should have a consistent style in what you wear, what you say, and what you do.  The whole point of this is that when someone needs to recommend someone like you – they remember you for the right reasons and recommend you before anyone else!

          DNN Hangout - July 2015 - How Many Ways Can You Extend DNN?        

DNN Hangout July 2015: Mitchel Sellers presents all of the ways you can extend DNN

This month we had the pleasure to speak with Mitchel Sellers.  He’s a long-time DNN supporter and advocate.  Aside from being a DNN and Microsoft MVP, he’s literally written the book on extending DNN through modules.  He’s regularly found at community events, and pretty much everywhere online, attempting to help people with their C# and DNN needs. 

Want to Be on the Show?

We are always looking for new people to be featured on the show.  Please let me know in the comments or via email if you’d like to be on the DNN Hangout.

Next Episode

Next month we’ll be speaking to the CEO of Aricie, Jean Sylvain Boige.  He’s going to be walking us through his open source Portal Keeper module, and how it will help you make your DNN site administration easier.

Join the Hangout

Site of the Month

MRA was our site of the month this episode, courtesy of our speaker, Mitchel Sellers.  MRA is a non-profit organization that has been around for over 100 years, and helps companies with their HR needs. 

Mitchel was kind enough to walk us through their site, where he pointed out that the upgrade helped them get to a responsive site in order to increase membership and membership activity.  Mitchel and his team also paid a lot of attention to making sure that document management and search worked well, using Ventrian News Articles and DNN Sharp Search Boost.

Mitchel Sellers: How Many Ways Can You Extend DNN?

Show Notes

The resource listing below is provided for your reference and convenience.  Listing a resource does not imply an endorsement or guarantee of any kind. 

Articles, Videos, Blogs

Extension Updates

          Microsoft announces open source Coco Framework to speed up enterprise blockchain adoption        
Microsoft has today announced Coco Framework, a means of simplifying the adoption of blockchain protocol technology. The aim is to speed up the adoption of blockchain-based systems in the enterprise, whilst simultaneously increasing privacy. Coco -- short for Confidential Consortium -- will be available in 2018, and Microsoft will be making the technology open source to help increase uptake. Intel is working with Microsoft as a hardware and software partner, and Coco Framework features Intel Software Guard Extensions (Intel SGX) to improve transaction speed at scale. The framework is compatible with Ethereum, but Microsoft envisions it being used across financial… [Continue Reading]
          Search for a job in the UK        

Being a recruiter, the question I most commonly face is: how do you search for a job in the UK? And my answer to that is always... huh? Recently the UK has opened its borders to the geeks and has virtually rolled out a red carpet for IT professionals. The move is being seen as a major way to boost the country's inflow of high-tech labour.

There are many countries yet to follow suit, but the UK has been the first to implement this kind of policy since Uncle Sam did back in the 90's. Over time the software industry has gone through some radical changes, right from algorithms to outsourcing. People might argue that outsourcing is bad, but frankly speaking it is helping the economies involved more than ever. As per one report, average unemployment in the US has been at a record low; I guess that is a suitable explanation for the above.

Nowadays skills like SAP and open source technologies are in huge demand. SAP is very hot these days, and of course traditional programming languages like C++ and Java are also on the wanted list. SAP is in huge demand in the United States market, and likewise in the UK. As the software industry booms and cash flows like never before, companies have started investing more in ERP/CRM applications, and some even provide free training to their employees. If I were in a candidate's shoes, I would go for SAP or another ERP/CRM package.

Now comes the question: how do you search for a job in the UK? Well, it isn't as easy as it seems. You can either post your resume on a job board or look for headhunters. Posting your resume on a job board is the easiest option, but it may not be the most effective one (correct me if I am wrong); the best way is through traditional means, a.k.a. your network. Job boards do provide global reach for your resume, but the key is posting it on a niche or industry-specific job board - that will greatly increase your chances of being noticed.


This weblog is sponsored by iitjobs.
          Thoughts for ArtServe Interview        
Computer interface in a shoe box.

Today I had an interview with Jennifer Baum, a writer for ArtServe Michigan. They're doing an article on the Kalamazoo Makers Guild meetup group. In preparation for our discussion Jennifer was kind enough to supply me with some topics we might discuss and I jotted down some notes while I thought about what I would say. Here are those notes and roughly what I said.
About Kalamazoo Makers Guild...

The Kalamazoo Makers Guild is a group of people interested in DIY technology, science and design. We more or less pattern ourselves after the Homebrew Computer Club that founded Silicon Valley. Like them, our members tend to have some background in a related profession, but that's by no means a prerequisite. This group is about the things we do for fun, because they interest us, and anybody can be interested in making stuff. We meet every couple of months, report on the status of our various projects and sometimes listen to a presentation or hold an ad hoc roundtable on a topic that catches our interest. "Probably the most useful aspect of the group is that you start to feel accountable to the other members of the group and you're motivated to make progress on your project before the next meeting."

How did it get started...

When I gave up my web design business I ended the professional graphic design association I'd formed on Meetup, and that left me room on the service to start another group. MAKE magazine had really caught my attention. I did a few projects from the magazine and thought it would be fun and helpful to know other people who were working on the same kinds of things. The group didn't get going, though, until about 8 months ago, when Al Hollaway posted to an online forum about RepRap 3D printers at the same time I was building one. He wanted to meet and talk about RepRap. I told him about my Meetup group. We joined forces and here we are. Meetup is a great web site because it's a web service that's all about meeting people nearby, in person, to share a common interest.

About membership and kinds of projects ...

The group is growing steadily now. We have twenty-something members and we're seeing membership tick up at an increasing rate month to month. We have a high school student who is working on designing assistive devices for the blind using sonic rangefinders, one member who last meeting showed off a prototype of a computer interface built into a shoe box, and another member on the verge of completing a working DIY Segway (the self-balancing scooter) made using a pair of battery-powered drills for motors. Al should be done with his RepRap 3D printer and I've just finished my 2nd. At least two other members are in some stage of building their own 3D printers. I'm building both a laser etcher and a 3D scanner right now, and I'm excited to start playing with the products of a couple of Kickstarter projects I've backed. There are a few of us about to start building CNC milling machines, and there's been a lot of excitement in the group around the brand new, hard-to-get Raspberry Pi (a $25 computer). Almost all the members so far have dabbled in a bit of Arduino hacking. One member is designing a flame thrower for Burning Man. Another is making a calibration device for voltage meters. So, there's a range of things going on.

Where do I see this headed....

Our approach to this group has been to learn from the mistakes other groups have made. All of the other groups I've seen in Kalamazoo start out with facilities and try to bring in members to support and justify them; getting people to work on actual projects that interest them is something that comes later down the road. It's the "if you build it, they will come" approach. Those groups quickly get into trouble managing the building and the funding, and they go away. We're coming at it from the opposite direction: we're gathering together a community of makers first, people who are already doing things on their own. Once we reach a tipping point, then we'll worry about the next step, like getting a hackerspace put together. That kind of bottom-up approach is, I think, much more sustainable and durable, and it fits in with our modern culture (particularly in the maker subculture). It was good enough for Homebrew, so it's good enough for us.

About impact...

Silicon Valley came out of a group like this, so the potential is there for us to have a big impact on the community. Being a college town, Kalamazoo gives us access to a lot of smart people, and the city has a strong progressive, energetic, entrepreneurial vibe going on. I think what's more likely, though, is that we will have an impact in aggregate with all the other makers--groups and individuals--around the globe.

"Makers aren't just hacking new technologies, we're hacking a new economy. We're trying to figure out how to live in a world without scarcity."

The unsung official slogan of the RepRap project is, "wealth without money."

I don't know that another story like Apple is likely to happen again. Steve Jobs relied on a very traditional, very closed model for his business, as did most of the people of that era who went on to make a name for themselves in technology. The ethos of that time was centered around coming up with a big idea and capitalizing on that idea to the exclusion of the competition. It's interesting that even then this view was at odds with that of his partner, Steve Wozniak, who was content to build computers in his garage and share what he learned with his friends at Homebrew. In this way Wozniak was much more like the modern maker/hacker and is probably one of this hobby's forefathers.

Makers/hackers today are all about openness and sharing -- not in a hippy, touchy-feely kind of way, but in a calculated way that weighs the costs and benefits of being open versus closed. The success of Linux and of the ever-increasing number of open source software, and now hardware, projects has proven that there's enormous power in being open. "We tend to think that's the way to change the world."

About the Maker Movement....

I know there are a lot of people who are keen to talk about the "maker movement," but I'm not so sure I would characterize it as a movement. If it is one, then it started in the '60s with people like my dad, who were ham radio enthusiasts and tinkered around with making their own radios and antennas. I think that what we're observing and calling a movement is really an artifact of reaching the steep part of Moore's Law. Ray Kurzweil is famous for talking about this phenomenon: the pace of advances in technology is itself accelerating, it's exponential, and it's moving so fast now that if you're not paying close attention, things seem to pop out of nowhere. For makers, technology has reached a point where Moore's Law has forced down prices and increased the availability of things that just a few years ago were far out of reach. We're just taking those things and running with them. In effect, we're just the people paying close attention.

About me...

I started college in the engineering program at WMU, but I couldn't hack it and dropped out. I went back to community college and got a degree in graphic design. In my professional life I've been paid to be a web designer, photographer, videographer, IT manager, technical document writer, photo lab manager, and artist, and I've even been paid to be a poet. For fun I do all those things and also play guitar, peck at a piano, watch physics and math lectures from the MIT OpenCourseWare web site, do exercises on Khan Academy, play board games and roleplaying games, and commit acts of crafting -- woodworking and model making. For work, I now teach at the Kalamazoo Institute of Arts. I've taught web design and digital illustration, and this fall I'll be teaching classes in 3D modeling and 3D printing with the RepRap 3D printer I have on loan there. I live near downtown Kalamazoo with my wife and many pets, including a 23-year-old African Grey parrot named KoKo.

Post interview notes...

I mentioned SoliDoodle, the fully assembled, $500 3D printer. The big hackerspace in Detroit is called i3detroit. Also, Chicago has Pumping Station: One. I'm on the forums for both and will be visiting each this summer. The presentation about 3D scanning we had was from Mike Spray of Laser Abilities. You can actually see the entire presentation on my YouTube channel. Thingiverse was the web site that I kept going on about where you can find 3D designs for printing.

Android handphones are becoming more and more popular worldwide and are serious competition for established phone vendors such as Nokia, Blackberry and iPhone.
But ask most Indonesians "What is Android?" and the majority will not know; those who do are mostly people who are geeky or keep up with technology.
This is because most Indonesians only recognize three phone brands: Blackberry, Nokia, and "other brands" :)

There are several things that have kept Android from being accepted (so far) by the Indonesian market, among others:

  • Most Android phones use touchscreen input, which is not very popular in Indonesia,
  • Android needs a very fast internet connection to be used to its full potential, while internet from Indonesian mobile operators is not very reliable,
  • And finally, the perception that Android is harder to operate and use than other phones such as Nokia or Blackberry.

What is Android

Android is an operating system used in smartphones and tablet PCs. It plays the same role as the Symbian operating system on Nokia devices, iOS on Apple devices, and BlackBerry OS.
Android is not tied to a single phone brand; well-known vendors already using Android include Samsung, Sony Ericsson, HTC, Motorola, and others.
Android was first developed by a company called Android Inc., which was acquired in 2005 by the internet giant Google. Android is built on a modified Linux kernel, and each release is code-named after a dessert.
Android's main advantages are that it is free and open source, which lets Android smartphones be sold more cheaply than a Blackberry or iPhone even when the (hardware) features Android offers are better.
Some of Android's main features include Wi-Fi hotspot, multi-touch, multitasking, GPS, accelerometers, Java support, support for many networks (GSM/EDGE, iDEN, CDMA, EV-DO, UMTS, Bluetooth, Wi-Fi, LTE & WiMAX), as well as the basic capabilities of a phone in general.

Android versions currently in circulation

Eclair (2.0 / 2.1)

An early Android version that began to be used by many smartphones. Eclair's main features were a complete overhaul of the structure and look of the user interface, and it was the first Android version to support the HTML5 format.

Froyo / Frozen Yogurt (2.2)

Android 2.2 was released with 20 new features, including speed improvements, Wi-Fi hotspot tethering, and support for Adobe Flash.

Gingerbread (2.3)

The main changes in version 2.3 include a UI update, improvements to the soft keyboard and copy/paste, power management, and support for Near Field Communication.

Honeycomb (3.0, 3.1 and 3.2)

An Android version aimed at gadgets and devices with large screens, such as tablet PCs. New Honeycomb features include support for multicore processors and hardware-accelerated graphics.
The first tablet to run Honeycomb was the Motorola Xoom, released in February 2011.
Google decided to temporarily close access to the Honeycomb source code, to prevent phone makers from installing Honeycomb on smartphones.
This was because, with earlier Android versions, many companies had put Android into tablet PCs, which gave users a bad experience and left Android with a poor image.

Ice Cream Sandwich (4.0)

Android 4.0 Ice Cream Sandwich was announced on 10 May 2011 at the Google I/O Developer Conference (San Francisco) and officially released on 19 October 2011 in Hong Kong. Ice Cream Sandwich can be used on both smartphones and tablets. The main features added in Android 4.0 are Face Unlock, Android Beam, a major overhaul of the user interface, and a native screen resolution of 720p (high definition).

Android Market Share

In 2012 around 630 million smartphones are expected to be sold worldwide, an estimated 49.2% of which will run the Android OS.
Google's current data records 500,000 Android phones being activated every day around the world, a figure that keeps growing by 4.4% per week.
Platform                     | API Level | Distribution
Android 3.x (Honeycomb)      | 11        | 0.9%
Android 2.3.x (Gingerbread)  | 9-10      | 18.6%
Android 2.2 (Froyo)          | 8         | 59.4%
Android 2.1 (Eclair)         | 5-7       | 17.5%
Android 1.6 (Donut)          | 4         | 2.2%
Android 1.5 (Cupcake)        | 3         | 1.4%
Distribution of Android versions in use worldwide as of June 2011

Android Applications

Android has a large developer base building applications, which makes Android's functionality broader and more diverse. Android Market, managed by Google, is the place where Android applications, both free and paid, are downloaded.
Although it is not recommended, Android's performance and features can be further enhanced by rooting the device. Features such as wireless tethering, wired tethering, uninstalling crapware, overclocking the processor, and installing custom flash ROMs can be used on a rooted Android.

Related Articles

  • Important events in the world of technology in 2011 (Kaleidoscope)
  • Chrome for Android beta released on the Android Market
  • A Chinese phone electrocutes a young man to death in India
  • The best free Android games | Download link
  • Tips if your phone gets wet
          How to block porn sites        

Blocking porn sites is not difficult. In general there are two (2) techniques for blocking porn sites, namely:
• Installing a filter on the user's PC.
• Installing a filter on the server connected to the Internet.

The first technique, installing a filter on the user's PC, is usually done by parents on the home PC so their children cannot surf to unwanted sites. A fairly complete list of filters and kid-friendly browsers for home use can be found under → parent's guide → browsers for kids and → parent's guide → blocking and filtering.

Some fairly well-known filters include
Net Nanny,
I Way Patrol,

Of course, this kind of filtering is really only practical for parents at home whose children do not yet know much about the Internet. For schools with Internet facilities, the techniques above are difficult to apply. The most efficient way to block porn sites there is to install a filter on the proxy server used at an Internet café (WARNET) or an office, where the Internet is accessed jointly from a Local Area Network (LAN). The second (2nd) technique, installing a porn-site filter on the server, is not difficult either. Some commercial content-filtering packages include:

What may actually be hardest is obtaining a complete list of the sites that need to be blocked; the filter needs that list to know which sites to deny. Lists of hundreds of thousands of sites to block can be obtained for free, among other places at:

For schools or offices, the open source (Linux) alternative can be attractive because no software piracy is involved. On Linux, one of the most popular proxy packages is Squid, which can usually be installed together with the Linux installation itself (Mandrake and RedHat alike).

Setting up filtering in Squid is not difficult: we just add a few lines to the file /etc/squid/squid.conf. For example:

acl sex url_regex "/etc/squid/sex"
acl notsex url_regex "/etc/squid/notsex"
http_access allow notsex
http_access deny sex

create the file /etc/squid/sex

example contents of /etc/squid/notsex:

example contents of /etc/squid/sex:

Blacklists obtained from squidGuard and the like can easily be added to the lists above. Shown below is the Access Control List (ACL) setup in /etc/squid/squid.conf that I built on my own home server:

acl sex url_regex "
acl notsex url_regex "
acl aggressive url_regex "
acl drugs url_regex "
acl porn url_regex "
acl ads url_regex "
acl audio-video url_regex "
acl gambling url_regex "
acl warez url_regex "
acl adult url_regex "
acl dom_adult dstdomain "
acl dom_aggressive dstdomain "
acl dom_drugs dstdomain "
acl dom_porn dstdomain "
acl dom_violence dstdomain "
acl dom_ads dstdomain "
acl dom_audio-video dstdomain "
acl dom_gambling dstdomain "
acl dom_proxy dstdomain "
acl dom_warez dstdomain "

http_access deny sex
http_access deny adult
http_access deny aggressive
http_access deny drugs
http_access deny porn
http_access deny ads
http_access deny audio-video
http_access deny gambling
http_access deny warez
http_access deny dom_adult
http_access deny dom_aggressive
http_access deny dom_drugs
http_access deny dom_porn
http_access deny dom_violence
http_access deny dom_ads
http_access deny dom_audio-video
http_access deny dom_gambling
http_access deny dom_proxy
http_access deny dom_warez

With the setup above, I block not only porn sites but also sites related to drugs, violence, gambling, and so on. All the data is in the blacklist files from
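Pieced together, a minimal squid.conf fragment for a single blacklist category might look like the sketch below. The file paths are hypothetical stand-ins for wherever the downloaded blacklist files are unpacked, and the final allow rule is only an example default:

```conf
# Hypothetical paths: adjust to where your blacklist files live.
# Each file holds one entry per line (regexes for url_regex,
# domain names for dstdomain).
acl porn url_regex "/etc/squid/blacklists/porn/urls"
acl dom_porn dstdomain "/etc/squid/blacklists/porn/domains"

http_access deny porn
http_access deny dom_porn
http_access allow all
```

The same acl/http_access pair is simply repeated for each additional category (drugs, gambling, warez, and so on), as in the longer listing above.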

Blocking Sites in Mikrotik via Winbox

1. Open Winbox on the desktop.

2. Click the ( … ) button or enter the Mikrotik address in the Connect To: field.

3. A window like the one below will appear; choose one of the entries.

4. Then enter the Mikrotik username and password.

5. Then click the Connect button.

6. The Mikrotik window will open, as in the image below.

7. To block a site, open the IP menu and choose Web Proxy.

8. Then configure the Web Proxy by clicking the Settings button.

9. A window like the one below will appear.
Configure the Web Proxy as shown below, then click OK.

10. Now we create the settings for the website to be blocked. Click the ( + ) button.
A window will appear; configure it as shown in the image below.

11. Then click OK, and an entry will appear in the Web Proxy window.

12. Test the settings by typing the word "porno" into Google.

13. Press Enter; if a page like the one below appears, your site block has worked.
Posted by Diandra Ariesalva Blogs at 10:39, 0 comments

Cybercrime is a term that refers to criminal activity in which a computer or computer network is the tool, the target, or the scene of a crime. Cybercrime includes online auction fraud, check forgery, credit card fraud (carding), confidence fraud, identity fraud, child pornography, and more.

Although cybercrime generally refers to criminal activity with a computer or computer network as its main element, the term is also used for traditional criminal activity in which a computer or computer network is used to facilitate or enable the crime.

Examples of cybercrime in which the computer is the tool are spamming and offenses against copyright and intellectual property. Examples in which the computer is the target are illegal access (circumventing access controls), malware, and DoS attacks. An example in which the computer is the scene of the crime is identity fraud. Examples of traditional crime with the computer as a tool are child pornography and online gambling.

During the 2004 election there was a case that caused quite a stir and hit the KPU, the institution running the election, hard. On 17 April 2004 the KPU website was defaced: the names of the parties contesting the election were changed to joke names, although the vote-count data was not altered. The site was breached by a 25-year-old named Dani Firmansyah, an International Relations student at Universitas Muhammadiyah Yogyakarta.

The police initially had difficulty tracing the perpetrator, especially since cases like this were new to them. Early in the investigation they were briefly misled because the perpetrator had routed his internet protocol (IP) address through Thailand, but through persistent effort, and by cooperating with parties such as the Indonesian Internet Service Providers Association (APJII) and the internet service providers (ISPs) involved, the police managed to arrest the suspect.

It later emerged that the suspect's motive was to show that the KPU's performance, particularly in information technology, was very poor. That is no justification, however, and the perpetrator was prosecuted under the applicable law.

          3D ANIMATION        

Creating 3D with Blender 3D

Pusatgratis – For all PG visitors interested in the world of 3D modeling and animation: Blender 3D is free software you can use for 3D modeling, texturing, lighting, animation, and video post-processing. Free and open source, Blender 3D is the most popular open source 3D package in the world, and its features rival expensive 3D software such as 3ds Max, Maya, and XSI.
With Blender 3D you can create animated 3D objects, interactive 3D media, professional 3D models and shapes, game objects, and many other 3D creations.
Blender 3D offers the following main features:
1. A user-friendly, well-organized interface.
2. A complete toolset for creating 3D objects, covering modeling, UV mapping, texturing, rigging, skinning, animation, particle and other simulations, scripting, rendering, compositing, post-production, and game creation.
3. Cross-platform, with a uniform GUI supporting all platforms: Blender 3D runs on all versions of Windows, plus Linux, OS X, FreeBSD, Irix, Sun, and other operating systems.
4. High-quality 3D architecture that allows faster, more efficient work.
5. Active support through forums and the community.
6. Small file size.
7. And, of course, it is free.
Below are some screenshots and 3D animations produced with Blender 3D.
[Images: designs produced with Blender 3D]
3D designs from the free, open source Blender 3D are demonstrably no worse than those from expensive 3D software. :)
Download Blender 3D from the official Blender 3D site | License: free, open source | Size: 13–24 MB (depending on your operating system) | Supported: all versions of Windows, Linux, OS X, FreeBSD, Irix, Sun, and several other operating systems.
Happy designing!

Stop Dreaming, Start Action (Dare to try and to create)

The words "stop dreaming, start action" have stuck in my mind. Many of us dream or fantasize about getting something; our ambitions can be sky-high and our desires as tall as mountains. But do we get everything we hope for? Of course not, because action is still required, and that is what makes it hard to obtain what we want.
Remember: what do we ever get in this world from dreaming alone? There is probably no one who became rich or successful just by sitting around, lying about, or idling. If there are any, they are few, and usually rich from an inheritance, born to wealthy parents, or successful by sheer luck. Those are the dreams of soap-opera characters.
I have received many emails from Mr. Joko Susilo offering advice, motivation, and lessons about success, and they have shaped my thinking. Perhaps it simply fits the field I am in, computers, especially the internet; I have taken a lot from his emails. One lesson was that a blog needs a slogan, so the slogan of my blog is about making Flash animations, which pushes me to be more creative in my work. He also sent me an email about six ways to grow creativity. I am very grateful to Mr. Joko Susilo; even though my work is not great, it means something to me, because a wish I have had since childhood has now come true.
The things I create are just an outlet for a hobby, but at least I am no longer merely dreaming of doing them, even though I am not good at drawing. Perhaps all of you can do the same and realize your dreams, if only for a moment. Life is always full of sacrifice and struggle, and from that effort we earn its rewards.

          Zeebe brings open source order to microservice orchestration        

Looking for an open source option designed for horizontal scalability and fault tolerance to manage your microservices? Camunda has just launched Zeebe, a big data system orchestrator, to help you keep track of everything and anything.

The post Zeebe brings open source order to microservice orchestration appeared first on JAXenter.

          5 Lightweight Alternatives to Popular Applications        
Okay, so if you have been reading my blog regularly, you will have noticed that I love free, open source software, which in many cases is an alternative to popular, expensive and, worst of all, closed-source software. Here is my list of 5 Lightweight Alternatives to Popular Applications (with a brief introduction and explanation of […]
          openjpeg2 2.2.0-1 x86_64        
An open source JPEG 2000 codec, version 2.2.0
          openjpeg2 2.2.0-1 i686        
An open source JPEG 2000 codec, version 2.2.0
          Live@edu, a flexible platform for demand-driven learning, deliberately introduced in phases at the Amersfoortse Berg        
At this combined school for MAVO, HAVO and VWO, the introduction of an electronic learning environment and individual email accounts for pupils had been on the agenda for a couple of years. After a first experience with an open source solution, Live@edu was chosen in the middle of last year. This Microsoft platform, free for schools, is integrated into the new electronic learning environment (It's Learning). "The whole process gained momentum through the need for pupil email accounts, in order to communicate with them properly," explains Henk van den Bos, deputy principal at the Amersfoortse Berg with Education and ICT in his portfolio.
          Fake Artist Portfolio Generator Questions The Open Source Web [Video]        
“Pro-Folio” is a portfolio website built from fictional identities of artists, created by an algorithm using the open source web.
          Dehullu gives facility managers central, visual information on an interactive floor plan        
Dehullu Sign Systems in Ochten is a leading company in exhibition and interior construction and the market leader in signage. In the 35 years of its existence, the company has grown into a holding with three business units and about 80 employees. It has customers at home and abroad, in business, government, healthcare, education, museums and theatres. Dehullu recently introduced a solution that lets facility managers and building managers keep signage up to date and make changes themselves. Maarten van Orden, director of Dehullu, gives an idea of the savings: "Compared with the traditional way, about 60% of the daily time spent on signage is saved, and on building management as well. Because besides keeping the signage current, Stipt! has other advantages: the manager can also bring facility tasks such as telephone management, furniture, plants, lamps and desks into the Stipt! program." "Visio is largely an open source environment, which we as programmers can work with just fine. Moreover, Visio itself already has many of the standard features our customer wants. So without too much development effort we were able to build a well-maintainable custom solution," says Constant Sciarone, managing director at DotOffice.
          10gen expands MongoDB with storage service        
Open source database provider 10gen is expanding into storage services, offering a hosted backup service for its flagship MongoDB data store.
          The death of Ruby? Developers should learn these languages instead        
Ruby's popularity has dropped in the workplace and in coding bootcamps, while leaders question the open source programming language's staying power.
          Open Source Festival        
Client: Open Source Festival GmbH. Project: open-air festival. Location: Düsseldorf. Period: 2007, 2008. Size: up to 5,000 guests. The Open Source Festival took place in 2007 and 2008 at the Strandbad Lörick; these were the first events of their kind in the greater Düsseldorf area. Two stages were built, and an indoor location was technically […] by us.
          Downloading safely from eMule, avoiding fakes and spy servers!        
eMule, who doesn't know it! The open source software for downloading (completely illegally: software piracy) music, films, programs, video games and much more. Quite often, though, the downloaded file is not what we wanted but a fake, so here is a guide to downloading safely from eMule: open eMule and click the orange gear labeled 'Options'; click the 'Server' tab; in the field 'Remove inactive servers […]
          Facebook open sources its artificial intelligence        
Facebook open sources its artificial intelligence

Because everyone wants Facebook’s intelligence

Social notworking site Facebook has announced it will open source its artificial intelligence (AI) hardware.

Now before you rush to see if you can wire up the volcano from your secret base to power your new open saucy artificial intelligence super-computer to take over the world, it is important we remind you that this is the sort of intelligence which looks for nipples in photographs.

Needless to say, the hardware is a high-end server called Big Sur, which appears to be so intelligent it can't spell "Sir". It uses Nvidia technology, which is legendary for its intelligence, and it can run AI algorithms.

Facebook uses it for facial recognition, news feed curation, and personal assistant features at Facebook.

However, for all the sarcasm, it all makes perfect sense. Deep learning and AI overall are still relatively small fields, and that's the main reason why these companies are open sourcing their hardware: that way they can get lots of people to work on it.

Facebook, Google, and Microsoft have all open sourced their AI doings in the hope that some smart types will work on them. Facebook has shared its AI hardware with the Open Compute Project, which shares designs for computer infrastructure.

          LG Vertretungsplan App for Android        

Disclaimer: All data and names appearing in these screenshots are fictitious. Any resemblance to real data or real persons, living or dead, is purely coincidental. They do not include any data shared by the online substitution plan and therefore comply with its terms of use.

The last few weeks, I’ve been developing an app for the substitution plan of my school (hence the name: “Vertretungsplan” is German for substitution plan).

Previous Situation

Before the development of the Android app, students of my school were able to view the substitution plan…

  • in the school building
  • online (currently limited to students of the MSS)

The online substitution plan was introduced this school year, allowing students to view it “comfortably” after school, or right before at about 7:30am.

Small viewport

Albeit a nice feature, it also has its problems:

  • Ugly user interface
  • Bad readability, thanks to the poorly chosen contrast of text and background
  • No mobile interface, and a bad user experience for mobile users (too much scrolling, hardly reachable control elements)

The Idea

Basically, the idea was to develop an app that provides the user with only the data that is relevant to him. Currently, this means he is shown only the courses for his class level.

Android app

To keep things clear, the user would only see the most relevant data at a glance, further information being available at a single click.

The Problem

In theory, receiving the necessary data should have been the easiest part: I would contact the responsible AG ("Arbeitsgemeinschaft", working group) and ask them either to give me access to the source code and let me implement a machine-readable API, or to implement the API themselves. I didn't even get a response to my request from the responsible developers.

So as it turns out, it wasn’t that easy. Cooperation was pretty much non-existent and communication was sparse. And having some pride myself, I thought I could do this on my own, without having their support.

The Solution

Thankfully, I am more or less experienced in web scraping, as a result of past jobs involving data gathering from multiple websites. This was no different. The idea is that you would read and use the website as if you were a user. So in this case, I worked with the jsoup library and CSS selectors. Jsoup describes itself as a “convenient API for extracting and manipulating data”. Of course, users don’t parse websites by using CSS selectors, but it’s a good idea to begin with.


The steps for gathering the plan data are:

  1. Get the HTML code for the plan we want to get (/heute for today and /morgen for tomorrow)
  2. Parse the code using jsoup
  3. Get all table rows using a CSS selector: #vertretungsplan tr
  4. For each row, get all td elements, parse the data, and feed it to a list

Having parsed all the data, we could display it as intended. One more problem I ran into was authentication. The online substitution plan uses PHP session ids and cookies for authentication. The authentication flow would be as follows: enter your data into the login form, POST it to the index page to have your session id authenticated, and from there on, get the data you want from /heute and /morgen. But this meant an extra request for authentication each time we needed to log in. And not being authenticated still required me to parse the page to check for errors, since this "beast of a product" simply makes use of no HTTP feature whatsoever. The HTML code is malformed as well: IDs, which ought to be unique, are reused multiple times, and there are typos too.

As I later discovered, it does not matter which page you send the post request to. So to save traffic, authentication information is sent with each request to either /heute or /morgen. At first glance, this might be considered a security issue, but the requests are performed using TLS and no sensitive data is connected to the accounts.
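As a rough sketch of that design, the snippet below builds a POST request to one of the plan pages that carries the login data directly, so no separate authentication round-trip is needed. The host name and form-field names are invented placeholders, not the real service's:

```python
from urllib.parse import urlencode
from urllib.request import Request

def plan_request(page, username, password):
    """Build a POST request for /heute or /morgen that carries the
    login form data with it (hypothetical field names 'user'/'pass')."""
    body = urlencode({"user": username, "pass": password}).encode()
    return Request("https://example.com" + page, data=body, method="POST")

# The request is only constructed here, not sent; sending it over TLS
# would be done with urllib.request.urlopen(req).
req = plan_request("/heute", "student", "secret")
print(req.get_method(), req.full_url)
```

Because the credentials travel with every request, the client stays stateless: no cookie jar or session id tracking is required.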

Future Ideas

The biggest feature planned for the future are notifications based on a list of your courses. At a fixed interval, the app would refresh the plan data, generate a changeset and based on this set, show notifications concerning your courses. I am not sure yet, however, whether I will include a cloud sync feature for this, since the app is currently running without further server software, save the online substitution plan.

Open Source

The complete app’s source code is available on GitHub under the BSD 2-Clause License.

          Progress on KaraPy        

This is a follow-up on my previous article Updating Kara’s Python

KaraPy – the Kara replacement I have been working on using Python – is nearing its open source release. Everything but the “tools” object is implemented, although there is still more development to do on it.

As you can see in the small demo above, I have also found a replacement for Kara’s images; the respective authors will be credited alongside the images’ licenses in the repository as soon as it is on GitHub. I would also like to add that it works in both Python 2 and 3 (tested with PyPy 2.1, Python 2.7 and Python 3.3).

What Still Has To Be Done

First of all and as already mentioned, there are still methods of the “tools” object left to be implemented:

  • void showMessage(String string) “Write string to dialog window”
  • void checkState() “Checks the execution controller”
  • String stringInput(String title) “Lets the user input a string in a dialog window with a title. Returns null if the dialog is aborted using Cancel.”
  • int intInput(String title) “Lets the user input a[n] integer number in a dialog window with a title. Returns Integer.MIN_VALUE if the dialog is aborted using Cancel.”
  • double doubleInput(String title) “Lets the user input a real number in a dialog window with a title. Returns Double.MIN_VALUE if the dialog is aborted using Cancel.”
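Assuming KaraPy mirrors Kara's documented sentinel return values rather than raising exceptions, the non-GUI halves of these methods might look like this. The function names and the `raw` convention are illustrative, and the dialog plumbing is omitted:

```python
# Hypothetical Python stand-ins for the remaining "tools" methods.
# `raw` stands for a dialog's result string, with None meaning Cancel.
INT_MIN = -2 ** 31           # Java's Integer.MIN_VALUE
DOUBLE_MIN = 2.0 ** -1074    # Java's Double.MIN_VALUE (smallest positive double)


def string_input(raw):
    """Mirror of stringInput: None ("null") when the dialog is cancelled."""
    return None if raw is None else str(raw)


def int_input(raw):
    """Mirror of intInput: Integer.MIN_VALUE on Cancel (or unparsable input)."""
    try:
        return INT_MIN if raw is None else int(raw)
    except ValueError:
        return INT_MIN


def double_input(raw):
    """Mirror of doubleInput: Double.MIN_VALUE on Cancel (or unparsable input)."""
    try:
        return DOUBLE_MIN if raw is None else float(raw)
    except ValueError:
        return DOUBLE_MIN
```

Whether cancelled dialogs should really return Java-style sentinels in Python, rather than just None, is an open design question for the port.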

Secondly, it barely has any tests yet, partly because I am still not used to them, but that definitely is something I have to fix.

Ideas For The Future

There are some, but I consider this one the most important: adding a world editor to KaraPy itself, so we do not have to rely on Kara’s editor at all. But I do not think we have to stop there. I have good faith that this program could become more than just a Kara clone, meaning that eventually its API will grow and that it might get a custom map format supporting more than just the bug, trees, mushrooms and leaves.

          Scientists on the Margins        
David Nobes


The World Summit on the Information Society (WSIS) was convened in Geneva in December of 2003. When the World Summit was announced, many in the scientific community questioned why there was no clear or central role for science and scientists. Scientists at CERN, in particular, expressed their concerns because CERN, one of the premier international collaborative scientific institutions, is regarded by many as the “birthplace” of the Internet.

As a result of the interventions of scientists, the UN and WSIS Secretariat proposed to hold an additional, but separate meeting ahead of the World Summit - the Role of Science in the Information Society (RSIS). The Role of Science meeting was also held in Geneva, at CERN, immediately before the World Summit. Many who attended RSIS also attended the WSIS. (The RSIS website is still active, as of January 2005).

The RSIS was intended to provide a forum whereby scientists and science administrators could contribute to the ongoing discussions on the Information Society. The discussions focussed on information sharing - both the mechanisms for such sharing and the impact it could have on society - because it quickly became apparent that information sharing is one of the primary elements of what we have come to call the Information Society, which I will abbreviate here as IS. Information technology is abbreviated as IT.

At this point, it should be noted that many of the participants, this author included, wondered what influence we scientists might have on the larger World Summit. Because of its separateness, many attendees doubted, sometimes publicly, that we would have much impact on the main WSIS “event” (the term used on the website and in the printed material). Many felt that the meeting was nonetheless useful, but more for the informal networks and contacts that we made, rather than for the formal proceedings. This reflects the nature of the Internet and modern electronic communications, which was nicely and concisely described by Tim Berners-Lee, who developed the browser-based interface, the “World Wide Web,” now synonymous with the popular conception of the Internet. He portrayed the “essence” of the Web as “decentralized” and “fractal.” It was originally designed to fill a need to share information (“data”) that was different in nature, format, and style.

[ For a much different conception of a global informational network, albeit one that has yet to be put into popular practice, see the Home Page of Ted Nelson, eds. ]

At the end of the RSIS was a “Visionary Panel Discussion: Science and Governance.” Most of the panel members who discussed the future of the Internet used outdated and outmoded terminology and paradigms, and I think they missed some of the inherent anarchic and democratic aspects of the Internet. Many of us felt that the panel, with the exception of Berners-Lee, showed a lack of understanding of the Internet and the Web. Indeed, the character of the Internet and the Web in many ways reflect how human progress is made, whether we are discussing science or broader societal aspects. We take steps that wander up many blind alleys and false trails before hitting upon solutions to previously unsolved problems. The solutions are almost always imperfect and almost always later superseded by some better approach. It is necessarily unstructured and chaotic, as any creative activity will be. However, those involved directly, such as scientists, are often excluded from the decision-making processes, which tend to be dominated by politicians and bureaucrats who are in general sadly ignorant of science and its methods. I hope to expand on this theme in the report that follows. The issues raised are no less relevant and important a year on from the meeting. The most exciting and innovative projects described during the meeting emphasised the lack of centralized control over the Internet and the Web, and that such control is nearly impossible. We cannot control what people do with the Internet; instead the main issue should be about showing people how to use the Internet effectively and sceptically.

The structure of this report is simple. It follows the structure of the meeting, which was built around the central RSIS “themes”: education; economic development; environment; health; and enabling technologies. I summarise some of the main points and observations from each session, highlighting those talks, presentations and sessions that seem to have best captured the atmosphere of the RSIS and future of the Information Society.


The opening plenary session comprised a series of presentations that ranged widely across the IS spectrum. Adolf Ogi, Special Advisor on WSIS to the Swiss Federal Council, officially welcomed the RSIS participants on behalf of Switzerland, the host country, and challenged the participants to promote “science for all, without boundaries.” He touched on the issues of control of technology and the role of infrastructure, and the costs associated with both. When we say “costs”, we mean both the cost to society as a whole and the cost to the individual. This becomes, then, a major concern in developing countries where personal monetary wealth is limited, and thus access to modern computing tools is limited.

Two speakers put the Role of Science in the context of the World Summit on the Information Society. Adama Samassékou, President of the WSIS Preparation Committee, addressed the gulf between the “haves” and the “have-nots,” using the now common phrase “the digital divide.” However, Samassékou went beyond these almost clichéd terms and viewpoints to discuss the traditional forms of knowledge, and how in the IS world oral traditions, and the information they transmit, are being lost, largely because we have not had a means to incorporate them into the technology of the IS. He emphasised the goal of a lack of boundaries for the sharing of information, and the need to promote the IS within an ethical framework. In this framework, he included environmental ethics. This theme arose again in the special session on the Environment in the IS.

Yoshio Utsumi, Secretary-General of the International Telecommunication Union, emphasised accessibility of IS, but his emphasis was on scientific access. This was perhaps a reflection of the audience, but was then limited in its scope, especially when considered in the light of some of the presentations that came later in the day. He noted the lack of scientific funding in the developing world, and the “problems” in science policy. My opinion is that “gap” may have been a more appropriate word, because few countries, developing or otherwise, have clear policies for the sharing of information, scientific or otherwise. Many that do have such policies, such as the U.S.A., obstruct information sharing for reasons of “security,” even though open access to data and information is often the best defence. However, as Utsumi noted, this was a beginning of the process of discussion and policy formulation.

After the two RSIS context speakers, we listened to three “keynote” speakers, each of whom gave brief talks: Dr Nitin Desai, Special Advisor to Kofi Annan on WSIS; HRH Princess Maha Chakri Sirindhorn of Thailand; and Walter Erdelen, Assistant Director-General for Natural Sciences at UNESCO. These talks touched on issues of citizen-to-citizen communication and the “digital divide” (Desai), the lack of access to IT and concepts of sustainability in the IS (Sirindhorn), and the environment (Erdelen).

Dr Esther Dyson, the Founding Chair of the Internet Corporation for Assigned Names and Numbers (ICANN), was listed as speaking on “the promise of the Information Society and the role that science and technology have played.” ICANN is the organisation responsible for mediating domain names. They do not assign names, per se, but monitor the process and the circumstances. They have little power, but unfortunately are often seen, incorrectly, as responsible for the current morass over domain names. Dyson did not speak on the listed topic, but instead talked about the role of scientists themselves, rather than some monolithic “science,” in the future of the IS. She also emphasised that we cannot solve the problems of the Internet in a question and answer session.

Finally, Ismail Serageldin, the Director-General of the Library of Alexandria, gave a PowerPoint presentation on the state of IT use at the Library. It is impossible to cover all of the material he (rapidly yet effectively) presented. The Bibliotheca Alexandrina is making use of IT in many ways and to a large extent. Some of the problems and issues Serageldin identified for the RSIS were, to name a few:

* effective and accessible publication and dissemination of information, specifically research and the results of research;
* peer-review (or lack thereof for online publications);
* copyright and “fair use” of online materials; and
* Internet library loans.

He discussed the rise of anti-science movements, particularly in the context of fundamentalist religious groups, and both here and in his talk he noted that these were not only Islamic but also Christian fundamentalist groups. Some approaches they used to try to counter such movements were:

* the establishment of a BA science “supercourse”;
* reaching children with “My Book”, which placed the child within the book designed and partly written by the child using online resources; and
* the “Hole in the Wall” computer.

This last approach was particularly interesting and revolutionary. The concept is to place a PC secured into a recess in a wall, using a transparent cover to allow visibility and access to the touch screen. Results showed that illiterate people, especially children and young adults, were learning to read by working their way through Internet connections. They would begin by using the symbols to guide their way, but would eventually learn to decipher at least in part the messages that accompanied those symbols.

One unfortunate omission from the programme was the presentation by Tim Berners-Lee, who was delayed by a snowstorm in Boston, and did not arrive until half way through the second day of the symposium.

“THE FUTURE: What the Scientific Information Society Can Offer”

The title of the next session was a bit of a misnomer. It was a mix of topics, ranging from GIS to technological access for the urban and rural poor to sociological aspects. The sociological paper was simply a written paper read aloud, with a singular lack of any of the technology we had been discussing. It served to emphasise the growing gap between scientists and some social scientists, and made me uncomfortably aware of why the Sokal hoax had worked so well amongst the social science journals; the presentation was unnecessarily rife with jargon that obscures rather than informs.

As an aside, for those unfamiliar with the Sokal hoax: Alan Sokal is a Professor of Physics at New York University who submitted a hoax article, “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity,” to the journal Social Text. As the Skeptic’s Dictionary says:

The article was a hoax submitted, according to Sokal, to see “would a leading journal of cultural studies publish an article liberally salted with nonsense if (a) it sounded good and (b) it flattered the editors’ ideological preconceptions?” It would. Needless to say, the editors of Social Text were not pleased.

What Sokal was attacking was the view amongst some social scientists that “physical reality” is a social construct, whereas the existence of an external “world” is an underlying premise in science. There is insufficient space to explore this issue adequately here, but the reader is referred to the many websites dealing with the Sokal “affair,” and to Sokal’s own site.

[ Amato writes at length of the Sokal Affair in sokal text: another funny thing happened on the way to the forum; and it is discussed in Kilgore’s review of Technoscience and Cyberculture, and Ciccoricco’s Contour of a Contour, eds. ]

In that session, nonetheless, were two presentations that stand out in my mind, those by Lida Brito, the Minister of Higher Education, Science, and Technology for Mozambique, and Onno Purbo, an engineer from Indonesia. Purbo talked about how to “Facilitate Fast and Self-Propelled Internet Access: Return to Society,” a presentation that was shifted from the second day into the first day’s programme. His presentation was, in many ways, a useful counterpoint to Serageldin’s, in particular the “Hole in the Wall” PC, noted above. Purbo obtains PC’s at low cost, usually sold cheaply or donated by large companies that are upgrading their computing systems. These PC’s are then made available in “classrooms” placed in poor urban and rural areas so that the local people can use the computers. They also learn to use the Internet. Purbo provides access by, as he put it, “stealing” open frequencies. He uses antennas ingeniously constructed from old tin cans; these are sufficient to provide the signal needed. He uses open source software, and emphasised that mass education is the key to providing a basic education to the broad populace.

His presentation also served to emphasise that education is crucial for informed and useful access to the Internet. Too many people, of whatever socio-economic level, “surf” the Net without any thought about the “information” they are obtaining. The websites they access are often a source of disinformation and misinformation. However, this also serves to reinforce the democratic nature of the Internet. We cannot control how people use the Web, and the fact that there are hundreds of sites devoted to Elvis may or may not be a sad commentary on our society, but it nonetheless also serves to show us how uncontrollable the Internet is.

I present the Elvis example, one noted at the meeting, not to denigrate the use of the Internet and the Web for such purposes. What it shows is that new technologies have become new instruments of entertainment, when the hope was that they would become self-directed teaching tools. My main point is that during many of the RSIS sessions, a number of our “elder statesmen” (and they were almost all male) talked about “control.” They seek to control access, information flow, and the development of the Internet. In this way, our “leaders” show their fundamental ignorance of this creature. I emphasise, again, Berners-Lee’s description of the Internet as a fractal and chaotic thing.

Brito’s presentation was, in contrast, a passionate “wish” list of what she would like to do and see happen, both in Mozambique and beyond. Her list was focussed around the themes of wider literacy and “relevant” knowledge.

The session ended with a panel discussion, ostensibly “Reflections on the Role of Science in the Information Society.” The participants each gave a short presentation, with a very brief period at the end for discussion. Most were much as expected, and a number were largely political in nature. One exception was Juergen Renn, of the Max Planck History of Science Institute and ECHO (European Cultural Heritage On-Line), who was concerned that the “core of cultural heritage is largely excluded from information technology” and noted how ECHO was formed to address this. He also briefly talked about the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities. While the goals of the declaration are laudable, a number of participants were concerned about the lack of copyright protection, citing cases where work done by researchers in developing countries was plagiarised by researchers in developed countries.

So concluded the first day of the conference. A number of us noted a general lack of self-criticism in most of the presentations. There was a lot of vague language and abundant use of clichés, much “looking to the future” and long wish lists. The most exciting presentations, for me, were the ones that discussed concrete examples of taking IT to the broader populace, often in quite revolutionary ways, in all of the meanings of that phrase.


I attended the session on “Contributions to Environment.” Other sessions were on Education, Economic Development, Health, and Enabling Technologies. All of these sessions had quite active online forums for discussion in the months leading up to the RSIS and WSIS symposiums, and the forums can be reviewed at the RSIS website. Most of us contributed to more than one online discussion group, but attended only one parallel session.

In the Environment session, most of the presentations focussed on technical and management issues. David Williams of EUMETSAT talked about the Global Earth Observation Systems and Strategies, focussing on data management and the move toward an Integrated Global Observation Strategy (IGOS), which seeks a comprehensive integrated effort. Such a move needs a “shared strategy,” and involves the participation of the UN, international scientific and research programmes, space agencies, etc. They seek to develop a common approach to surface and satellite observations. The international weather observation and forecasting network is one successful example where a common strategy and approach has been developed. Williams had many interesting and pithy quotes: “The world is full of data and short on information” is probably my favourite.

Patricio Bernal, of UNESCO and the IOC, talked about the Global Ocean Observation System (GOOS). There are regional GOOS “alliances.” New Zealand, where I am based, is a member of one such regional alliance. Bernal noted, however, that there needs to be an adaptation of international norms for data sharing to facilitate the further development of GOOS. This was a common theme that arose a number of times during the Environment parallel session, specifically, and the RSIS more generally. There are often conflicting protocols for sharing data and information and, as Williams’ quote illustrates, a set of data is not always usable information.

Josef Aschbacher of the ESA talked about Global Monitoring for Environment and Security (GMES), a programme for monitoring of regional development, management of risk, and the guidance of crisis management and humanitarian aid. The ESA aims to have full capacity by 2012-2015. The EU will be spending 628 million euros in the 2004-2006 fiscal period, rising to 5005 million euros by the 2007-2015 period. Again, the issue of data sharing and accessibility arose, in addition to questions of data verification and transparency of the process.

Stuart Marsh, of the British Geological Survey Remote Sensing Group, talked about Geohazards and the IS. He noted that citizens are the ultimate beneficiaries, and suggested that there are three main user groups of geohazards information: “responsible authorities”, scientists in monitoring and government agencies, and research scientists. They have different needs, e.g., baseline inventory of hazards, monitoring, rapid dissemination of information during a crisis, etc. He noted, as did the others in the session, the need for an integrated approach from surface to space, and the need for but difficulty in bringing together the different types of data. Again, this raised the issue of data management. Marsh’s presentation also highlighted, however, the gap in our knowledge about the scientific literacy of our public “authorities.” Those responsible may well be local or regional officials who are far removed from those who gather and use the data/information. These officials may have no understanding of the processes involved, and their concerns may in fact run counter to the actions that should be taken to avert a crisis. The current crisis in South Asia in the wake of the tsunami illustrates many of these concerns. An early warning system was not in place because of the cost (both for the infrastructure development and for ongoing support) and because of the lack of technical expertise to staff such an enterprise.

This illustrates a major gap in the entire RSIS - there was little or no consideration of how we get technical information to public officials and to the wider population. The entire issue of scientific literacy was glossed over; most presenters focussed instead on those who were trained to use the data. Yet, as I noted earlier, most people use the Internet in an undirected and uninformed way, unable or unwilling to distinguish “good” reliable information from poor quality “information,” or even from reports consciously devised to misinform the “public.”

After Marsh, Stuart Salter, who leads the Species Information Service (SIS) of the World Conservation Union (IUCN), gave probably the most thoughtful of the Environmental presentations. He discussed “appropriate technologies.” As an example to start off his talk, he mentioned an emergency in Belize where large volumes of vaccine were required, but which went bad because of a lack of refrigeration. Those providing the vaccine were unaware of such a lack; it never occurred to them that large parts of the world still lack refrigeration. He used this to highlight the problem that arises when a network of scientists (whom he described as “free spirited individuals”) produces “information” that needs to be organised in a common format and then propagated up and out into the community. His premise was that complex ICT systems could allow a simple “front end” and often can be configured by users to suit their purposes. He noted the need to change the “paradigm” whereby scientists visit a country, do their research, then leave and publish the results, leaving no net results in the visited country. He emphasised the need for using scientists in regional networks, working in existing well-functioning scientific and conservation networks. Then the data are vertically integrated in a relational database, using a GIS format. This is the mode of operation used successfully by the SIS for decades. The data are controlled by the scientific community, and the quality of the data is overseen by Specialist Groups, of which there are 128 in the SIS. The data are continuously updated. The SIS has thus grown from existing networks, rather than being imposed from outside, which explains why it has worked so well.

Finally, Luigi Fusco of the ESA talked about “Emerging Technologies for Earth Observation and Environmental Applications.” He used as his example the wreck of the tanker “Prestige” off the northwest coast of Portugal and Spain. He noted that the satellite data were insufficient to be used alone, and that a wide range of technologies and their associated data, from surface through to satellite observations, needed to be integrated in a complex information management system. This theme of the need for integration of different types of data and information from a range of technologies and scales of observation arose again and again throughout the session.


The closing sessions were in two parts: a series of summaries of the thematic parallel sessions were presented, followed by a “panel discussion,” closing remarks from the Secretary-General of the UN Conference on Trade and Development (UNCTAD) and then the “Key Message” from the RSIS, presented by the Director-General of CERN. Given that the “Key Message” did not differ at all from the text circulated before the RSIS meeting, many of us wondered why we had spent two days talking about the various issues. We concluded that the greatest benefit may well arise from the creation of a network of individuals interested in the issues raised by the RSIS symposium.

The session summaries raised some common themes and issues. Among the primary ones were the integration and sharing of data within complex structures, and the desire to get IT into rural and poor urban communities. Illiteracy generally, and scientific illiteracy more specifically, is a major obstacle to building an Information Society, which requires the wider availability and use of IT, from tertiary institutions everywhere, not just in developing countries, to remote communities.

Finally, the panel discussion amounted to little more than prepared statements from “elder statesmen” (men without exception, all elderly except for Tim Berners-Lee), and was perhaps symbolic of much of the meeting. Berners-Lee spoke for two minutes and encapsulated the essence of the Internet and the Information Society better and more succinctly than any other speaker. It is decentralized and “fractal” in its nature, and inherently uncontrollable and ungovernable. Yet so many of the politicians on the panel, for most were politicians, used outmoded and outdated paradigms and language in their politically motivated speeches. They kept talking about “governance” of the Internet and IT. I can only conclude that our political “leaders” have little or no idea about the Internet tiger they have by the tail. It is fundamentally an anarchic, often revolutionary creature, one that will refuse to be confined and controlled.

          A Mega-Crawler for the rest of us        

Do you get this urge sometimes, to query the entire web for something?

Do you wish you had your own Mega-Crawler?

I mean, it's not like you can go to Google and type in some box

WHERE DOMAIN(PAGE-URL) IS IN  "domain1, domain2, domain3"

Lunch took a long time today, so Eran and I had some time to brainstorm a bit about a crawler-for-the-rest-of-us:

  • Crawler code would be hosted on Amazon's EC2
  • The data would be stored on Amazon's S3
  • Anyone can add "post-crawl-processors" which will post-process crawled pages (build a full text index, extract microformats, calculate rank...). The persistent data generated by the post-processors will also be hosted on S3.
  • Anyone can submit URLs to be crawled. The system will automatically fork from these URLs to any other discovered URL. Eventually, the entire web will be crawled.
  • API for querying the crawl data, or the data generated by the post-crawl-processors.
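The "post-crawl-processor" idea from the list above can be sketched in a few lines of Python. All names here are hypothetical, and a real system would stream pages from S3 rather than take them from memory:

```python
class WordCountProcessor:
    """Toy post-crawl-processor: builds a tiny full-text term index."""

    def __init__(self):
        self.index = {}  # term -> set of URLs containing it

    def process(self, url, body):
        """Called once per crawled page; persist whatever we derive."""
        for term in set(body.lower().split()):
            self.index.setdefault(term, set()).add(url)


def crawl(pages, processors):
    """Feed every (url, body) pair to every registered processor."""
    for url, body in pages:
        for p in processors:
            p.process(url, body)
```

The point of the plug-in shape is that rank calculators, microformat extractors and full-text indexers all share the one expensive crawl while keeping their derived data separate.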

Who will pay for this? Companies and organizations who wish to use this data:

  • The cost of the basic crawling will be divided among the "subscribers". Initially: small database, low costs. Later: larger database, more subscribers, costs (hopefully) remain low.
  • The cost of a post-processor (CPU, storage) is divided by the number of subscribers the post-processor has. The more useful it is, the more subscribers will use it, and the less each will pay. If it's a proprietary post-processor, no need to share it, but it will naturally cost more (being used by only 1 subscriber).
  • Retrieving query results will be charged by bandwidth.

The general idea is pay-as-you-use, with prices going down as more subscribers use the service. No one makes money (well, except for Amazon of course), everyone shares costs, and IP (post-processors) can be shared or protected. The more you consume, the more you pay. The more you share, the less you pay.
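The cost-sharing rule above is simple division; as a toy sketch (the function name is illustrative):

```python
def cost_per_subscriber(total_cost, subscribers):
    """Split a post-processor's CPU/storage bill evenly among its
    subscribers; a proprietary processor has exactly one subscriber
    and therefore bears the whole cost."""
    if subscribers < 1:
        raise ValueError("a processor needs at least one subscriber")
    return total_cost / subscribers
```

So a $120 processor costs $30 each with four subscribers, but the full $120 if kept proprietary, which is exactly the incentive to share.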

This is very rough of course. But what do you think? Is it feasible? Is it interesting?

          Announcing DrupalCamp Mumbai 2017        
2017-04-01 08:30 - 2017-04-02 18:00 Asia/Kolkata
Event type: 
Drupalcamp or Regional Summit

April 1st and 2nd, 2017

Victor Menezes Convention Centre, IIT Bombay

We are excited to announce that the 6th annual DrupalCamp Mumbai will be held on Saturday, April 1st and Sunday, April 2nd, 2017 at VMCC, IIT Bombay.


#DCM2017 Features

  • Keynote by featured open source & community evangelist

  • Expert sessions on multiple tracks: The latest developments with Drupal 8. Expert sessions for developers, site builders, themers, devops admins, project managers, growth hackers and business owners

  • Drupal in a Day Training for Drupal beginners and students

  • Interactive Drupal 8 workshops on topics like: Transitioning to D8, Migrating to D8, Symfony, D8 site building, D8 admin and content management, D8 contribs, headless D8 and more.

  • CXO roundtable - where business leaders in the Drupal community can share knowledge and resolve issues with their peers.

  • Projects Showcases featuring some exciting projects in Drupal

  • Bird of Feathers sessions: these barcamp style, informal discussion groups are a great place to meet other members of the community with similar interests.

  • Networking sessions and after party

  • Day 2 Codesprints and Hackathon featuring Drupal 8 development and Drupal 7->8 module porting.

  • Community Meetup to discuss Drupal Mumbai initiatives, challenges, ideas and suggestions with community members and Plan 2017

DrupalCamp Mumbai 2015 was a resounding success with 650+ participants over 2 days. 40+ businesses were represented. Drupal trainings were a big draw. So were expert sessions. Mike Lamb keynoted the event. We had two unique panel discussions on open source in governance, and understanding open source communities in India.

DrupalCamp Mumbai 2017 promises to be even bigger. Drupal 8 is maturing, and our focus will be everything around Drupal 8. Besides expert sessions, we are planning interactive workshops around Drupal 8 for developers, designers, CXOs and project managers.

Who should attend? DCM is the biggest and most fun gathering of Drupalers in the country.

  • For businesses this will be a perfect platform to scout talent and raise their brand’s awareness. Networking opportunities will allow you to prospect alliances, and understand the rapidly growing Drupal landscape.

  • Developers, designers, site builders, project managers and hackers can network with the top companies, their peers and experts in Drupal today. You can sharpen your skills at workshops, solve problems and become an active contributor at the codesprints.

  • For startups, consultants, and enthusiasts, there is no better networking and prospecting platform than DCM2017, be it for new projects or looking for technical advice from experts.

  • WordPress, Joomla and PHP developers, students, or anyone into open source and Drupal will especially find DCM2017 an eye-opening experience. If you know someone who is interested, bring them along or ask them to attend!

  • For everyone, it is a great way to become a contributing member of one of the most exciting, fastest-growing and most dynamic open source communities.

  • Let's not forget all the fun and free giveaways to be had, and new friends to be made!

So what are you waiting for?


We are always looking for passionate volunteers. If you, or anyone you know, would like to be part of Drupal Mumbai, please sign up here -

Engage with us and keep updated on DCM2017 and Drupal Mumbai events:

Mailing List:


Twitter: @DrupalMumbai



For any more details, please write to



          Spring 2017 tech reading        
Hello and a belated happy new year to you! Here's another big list of articles I thought were worth sharing. As always, thanks to the authors who wrote these articles and to the people who shared them on Twitter/HackerNews/etc.

Distributed systems (and even plain systems)


SQL lateral view

Docker and containers

Science and math


Java streams and reactive systems

Java Lambdas

Just Java

General and/or fun

Until next time!

          Summer 2016 tech reading        

Hi there! Summer is here and almost gone. So here's a gigantic list of my favorite, recent articles, which I should've shared sooner.


Other languages

Reactive programming

Persistent data structures



Systems and other computer science-y stuff


Until next time! Ashwin.

          Fall 2014 tech reading        
My posts are getting less frequent, and when I do post something, I realize it's mostly just links. Yes, work is keeping me busy.
Big data:
Really? Another Hadoop SQL layer? Another Storm?
For those of you who knew about the original "column oriented stores" and "in-memory stream processing" - KDB -

Java 8 - the good and ugly bits:
Networks and systems:
The usual Scala and Go hate:
Until next time!
          This month's good tech reading        
(I discovered many of these links in my Google+, Twitter, HN or RSS feeds. I don't claim credit for being the first to find them.)

Do I detect a "NoScala" sentiment here? 
Big data:
Until next time!
          Download Kodi IPA File and Install it Without Jailbreak on iPhone or iPad        

Kodi is one of the most popular media apps around today, and the reason is its amazing feature set. Kodi is open source software and is available on many operating systems. With Kodi, you can play any type of media without any problems, and the best thing about it is that it supports media in many different formats. […]

The post Download Kodi IPA File and Install it Without Jailbreak on iPhone or iPad appeared first on UnlockBoot.

          Intel's New Processor Targets Android and MeeGo        

SANTA CLARA - The second-generation Intel Atom processor platform, launched on Tuesday (4/5/2010), targets the rapidly growing smartphone and tablet computer market. Intel is not entering the smartphone and tablet market empty-handed: it already backs at least two software platforms, Android and MeeGo (a merger of Intel's Moblin and Nokia's Maemo).

"As one of the founding members of the Open Handset Alliance (OHA), Intel has worked with Google for the past several years and has provided support for the Android platform since its launch," Intel said in a statement released on Tuesday (4/5/2010). The OHA is an organization of software and hardware vendors that supports the development of Android.

Intel says the characteristics of its new Atom Z6xx series processors are highly optimized for Android, so they can be used in devices from any smartphone vendor running that operating system. The processor is sold as a platform package that includes a controller hub (MP20) and a dedicated Mixed Signal IC (MSIC).

"Intel Atom processor-based platforms have been optimized to deliver the best performance and internet experience with very low power consumption on the Moblin 2.1 software platform," the statement continued.

Moblin 2.1 is an open source (Linux-based) software platform designed by Intel for a variety of handheld devices. Recently, development of Intel's Moblin and Nokia's Maemo, both open source, was merged. Intel has confirmed that its new platform fully supports MeeGo.

Currently, Intel's toughest competitor in the smartphone market is ARM, whose architecture is widely used by processor makers such as Qualcomm with its 1 GHz Snapdragon. Samsung has also developed comparable capabilities with its own ARM-based processor. But will Intel's platform be adopted quickly by the smartphone market?

          TurnKey 13 out, TKLBAM 1.4 now backup/restores any Linux system        

This is really two separate announcements rolled into one:

  1. TurnKey 13 - codenamed "satisfaction guaranteed or your money back!"

    The new release celebrates 5 years since TurnKey's launch. It's based on the latest version of Debian (7.2) and includes 1400 ready-to-use images: 330GB worth of 100% open source, guru integrated, Linux system goodness in 7 build types that are optimized and pre-tested for nearly any deployment scenario: bare metal, virtual machines and hypervisors of all kinds, "headless" private and public cloud deployments, etc.

    New apps in this release include OpenVPN, Observium and Tendenci.

    We hope this new release reinforces the explosion in active 24x7 production deployments (37,521 servers worldwide) we've seen since the previous 12.1 release, which added 64-bit support and the ability to rebuild any system from scratch using TKLDev, our new self-contained build appliance (AKA "the mothership").

    To visualize active deployments worldwide, I ran the access logs through GeoIPCity and overlaid the GPS coordinates on this Google map (view full screen):


  2. TKLBAM 1.4 - codenamed "give me liberty or give me death!"

    Frees TKLBAM from its shackles so it can now backup files, databases and package management state without requiring TurnKey Linux, a TurnKey Hub account or even a network connection. Having those will improve the usage experience, but the new release does its best with what you give it.

    I've created a convenience script to help you install it in a few seconds on any Debian or Ubuntu derived system:

    wget -O - -q $URL | PACKAGE=tklbam /bin/bash

    There's nothing preventing TKLBAM from working on non-Debian/Ubuntu Linux systems as well; you just need to install from source and disable APT integration with the --skip-packages option.

    Other highlights: support for PostgreSQL, MySQL views & triggers, and a major usability rehaul designed to make it easier to understand and control how everything works. Magic can be scary in a backup tool.

    Here's a TurnKey Hub screenshot I took testing TKLBAM on various versions of Ubuntu:

    Screenshot of TurnKey Hub backups

Announcement late? Blame my problem child

As those of you following TurnKey closely may have already noticed, the website was actually updated with the TurnKey 13.0 images a few weeks ago.

I was supposed to officially announce TurnKey 13's release around the same time but got greedy and decided to wrap up TKLBAM 1.4 first and announce them together.

TKLBAM 1.4 wasn't supposed to happen. That it did is the result of a spontaneous binge of passionate development I got sucked into after realizing how close I was to making it a lot more useful to a lot more people. From the release notes:

More people would find TKLBAM useful if:

  • If it worked on other Linux distributions (e.g., Debian and Ubuntu to begin with)

  • If users understood how it worked and realized they were in control. Magic is scary in a backup tool.

  • If it worked without the TurnKey Hub or better yet without needing a network connection at all.

  • If users realized that TKLBAM works with all the usual non-cloud storage back-ends such as the local filesystem, rsync, ftp, ssh, etc.

  • If users could more easily tell when something is wrong, diagnose the problem and fix it without having to go through TKLBAM's code or internals

  • If users could mix and match different parts of TKLBAM as required (e.g., the part that identifies system changes, the part that interfaces with Duplicity to incrementally update their encrypted backup archives, etc.)

  • If users could embed TKLBAM in their existing backup solutions

  • If users realized TKLBAM allowed them to backup different things at different frequencies (e.g., the database every hour, the code every day, the system every week)

    Monolithic all-or-nothing system-level backups are not the only way to go.

  • If it could help with broken migrations (e.g., restoring a backup from TurnKey Redmine 12 to TurnKey Redmine 13)

  • If it worked more robustly, tolerated failures, and with fewer bugs
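
The "different frequencies" bullet above could look like this in practice. This is a hypothetical /etc/crontab sketch; the backup command names and paths are illustrative placeholders, not TKLBAM's actual CLI:

```shell
# Hypothetical /etc/crontab entries (system crontab format: schedule, user, command).
# The backup-* commands are placeholders for whatever backs up each part.
0  * * * *  root  /usr/local/bin/backup-database   # database: every hour
30 2 * * *  root  /usr/local/bin/backup-code       # code: every day at 02:30
0  4 * * 0  root  /usr/local/bin/backup-system     # full system: every Sunday at 04:00
```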

So that's why the release announcement is late and Alon is slightly pissed off but I'm hoping the end result makes up for it.

TurnKey 13: from 0.5GB to 330GB in 5 years

Big things have small beginnings. We launched TurnKey Linux five years ago in 2008 as a cool side project that took up 0.5GB on SourceForge and distributed 3 installable Live CD images of LAMP stack, Drupal and Joomla.

5 years later the project has ballooned to over 330GB spanning 1400 images: 100 apps, 7 build types, in both 64-bit and 32-bit versions. So now we're getting upset emails from SourceForge asking if the project really needs to take up so much disk space.

Yes, and sorry about that. For what it's worth, realizing TurnKey may eventually outgrow SourceForge is part of the reason we created our own independent mirror network (well, that and rsync/ftp access). SourceForge is great, but just in case...

93,555 lines of code in 177 git repos

In terms of development, I recently collected stats on the 177 git repositories that make up the app library, self-contained build system, and a variety of custom components (e.g., TKLBAM, the TurnKey Hub).

It turns out over the years we've written about 93,555 lines of code just for TurnKey, most of it in Python and shell script. Check it out:

Late but open (and hopefully worth it)

TurnKey 13 came out a few months later than we originally planned. By now we have a pretty good handle on what it takes to push out a release so the main reason for the delay was that we kept moving the goal posts.

In a nutshell, we decided it was more important for the next major TurnKey release to be open than it was to come out early.

The main disadvantage was that Debian 7 ("Wheezy") had come out in the meantime and TurnKey 12 was based on Debian 6 ("Squeeze"). On the other hand, Debian 6 would be supported for another year, and since TurnKey is just Debian under the hood, nothing prevented impatient users from upgrading the base operating system to Debian 7 via the usual automated and relatively painless Debian upgrade procedure.

So we first finished work on TKLDev, put it through the trenches with the TurnKey 12.1 maintenance release, and moved the project's development infrastructure to GitHub where all development could happen out in the open.

We hoped to see a steady increase in future open source collaboration on TurnKey's development and so far so good. I don't expect the sea to part as it takes more than just the right tools & infrastructure to really make an open source project successful. It takes community and community building takes time. TurnKey needs to win over contributors one by one.

Alon called TurnKey 13.0 "a community effort" which I think in all honesty may have been a bit premature, but we are seeing the blessed beginnings of the process in the form of a steadily growing stream of much appreciated community contributions. Not just new prototype TurnKey apps and code submissions but also more bug reports, feature requests and wiki edits.

And when word gets out on just how fun and easy it is to roll your own Linux distribution I think we'll see more of that too. Remember, with TKLDev, rolling your own Debian based Linux distribution is as easy as running make:

root@tkldev ~$ cd awesomenix
root@tkldev turnkey/awesomenix$ make

You don't even have to use TKLDev to build TurnKey apps or use any TurnKey packages or components. You can build anything you want!

Sadly, I've gotten into the nasty habit of prepending TKL - the TurnKey initials - to all the TurnKey related stuff I develop but under the hood the system is about as general purpose as it can get. It's also pretty well designed and easy to use, if I don't (cough) say so myself.

I'll be delighted if you use TKLDev to help us improve TurnKey but everyone is more than welcome to use it for other things as well.

3 new TurnKey apps - OpenVPN, Tendenci and Observium

  • OpenVPN: a full-featured open source SSL VPN solution that accommodates a wide range of configurations, including remote access, site-to-site VPNs, Wi-Fi security, and more.

    Matt Ayers from Amazon asked us to consider including an OpenVPN appliance in the next release and Alon knocked it out of the park with the integration for this one.

    The new TurnKey OpenVPN is actually a 3 for 1 - TurnKey's setup process asks whether you want OpenVPN in client, server or gateway mode and sets things up accordingly.

    My favourite feature is the one that allows the admin to create self-destructing URLs with scannable QR codes that make setting up client OpenVPN profiles on mobiles a breeze. That's pretty cool.

  • Tendenci: a content management system built specifically for NPOs (Non Profit Organizations).

    Upstream's Jenny Qian did such an excellent job developing the new TurnKey app that we accepted it into the library with only a few tiny modifications.

    This is the first time an upstream project has used TKLDev to roll their own TurnKey app. It would be awesome to see more of this happening and we'll be happy to aid any similar efforts in this vein any way we can.

  • Observium: a really cool autodiscovering SNMP based network monitoring platform.

    The new TurnKey app is based on a prototype developed by Eric Young, who also developed a few other prototype apps which we plan on welcoming into the library as soon as we work out the kinks. Awesome work Eric!

Special thanks

Contributing developers:

Extra special thanks

  • Alon's wife Hilla: for putting up with too many late work sessions.
  • Liraz's girlfriend Shir: for putting up with such a difficult specimen (in general).

          Announcing TurnKey Linux 12.0: 100+ ready-to-use solutions        


Ladies and gentlemen, the 12.0 release is finally out after nearly 6 months of development and just in time to celebrate TurnKey's 4th anniversary. I'm proud to announce we've more than doubled the size of the TurnKey Linux library, from 45 appliances to over 100!

As usual pushing out the release was much more work than we expected. I'd like to chalk that up to relentless optimism in the face of vulgar practical realities and basic common sense. On the flip side, we now have 100+ appliances of which 60+ are brand spanking new. Lots of good new applications are now supported.

Despite all the hard work, or maybe because of it, working on the release was the most fun I've had in a while.

You look away and then back and suddenly all this new open source stuff is out there, ready for prime time. So many innovations and competing ideas, all this free energy. We feel so privileged to have a front row seat and not just watch it all play out but also be able to play our own small role in showcasing so much high-quality open source work while making it just a bit more accessible to users.

Unlike previous releases this latest release is based on Debian, not Ubuntu. We realize this may upset hardcore Ubuntu fans but if you read on, I'll try to explain below why "defecting" to Debian was the right thing for TurnKey.

What's new?

Deprecated appliances

  • AppEngine: With the addition of Go, we decided to split the Google App Engine SDK appliance into 3 separate language specific appliances (appengine-go, appengine-java and appengine-python).
  • Joomla16: 1.6 has reached end-of-life, and has been replaced with the Joomla 2.5 appliance.
  • EC2SDK: Insufficient amount of interest to warrant maintenance, especially now that the TurnKey Hub exists.
  • Zimbra and OpenBravo: Will be re-introduced once TurnKey supports 64-bit architectures. Sorry folks. We're still working on that.

Changes common to all TurnKey appliances - the TurnKey Core

As most of you know TurnKey Core is the base appliance on top of which all other TurnKey appliances are built and includes all the standard goodies.

Most of the ingredients Core is built out of come directly from the Debian package repositories. Thanks to the quality of packaging, refreshing these is usually very easy. A handful (e.g., webmin, shellinabox) we have to package ourselves directly from upstream because they're not in Debian. That takes more work, but is still usually not a big deal.

Then there are the parts of TurnKey we developed ourselves from scratch. They're our babies and giving them the love they need usually involves more work than all the other maintenance upgrades put together. The largest of these components is TKLBAM - AKA the TurnKey Linux Backup and Migration system.

Version 1.2 of TKLBAM which went into the release received a round of much needed improvements.

The highlights:

  • Backup
    • Resume support allows you to easily recover and continue aborted backup sessions.
    • Multipart parallel S3 uploads dramatically speed up how long it takes to upload backup archives to the cloud. Dial this up high enough and you can typically fully saturate your network connection.
  • Restore
    • Added an embedded Squid download cache to make it easier to retry failed restores (especially large multi-GB restores!) without resorting to suicide.
    • Fixed the annoying MySQL max_packet issue that made it unnecessarily difficult to restore large tables.

In a bid to open up development to the community we've also tried to make TKLBAM more developer friendly by making its internals easier to explore via the "internals" command. We've also put the project up on GitHub. For a bit more detail, read the full release notes for TKLBAM 1.2.

TurnKey 12.0 appliances available in 7 delicious flavors

Optimized Builds

  1. ISO: Our bread and butter image format. Can be burned on a CD and run almost anywhere - including bare metal or any supported virtualization platform.
  2. Amazon EC2: 1,414 Amazon EC2 optimized builds, spanning the globe in all 7 regions, in both S3 and EBS backed flavors. The Hub has been updated and all images are available for immediate deployment.
  3. VMDK / OVF: For those of you who like a slice of virtual in your lemonade, we've got VMDK and OVF optimized builds for you too.
  4. OpenStack: The new kid on the block, OpenStack optimized builds are just a download away, get 'em while they're hot. Oh, and don't forget that these builds require a special initrd.
  5. OpenVZ: Is OpenVZ your sweet tooth? Step right up! Using Proxmox VE? The TurnKey PVE channel has been updated as well for inline download and deployment right from your web browser.
  6. Xen: And who can forget about our favorite steady rock, optimized builds for Xen based hosting providers are at your fingertips.

Want just one appliance? Follow the download links on the site to download any image in any flavor individually from SourceForge.

Want to collect them all? Thanks to the good people manning TurnKey's mirror network, you can now get all the images in a particular image format (about 20 GB) or even the whole 120 GB enchilada.


  • 7 build formats
  • 606 downloadable virtual appliance images (120 GB worth)

Ubuntu vs Debian: the heart says Ubuntu, but the brains says Debian

One of the hardest things we had to do for this release was choose between Ubuntu and Debian. We initially planned on doing both. Then eventually we realized that would nearly double the amount of testing we had to do - a major bottleneck when you're doing 100+ appliances.

Think about it. At this scale, every extra hour of testing and bugfixing translates into about 2-3 weeks of extra work. And to what end? We already had way too much work to do, and the release kept getting pushed back again and again. Did it really matter whether we supported both of them like we originally intended? Would it really provide enough extra value to users to warrant the cost? They were, after all, quite similar...

In the end, choosing to switch over to Debian and abandon Ubuntu support for now was hard emotionally but easy on a purely rational, technical basis:

  1. Ubuntu supports less than 25% of available packages with security updates

    Remember that TurnKey is configured to auto-install security updates on a daily basis.

    This may sound dangerous to users coming from other operating systems, but in practice it works great, because security updates in the Debian world are carefully back-ported to the current package version so that nothing changes besides the security fix.

    Unfortunately, if you're using Ubuntu there's a snag because Ubuntu only officially supports packages in its "main" repository section with security updates. Less than 25% of packages are included in the "main" repository section. The rest go into the unsupported "universe" and "multiverse".

    So if an Ubuntu based TurnKey appliance needs to use any of the nearly 30,000 packages that Ubuntu doesn't officially support, those packages don't get updates when security issues are revealed. That means users may get exposed to significantly more risk than with an equivalent Debian based TurnKey appliance.

    While Ubuntu is continually adding packages to its main repository, it can't keep pace with the rate of packages being added to Debian by thousands of Debian Developers. In fact, with each release the gap grows wider, so an increasingly smaller percentage of packages makes it into main. For example, between Debian 6.0 (Squeeze) and the upcoming Debian 7.0 (Wheezy) release, 8000 packages were added. In a comparable amount of time, between Ubuntu 10.04 (Lucid) and Ubuntu 12.04, only 1300 packages were added to main.

  2. Debian releases are much more bug-free and stable

    Stability wise, there's no comparison. Even the Ubuntu Long Term Support releases that come out every two years have serious bugs that would have held up a Debian release.

    Fundamentally this is due to the differences in priorities that drive the release process.

    Debian prioritizes technical correctness and stability. Releases are rock solid because they only happen when all of the release critical issues have been weeded out.

    By contrast, Ubuntu prioritizes the latest and greatest on a fixed schedule. It's forked off periodically from the Debian unstable version, also known as "sid" - the Toy Story kid who breaks his toys. It then gets released on a fixed, pre-determined date.

    There's no magic involved that makes Ubuntu suddenly bug-free by its intended release date compared with the version of Debian it was forked from. Ubuntu has a relatively small development team compared with Debian's thousands of developers. I think they're doing an amazing job with what they have but they can't perform miracles. That's not how software development works.

    So Debian is "done" when it's done. Ubuntu is "done" when the clock runs out.

  3. Ubuntu's commercial support is not available for TurnKey anyhow

    Part of the original rationale for building TurnKey on Ubuntu was Canonical's commercial support services. We theorized that many higher-end business users wouldn't be allowed to use a solution that didn't have strong commercial support behind it.

    But 4 years passed, and despite talks with many very nice high level people at Canonical, nothing happened on the ground, except that one time Alon was invited to participate in an Ubuntu Developer Summit.

    We realized if we wanted to offer commercial support for TurnKey we'd have to offer it ourselves and in that case it would be easier to support Debian based solutions than Ubuntu.

Those were the main reasons for switching to Debian. Most of the people we talked to who cared about the issue one way or another preferred Debian for the same reasons, as did the majority of the 3000 people who participated in our website survey.

Don't get me wrong. Despite all the mud that has been slung at Ubuntu recently we still have a huge amount of respect for Ubuntu and even greater respect for the Ubuntu community. Sure, mistakes have been made, we're all only human after all, but let's not forget how much Ubuntu has done for the open source community.

Also, just because we don't have the resources to support Ubuntu versions of all TurnKey appliances right now doesn't mean we've ruled that out for the future. It's all about using the right tool for the job. If for some use cases (e.g., the desktop) that turns out to be Ubuntu, we'll use Ubuntu.

How to upgrade existing appliances to TurnKey 12

Simple: Just use TKLBAM to create a backup of an existing older version and then restore it on TurnKey 12.

TKLBAM only backs up files that have changed since the default installation. That means if you changed a configuration file on TurnKey 11.3 that configuration file will be backed up and re-applied when you restore to TurnKey 12.
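
Concretely, the upgrade flow is just two commands, following the tklbam-restore usage shown elsewhere in TurnKey's own examples (the backup ID 1 is illustrative; use the ID the Hub assigns to your backup):

```shell
# On the old appliance (e.g., TurnKey 11.3):
tklbam-backup

# On the freshly installed TurnKey 12 appliance:
tklbam-restore 1    # 1 = the backup record ID shown in your Hub account
```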

What's next: opening up TurnKey, more developers = more appliances

To the inevitable number of you who will be disappointed that a favorite application hasn't made it into this release: please be patient, we're working within very tight resource constraints. I'll admit it's our own damn fault. We realized a couple of years too late that we hadn't been open enough in the way we were developing TurnKey.

Better late than never though. We're working very hard to fix this. Much of the work we've been doing over the last year has been to clean up and upgrade our development infrastructure so we can open it up fully to the community (and support additional architectures such as amd64!).

Soon anyone will be able to build TurnKey appliances from scratch using the same tools the core development team uses. We're working on this because it's the right thing to do as an open source project. We're also hoping it will help more people from the open source community contribute to TurnKey as equal members and lift the labor bottleneck that is preventing us from scaling from 100+ to 1000+ TurnKey appliances.

Many, many thanks to...

  • Jeremy Davis, Adrian Moya, Basil Kurian, Rik Goldman (and his students), L.Arnold, John Carver, and Chris Musty. These guys submitted many TKLPatches that made it into this release, and even more importantly kept the community alive while we dropped off the face of the earth to focus on development.

    Their dedication and generosity have been an inspiration. We'd like the community to get to know them better so we'll soon be publishing interviews with them on the blog. Stay tuned.

  • Jeremy Davis AKA "The JedMeister": TurnKey forum moderator and all around super great guy. Jeremy is such an important part of what makes the TurnKey community tick I figured I should thank him at least twice. For emphasis. :)

  • The many rivers of upstream: Debian, Ubuntu and all of the wonderful open source communities who give love and write code for the software that goes into TurnKey.

  • Everyone else who helped test the 12.0 release candidate and provided ideas and feedback.

  • TurnKey users everywhere. Without you, TurnKey's audience, there really wouldn't be a point.

          TKLBAM: a new kind of smart backup/restore system that just works        

Drum roll please...

Today, I'm proud to officially unveil TKLBAM (AKA TurnKey Linux Backup and Migration): the easiest, most powerful system-level backup anyone has ever seen. Skeptical? I would be too. But if you read all the way through you'll see I'm not exaggerating and I have the screencast to prove it. Aha!

This was the missing piece of the puzzle that has been holding up the Ubuntu Lucid based release batch. You'll soon understand why and hopefully agree it was worth the wait.

We set out to design the ideal backup system

Imagine the ideal backup system. That's what we did.

Pain free

A fully automated backup and restore system with no pain. That you wouldn't need to configure. That just magically knows what to backup and, just as importantly, what NOT to backup, to create super efficient, encrypted backups of changes to files, databases, package management state, even users and groups.

Migrate anywhere

An automated backup/restore system so powerful it would double as a migration mechanism to move or copy fully working systems anywhere in minutes instead of hours or days of error prone, frustrating manual labor.

It would be so easy that you would, shockingly enough, actually test your backups. No more excuses. As frequently as you know you should, avoiding unpleasant surprises at the worst possible time.

One turn-key tool, simple and generic enough that you could just as easily use it to migrate a system:

  • from Ubuntu Hardy to Ubuntu Lucid (get it now?)
  • from a local deployment, to a cloud server
  • from a cloud server to any VPS
  • from a virtual machine to bare metal
  • from Ubuntu to Debian
  • from 32-bit to 64-bit

System smart

Of course, you can't do that with a conventional backup. It's too dumb. You need a vertically integrated backup that has system level awareness. That knows, for example, which configuration files you changed and which you didn't touch since installation. That can leverage the package management system to get appropriate versions of system binaries from package repositories instead of wasting backup space.

This backup tool would be smart enough to protect you from all the small paper-cuts that conspire to make restoring an ad-hoc backup such a nightmare. It would transparently handle technical stuff you'd rather not think about like fixing ownership and permission issues in the restored filesystem after merging users and groups from the backed up system.
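
The "knows which configuration files you changed" idea can be illustrated with a toy checksum baseline (this is an analogy, not TKLBAM's actual code): record checksums at "install" time, then report only the files the user has modified since, so unchanged files never need to enter the backup.

```shell
# Record a baseline at "install" time, then detect user modifications later.
mkdir -p /tmp/conf-demo && cd /tmp/conf-demo
printf 'orig\n' > app.conf
printf 'orig\n' > other.conf
md5sum app.conf other.conf > baseline.md5    # checksums recorded at install time
printf 'edited\n' > app.conf                 # the user customizes one file
md5sum -c baseline.md5 2>/dev/null | grep -v ': OK$'   # prints only "app.conf: FAILED"
```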

Ninja secure, dummy proof

It would be a tool you could trust to always encrypt your data. But it would still allow you to choose how much convenience you're willing to trade off for security.

If data-stealing ninjas keep you up at night, you could enable strong cryptographic passphrase protection for your encryption key that includes special countermeasures against dictionary attacks. But since your backup's worst enemy is probably staring back at you from the mirror, it would need to allow you to create an escrow key to store in a safe place in case you ever forget your super-duper passphrase.
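
A minimal sketch of the escrow-key idea, using plain OpenSSL (an assumption for illustration, not TKLBAM's actual key management): the real key is kept passphrase-protected, while a plaintext escrow copy goes somewhere safe offline in case the passphrase is forgotten.

```shell
# Generate a backup encryption key, protect it with a passphrase, and keep
# a plaintext escrow copy. Passphrase and paths here are illustrative.
cd /tmp
openssl rand -hex 32 > backup.key                    # the real encryption key
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in backup.key -out backup.key.enc \
    -pass pass:my-super-duper-passphrase             # passphrase-protected copy
cp backup.key escrow.key                             # store this copy offline, in a safe place
```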

On the other hand, nobody wants excessive security measures forced down their throats when they don't need them and in that case, the ideal tool would be designed to optimize for convenience. Your data would still be encrypted, but the key management stuff would happen transparently.

Ultra data durability

By default, your AES encrypted backup volumes would be uploaded to inexpensive, ultra-durable cloud storage designed to provide 99.999999999% durability. To put 11 nines of reliability in perspective, if you stored 10,000 backup volumes you could expect to lose a single volume once every 10 million years.
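
The arithmetic behind the 10-million-year figure, under the assumption that 11 nines of durability means a 1e-11 annual loss probability per stored object:

```shell
# Expected losses per year = number of objects * per-object annual loss rate.
awk 'BEGIN {
  objects = 10000
  annual_loss_rate = 1e-11                               # 100% - 99.999999999%
  expected = objects * annual_loss_rate
  printf "expected losses/year: %g\n", expected          # 1e-07
  printf "years per expected loss: %.0f\n", 1 / expected # 10000000
}'
```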

For maximum network performance, you would be routed automatically to the cloud storage datacenter closest to you.

Open source goodness

Naturally, the ideal backup system would be open source. You don't have to care about free software ideology to appreciate the advantages. As far as I'm concerned any code running on my servers doing something as critical as encrypted backups should be available for peer review and modification. No proprietary secret sauce. No pacts with a cloudy devil that expects you to give away your freedom, nay worse, your data, in exchange for a little bit of vendor-lock-in-flavored convenience.

Tall order huh?

All of this and more is what we set out to accomplish with TKLBAM. But this is not our wild-eyed vision for a future backup system. We took our ideal and we made it work. In fact, we've been experimenting with increasingly sophisticated prototypes for a few months now, privately eating our own dog food, working out the kinks. This stuff is complex so there may be a few rough spots left, but the foundation should be stable by now.

Seeing is believing: a simple usage example

We have two installations of TurnKey Drupal6:

  1. Alpha, a virtual machine on my local laptop. I've been using it to develop the TurnKey Linux web site.
  2. Beta, an EC2 instance I just launched from the TurnKey Hub.

In the new TurnKey Linux 11.0 appliances, TKLBAM comes pre-installed. With older versions you'll need to install it first:

apt-get update
apt-get install tklbam webmin-tklbam

You'll also need to link TKLBAM to your TurnKey Hub account by providing the API-KEY. You can do that via the new Webmin module, or on the command line:

tklbam-init QPINK3GD7HHT3A

I now log into Alpha's command line as root (e.g., via the console, SSH or web shell) and run the backup command:

tklbam-backup

It's that simple. Unless you want to change the defaults, no arguments or additional configuration are required.

When the backup is done a new backup record will show up in my Hub account:

To restore I log into Beta and do this:

tklbam-restore 1

That's it! To see it in action watch the video below or better yet log into your TurnKey Hub account and try it for yourself.

Quick screencast (2 minutes)

Best viewed full-screen. Having problems with playback? Try the YouTube version.

The screencast shows TKLBAM command line usage, but users who dislike the command line can now do everything from the comfort of their web browser, thanks to the new Webmin module.

Getting started

TKLBAM's front-end interface is provided by the TurnKey Hub, an Amazon-powered cloud backup and server deployment web service currently in private beta.

If you don't have a Hub account already, request an invitation. We'll do our best to grant them as fast as we can scale capacity on a first come, first served basis. Update: currently we're doing ok in terms of capacity so we're granting invitation requests within the hour.

To get started log into your Hub account and follow the basic usage instructions. For more detail, see the documentation.

Feel free to ask any questions in the comments below. But you'll probably want to check the FAQ first to see if your question has already been answered.

Upcoming features

  • PostgreSQL support: PostgreSQL support is in development but currently only MySQL is supported. That means TKLBAM doesn't yet work on the three PostgreSQL based TurnKey appliances (PostgreSQL, LAPP, and OpenBravo).
  • Built-in integration: TKLBAM will be included by default in all future versions of TurnKey appliances. In the future when you launch a cloud server from the Hub it will be ready for action immediately. No installation or initialization necessary.
  • Webmin integration: we realize not everyone is comfortable with the command line, so we're going to look into developing a custom webmin module for TKLBAM. Update: we've added the new TKLBAM webmin module to the 11.0 RC images based on Lucid. In older images, the webmin-tklbam package can also be installed via the package manager.

Special salute to the TurnKey community

First, many thanks to the brave souls who tested TKLBAM and provided feedback even before we officially announced it. Remember, with enough eyeballs all bugs are shallow, so if you come across anything else, don't rely on someone else to report it. Speak up!

Also, as usual during a development cycle we haven't been able to spend as much time on the community forums as we'd like. Many thanks to everyone who helped keep the community alive and kicking in our relative absence.

Remember, if the TurnKey community has helped you, try to pay it forward when you can by helping others.

Finally, I'd like to give extra special thanks to three key individuals who have gone above and beyond in their contributions to the community.

By alphabetical order:

  • Adrian Moya: for developing appliances that rival some of our best work.
  • Basil Kurian: for storming through appliance development at a rate I can barely keep up with.
  • JedMeister: for continuing to lead as our most helpful and tireless community member for nearly a year and a half now. This guy is a frigging one man support army.

Also special thanks to Bob Marley, the legend who's been inspiring us as of late to keep jamming till the sun was shining. :)

Final thoughts

TKLBAM is a major milestone for TurnKey. We're very excited to finally unveil it to the world. It's actually been a not-so-secret part of our vision from the start. A chance to show how TurnKey can innovate beyond just bundling off the shelf components.

With TKLBAM out of the way we can now focus on pushing out the next release batch of Lucid based appliances. Thanks to the amazing work done by our star TKLPatch developers, we'll be able to significantly expand our library so by the next release we'll be showcasing even more of the world's best open source software. Stir It Up!

          Finding the closest data center using GeoIP and indexing        

We are about to release the TurnKey Linux Backup and Migration (TKLBAM) mechanism, which aims to be the simplest way ever to back up a TurnKey appliance across all deployments (VM, bare metal, Amazon EC2, etc.), and to restore a backup anywhere, which amounts to appliance migration or upgrade.

Note: We'll be posting more details really soon. In this post I just want to share an interesting issue we solved recently.

Backups need to be stored somewhere - preferably somewhere that provides unlimited, reliable, secure and inexpensive storage. After exploring the available options, we decided on Amazon S3 for TKLBAM's storage backend.

The problem

Amazon has four data centers, called regions, spanning the world: North California (us-west-1), North Virginia (us-east-1), Ireland (eu-west-1) and Singapore (ap-southeast-1).

The problem: which region should be used to store a server's backups, and how should it be determined?

One option was to require the user to specify the region during backup, but we quickly decided against polluting the user interface with potentially confusing options, and opted for a solution that could automatically determine the best region.

The solution

The map below plots the countries/states with their associated Amazon region (generated automatically from the indexes using the Google Maps API).

The solution: determine the location of the server, then look up the closest Amazon region to that location.

Part 1: GeoIP

This was the easy part. The TurnKey Hub is developed using Django, which ships with GeoIP support in contrib. Despite being totally new to geo-location, I had part 1 up and running within a few minutes.

When TKLBAM is initialized and a backup is initiated, the Hub is contacted to get authentication credentials and the S3 address for the backup. The Hub performs a lookup on the IP address and determines the country/state.

In a nutshell, adding GeoIP support to your Django app is simple: install MaxMind's C library, download the appropriate dataset, and add the paths to your settings file.

GEOIP_PATH = "/volatile/geoip"
GEOIP_LIBRARY_PATH = "/volatile/geoip/"


from django.contrib.gis.utils import GeoIP

ipaddress = request.META['REMOTE_ADDR']
g = GeoIP()
g.city(ipaddress)  # returns a dict like:
    {'area_code': 609,
     'city': 'Absecon',
     'country_code': 'US',
     'country_code3': 'USA',
     'dma_code': 504,
     'latitude': 39.420898,
     'longitude': -74.497703,
     'postal_code': '08201',
     'region': 'NJ'}

Part 2: Indexing

This part was a little more complicated.
Now that we have the server's location, we can look up the closest region. The problem is creating an index of every country in the world, as well as each US state, and associating each with its closest Amazon region.
Creating the index manually would have been painstaking, boring and error prone, so I devised a simple automated solution:
  • Generate a mapping of country and state codes with their coordinates (latitude and longitude).
  • Generate a reference map of the server farms' coordinates.
  • Using a simple distance based calculation, determine the closest region to each country/state, and finally output the index files.
I was also planning on incorporating data about internet connection speeds and trunk lines between countries, and add weight to the associations, but decided that was overkill.
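The distance-based association in the steps above can be sketched as follows. The haversine great-circle distance is the formula the project later adopted, and the region coordinates below are approximate illustrations (the real indexes map every country/state code):

```python
import math

# Illustrative coordinates for the four regions available at the time.
REGIONS = {
    "us-east-1": (38.13, -78.45),      # North Virginia
    "us-west-1": (41.12, -121.77),     # North California
    "eu-west-1": (53.35, -6.26),       # Ireland
    "ap-southeast-1": (1.37, 103.80),  # Singapore
}

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

def closest_region(lat, lon):
    """Pick the region whose coordinates are nearest the given point."""
    return min(REGIONS, key=lambda r: haversine(lat, lon, *REGIONS[r]))

# A server geo-located to New Jersey maps to the North Virginia region.
print(closest_region(39.42, -74.50))  # → us-east-1
```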
We are making the indexes available for public use (countries.index, unitedstates.index).
More importantly, we need your help to tweak the indexes, since you know your own connection's latency and speed best. Please let us know if you think your country/state should be associated with a different Amazon region.
[Update] We have updated the indexes to include the new AWS regions (Oregon, Sao Paulo, Tokyo), tweaked the automatic association to use the haversine formula, and added overrides based on undersea internet cables. Lastly, we've open sourced the whole project on GitHub (check out the live map mashup).

          Comment on Listado de herramientas ETL Open Source by Blog de Excel        
A very good list of ETL tools. I will give the open source tools a try.
          New features for GeoFroggerFX        

I added an H2 database, JPA, and a Groovy plugin interface to the application.

The application is open source and can be found at

          GeoFroggerFX is now an open source project        

I decided to publish GeoFroggerFX as an open source project on GitHub, along with a minimalistic project page.

          Simple way to add logger using Log4j in a java application        
Hello devs! Today I learned how to add logging using Log4j and want to share a quick implementation. Log4j is a flexible logging library, an open source project from Apache. Using Log4j, we can replace print statements like System.out.println("Hello World"). The advantage of using Log4j is that if you do not want to print the … Continue reading
          Android Buffet: Episode 356 – Somewhere Over the Ocean        

Show Notes Android News Android Distribution updated for July 2017, Nougat at 11.5, Marshmallow 31.8, Lollipop 30.1, KitKat 17 These Android Wear watches start at under $100 Paranoid Android 2 is out, with new devices and features App News Ads coming to Facebook messenger home screen Brevent is an open source alternative to Greenify, but requires ADB each… Read More »

The above... Episode 356 – Somewhere Over the Ocean appeared first on Android Buffet.

           Redis Proxy Service: Twemproxy         

1. Twemproxy overview

      When we run a large number of Redis or Memcached instances, we can usually only achieve clustered storage through client-side data distribution algorithms such as consistent hashing. Although Redis Cluster has been released, it is not yet mature enough for production environments, so until the official Cluster solution arrives we implement clustered storage by way of a proxy.

      Twitter runs one of the world's largest Redis clusters, which serves timeline data to its users, and Twitter's Open Source group released Twemproxy.

      Twemproxy, also called nutcracker, is a Redis and Memcached proxy server open-sourced by Twitter. Redis is a highly efficient cache server, but once you run many instances you want a way to manage them uniformly, avoiding loose per-client connection management while keeping a degree of central control.

      Twemproxy is a fast, single-threaded proxy supporting the Memcached ASCII protocol and the newer Redis protocol:

      It is written entirely in C and licensed under the Apache 2.0 License. The project works on Linux but cannot be compiled on OS X, because it depends on the epoll API.

      By introducing a proxy layer, Twemproxy manages and distributes load across multiple backend Redis or Memcached instances, so the application only talks to Twemproxy and never needs to care how many actual Redis or Memcached stores sit behind it.


    • Automatic removal of failed nodes

      • the delay before reconnecting to a removed node is configurable
      • the number of failures before a node is removed is configurable
      • this mode is suitable for cache-style storage
    • HashTag support

      • with a HashTag you can force two keys to hash to the same instance
    • Fewer direct connections to Redis

      • keeps long-lived connections to Redis
      • the number of connections between the proxy and each backend Redis is configurable
    • Automatic sharding across multiple backend Redis instances

      • multiple hash algorithms: consistent hashing with different strategies and hash functions
      • backend instances can be assigned weights
    • No single point of failure

      • multiple proxy instances can be deployed in parallel; clients automatically choose an available one
    • Support for Redis pipelining requests


    • Status monitoring

      • a monitoring IP and port can be configured; querying them returns a JSON-formatted status string
      • the refresh interval of the monitoring information is configurable
    • High throughput

      • connection reuse and memory reuse
      • multiple requests are combined into a single Redis pipelining request
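As an illustration of the HashTag feature above, here is a minimal nutcracker configuration sketch; the pool name, addresses and weights are made up, and `hash_tag: "{}"` tells Twemproxy to hash only the part of the key between `{` and `}`:

```yaml
alpha:
  listen: 127.0.0.1:22121
  hash: fnv1a_64
  hash_tag: "{}"        # hash only the substring between { and }
  distribution: ketama
  redis: true
  servers:
    - 127.0.0.1:6379:1
    - 127.0.0.1:6380:1
```

With this tag, the keys `{user42}.profile` and `{user42}.followers` both hash on `user42` and therefore land on the same backend instance.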

     å¦å¤–可以修改redis的源代码,抽取出redis中的前半部分,作为一个中间代理层。最终都是通过linux下的epoll 事件机制提高并发效率,其中nutcraker本身也是使用epoll的事件机制。并且在性能测试上的表现非常出色。


Due to the limits of its design, Twemproxy has some shortcomings, such as:
  • no support for operations across multiple values, e.g. intersection/union/difference of sets (MGET and DEL are the exceptions)
  • no support for Redis transactions
  • error messages are still incomplete
  • no support for the SELECT command


To install Twemproxy, the main commands are:
apt-get install automake  
apt-get install libtool  
git clone git://  
cd twemproxy  
autoreconf -fvi  
./configure --enable-debug=log  
src/nutcracker -h

      listen: # the address and port Twemproxy listens on  
      redis: true # whether this is a Redis proxy  
      hash: fnv1a_64 # the hash function to use  
      distribution: ketama # the hash distribution algorithm  
      auto_eject_hosts: true # temporarily eject nodes that fail to respond  
      timeout: 400 # timeout in milliseconds  
      server_retry_timeout: 2000 # retry interval in milliseconds  
      server_failure_limit: 1 # number of failures before a node is ejected  
      servers: # the list of Redis nodes (IP:port:weight)  
      redis: true  
      hash: fnv1a_64  
      distribution: ketama  
      auto_eject_hosts: false  
      timeout: 400  

You can run multiple Twemproxy instances at the same time, all of them accepting reads and writes, so your application can avoid the proxy itself becoming a single point of failure.

abin, 2015-11-03 19:30

          How to Create a Fully Featured Mail Server using Postal        
Postal is a free and open source complete mail server for sending and receiving emails. It is written in Ruby and JavaScript. In this tutorial, we will install Postal Mail Server on Ubuntu 17.04.
          How to Install TaskBoard on CentOS 7        
TaskBoard is a free and open source application to keep a track of the tasks that needs to be done. It requires minimal dependencies to work. Database is stored in SQLite which eliminates the requirement of MySQL or any other database server.
          How to Install MySQL Server with phpMyAdmin on FreeBSD 11        
In this tutorial, we will install MySQL with phpMyAdmin along with Apache web server with PHP 5.6. MySQL is a free and open source relational management system. It stores data in tabular format. It is the most popular way of storing the data into the database. phpMyAdmin is also a free and open source application used to administrate a MySQL server instance through a rich graphical user interface.
          How to Install SonarQube on Ubuntu 16.04        
SonarQube is a free and open source quality management system platform that can be used to automate code inspection. It can analyze source code files, calculate a set of metrics and show the result on the web based dashboard. It is written in Java language and also supports other languages like Perl, PHP, and Ruby.
          How to Install RabbitMQ Server on CentOS 7        
RabbitMQ is a free and open source enterprise message broker software. It is written in Erlang and implements Advanced Message Queueing Protocol (AMQP). In this tutorial, we will install RabbitMQ on CentOS 7 server.
          A beginner's experience of using Linux        
In July 2008 I installed Ubuntu for the first time, and I have been using it ever since. I had previously installed Mandriva, but because I kept running into problems back then, I eventually went back to Window$$.

As internet access grew cheaper, I joined my neighbours on a shared RT/RW-net connection. I have since moved house, so I am once again bandwidth-poor, hoping that "the bandwidth-poor and neglected children shall be cared for by the state". Still, what matters is that I can get online, even if slowly, while I hunt for a cheap and fast provider. Back then, with an unlimited plan that I considered quite fast (download speeds of around 200 kbps), the window to the world seemed wide open.

That was when my desire to use Linux was rekindled, after reading several fellow bloggers who used Ubuntu. On 10 July 2008 I successfully installed Ubuntu 8.04 on my PC. Before I had internet access at home, whenever I hit a problem I had no idea where to look for a solution. Once I had internet access, many problems could be solved by searching with a search engine.

A few acquaintances who wanted to try Linux asked me about it after reading posts on my blog. One asked why his sound would not work, among other things. I simply answered, "If you have a problem like that, just ask Google." A few days later he replied, "Thank you, I am now using Linux successfully." Then again, anyone could have given the same answer I did, hehehe.

Bloggers have also asked me about problems through the comments on this blog. If I can help, I certainly do not mind helping, but as a beginner there is a lot I do not know. My stock answer is usually, "Please google it; many fellow bloggers have written about that."

From my wanderings online and from experience, I have picked up some tips for sticking with Linux, among them:

1. Tinker with the applications you used to tinker with on Windows, but this time on Linux. When I used Windows I liked downloading from YouTube and the other tube sites hehehe, using an application called Unlocker. After switching to Linux, downloading turned out to be very easy; if you want to know how, click here. Those used to Photoshop can use GIMP, as shown here. And there is more to read here.
2. Get a decent internet connection, as I mentioned above. It used to be very expensive, but times have changed: internet access is cheap now, if you are in the right place at the right time.
3. If you want to become proficient, take a Linux course. (I did not do this myself.)
4. Join a Linux user community. I am currently a member of one, where many fellow bloggers share their experiences with Linux.
5. If you do not have a computer yet, buy one first. If you use an internet café, choose a Linux café; on that subject the expert is kang Pradna.

And since switching to Linux I have enjoyed various benefits and advantages:

1. It is very cheap to obtain: you can download it directly, or, in Ubuntu's case, request a CD from ShipIt.
2. I have never had virus problems the way I did with the other OS.
3. It is relatively more stable than Windows. When I still used Windows, my computer got slower and slower, whether from viruses, crashes or something else I never identified. Linux, by contrast, stays stable.
4. Updates are free, as long as you have an internet connection.

As for the drawbacks of using Linux, as I once read on a blog (I forget which one):
1. You forget to install antivirus updates, so you have no answer when someone asks, "Got any antivirus updates?"
2. You do not know how to crack software.

Finally, I write about Linux not because I am an expert or a geek (as you can tell from the writing), but in the hope of getting feedback in the comments and sharing with all of you.

This post was written in support of the P2I-LIPI Open Source Blog Competition and the P2I-LIPI 2009 Open Source Seminar.

That is the experience I can share with fellow bloggers. Thank you for your attention.


Endar Fitrianto

P.S. Being a beginner's post, it surely uses some terms imprecisely. My apologies, and please bear with me!
This is merely one beginner's opinion, so you are welcome to disagree.
          CLEditor: WYSIWYG HTML Editor jQuery Plugin        
CLEditor is an open source jQuery plugin which provides a lightweight, full-featured, cross-browser WYSIWYG HTML editor that can easily be added to any web site. The plugin consumes less than 8K of total bandwidth.
          Quizzy: Creates AJAX and XML Based Quizzes        
Quizzy is an open source PHP- and AJAX-based quiz library. It allows you to quickly and easily add multiple-choice quizzes to your website. All the quiz data is based on XML, so making quizzes is very easy. Quizzy has been tested in IE 6+, Firefox, Chrome, Safari 4+, Opera 8+, and Konqueror.
          What Is Linux?        
Linux is a generic term referring to Unix-like computer operating systems based on the Linux kernel. Their development is one of the most prominent examples of free and open source software collaboration; typically all the underlying source code can be used, freely modified, and redistributed by anyone under the terms of the GNU GPL and other free licenses. Linux is predominantly known for its use in servers, although it is installed on a wide variety of computer hardware, ranging from embedd…
          How Is Linux Licensed?        
Linux is licensed on the GNU General Public License. The licenses of the utilities and programs which come with the installations vary. Much of the code is from the GNU Project at the Free Software Foundation, and is also under the GPL. Some other major programs often included in Linux distributions are under a BSD license and other similar licenses. You can view various Open Source Licenses at The Open Source Initiative.
          107: The Frankencamera        
Should you jump to the 7D or go open source? Don't forget to clone out the birds, your questions, and an interview with Katrin Eismann.
          096: The Photoshop Evolution        
Canon 5D firmware goes open source, the evolution of Photoshop, and guest host Sara France
          Mahara 46th developer meeting on Aug 26, 2015; 08:00 UTC        
The 46th developer meeting for the open source e-portfolio Mahara will be held on Wednesday, 26 August 2015, at 08:00 UTC. The developer meetings are held on IRC in the #mahara-dev channel. An IRC bot will be used to take minutes of these meetings, and agendas are made available on these pages beforehand. If you don’t […]
          Mahara 43rd developer meeting on 23 April        
Developers of Mahara, the most popular open source e-portfolio, are organizing their 43rd developer meeting on 23 April 2015 at 8:00 UTC. You can join the chat in the Mahara IRC channel #mahara-dev, or if you don't have an IRC client you can join from your browser via this link. The agenda […]
          Scientists Use Google Earth and Crowdsourcing to Map Uncharted Forests        

Scientists Use Google Earth and Crowdsourcing to Map Uncharted Forests

No single person could ever hope to count the world’s trees. But a crowd of them just counted the world’s drylands forests—and, in the process, charted forests never before mapped, cumulatively adding up to an area equivalent in size to the Amazon rainforest.

Current technology enables computers to automatically detect forest area through satellite data in order to adequately map most of the world’s forests. But drylands, where trees are fewer and farther apart, stymied these modern methods. To measure the extent of forests in drylands, which make up more than 40% of land surface on Earth, researchers from UN Food and Agriculture Organization (FAO), World Resources Institute and several universities and organizations had to come up with unconventional techniques. Foremost among these was turning to residents, who contributed their expertise through local map-a-thons.

Technical Challenges, Human Solutions

Traditional remote sensing algorithms detect tree cover in a pixel rather than capturing individual trees in a landscape. That means the method can miss trees in less-dense forests or individual trees in farm fields or grasslands, which is most often the nature of dryland areas. 


Hansen/UMD/Google/USGS/NASA tree cover data displayed on Global Forest Watch. Green pixels represent tree cover with greater than 20 percent canopy density but do not count trees outside of these pixels. Note, coarse pixels as shown above may be more efficient for rapidly detecting large scales of deforestation, while individual mapping techniques as described below may be more effective for monitoring land restoration and degradation.

Google Earth collects satellite data from several satellites with a variety of resolutions and technical capacities. The dryland satellite imagery collection compiled by Google from various providers, including DigitalGlobe, is of particularly high quality, as desert areas have little cloud cover to obstruct the views. So while algorithms have difficulty detecting non-dominant land cover, the human eye has no problem distinguishing trees in these landscapes. Exploiting this advantage, the scientists decided to visually count trees in hundreds of thousands of high-resolution images to determine overall dryland tree cover.

Local Map-a-thons using Collect Earth

Armed with the quality images from Google that allowed researchers to see objects as small as half a meter (about 20 inches) across, the team divided the global dryland images into 12 regions, each with a regional partner to lead the counting assessment. The regional partners in turn recruited local residents with practical knowledge of the landscape to identify content in the sample imagery. These volunteers would come together in participatory mapping workshops, known colloquially as “map-a-thons.”

To lay the groundwork for the local map-a-thon events, the team identified an entry point, usually a university, that could help recruit participants, as well as a facility with the capacity and internet access to host the map-a-thon. Once trained, any given analyst could identify 80 to 100 plots per day.


Example of Collect Earth grid set for data collection.

Quality control was carried out afterwards by comparing the identified land cover with the number of trees counted. For example, if a local participant identified an image as having just three trees but later identified the same image as forest, the researchers knew there was human error and further review was required.


The Collect Earth tool allows users to navigate between multiple windows and select the best imagery for each particular data point

These map-a-thons were designed so that people with first-hand knowledge of local landscapes could participate. No knowledge of remote sensing or any technology beyond internet literacy was required. The expertise needed was the understanding of regional landscape and land use. This practical knowledge of the region was critical as the participants were not only able to count individual trees but also identify the type of land use and trees they saw on the Google Earth images.

This research only “discovered” new forest in the sense that Columbus “discovered” the New World. The drylands forest was always there, and the people who live in the area always knew it was there. In fact, they were the only ones who had the background knowledge to identify subtleties like whether an image of a plant in their region was a shrub or actually a young tree, or if what appeared to be a tree was just a perennial plant. A few common perennial crops, including coffee and banana, looked like shrubs in the satellite images, but local participants had no problem identifying them correctly as perennial crops instead of shrubs, a distinction that would have been impossible with satellite imagery analysis alone.


Collect Earth Map-a-thon Event in Gatsibo, Rwanda.

This human identification component, along with the ability to zoom in to sub-meter resolution with cheap and available technology, helped achieve a breakthrough result: identified forest cover 9 percent higher than previously reported.

Local Ownership of the Map and the Land

Utilizing local landscape knowledge not only improved the map quality but also created a sense of ownership within each region. The map-a-thon participants have access to the open source tools and can now use these data and results to better engage around land use changes in their communities. Local experts, including forestry offices, can also use this easily accessible application to continue monitoring in the future.

Global Forest Watch (GFW) uses medium-resolution satellites (30 meters, or about 98 feet) and sophisticated algorithms to detect near-real-time deforestation in densely forested areas. The dryland tree cover maps complement GFW by providing the capability to monitor non-dominant tree cover and small-scale, slower-moving events like degradation and restoration. Mapping forest change at this level of detail is critical both for guiding land decisions and for enabling government and business actors to demonstrate that their pledges are being fulfilled, even over short periods of time.

The data documented by local participants will enable scientists to do many more analyses on both natural and man-made land changes including settlements, erosion features and roads. Mapping the tree cover in drylands is just the beginning.

          Libfabric paper at IEEE Hot Interconnects        
Later today, Sean Hefty will present a paper about the OpenFabrics Interfaces (a.k.a. “libfabric“) at the 2015 IEEE Hot Interconnects conference. Libfabric is the next-generation Linux library being developed by an open source consortium of vendors and academic researchers that implements the OpenFabrics Interfaces, specifically designed to expose application-focused networking functionality to high performance applications (e.g., MPI, PGAS, […]
MacBird Open Source Release: "There are other UI runtimes, some are even open source. MacBird is different because it's built for the designer. You create and edit MacBird 'cards' using a draw program with grouping and alignment." Originally released three years ago today.
          10+ Awesome Tools and Extensions For GraphQL APIs        

With the recent surge of interest in GraphQL, a vibrant new ecosystem of supplementary software has quickly emerged. Open source communities and enterprising startups alike are validating new GraphQL use cases, filling in GraphQL implementation gaps, and enabling more and more developers to adopt GraphQL practices with decreased overhead through the use of some pretty awesome tools. Read more

The post 10+ Awesome Tools and Extensions For GraphQL APIs appeared first on Nordic APIs.

          Collaborating with employees, Red Hat becomes passionate corporate giver        
A Red Hat volunteer team
DeLisa Alexander

Community contributions have long been an important topic at Red Hat. Our company was built on the open-source software development model. We work collaboratively with developers around the world to create software, sharing code and contributing it both upstream and downstream as part of the open source community. As Red Hat grew into a successful company, we recognized that our responsibility as a corporate citizen needed to mature beyond contributing software code to the open-source community.

For many years, Red Hat supported nonprofits with donations from our modest charitable giving fund. This allowed us to contribute to the community in a way that was appropriate to our size, while meeting our commitments to our stakeholders.

Developing our corporate citizenship program

Six years ago, we were maturing as a company and wanted to expand our contributions into a more robust corporate citizenship program. Luckily, we had a group of associates who were passionate about, and experienced in, corporate citizenship. We asked them to form a committee and develop a corporate citizenship roadmap for our company.

The corporate citizenship committee set out to determine how Red Hat could better serve our communities while funding these efforts at a level befitting a company our size. As Red Hatters, they opted to answer this question using the same open-source principles we use to grow our business. They got the whole company involved.
First, they surveyed our U.S. associates, asking them to identify the areas of need where they would like Red Hat to focus our efforts. And they promised to keep them involved and informed. They also asked associates to identify causes that were personally important to them (rather than what they felt the company should support) to see if their answers differed.

Associates identified a broad range of causes, from the environment to animal welfare to domestic violence prevention, as causes they personally supported. But when they stated what they thought Red Hat should support, Red Hatters clearly identified four areas of focus for corporate giving efforts:
  • basic needs 
  • education 
  • technology
  • health
Associates also indicated strong interest in volunteer events and in a matching gifts program. Armed with this data, the committee prepared a three-year roadmap and went to work implementing the plan. We narrowed the focus of our giving program to the funding priorities identified by the associates and created a process by which associates could nominate charities for a donation from Red Hat. After creating guidelines and a simple application process for nonprofits, the committee focused on the matching gifts program because that was the most frequently requested charitable program by associates.

Many companies conduct a concerted workplace-giving campaign for only a few charities or limit the numbers or types of charities eligible for their matching gifts program. However, as the committee reviewed the survey responses, we realized this would be one area where Red Hat could easily support associates in the causes that mattered to them, beyond the causes prioritized by the company. So when we established our matching gifts program in 2010, Red Hat’s guidelines allowed a matching contribution to any 501(c)(3) nonprofit, school or house of worship (provided the donation went to a social outreach program). Allowing maximum flexibility in our program has been key, because as a company, we have a strong commitment to promoting freedom and choice.

Around the same time, our volunteer committee formalized. Our associates had supported a variety of volunteer projects from time to time: a team of Red Hatters went to New Orleans to help in the aftermath of Hurricane Katrina; others conducted a toy drive for Toys for Tots around the holidays. Before the committee existed, though, there was a lot of confusion over what counted as an official Red Hat volunteer activity versus simply a group of colleagues getting together and deciding to ride in a charity bike ride wearing Red Hat t-shirts. This is a common dynamic in young companies.

Again, our committee focused Red Hat’s volunteer activities on the priorities identified through the survey. They gathered data on area nonprofits that needed groups of volunteers and created internal processes to help managers find projects for team-building activities. They started organizing company-wide volunteer events several times a year, including during our annual Red Hat Summit conference when we engage customers, partners and employees in volunteer activities to benefit the community. We also conduct volunteer events during We Are Red Hat Week, an annual celebration of our brand, culture and people, and these events have benefited communities around the world.

All Red Hatters have the opportunity to participate on the volunteer committee to help organize events. If they don’t have the time, there are still opportunities for involvement; for example, all associates were recently invited to vote for their favorite Red Hat Volunteers t-shirt design. Our volunteer efforts focus on communities where we have offices, but they also are a great opportunity for collaboration across the company.

A new perspective

Before implementing any of these programs, just as we were evaluating our place in the community as a corporate citizen, the world changed. Around October of 2008, at a time when we usually started our holiday party planning, the economy began to unravel. Although Red Hat had the good fortune of enjoying continued growth, most Red Hat associates had friends and family members touched by the recession. Some began to question whether we should spend money on merriment for ourselves when so many people around us were suffering. We considered the question: “What should the Red Hat way of celebrating the holidays be during times of economic hardship?” We concluded that we needed to tap into our open-source roots and focus on community contributions.

So we took this idea to the company and received a variety of reactions. Some people expressed great pride in the company for suggesting this change, while others viewed the party as a much-needed celebration and an opportunity for the company to thank employees and their families for their support throughout the year.

We took their feedback and reworked the plan. Instead of contributing all of the holiday party funds, we decided to use most of the funds for charity but still provide a token amount per associate per U.S. office to be used for small-scale in-the-office parties or in other ways as each office saw fit. In Raleigh, for example, we decided to have a small party held right after work. A team of volunteers offered to decorate for the event and take turns serving their fellow associates.

Next, we asked associates to nominate organizations, from the four areas of focus, for our new holiday donation. We collected a large list of worthy organizations. Given the tough economic situation, we decided that we should focus on organizations that covered basic needs, and the associates voted and chose Feeding America for our first-ever holiday donation.

Feeding America is the nation’s leading domestic hunger-relief charity, which supports a network of food banks across the United States. In addition to our substantial national donation to the organization (which paid for about 800,000 meals, or approximately 1 million pounds of food), many of our offices around the United States held their own canned food drives to benefit local food banks.

Afterward, the reaction within Red Hat and the community was extremely positive. For years, when someone learned that I work at Red Hat, especially around the holiday season, they often mentioned hearing about Red Hat giving its holiday party budget away to charity. It has become a point of pride internally for Red Hat that we not only give away our software code, but we also give in other meaningful ways.

In the years since, we have continued our new tradition and made a substantial holiday donation to a national nonprofit chosen by our associates, including Meals on Wheels, the Alzheimer’s Association, Habitat for Humanity and the Wounded Warrior Project.

Going global

Although we started with a focus on U.S. nonprofits, we have now expanded our giving beyond the United States. We have local committees in our Raleigh corporate headquarters and our Westford, MA, engineering headquarters, and another committee for contributions across the rest of North America. Our first committee outside the United States was for our offices in Europe, and we now have a committee in Asia-Pacific, too. We have encouraged each local committee to survey their own associates to determine priorities that resonate best with associates in each region. While we strive for a certain level of consistency, we also want to give as much flexibility as we can to each program so that it has meaning for associates around the world. The best part about the committees is that they all consist of Red Hatters who are volunteering extra time to be part of our corporate citizenship efforts. Their passion drives everything that we do.

The corporate citizenship program within Red Hat has become something we care about deeply. Because our associates are actively involved in the decisions made and organizations selected, they feel a sense of ownership and pride in our contributions.
DeLisa Alexander is executive vice president and chief people officer for Red Hat.

          Download Software SketchUp Make 17.2.2555 | 3D graphic design        

SketchUp Make is a graphic design application for creating and modifying many kinds of 3D models quickly and easily.

When working in a design field, such as interior design, you usually need a program that can produce designs easily and quickly.
SketchUp Make is software you can use to design buildings such as houses, offices or boarding houses, and much more. It is aimed at architects, civil engineers, filmmakers, game developers and related professions.
The program includes a variety of features, among them drawing tools, effects and textures. With these tools, users can create unlimited 3D drawings, including architectural designs, maps and blueprints.
Supported image formats include JPG, PNG, TIF, BMP, SKP and DEM, while files can be saved in SKP, BMP, JPG, PNG, TIF, DAE and KMZ format.

Some advantages and features of SketchUp Make:
1. An attractive, simple interface
2. Easy to use, even for complete beginners
3. Many open source plugins that support and extend SketchUp's capabilities
4. File import for formats such as 3ds (for 3ds Max), dwg (for AutoCAD), kmz (for Google Earth), pdf, jpg, bmp, dxf and more

Developer: Trimble Navigation Limited
Operating system: Windows XP/Vista/7/8/10

          Download Software Universal USB Installer | the latest Live USB Creator        
Universal USB Installer is a Live USB Creator that lets you build a bootable Linux installer on a USB flash drive.
Universal USB Installer is easy to use: simply select a live Linux distribution, the ISO file and your flash drive, then click Install.
Other features include the ability to format the flash drive as FAT32 to ensure a clean installation. When it finishes, you will have a bootable USB flash drive with your chosen version of Linux installed.
The process is as simple as choosing the distribution you want and selecting its ISO on your hard disk (the program can also download it automatically).
Why use Universal USB Installer? Because it is a universal tool that supports a wide range of today's operating systems, including open source OSes such as Linux with its many distros.
It can also be used to create installers for Microsoft operating systems such as Windows XP, Windows 7, Windows 8 and Windows 10; the exception is Apple's Mac OS.
Besides its small size, Universal USB Installer is also portable, so there is no need to install it on your hard disk or into the existing operating system, and better still, it is open source.

Developer: USB Pen Drive Linux
Operating system: Windows XP/Vista/7/8/10

          Telstra eyes innovation through startups, IoT, M2M        
Open source is key to innovation at Telstra, says Frank Arrigo, API evangelist at Telstra. Telstra is looking to stay ahead of the curve by encouraging technological innovation through collaboration with startups, machine-to-machine (M2M) technology, and the Internet of Things (IoT), but said that ensuring its network continues to be the best in Australia is still […]
          VLC media player Terry Pratchett 2.2.4 - The best multi-format media player        

For playing video or audio files in any format, VLC media player is probably the solution for you.

Specifically, it is a good alternative to iTunes and RealPlayer. It is light, fast and easy to use, and, above all, it plays everything.

The most powerful free media player

VLC is, to date, the most stable, flexible and lightweight audio and video player around. Where other media players manage less common formats only with the help of a codec, or cannot play them at all, VLC media player handles hundreds of formats, from MPEG files to FLV and RMBV.

Another very useful feature of VLC media player is the ability to preview files while you are still downloading them, because it can play even partial video files. In any case, video playback is just one of the many possibilities VLC media player offers: it also integrates neatly with web channel streaming services, giving you access to channels such as ESPN, Reuters and National Geographic.

Just right-click (or command-click) on the playlist and select Services Discovery; the service will then appear in your VLC media player playlist. Click it to display the available channel categories, then click the category you want to open and VLC media player shows a drop-down menu of all the channels available. At that point, all you have to do is click the channel you want to watch, and streaming playback starts almost immediately.

Playlists that are easy to create and manage

VLC media player also supports a range of keyboard shortcuts, and if you have time to learn the combinations you can control it without touching the mouse. This matters little if you mostly use it to watch DVDs or video files; for music, though, it can play any file extension, includes an equalizer, and offers a playlist-creation feature.

When it comes to managing media files, VLC media player is not as intuitive as iTunes, but it is certainly a much more flexible player in terms of the formats it supports. You can also use it to convert your files and, on top of all the formats and physical media it can read, it supports many streaming protocols and TV tuner cards.

Adding subtitles to your videos is also easy with VLC media player: just put the SRT file in the folder where you saved the video and the subtitles will load automatically.

We also recommend reading other articles on the topic, such as: how to convert video and audio files with VLC, how to add Italian subtitles with VLC, and how to synchronize subtitles with VLC.

The best free media player

It is genuinely hard to find fault with VLC media player: it is an extraordinarily lightweight media player that flawlessly plays files other players cannot even open.

Its open source nature guarantees constant updates. Audio and video playback quality is excellent. There is really very little to complain about.

Download VLC media player Terry Pratchett 2.2.4 from Softonic

          aMule 2.3.1 - The alternative P2P mule, now for Mac too        

aMule is a free, open source P2P client for downloading movies, music and many other types of files on your Mac.


It is a cross-platform application that lets you connect to the eDonkey and Kademlia peer-to-peer networks to share and exchange files with other users.

Using aMule is within anyone's reach: just connect to a server or to the Kad network using the toolbar buttons, search for the file you want to download, and wait for the download to complete.

aMule offers practically all of eMule's features. These naturally include the many search options, the built-in instant messaging system and, of course, download and upload management, which gives high priority only to users who share many files.


There are plenty of reasons to choose aMule, first among them its ease of use and how easily you can download files without running searches from your browser and getting lost in the depths of the web. For exactly this reason we recommend it to less experienced users who want to avoid complications when downloading.

aMule's biggest drawback, however, is that it does not support torrent files. That P2P sharing system usually proves a faster option for downloading large files, speeding up the download process considerably.


One downside of eMule's abundance of search results is the problem of falsely named or corrupted files, but there are many ways to get around fakes, such as checking the file's alternative names.

Among the many servers you can connect to there are also quite a few spy or fake servers: you can protect yourself with a careful but simple configuration, following the same instructions as for eMule.


aMule is particularly well suited to less experienced users thanks to its ease of use and its wealth of search results.

Download aMule 2.3.1 from Softonic

          GIMP 2.8.14 - Photo retouching with the open source image editor        

The most reliable alternative to Photoshop is called GIMP, and it lets you retouch, create and edit images, photos and even multimedia presentations thanks to a wide array of tools and filters. Perfect for home users, thanks to its well-designed interface. GIMP is open source and available in Italian.

Let's clear up a mystery right away. The name GIMP stands for "The GNU Image Manipulation Program" and, as you will have guessed, it is an open source image editing application: in short, the free alternative to Adobe Photoshop. GIMP (formerly known as The GIMP) has become deservedly popular because, thanks to its active community of users and developers, it has grown enormously and offers such a quantity of plug-ins, freely downloadable from the official site, that it is starting to have little to envy its expensive Adobe rival.

GIMP, within an improved, well-designed, multi-window and fully customizable interface, offers far more tools, plugins and photo filters than the average user would ever need. The palettes are very similar to Photoshop's, so anyone who is not a complete novice will quickly feel at home. In any case, the software comes with extensive documentation and many tutorials available online.

You will discover that GIMP lets you work on photos and graphic creations by changing their size and quality, cropping out parts, correcting levels and channels, as well as brightness, contrast, color saturation and tonal values, and even working on animated GIFs (something recent versions of Photoshop no longer allow). If you are not satisfied with the result, undoing changes in GIMP is easy. As for converting and saving images, The GIMP supports the most popular formats, including BMP, GIF, JPG, PCX, PNG, PS, TIF, TGA and XPM.

So, is everything rosy? Not quite. We must admit two things are true: GIMP shows signs of instability now and then, and its loading times are a little long, even longer than Photoshop's. Finally, apart from the abundance of filters and plug-ins and the ability to create batch processes for quick automatic edits to your images, GIMP lacks many of the advanced features aimed at professional use. For home use, however, it is more than sufficient.

Download GIMP 2.8.14 from Softonic

          Burn 2.5.1 - Burning discs is child's play        

Burn is a complete disc burning application with an extremely friendly interface within anyone's reach; perfect for those who want to copy data, audio, video or image files to CD/DVD easily, but with quality. Another bet won by the open source community.

There is no shortage of quality burning programs on Mac, yet the open source community wants its say with software that, although it lacks the profile of industry giants like Dragon Burn or Toast, has very little to envy those programs.

What strikes you about Burn is its ease of use: every function is clear, and even first-time users will manage to burn data, audio and video in no time. The program's interface offers an options tab for each of these categories.

The video section, in particular, offers the interesting ability to create VCDs, SVCDs, DivX and DVDs, with the option to automatically convert any type of footage to a format compatible with these standards. If all that were not enough, Burn also lets you create disc image files or burn the ones you already have to CD/DVD (it supports several of the most common formats for this, such as ISO and DMG).

The program offers a tidy options menu that is, in truth, rather light on advanced features; still, that is a minor quibble that does not particularly penalize Burn, which remains a recommended program clearly focused on immediacy.

Download Burn 2.5.1 from Softonic

          Tizen powered Samsung device expected in 2013        

Samsung is the biggest backer of the open source Tizen OS, and according to the Daily Yomiuri, Tizen-powered devices will make their way to market next year. While the newspaper didn't divulge any more details, NTT Docomo is said to be the Japanese carrier that will offer a Tizen-running phone manufactured by Samsung. [quote]The OS that Docomo […]

The post Tizen powered Samsung device expected in 2013 appeared first on Sammy Hub.

          Zabbix: The 2 Best Network Monitoring Courses        

Suporte Ninja

We researched and found two options for anyone who wants to learn more about Zabbix, the fastest-growing monitoring tool on the market. First course: CURSO MONITORAMENTO DE REDES COM ZABBIX (a course from AulaEAD). During the course you will learn the importance of monitoring network environments using the open source tool Zabbix 3.0, and the features it offers for the most varied uses, whether they are...

The post Zabbix: Os 2 Melhores Cursos de Monitoramento de Redes appeared first on Suporte Ninja.

          Gruf, a Gerrit command line utility        

(See also the followup to this article.)

I've recently started spending more time interacting with Gerrit, the code review tool used both by OpenStack and by a variety of other open source projects via GerritForge's GitHub-linked service. I went looking for command line tools …

          Using tools badly: time shifting git commits with Workinghours        

This is a terrible hack. If you are easily offended by bad ideas implemented poorly, move along!

You are working on a wonderful open source project...but you are not supposed to be working on that project! You're supposed to be doing your real work! Unfortunately, your extra-curricular activity is …

          A new start        
Well, it had to happen someday. Welcome to my first ever blog posting.

I can't promise this is going to be the most regular of blogs, but I will do my best.

I am the principal author of an obscure open source project called JPype.

I thought it would be interesting to share thoughts and ideas that come to me as I develop it. So this blog is going to be very programming-oriented.

Well, that's it for now. More on this later!
          Introduction to Docker        

dotCloud founder and CTO Solomon Hykes recently stopped by Twitter HQ to show us Docker, an open source project designed to easily create lightweight, portable, self-sufficient containers from any application.

Common use cases for Docker include:

  • Automating the packaging and deployment of applications
  • Creation of lightweight, private PAAS environments
  • Automated testing and continuous integration/deployment
  • Deploying and scaling web apps, databases and backend services

Docker is the world’s leading software container platform. Developers use Docker to eliminate “works on my machine” problems when collaborating on code with co-workers. Operators use Docker to run and manage apps side-by-side in isolated containers to get better compute density. Enterprises use Docker to build agile software delivery pipelines to ship new features faster, more securely and with confidence for both Linux and Windows Server apps.

Docker automates the repetitive tasks of setting up and configuring development environments so that developers can focus on what matters: building great software.

Developers using Docker don’t have to install and configure complex databases nor worry about switching between incompatible language toolchain versions. When an app is dockerized, that complexity is pushed into containers that are easily built, shared and run. Onboarding a co-worker to a new codebase no longer means hours spent installing software and explaining setup procedures. Code that ships with Dockerfiles is simpler to work on: Dependencies are pulled as neatly packaged Docker images and anyone with Docker and an editor installed can build and debug the app in minutes.
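As a concrete sketch of what "dockerized" means in practice, here is a minimal, hypothetical Dockerfile for a Python web app; the file names, port, and entry point are illustrative assumptions, not taken from the talk:

```dockerfile
# Start from a slim official Python base image.
FROM python:3.9-slim

WORKDIR /app

# Copy and install dependencies first, so Docker's layer cache
# skips this step when only application code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code into the image.
COPY . .

# Document the port the app listens on and define how to start it.
EXPOSE 8000
CMD ["python", "app.py"]
```

With a file like this in the repository, anyone with Docker installed can build and run the app with `docker build -t myapp .` followed by `docker run -p 8000:8000 myapp`, without installing Python or any of the app's dependencies on the host.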

          Open Content in Games        

On the first day of sessions (Saturday), one of the sessions I attended was on managing content in open source games. There was an earlier session on open source games in general, which I will cover in a later post. A very important question for open source games is: how can we develop games with high-quality content while still managing artists who are not familiar with open source development procedures?

The folks from Battle for Wesnoth began by talking about their problems in getting content for their game. Originally, they said, the game had fairly poor graphics. Over time it has become much more aesthetically appealing, with animated sprites and high-quality artwork, and they were eventually able to find a good artist to fill the position of lead artist. The lead artist is the person who signs off on all the artwork in the game, whether textures, concept art, or sprites and animations.

One thing the Battle for Wesnoth developers had to deal with was younger artists. They received artwork from a number of teenagers, but much of it was fairly low-quality due to the artists' lack of experience. So, at times, it was necessary simply to tell people that their artwork wasn't of the quality Battle for Wesnoth needed. On the other hand, they admitted that they accepted artwork from a number of artists who weren't very good initially but improved over time.

Worldforge is another project that had representatives present in this session. Worldforge isn't exactly a game, but rather an open source world-building application. Its content requirements are enormous, and it needs a way to accept, rate, and store assets. The Worldforge representatives said they have begun building an application that takes assets from artists and stores and presents them for evaluation by an administrator. Ideally, the system would rank assets, present them to an administrator for approval, and, if approved, commit them to the asset version control system. The application they presented is called Wombat, and they are asking for help from open source developers to evaluate and improve it.

BZFlag also spoke about its asset-management system. A primary problem the project encounters is that artists don't understand licensing; one of the things it had to add to its submission form was an "I stole this" option for artists submitting content.

One of the things that I find more difficult about content in games is finding artists. I realize that expecting artists to commit assets to an svn server is probably not going to work, but how does one find artists in the first place? I think this is a very important question, as both groups brought up the point that artists aren't as familiar with open source software development as computer scientists are. They aren't as exposed to the different groups as we are, and may have no idea that they can gain experience with an open source software group. Some of the questions that I have are: "How can we advertise more effectively, within minimal budgets, to get artists more interested in our projects?", "How can we keep artists around once they've contributed a single piece of artwork?", "How can we help improve a mediocre artist's skills?", and "How can we express to an artist what we need done, and have them actually do the necessary work, rather than what they want to do?"

More pictures and posts to come!

          Google Mentor Summit 2008        

So, I just got back to Roseville from Mountain View, CA. It was pretty nice there, so it's not quite so wonderful to return to 34 degree (F) weather here in Minnesota, but home is home, and I'm glad to be back.

It was a great trip. It was my first time attending the mentor summit (as this was my first year as a mentor for GSoC), and I had a great time. I learned a ton and had the opportunity to meet a large number of very interesting people. In addition, I learned quite a bit about open source management, as well as game development in general. I intend to share all of these things with you over the next few days, but I need to get some sleep right now, so I will leave you with the next best thing: pictures! (more to come, I promise)

Marten Svanfeldt (left) and Scott Johnson (right) at the Wild Palms Hotel


Chris Want (left) and Kent Mein (right) from the Blender Foundation in the Google Lobby

Marten Svanfeldt hanging out with some open source developers, waiting for the bus to the Googleplex

Outside the Googleplex in Mountain View, CA, USA

          The Daily Six Pack: July 28, 2014        

A collection of links for lovers of I.T., by Dirk Strauss.
  • Feature link: ApexSQL Refactor: How to Format and Refactor Your SQL Code Directly in SSMS and Visual Studio, Pinal Dave
  • Other: Open source projects from Microsoft
  • Make Windows 8.1 Sit, Stay on the Desktop, Shawn Keene
  • How To Get The Boss To [...]

The post The Daily Six Pack: July 28, 2014 appeared first on Dirk Strauss.


          Learn To Code, It’s Harder Than You Think        

TL;DR: All the evidence shows that programming requires a high level of aptitude that only a small percentage of the population possess. The current fad for short learn-to-code courses is selling people a lie and will do nothing to help the skills shortage for professional programmers.

This post is written from a UK perspective. I recognise that things may be very different elsewhere, especially concerning the social standing of software developers.

It’s a common theme in the media that there is a shortage of skilled programmers (‘programmers’, ‘coders’, ‘software developers’, all these terms mean the same thing and I shall use them interchangeably). There is much hand-wringing over this coding skills gap. The narrative is that we are failing to produce candidates for the “high quality jobs of tomorrow”. For example, this from The Telegraph:

“Estimates from the Science Council suggest that the ICT workforce will grow by 39 per cent by 2030, and a 2013 report from O2 stated that around 745,000 additional workers with digital skills would be needed to meet demand between now and 2017.

Furthermore, research by City & Guilds conducted last year revealed that three quarters of employers in the IT, Digital and Information Services Sector said that their industry was facing a skills gap, while 47 per cent of employers surveyed said that the education system wasn’t meeting the needs of business.”

Most commentators see the problem as being a lack of suitable training. Not enough programmers are being produced from our educational institutions. For example, here is Yvette Cooper, a senior Labour party politician, in The Guardian:

“The sons and daughters of miners should all be learning coding. We have such huge advantages because of the world wide web being invented as a result of British ingenuity. We also have the English language but what are we doing as a country to make sure we are at the heart of the next technology revolution? Why are we not doing more to have coding colleges and technical, vocational education alongside university education?”

There is also a common belief in the media that there are high barriers to entry to learning to code. This from the Guardian is typical:

“It’s the must-have skill-set of the 21st century, yet unless you’re rich enough to afford the training, or fortunate enough to be attending the right school, the barriers to learning can be high.”

So the consensus seems to be that high barriers to entry and a lack of accessible training mean that only a rich and well educated elite have access to these highly paid jobs. The implication is that there is a large population of people for whom programming would be a suitable career if only they could access the education and training that is currently closed to them.

In response, there are now a number of initiatives to encourage people to take up programming. The UK government created ‘Year of Code’ in 2014:


The message is “start coding this year, it’s easier than you think.” Indeed the executive director of Year of Code, Lottie Dexter, said in a Newsnight interview that people can “pick it up in a day”., a “non-profit dedicated to expanding participation in computer science education”, says on its website that it “aims to help demystify that coding is difficult”.


So is it really that easy to learn how to code and get these high paying jobs? Is it really true that anyone can learn to code? Is it possible to take people off the streets, give them a quick course, and produce professional programmers?

What about more traditional formal education? Can we learn anything about training programmers from universities?

Given the skills shortage one would expect graduates from computer science courses to have very high employment rates. However, it seems that is not the case. The Higher Education Statistics Agency found that computer science graduates have “the unwelcome honour of the lowest employment rate of all graduates.” Why is this? Anecdotally there seems to be a mismatch between the skills the students graduate with and those that employers expect them to have. Or more bluntly, after three years of computer science education they can’t code. A comment on this article by an anonymous university lecturer has some interesting insights:

“Every year it's the same - no more than a third of them [CS students] are showing the sort of ability I would want in anyone doing a coding job. One-third of them are so poor at programming that one would be surprised to hear they had spent more than a couple of weeks supposedly learning about it, never mind half-way through a degree in it. If you really test them on decent programming skills, you get a huge failure rate. In this country it's thought bad to fail students, so mostly we find ways of getting them through even though they don't really have the skills.”

Other research points to similar results. There seems to be a ‘double hump’ in the outcome of any programming course between those who can code and those who can’t.

“In particular, most people can't learn to program: between 30% and 60% of every university computer science department's intake fail the first programming course.”

Remember we are talking about degree level computing courses. These are students who have been accepted by universities to study computer science. They must be self selecting to a certain extent. If the failure rate for programming courses is so high amongst undergraduates it would surely be even higher amongst the general population - the kinds of candidates that the short ‘learn to code’ courses are attempting to attract.

Let’s look at the problem from the other end of the pipeline. Let’s take successful professional software developers and ask them how they learnt to code. One would expect from the headlines above that they had all been to expensive, exclusive coding schools. But here again that seems not to be the case. Here are the results of the 2015 Stack Overflow developers survey. Note that this was a global survey, but I think the results are relevant to the UK too:


Only a third have a computer science or related degree and nearly 42%, the largest group, are self taught. I have done my own small and highly unscientific research on this matter. I run a monthly meet-up for .NET developers here in Brighton, and a quick run around the table produced an even more pronounced majority for the self-taught. For fun, I also did a quick Twitter poll:


76% say they are self taught. Also interesting were the comments around the poll. This was typical:


Even programmers with CS degrees insist that they are largely self taught. Others complained that it was a hard question to answer since the rate of change in the industry means that you never stop learning. So even if you did at some point have formal training, you can’t rely on that for a successful career. Any formal course will be just a small element of the continual learning that defines the career of a programmer.

We are left with a very strange and unexpected situation. Formal education for programmers seems not to work very well, and yet most successful programmers are self taught. On the one hand we seem to have people who don’t need any guided education to give them a successful career; they are perfectly capable of learning their trade from the vast sea of online resources available to anyone who wants to use it. On the other hand we have people who seem unable to learn to code even with years of formal training.

This rather puts the lie to the barriers to entry argument. If the majority of current professional software developers are self taught, how can there be barriers to entry? Anyone with access to the internet can learn to code if they have the aptitude for it.

The evidence points to a very obvious conclusion: there are two populations: one that finds programming a relatively painless and indeed enjoyable thing to learn and another that can’t learn no matter how good the teaching. The elephant in the room, the thing that Yvette Cooper, the ‘year of code’ or ‘hour of code’ people seem unwilling to admit is that programming is a very high aptitude task. It is not one that ‘anyone can learn’, and it is not easy, or rather it is easy, but only if you have the aptitude for it. The harsh fact is that most people will find it impossible to get to any significant standard.

If we accept that programming requires a high level of aptitude, it’s fun to compare some of the hype around the ‘learn to code’ movement with more established high-aptitude professions. Just replace ‘coder’ or ‘coding’ with ‘doctor’, ‘engineer’, ‘architect’ or ‘mathematician’.

  • “You can pick up Maths in a day.”
  • Start surgery this year, it’s easier than you think!
  • aims to help demystify that architecture is difficult.
  • “The sons and daughters of miners should all be learning to be lawyers.”

My friend Andrew Cherry put it very well:


Answer: only one: software development. You want to be a doctor? Go to medical school for seven years.

Accepting that aptitude is important for a successful career in programming, we can approach the ‘shortage’ problem from a different angle. We can ask how we can persuade talented people to choose programming rather than other high-aptitude professions. The problem is that these individuals have a great deal of choice in their career path and, as I’m going to explain, programming has a number of negative social and career attributes which make them unlikely to choose it.

There’s no doubt that software development is a very attractive career. It’s well paid, mobile, and the work itself is challenging and rewarding. But it has an image problem. I first encountered this at university in the 1990s. I did a social science degree (yes, I’m one of those self taught programmers). Socially, we arts students looked down on the people studying computer science; they were the least cool students on the campus: mostly guys, with poor dress sense. If anyone considered them at all it was with a sense of pity and loathing. When, towards the end of my degree, I told my then girlfriend, another social science student, that I might choose a career in programming, she exclaimed, “oh no, what a waste. Why would you want to do that?” If you did a pop quiz at any middle-class gathering in the UK and asked people to compare, say, medicine, law, architecture or even something like accountancy with software development, I can guarantee that they would rate software development as having the lower social status. Even within business, or at least more traditional businesses, software development is seen as a relatively menial, middle-brow occupation suitable for juniors and those ill-qualified for middle management. Perversely, all these courses saying ‘learn to code, it’s easy’ just reinforce the perception that software development is not a serious career.

There’s another problem with software development that’s the flip side of the low barriers to entry mentioned above, and that is there is no well established entry route into the profession. Try Googling for ‘how to become a doctor’, or ‘how to become a lawyer’ for example:


There are a well established series of steps to a recognised professional qualification. If you complete the steps, you become a recognised member of one of these professions. I’m not saying it’s easy to qualify as a doctor, but there’s little doubt about how to go about it. Now Google for ‘how to become a software developer’, the results, like this one for example, are full of vague platitudes like ‘learn a programming language’, ‘contribute to an open source project’, ‘go to a local programming group’. No clear career path, no guarantees about when and if you will be considered a professional and get access to those high-paying jobs of the future.

Yes, I made this up, but it makes the point. :)

Now take a high-aptitude individual who has done well at school and finds demanding intellectual tasks relatively straightforward, and offer them a choice: on the one hand, here is a career, let’s take medicine for example, you follow these clearly enumerated steps, which are demanding but you are good at passing exams, and at the end you will have a high-status, high paying job. Or, how about this career: go away, learn some stuff by yourself, we’re not sure exactly what; try and get a junior, low status job, and just learn more stuff – which you can work out somehow – and work your way up. No guarantees that there’s a well paying job at the end of it. Oh, and, by the way, the whole world will think you are a bit of a social pariah while you are about it. Which would you choose?

So could software development follow the example of older professions and establish a professional qualification with high barriers to entry? There are attempts to do this. The British Computer Society (BCS) calls itself ‘the chartered institute for IT’ and seeks to establish professional qualifications and standards. The problem is that it’s comprehensively ignored by the software industry. Even if you could get the industry to take a professional body seriously, how would you test people to see if they qualified? What would be on the exam? There are very few established practices in programming, and as soon as one seems to gain some traction it gets undermined by the incredibly rapid pace of change. Take object oriented programming for example. In the 2000s it seemed to be establishing itself as the default technique for enterprise programming, but now many people, including myself, see it as a twenty year diversion and largely a mistake. Could programming standards and qualifications keep up with current practice? Not quickly enough, I suspect.

However, my main point in this post has been to establish that programming is a high-aptitude task, one that only some people are capable of doing with any degree of success. If the main point of a professional qualification is to filter out people who can’t code, does it really matter if what is being tested for is out of date, or irrelevant to current industry practices? Maybe our tentative qualification would involve the completion of a reasonably serious program in LISP? A kind of Glass Bead Game for programmers? The point would be to find out if they can code. They can learn what the current fads are later. The problem still remains how to get industry to recognise the qualification.

In the meantime we should stop selling people a lie. Programming is not easy, it is hard. You can’t learn to code, certainly not to a standard that will get you a well-paid-job-of-the-future, in just a few weeks. The majority of the population cannot learn to code at all, no matter how much training they receive. I doubt very much that the plethora of quick learn-to-code courses will have any impact at all on the skills shortage, or on the problems of low pay and unemployment among the unskilled. Let’s stop pretending that there are artificial barriers to entry and accept that the main barrier to anyone taking it up is their natural aptitude for it. Instead let’s work on improving the social status of the software industry – I think this is in any case happening slowly – and on encouraging talented young people to consider it as a viable alternative to the other top professions.

          A Simple Nowin F# Example        

In my last post I showed a simple F# OWIN self hosted server without an application framework. Today I want to show an even simpler example that doesn’t reference any of the Microsoft OWIN libraries, but instead uses an open source server implementation, Nowin. Thanks to Damien Hickey for pointing me in the right direction.

The great thing about the Open Web Interface for .NET (OWIN) is that it is simply a specification. There is no OWIN library that you have to install to allow web servers, application frameworks and middleware built to the OWIN standard to communicate. There is no interface that they must implement. They simply need to provide an entry point for the OWIN application delegate (better known as the AppFunc):

    Func<IDictionary<string, object>, Task>

For simple applications, where we don’t need routing, authentication, serialization, or an application framework, this means we can simply provide our own implementation of the AppFunc and pass it directly to an OWIN web server.

Nowin, by Boris Letocha, is a .NET web server, built directly against the standard .NET socket API. This means it should work on all platforms that support .NET without modification. The author claims that it has equivalent performance to NodeJS on Windows and can even match HttpListener. Although not ready for production, it makes a compelling implementation for simple test servers and stubs, which is how I intend to use it.

To use any OWIN web server with F#, we simply need to provide an AppFunc and since F# lambdas have an implicit cast to System.Func<..> we can simply provide the AppFunc in the form:

    fun (env: IDictionary<string, obj>) -> Task.FromResult(null) :> Task

Let’s see it in action. First create an F# console application and install the Nowin server with NuGet:

    Install-Package Nowin

Now we can host our Nowin server in the application’s entry point:

    [<EntryPoint>]
    let main argv = 

        // port is arbitrary here
        let port = 8888

        use server = 
            ServerBuilder
                .New()
                .SetEndPoint(new IPEndPoint(IPAddress.Any, port))
                .SetOwinApp(fun env -> Task.FromResult(null) :> Task)
                .Build()

        server.Start()

        printfn "Server listening on http://localhost:%i/ \nhit <enter> to stop." port
        Console.ReadLine() |> ignore

        0


Of course this server does nothing at all. It simply returns the default 200 OK response with no body. To do any useful work you need to read the OWIN environment, understand the request and create a response. To make this easier in F# I’ve created a simple OwinEnvironment type with just the properties I need. You could expand this to encompass whatever OWIN environment properties you need. Just look at the OWIN spec for this.

    type OwinEnvironment = {
        httpMethod: string;
        requestBody: Stream;
        responseBody: Stream;
        setResponseStatusCode: (int -> unit);
        setResponseReasonPhrase: (string -> unit)
        }

Here is a function that takes the AppFunc environment and maps it to my OwinEnvironment type:

    let getOwinEnvironment (env: IDictionary<string, obj>) = {
        httpMethod = env.["owin.RequestMethod"] :?> string;
        requestBody = env.["owin.RequestBody"] :?> Stream;
        responseBody = env.["owin.ResponseBody"] :?> Stream;
        setResponseStatusCode = 
            fun (statusCode: int) -> env.["owin.ResponseStatusCode"] <- statusCode
        setResponseReasonPhrase = 
            fun (reasonPhrase: string) -> env.["owin.ResponseReasonPhrase"] <- reasonPhrase
        }

Now that we have our strongly typed OwinEnvironment, we can grab the request stream and response stream and do some kind of mapping. Here is a function that does this. It also only accepts POST requests, but you could do whatever you like in the body. Note the transform function is where the work is done.

    let handleOwinEnvironment (owin: OwinEnvironment) : unit =
        use writer = new StreamWriter(owin.responseBody)
        match owin.httpMethod with
        | "POST" ->
            use reader = new StreamReader(owin.requestBody)
            writer.Write(transform(reader.ReadToEnd()))
        | _ ->
            owin.setResponseStatusCode 400
            owin.setResponseReasonPhrase "Bad Request"
            writer.Write("Only POST requests are allowed")

Just for completeness, here is a trivial transform example:

    let transform (request: string) : string =
        sprintf "%s transformed" request

Now we can re-visit our console Main function and pipe everything together:

    [<EntryPoint>]
    let main argv = 

        // port is arbitrary here
        let port = 8888

        use server = 
            ServerBuilder
                .New()
                .SetEndPoint(new IPEndPoint(IPAddress.Any, port))
                .SetOwinApp(fun env -> 
                    env
                    |> getOwinEnvironment 
                    |> handleOwinEnvironment 
                    |> endWithCompletedTask)
                .Build()

        server.Start()

        printfn "Server listening on http://localhost:%i/ \nhit <enter> to stop." port
        Console.ReadLine() |> ignore

        0


The endWithCompletedTask function is a little convenience to hide the ugly synchronous Task return code:

    let endWithCompletedTask = fun x -> Task.FromResult(null) :> Task

So as you can see, OWIN and Nowin make it very easy to create small web servers with F#. Next time you just need a simple service stub or test server, consider doing something like this, rather than using a heavyweight server and application framework such as IIS, MVC, WebAPI or WebForms.

You can find the complete code for the example in this Gist.
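For comparison, the behaviour of this little stub (POST a body, get it back transformed; anything else gets a 400) can be sketched outside .NET as well. Here is a hedged, minimal Python equivalent using only the standard library; the handler name and the " transformed" suffix simply mirror the trivial transform above and are not part of OWIN or Nowin:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class TransformStub(BaseHTTPRequestHandler):
    """Minimal test stub: POST a body, get it back 'transformed'."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode("utf-8")
        response = f"{body} transformed".encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Length", str(len(response)))
        self.end_headers()
        self.wfile.write(response)

    def do_GET(self):
        # Mirror the F# stub: anything but POST is a 400.
        msg = b"Only POST requests are allowed"
        self.send_response(400, "Bad Request")
        self.send_header("Content-Length", str(len(msg)))
        self.end_headers()
        self.wfile.write(msg)

    def log_message(self, *args):
        pass  # keep output quiet

# Start the stub on an ephemeral port and exercise it once.
server = HTTPServer(("127.0.0.1", 0), TransformStub)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

req = urllib.request.Request(
    f"http://127.0.0.1:{port}/", data=b"hello", method="POST")
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # hello transformed
server.shutdown()
```

The point is the same as in the F# version: for a test stub you need nothing more than a socket server and a request-to-response mapping.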

          How to Run a Successful Open Source Project        

A couple of months ago I attended BarCamp Brighton, an open conference at Brighton University. Everyone is encouraged to present a session, and as I don’t need much excuse to talk to a room full of people I thought I’d do an unrehearsed talk on running an open source project based on my experiences with EasyNetQ. I spent about half an hour going through what EasyNetQ is, its history, and how I organise things, to a room of six people. I know, fame at last! The really interesting bit came during the questions when I got into a discussion with a chap who worked for the Mozilla foundation. He asked what makes a successful open source project. Of course ‘success’ is very subjective. Success for me means that some people are interested enough in EasyNetQ to download it from NuGet and commit time to sending me bug reports and pull requests, for other people success might be defined as a project so popular that they can make a living from supporting it. Given the former, lower bar, here are some of the things that we came up with.

  • It has to do something useful. Obvious, yes. The classic ‘scratching an itch’ reason for starting an OSS project usually means that it’s useful to you. How successful your project will be depends on how common your problem is.
  • A clear mission. If you have a clear mission, it’s easy for people to understand the problem you are trying to solve. EasyNetQ’s mission is to provide the simplest possible API for RabbitMQ on .NET. People know what the intention is, and it provides a guide to EasyNetQ’s design.
  • Easy to install. If your environment provides a package manager, make sure your project provides a package for it. Otherwise make sure you provide a simple installer and clear instructions. Don’t insist that people build your project from source if you can avoid it. EasyNetQ installs from NuGet in a few seconds.
  • A good ‘first 20 minutes’ experience. Make your project super easy to get started with. Have a quick start guide to help users get something working as soon as possible. You can get EasyNetQ up and running with two console apps publishing and subscribing within 10 minutes. Hmm, I need to get that quick start guide done though :p
  • Documentation! Clear, simple to follow, documentation is really helpful to your users. Keep it up to date. A wiki that anyone can update is ideal. I use the GitHub wiki for EasyNetQ’s documentation. If you can register a domain name and have a nice homepage with a short pithy introduction to your project, so much the better.
  • A forum for communicating with your users. I use a Google group. But there are many other options. Make sure you answer people’s questions. Be nice (see below).
  • Let people know what’s happening. A blog, like this, is an ideal place to write about the latest project news. Let people know about developments and your plans for the future.
  • Release early, release often. Long feedback loops kill software. That’s true of any project, open or closed, but much more so for open source. The sooner you get your code in front of people, the sooner you find out when it isn’t working. Have a continuous build process that allows people to be able to quickly install the very latest version. You can split out ‘stable’ from ‘development’ releases if you want. I don’t bother, every commit to EasyNetQ’s GitHub repository is immediately published to NuGet, but that will probably change as the project matures.
  • Have a versioning policy and explain it. You can see EasyNetQ’s here.
  • Use GitHub to host your source code. Yes, yes, I know there are alternatives, but GitHub has made hosting and contributing to OSS projects much easier. It has become so ubiquitous that it’s the obvious choice. Everyone knows how it works.
  • Be nice! It’s easy to get frustrated with weird questions on the mailing list, or strange pull requests, but always remember to be polite and helpful. Be especially nice to people who send you bug reports and pull requests, they are helping your project for no gain. Be thankful.

I’m sure there are loads of other things that I’ll think of as soon as I hit ‘publish’ on this post, so don’t think of this as comprehensive. I’d love to hear any more suggestions in the comments.

Of course the main problem most people have to overcome with running an OSS project is finding the time. I’ve been extraordinarily fortunate that 15below have sponsored EasyNetQ. If you use it and like it, don’t forget to thank them.

          Open Source Band: Fresh talent every year        
The Open Source Band is an SVRocks tradition – available to the public for use and modification beyond its original design. Band members this year include: Jonah Matranga, Moniz Franco, Whitney Nichole, Greg Studley, Larry Marcus, Andrew Stess, Maxine Marcus and Alexandra Elliott.  Tell us about your band. How did you get started? How long […]
          today's leftovers        
  • Restarting the free accounting search

    Back in 2012, we started a quest to find a free replacement for the QuickBooks Pro package that is used to handle accounting at LWN. As is the way of such things, that project got bogged down in the day-to-day struggle of keeping up with the LWN content treadmill, travel, and other obstacles that the world tends to throw into the path of those following grand (or not so grand) ambitions. The time has come, however, to restart this quest and, this time, the odds of a successful outcome seem reasonably good.

    Accounting data is crucial to the proper operation of any but the most trivial of businesses. It provides metrics showing how well the business is operating, and a company's duties to report to governments cannot be performed without it. Accounting is often tightly tied to a company's day-to-day operations, such that a failure of the accounting system can bring the entire business down. Given that, one would think that businesses would demand open and free access to their own accounting data.

    Proprietary systems like QuickBooks do not provide that access; instead, accounting data is stored in a mysterious, proprietary file format that is difficult to access — especially if one is uninterested in developing on Windows using a proprietary development kit. Locking up data in this way makes moving to a competing system hard, naturally, though a number of (proprietary) alternatives have found a way. It also makes it hard to get company data into the system in any sort of automated way. LWN operates with a set of scripts that convert data into the IIF format for importing, for example.

  • OSGeo-Live 11.0 Released

    Version 11.0 of the OSGeo-Live GIS software collection has been released, ready for FOSS4G 2017, the International Conference for Free and Open Source Software for Geospatial, in Boston, USA.

  • 6 hardware projects for upgrading your home

    Every day, hobbyists and tinkerers are pushing the boundaries of what we can do with low-cost microcontrollers and mini-computers like the Arduino and Raspberry Pi. That trend doesn't stop when it comes to IoT and home automation. In this article, I'll round up six projects from Adafruit Industries that use open source hardware and software to improve home life (or at the very least, make it more fun) in new and interesting ways.
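The LWN accounting item above mentions scripts that convert data into the IIF format for importing. Those scripts aren't shown, but the general shape of such a converter is easy to sketch: IIF is a tab-separated text format in which each transaction is a TRNS row, one or more SPL (split) rows, and an ENDTRNS terminator. The column set, field names and sample data below are a minimal illustrative subset invented for this sketch, not LWN's real schema:

```python
def to_iif(transactions):
    """Render a list of simple two-sided transactions as IIF text.

    Each transaction is a dict with 'date', 'account', 'offset',
    'amount' and 'memo'; the balancing split goes to the offset
    account. This is a minimal illustrative subset of IIF.
    """
    lines = [
        "!TRNS\tDATE\tACCNT\tAMOUNT\tMEMO",
        "!SPL\tDATE\tACCNT\tAMOUNT\tMEMO",
        "!ENDTRNS",
    ]
    for t in transactions:
        # Main row, then the balancing split with the opposite sign.
        lines.append(f"TRNS\t{t['date']}\t{t['account']}\t{t['amount']:.2f}\t{t['memo']}")
        lines.append(f"SPL\t{t['date']}\t{t['offset']}\t{-t['amount']:.2f}\t{t['memo']}")
        lines.append("ENDTRNS")
    return "\n".join(lines) + "\n"

print(to_iif([{"date": "07/31/2017", "account": "Checking",
               "offset": "Subscriptions", "amount": 42.00,
               "memo": "reader subscription"}]))
```

A real importer would of course need to match the chart of accounts and the full header row expected by the accounting package.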


          today's leftovers        
  • Another DIY Net Player

    This is a Raspberry Pi based audiophile net player that decodes my mp3 collection and net radio for my Linn amplifier. It is called TeakEar because its main corpus is made from teak wood. Obviously I do not want to waste rain forest trees just because of my funny ideas; the teak wood used here had been a table from the 1970s, from back when nobody cared about rainforests. I had the chance to save parts of the table when it was sorted out, and now use its valuable wood for special things.

  • August 2017 Issue of The PCLinuxOS Magazine Released

    The PCLinuxOS Magazine staff is pleased to announce the release of the August 2017 issue. With the exception of a brief period in 2009, The PCLinuxOS Magazine has been published on a monthly basis since September, 2006. The PCLinuxOS Magazine is a product of the PCLinuxOS community, published by volunteers from the community. The magazine is led by Paul Arnote, Chief Editor, and Assistant Editor Meemaw. The PCLinuxOS Magazine is released under the Creative Commons Attribution-NonCommercial-Share-Alike 3.0 Unported license, and some rights are reserved. All articles may be freely reproduced via any and all means following first publication by The PCLinuxOS Magazine, provided that attribution to both The PCLinuxOS Magazine and the original author are maintained, and a link is provided to the originally published article.

  • Ryzen Linux Users Are Still Facing Issues with Heavy Compilation Loads

    It was originally reported that Linux users were facing segmentation faults and, at times, crashes when running concurrent compilation loads on Ryzen CPUs, and these issues don’t appear to be fixed: Phoronix has run additional tests and found that heavy workloads remain problematic, as of Linux 4.13. These problems did not occur when tested using Intel CPUs.

  • 50+ Segmentation Faults Per Hour: Continuing To Stress Ryzen

    In direct continuation of yesterday's article about easily causing segmentation faults on AMD Zen CPUs, I have carried out another battery of tests for 24 hours and have more information to report today on the ability to trivially cause segmentation faults and in some cases system lock-ups with Ryzen CPUs.

  • Give Generously! Seven Ways To Help Open Source

    Your business most likely depends on open source software. But are you playing your part to make sure it will still be there in the future? For that to happen, the projects where it is both maintained and improved need to flourish.

    How can you contribute to that goal? The first thought most of us have — donate money — is unlikely to be the best way to support the open source projects that are most important to you. While proprietary software companies want your money in huge quantities to pay their shareholders, executives and staff, in open source communities most of the people who develop the code are paid elsewhere. As a consequence, there’s only a modest need for cash and a little goes a long way.

  • RFC: integrated 3rd-party static analysis support
  • GCC Working On 3rd Party Static Analysis Support

    Red Hat's David Malcolm has posted a series of patches for implementing third-party static analysis support within the GNU Compiler Collection (GCC).

          today's leftovers        
  • Google Grabs Nielsen as Business Apps User From Microsoft

    For word processing and spreadsheets, Nielsen staff now uses Google Docs and Sheets instead of Microsoft’s Word and Excel applications from its familiar Office suite of software. For video conferencing and messaging, Nielsen dropped Microsoft’s Skype in favor of Google equivalents.

  • 3DR Solo Back as Open Source Platform

    Don’t play Taps for 3DR‘s Solo yet. 3DR’s CEO Chris Anderson tweeted today that the Solo is getting a second life.

    In an article titled “The Solo Lives On” on the ArduPilot Blog – ArduPilot is an open-source autopilot system – the team explains how a community of developers worked to give the Solo a “heart transplant.” The developer of the now-obsolete Pixhawk 2.0 hardware flight system, the Solo’s stock system, has developed a bolt-on replacement which will allow for new ArduCopter firmware changes.

  • Bluetooth Mesh networks: Is a standards body right for IoT innovation?


    Mesh networks are not new. A mesh is a network topology in which each node relays data for the network, and all mesh nodes cooperate in the distribution of data. The IoT-purpose-built Zigbee—a low-power, low-bandwidth ad hoc network—is a mesh network. Dating to 2002, Aruba Networks was founded to build Wi-Fi mesh networks. In 2014, student protesters in Hong Kong used the mobile app FireChat to turn the crowd’s smartphones into a Wi-Fi and Bluetooth mesh network so that authorities could not interrupt the protesters’ coordinating conversations by blocking 3G and 4G network access.
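The defining behaviour of a mesh, every node relaying data for its neighbours, can be sketched as a simple flood over a graph. The topology and node names below are invented for illustration:

```python
from collections import deque

def flood(adjacency, origin):
    """Relay a message from origin across a mesh: every node that
    receives it forwards it to all neighbours it hasn't reached yet.
    Returns the set of nodes the message reaches."""
    reached = {origin}
    queue = deque([origin])
    while queue:
        node = queue.popleft()
        for neighbour in adjacency.get(node, ()):
            if neighbour not in reached:  # each node relays only once
                reached.add(neighbour)
                queue.append(neighbour)
    return reached

# A small ad hoc mesh: no node can talk to everyone directly, yet the
# message still reaches the whole network via intermediate relays.
mesh = {
    "a": ["b", "c"],
    "b": ["a", "d"],
    "c": ["a", "d"],
    "d": ["b", "c", "e"],
    "e": ["d"],
}
print(sorted(flood(mesh, "a")))  # ['a', 'b', 'c', 'd', 'e']
```

This is also why blocking the cellular network in the Hong Kong example above didn't help: as long as the relay graph stays connected, messages route around the gap.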

          today's leftovers        
  • Linux Weather Forecast

    This page is an attempt to track ongoing developments in the Linux development community that have a good chance of appearing in a mainline kernel and/or major distributions sometime in the near future. Your "chief meteorologist" is Jonathan Corbet, Executive Editor at If you have suggestions on improving the forecast (and particularly if you have a project or patchset that you think should be tracked), please add your comments below.

  • Linux guru Linus Torvalds is reviewing gadgets on Google+

    Now it appears the godfather of Linux has started to put all that bile to good use by reviewing products on Google+.

  • Learning to love Ansible

    I’ve been convinced about the merits of configuration management for machines for a while now; I remember conversations about producing an appropriate set of recipes to reproduce our haphazard development environment reliably over 4 years ago. That never really got dealt with before I left, and as managing systems hasn’t been part of my day job since then I never got around to doing more than working my way through the Puppet Learning VM. I do, however, continue to run a number of different Linux machines - a few VMs, a hosted dedicated server and a few physical machines at home and my parents’. In particular I have a VM which handles my parents’ email, and I thought that was a good candidate for trying to properly manage. It’s backed up, but it would be nice to be able to redeploy that setup easily if I wanted to move provider, or do hosting for other domains in their own VMs.
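For a sense of what managing such a machine declaratively can look like, here is a minimal hypothetical Ansible playbook (the host group, package, and service names are assumptions for illustration, not the author's actual configuration):

```yaml
# Hypothetical sketch: install and enable a mail stack on a managed VM.
- hosts: mailserver
  become: true
  tasks:
    - name: Install mail packages
      apt:
        name: [exim4, dovecot-imapd]
        state: present

    - name: Ensure mail services are running and enabled at boot
      service:
        name: "{{ item }}"
        state: started
        enabled: true
      loop: [exim4, dovecot]
```

Re-running the playbook is idempotent, which is what makes redeploying the same setup on a new provider straightforward.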

  • GSoC: Improvements in kiskadee architecture

    Today I released kiskadee 0.2.2. This minor release brings some architecture improvements, fixes some bugs in the plugins, and improves the log message format. First, let's take a look at the kiskadee architecture implemented in the 0.2 release.

  • How UndoDB works

    In the previous post I described what UndoDB is, now I will describe how the technology works.

    The naïve approach to recording the execution of a program is to record everything that happens, that is, the effects of every single machine instruction. This is what gdb does to offer reversible debugging.
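As a toy illustration of that record-everything idea (an assumption for exposition, not how UndoDB or gdb actually store state): save each instruction's effect before applying it, so execution can later be stepped backwards.

```javascript
// Toy record/replay sketch: each "instruction" is a (address, newValue)
// write to memory. Before executing it, we log the old value, so any
// instruction can be reverse-executed later.
function run(program, memory) {
  const log = []; // one undo record per executed instruction
  for (const [addr, newValue] of program) {
    log.push({ addr, oldValue: memory[addr] }); // record the effect
    memory[addr] = newValue;                    // execute
  }
  return log;
}

function stepBack(log, memory) { // reverse-execute the last instruction
  const { addr, oldValue } = log.pop();
  memory[addr] = oldValue;
}

const mem = { x: 0, y: 0 };
const log = run([['x', 1], ['y', 2], ['x', 5]], mem); // mem ends as {x: 5, y: 2}
stepBack(log, mem);                                   // undo the write x = 5
```

The obvious cost of this approach is that the log grows with every instruction executed, which is why practical tools use cleverer techniques.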

  • Wild West RPG West of Loathing Launches for PC/Mac/Linux on August 10th

    Today, developer Asymmetric announced that its comedic wild west RPG, West of Loathing, is poised to launch for PC, Mac, and Linux on August 10th.

  • Canonical asks users' help in deciding Ubuntu Linux desktop apps

    Canonical's Ubuntu Linux has long been one of the most popular Linux desktop distributions. Now, its leadership is looking to its users for help in deciding the default desktop applications in the next long-term support version of the operating system: Ubuntu 18.04.

    This release, scheduled for April 2018, follows October's Ubuntu 17.10, Artful Aardvark. Ubuntu 18.04 will already include several major changes. The biggest of these is that Ubuntu is abandoning its Unity 8 interface to go back to the GNOME 3.x desktop.

  • Enhanced Open Source Framework Available for Parallel Programming on Embedded Multicore Devices
  • Studiolada used all wood materials to create this affordable open-source home anyone can build

    Using wood panels as the principal building material reduced the project’s overall cost and footprint because the wooden beams and wall panels were cut and varnished in a nearby workshop. Prefabricated concrete was used to embed the support beams, which were then clad in wooden panels. In fact, wood covers just about everything in the home, from the walls and flooring to the ceiling and partitions. Sustainable materials such as cellulose wadding and wood fibers were even used to insulate the home.

          Top 10 Web Technologies Tools for Developers        

Web technologies have never stopped surprising us with what they bring to the vast ocean of the World Wide Web.

Now, merely developing web applications is not the goal; developing user-friendly applications is what every developer is aiming for.
Use of frameworks like Bootstrap and Semantic UI, and CMSes like WordPress and Joomla, keeps increasing. But is this the end? A big no... web technology is a domain where there is no stopping.

Web developers are always thinking about how to make their user interfaces friendlier. The design and efficiency of web-based applications keep improving, and these upcoming technologies will boost that trend in 2015.

1. ECMAScript 6: ES6 is the future of JavaScript and is about to bring in some exciting new features. Many browsers have already implemented some of its features, and Traceur can already accept ES6 code and transpile it to ES5 so that it works in today's browsers.
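A small sample of the ES6 syntax that transpilers such as Traceur lower to ES5 (illustrative snippets, not drawn from any particular project):

```javascript
// Arrow functions and template literals
const greet = name => `Hello, ${name}!`;

// Classes: syntactic sugar over prototype-based inheritance
class Shape {
  constructor(name) { this.name = name; }
}
class Circle extends Shape {
  constructor(r) { super('circle'); this.r = r; }
  area() { return Math.PI * this.r * this.r; }
}

// Destructuring with a rest element, and default parameters
const [first, ...rest] = [1, 2, 3];
const double = (xs = []) => xs.map(x => x * 2);
```

All of the above runs natively in modern engines; a transpiler simply rewrites it into the ES5 equivalents (function expressions, prototype chains, index accesses) for older browsers.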

2. AngularJS: Google's AngularJS, one of the most efficient and popular frameworks for developing single-page apps, is under development for its version 2.0. This version promises to be more powerful, better, and faster. The convergence of Durandal, another very popular framework, with AngularJS is definitely going to create an impact in the long run.

3. React: Facebook's UI library, React, is a great tool for developing user interfaces. React runs at both ends, on the client side and on the server side, and this is what makes it an excellent choice for creating isomorphic apps.

4. Meteor: The budding open source market has Meteor for creating real-time JavaScript apps. Meteor is notable for its tight integration: content in templates updates automatically when changes are made in the database.

5. Ionic: With the growth in mobile internet users, the need to develop mobile apps that work on all platforms is increasing. Ionic gives you a platform to create cross-platform mobile apps using HTML5 and JavaScript. For now, Ionic is still in beta. Ionic has drastically changed the way we build mobile apps: knowledge of just the front end is enough to build apps with it. With these many advantages, Ionic will make its stand in 2015.

6. Dart: Another open source project by Google is Dart, which aims to simplify web development. Lately Dart has been getting developers' attention, and AngularJS has been ported to it too.

7. Firebase: Firebase is the need of the day. It lets you sync and store data in real time. It has bindings for all popular programming languages and client-side MV* frameworks. Firebase is definitely going to be a popular solution for real-time back ends.

8. Parse: Parse provides a platform for building a complete backend for mobile apps. Both files and data can be stored efficiently in Parse, and sending push notifications is simple. Parse cannot be ignored in the coming years, given the exponential increase in mobile apps.

9. A Node module that allows you to create real-time apps easily. According to its website, it's being used by products like Microsoft Office, Yammer, and Zendesk. Real-time apps are gaining attention at a nice pace.

10. Polymer: Polymer uses Web Components to redefine web development. Making reusable custom components that extend HTML is possible with Polymer, and it is also well suited to content-based development. Polymer is widely expected to grow in the coming years and become a favorite tool among developers.

          Google is launching their own open source browser tomorrow        
[+6] Discussion by Robert Gentel on 09/01/08 4:11 PM Replies: 70 Views: 5,265
Tags: Google Chrome, Google, Browsers, Internet, Browser
Last Post by hingehead on 09/02/10 8:50 PM
          Arduino launches open source logo for communities        
Although the electronic design of the Arduino board is open source, the Arduino name and logo, as well as the graphic design of its boards, are registered trademarks of Arduino and its partners. For that reason, the… Continue Reading
          Red Hat Hiring For Freshers : Associate Software Engineer @ Bangalore        
Red Hat, Inc. Job Title : Associate Software Engineer Job ID : 56955 Department : Software Engineering Location : Bangalore Job Description: Company Description: At Red Hat, we connect an innovative community of customers, partners, and contributors to deliver an open source stack of trusted, high-performing solutions. We offer cloud, Linux, middleware, storage, and ...
          RPTools: Open Source Tools for Pen & Paper RPGs        
RPTools is an open source tool set for PC designed to enhance pen-and-paper role-playing games.  If you’re an RPG fanatic you are probably already aware of these tools, or have at least heard of them from your fellow gamers.  After experimenting with the tools in my own Pathfinder and D&D games, I decided to dig a little deeper and obtain an interview with the folks who have made these tools openly available to the general public! NERD TREK interview with Frank Edwards & Keith Athey of RPTools.   Jonathan, Nerdtrek:  Hello Keith!  Please tell our readers a bit about your RPTools programs and your role within the company. Keith Athey:  RPTools is a community devoted to producing open source […]
          Processing 1.0 (BETA)        

Processing is an open source programming language and environment for people who want to program images, animation, and sound. It is used by students, artists, designers, architects, researchers, and hobbyists for learning, prototyping, and production. It was created to teach fundamentals of computer programming within a visual context and to serve as a software sketchbook and professional production tool. Processing is developed by artists and designers as an alternative to proprietary software tools in the same domain.
some examples:

source code for Websites as Graphs (previously published), an applet built with Processing
Nike One by Motion Theory

          Ð¡Ð¿Ð¸ÑÐ¾Ðº текущих дел        
Current projects and tasks that absolutely must not be dropped:
1. Work, project F (the main one)
2. Work, various other tasks there
3. Family
4. Blincom (and before it, Master-Blin)
5. Learning to play the bayan
6. Game2018, picking one
7. Joinrpg, current tasks and support
8. Joinrpg, fundraising and finding programmers
9. surely forgot something

List of tasks, goals, and interests with no progress / that are neglected / where work has stalled completely:
a) sports
b) I want to run a tabletop module
c) I want to go out to the dacha
d) problem: the well at the dacha
e) I want to play more board games
f) KogdaIgra
g) Joinrpg: mobile API, Mel's programs, Deus Ex
h) think about politics / civic activity / election observing / elections
i) make a board game
j) upgrade my programming skills, learn the Rust language, invest in open source
k) study the job market and my work options carefully
l) dispatches from the swamps
m) make a game in 2017
n) help friends who are making, or want to make, small games in 2017
o) problem: window screens and other small finishing touches on the home renovation
p) working with students at work
q) see my sister, Borisych, and grandma more often
r) long-term Bastilia matters, the website, etc.
s) a team trip (probably Pompeii)
t) there are cool computer games to play
u) cook regularly
v) teeth
w) allergies
          Enterprise Open Source Virtualization Day        
The "Enterprise Open Source Virtualization Day: how to cut costs and increase datacenter performance" will provide the keys to building and managing a scalable, open virtualization infrastructure.
          RE: KOM-TEK application for creating documentation        

Besides the wiki software from Wikipedia,

try Knowledge Tree. At my office we are also planning to implement a DMS
(Document Management System), and we are currently trialling that software. There is
an open source edition you can try out first; so far it has been very satisfying.

Just browse the site > and search for the
keyword Knowledge Tree.

Everything can be downloaded there, the open source version. Good luck.


[] On Behalf Of arif budiyono
Sent: Monday, December 31, 2007 12:01 PM
Subject: Re: KOM-TEK application for creating documentation

Use a private wiki, mas.
That is, install the wiki software yourself on your own hosting;
just google to get the wiki software.
Or use Google's word processor,
where users can be restricted.

- arif <>

--- achmad mardiansyah <
<> >

> dear all,
> I need some advice...
> my friends and I are currently writing documentation
> about a product.
> to get it done quickly, we agreed to work on it over
> the internet.
> all together at once...
> so the results can be seen by everyone right away...
> after browsing around...
> it looks like what would suit us is
> CMS software along the lines of Wikipedia.
> the only thing is, as far as we know, that software's
> editing process
> is open, with no control at all.
> what I want is for every user who wants to
> edit to have to log in first...
> and after editing, the system should record
> exactly what changes
> that user made...
> any suggestions for software/CMS to use?
> thanks in advance
> regards,
> achmad m


          Apache Hadoop and Big Data on IBM Cloud        

Companies are producing massive amounts of data—otherwise known as big data. There are many options available to manage big data and the analytics associated with it. One of the more popular options is Apache Hadoop, open source software designed to scale up and down quickly with a high degree of fault tolerance. Hadoop lets organizations gather and examine large amounts of structured and unstructured data.

          Continuously Testing your Infrastructure with OVF and Microsoft Operations Management Suite        
Introduction One of the cool new features in Windows Server 2016 is Operation Validation Framework. Operation Validation Framework (OVF) is an (open source) PowerShell module that contains: A set of tools for executing validation of the operation of a system. It provides a way to organize and execute Pester tests which are written to validate […]
          Latinoware 2014, here we come!        


After five years I am returning to Latinoware, the Free Software community event held in Foz do Iguaçu, Paraná, Brazil.

Beyond the personal connections, PGP key exchanges, meeting online friends in person, beers, and so on, there is an extensive and rich programme. So, for my own organization, I list below the talks and workshops I intend to attend. If you are there at these times, we can share the same space-time coordinates :-)

What I plan to attend

The full programme (with a synopsis of each talk/workshop/keynote) can be seen here.


  • 10h - 11h - GNU/Linux - It is not 1984 (or 1969) anymore - Jon “Maddog” Hall
  • SIMULATING PHENOMENA WITH GEOGEBRA - Marcela Martins Pereira and Eduardo Antônio Soares Júnior
  • 12h - 13h - Open collaborative spaces - Guilherme Guerra
  • 13h - 14h - (grab something to eat) and try to split myself between: Technological illiteracy and teacher training - Antonio Carlos C. Marques, and Internet of Things: creating APIs for the real world with Raspberry Pi and Python - Pedro Henrique Kopper
  • 14h - 16h - Official opening of Latinoware
  • 16h - 17h - Hands-on video editing with Kdenlive - Carlos Cartola
  • 17h - 18h - red#matrix, much more than a social network - Frederico (aracnus) Gonçalves Guimarães


  • 10h - 11h - Copyright, and the care needed when using “cloud” and “free” services to build educational objects - Márcio de Araújo Benedito
  • 11h - 12h - Collaboration and free tools: possibilities for counter-hegemony in schools - Sergio F. Lima
  • 12h - 13h - Free Teacher! The use of free software in teaching degrees - Wendell Bento Geraldes
  • 13h - 14h - (grab something to eat) and Open documentation standards - ODF - Fa Conti
  • 14h - 15h - Bitcoin: the future of money is open source (and free) - Kemel Zaidan, and An open hardware platform for robotics - Thalis Antunes De Souza and Thomás Antunes de Souza
  • 15h - 16h - Mozilla and education: how we are revolutionizing the teaching of digital skills - Marcus Saad
  • 16h - 17h - Arduino Uno x MSP 430 - Raphael Pereira Alkmim and Yuri Adan Gonçalves Cordovil
  • 17h - 18h - Including people with disabilities in education - with free software it is possible - Marcos Silva Vieira


  • 10h - 12h - Digital presence: being there is not enough, you have to participate - Frederico (aracnus) Gonçalves Guimarães. This one will be hands-on :-)
  • 12h - 13h - I think I will have lunch :-)
  • 13h - 14h - Education and technology with free resources - Marcos Egito
  • 14h - 14:15h - Official event photo
  • 14:15h - 15:15h - Introduction to LaTeX - Ole Peter Smith
  • 15:15h - 16:15h - abnTeX2 and LaTeX: “absurd” standards and elegant documents - Lauro César
  • 16:15h - 17:15h - Data Science / Big Data / Machine Learning and Free Software - Eduardo Maçan

If you are there, get in touch!

          Lowongan kerja Promotion Staff Purchasing DEsign Enginering        
Our client, a local furniture manufacturer whose products combine steel and wood under brands from Germany and Japan, is currently seeking young, talented, and motivated candidates for the following positions:
1. Distribution Manager (DM)
2. Promotion Staff (PS)
3. Maintenance Head (MH)
4. IT Staff (IT)
5. Programmer (Pro)
6. Purchasing Staff (PP)
7. Design and Engineering Staff (DES)
General requirements:
a. Male or female, min. 25 years old
b. Minimum D3 in Architecture, Interior Design, or Product Design for the DM and PS positions
c. Minimum D3 in Electrical Engineering for the MH position
d. Minimum D3 in Computing for the IT and Programmer positions
e. IT and Programmer candidates must have knowledge of Visual Studio 2008 (2005 at minimum), open source CRM, and commercial CRM
f. Minimum D3/S1 in Industrial Technology for the PP position
g. Minimum D3 from a manufacturing polytechnic for the DES position
h. Fluent in English
i. Good appearance and performance
j. Min. 2 years of experience in the same position is an advantage
k. Located in the East Jakarta area
If you believe you meet the requirements above, please do not hesitate to send your complete CV, including a recent photograph, to:
Subject : Code to FEBRY
          Links for 2011-02-17 []        
  • Firefox 4 RC Release on Feb 25, Final Version in March
    Firefox 4 RC is now targeted for finalization on February 25, while the final version of the browser is now targeted for sometime in March. There are 22 blocking bugs left in the Firefox 4 Beta, all of which need to be fixed before the RC can be pushed out. Mozilla's Damon Sicore expressed some frustration with the patching process, noting that 91 non-blocking bugs were fixed over the past seven days but only 84 blocking bugs: "I believe it's time to do daily blocker driving on these few remaining items - triage the noms, review and push on each and every hard blocker, and give approvals the required attention to prevent regressions."
  • Google One Pass: Payment System from Google
    Google launched a service that allows publishers to manage paid content and subscriptions. Google One Pass is a "payment system that enables publishers to set the terms for access to their digital content". Once you pay to access some content, you should be able to read it from a computer, a tablet, or a mobile phone, whether you're using a browser or a different app. Google One Pass tries to be flexible and easy to implement. "Publishers have control over how users can pay to access content and set their own prices. They can sell subscriptions of any length with auto-renewal, day passes (or other durations), individual articles or multiple-issue packages."
  • Electric CAD Software for Linux
    Are you an electrical engineer, or someone who likes designing electrical circuit boards, looking for open source CAD software to do it? Try Electric. Although it doesn't have the most modern-looking GUI – what electrical engineer really cares how "modern-looking" a GUI is? – Electric offers a lot of features and will serve you well in your designs.

          Not a Good Start Into a Problematic Year        

KDE Project:

Like some other [open]SUSE developers I was let go and am now forced to look for a new day job. It could have happened in better economic times, for sure. :-(

Pointers to new interesting job positions are gladly accepted. Bonus points the more they have to do with Open Source, Linux, Qt and KDE.

          Inkscape for Windows 0.92.2        

Inkscape is an open source SVG editor with capabilities similar to Illustrator, CorelDraw, Visio, etc. Supported SVG features include basic shapes, paths, text, alpha blending, transforms, gradients, node editing, svg-to-png export, grouping, and more. Its main motivation is to provide the Open Source community with a fully XML, SVG, and CSS2 compliant SVG drawing tool.

          Inkscape for Mac OS X 0.92.2        

Inkscape is an open source SVG editor with capabilities similar to Illustrator, CorelDraw, Visio, etc. Supported SVG features include basic shapes, paths, text, alpha blending, transforms, gradients, node editing, svg-to-png export, grouping, and more. Its main motivation is to provide the Open Source community with a fully XML, SVG, and CSS2 compliant SVG drawing tool.

          Inkscape for Linux 0.92.2        

Inkscape is an open source SVG editor with capabilities similar to Illustrator, CorelDraw, Visio, etc. Supported SVG features include basic shapes, paths, text, alpha blending, transforms, gradients, node editing, svg-to-png export, grouping, and more. Its main motivation is to provide the Open Source community with a fully XML, SVG, and CSS2 compliant SVG drawing tool.

          Mozilla Firefox for Windows 55.0.1        

Mozilla Firefox is a free and open source Web browser descended from the Mozilla Application Suite and managed by Mozilla Corporation. Firefox is the second most widely used browser.

To display web pages, Firefox uses the Gecko layout engine, which implements most current web standards in addition to several features that are intended to anticipate likely additions to the standards.

          Mozilla Firefox for Mac OS X 55.0.1        

Mozilla Firefox is a free and open source Web browser descended from the Mozilla Application Suite and managed by Mozilla Corporation. Firefox is the second most widely used browser.

To display web pages, Firefox uses the Gecko layout engine, which implements most current web standards in addition to several features that are intended to anticipate likely additions to the standards.

          Mozilla Firefox for Linux 55.0.1        

Mozilla Firefox is a free and open source Web browser descended from the Mozilla Application Suite and managed by Mozilla Corporation. Firefox is the second most widely used browser.

To display web pages, Firefox uses the Gecko layout engine, which implements most current web standards in addition to several features that are intended to anticipate likely additions to the standards.

          VidCutter 4.0.0        

VidCutter is an open source video trimmer and joiner. You can easily perform common video editing tasks like trimming, splitting, and joining. It supports popular video formats including MP4, AVI, MOV, WMV, MPEG, and FLV.

          ShareX 11.9.0 Pre-Release        

ShareX is an open source program that lets you take screenshots of any selected area with a single key, save them in your clipboard, hard disk or instantly upload them to over 25 different file hosting services. ShareX can capture screenshots with different shapes: rectangle, rounded rectangle, ellipse, triangle, diamond, polygon and also freehand. It can upload images, text files and all other different file types. It is able to capture screenshots with transparency and shadow. The program also supports clipboard upload and drag-and-drop.

          Pack of 900 amazing Material Design icons to download        

According to Google, Material Design is what is coming. So we had better be prepared to start making designs in this style. For that reason, today I want to share this amazing pack of free, open source Material Design icons for you to use in your web or app designs. […]

The article Pack of 900 amazing Material Design icons to download was originally published on Punto Geek.

          Excellent set of open source icons for web or application design        

When you are designing the interface of a website or application, you always need good icons, and if they are open source, so much the better. So I want to recommend a set I found that I am sure you will find very useful. It is a set of simple, lightweight icons that […]

The article Excellent set of open source icons for web or application design was originally published on Punto Geek.

          Notebook Hardware Control Pro (NHC) 2.0 Pre-Release 06        
What is Notebook Hardware Control (NHC)?
With Notebook Hardware Control you can easily control the hardware components of your Notebook.

Notebook Hardware Control helps you to:
control the hardware and system power management
customize the notebook (open source ACPI Control System)
prolong the battery lifetime
cool down the system and reduce power consumption
monitor the hardware to avoid system failure
make your notebook quiet
With the Professional Edition of NHC you can have different user profiles and start NHC as service.

The user profiles allow you to change all NHC settings with one mouse click. The service allows you to use NHC on restricted (non-administrator) user accounts, which increases the security of your system. You can also use NHC on several user accounts at the same time.


          Web Robots: The worker bees of Internet        
Web Robots: The worker bees of Internet

Web robots, also known as Web crawlers and Web spiders, traverse the Internet to extract various types of information. Web robots can be used ...

The post Web Robots: The worker bees of Internet appeared first on Open Source For You.

          Does your mobile app work without an Internet connection        
Does your mobile app work without an Internet connection

Native mobile apps act as the interface between the user and the users’ data and require uninterrupted connectivity. A native app with offline capability ...

The post Does your mobile app work without an Internet connection appeared first on Open Source For You.

          Common mistakes online entrepreneurs make while launching internet shopping malls        
online shopping malls

At the first glance, it may seem that web marketplace owners can earn huge profits and create strong business brands in a short period ...

The post Common mistakes online entrepreneurs make while launching internet shopping malls appeared first on Open Source For You.

          The Internet of things (IoT)        
The Internet of things (IoT)

The IoT is a technology in the making and we can experience it in small ways, even now. The author presents an ‘appetite whetting’ ...

The post The Internet of things (IoT) appeared first on Open Source For You.

          Ubuntu 12.04 Review: An LTS Done More or Less Precisely        
Super+W to spread all windows in the current workspace

Ubuntu 12.04 is the fourth LTS release from Canonical that came out about a month back, and the first LTS with the revamped user ...

The post Ubuntu 12.04 Review: An LTS Done More or Less Precisely appeared first on Open Source For You.

          What’s New in Nmap 6        
Have you checked out the new Nmap yet?

Nmap is Hollywood’s most famous “hacking” tool that has featured on numerous blockbusters. It was originally written to find hosts in a network thus ...

The post What’s New in Nmap 6 appeared first on Open Source For You.

          Hackers and the Open Source Revolution        
How to become a hacker!

This piece corrects the confusion created by mainstream media between “hacker” and “cracker”. It also considers the history, nature, attributes, ethics and attire of ...

The post Hackers and the Open Source Revolution appeared first on Open Source For You.

          Oracle Charts Out Java’s Future        
Where's Java headed?

The US software maker provides a glimpse into the world of Java, Oracle software and hardware, and the future of these technologies. May 10-11, ...

The post Oracle Charts Out Java’s Future appeared first on Open Source For You.

          Joy of Programming: How Debugging Can Result in Bugs!        
Debugging introduces bug(s)?

We typically debug code to find and fix bugs. However, debugging itself can cause bugs. This is an interesting phenomenon that we cover in ...

The post Joy of Programming: How Debugging Can Result in Bugs! appeared first on Open Source For You.

          The Needle and the Haystack: Exploring Search Models, Part 2        

In the previous article, we demystified some search-related jargon, and learned how the humble Grep can be used to simulate a Boolean-model search engine. ...

The post The Needle and the Haystack: Exploring Search Models, Part 2 appeared first on Open Source For You.
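As a toy illustration of the Boolean retrieval model the series discusses (my own sketch, not the article's example): a document matches an AND query when it contains every query term, which is exactly what piping one grep into another does over lines of text.

```javascript
// Toy Boolean-model search: build an inverted index (term -> set of doc
// ids), then answer an AND query by intersecting posting lists, much like
// `grep term1 | grep term2` filters lines.
function buildIndex(docs) {
  const index = new Map();
  docs.forEach((text, id) => {
    for (const term of text.toLowerCase().split(/\W+/)) {
      if (!term) continue;
      if (!index.has(term)) index.set(term, new Set());
      index.get(term).add(id);
    }
  });
  return index;
}

function andQuery(index, terms) {
  return terms
    .map(t => index.get(t.toLowerCase()) || new Set())
    .reduce((a, b) => new Set([...a].filter(id => b.has(id))));
}

const index = buildIndex([
  'grep is a search tool',       // doc 0
  'boolean search with grep',    // doc 1
  'needle in a haystack',        // doc 2
]);
const hits = andQuery(index, ['grep', 'search']); // docs containing both terms
```

The Boolean model only says whether a document matches; it is later ranking models that order the matches by relevance.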

          Come play with WebVR Experiments        

Everyone should be able to experience VR, and WebVR is a big step in that direction. It’s open to all browsers, making it easier for developers to create something quickly and share it with everyone, no matter what device they’re on.

In February, we added WebVR to Chrome on Daydream-ready phones. Today, WebVR on Chrome now works with Google Cardboard, so that anyone with an Android phone and Cardboard can experience virtual worlds just by tapping a link.

To explore what’s possible with WebVR, we’re launching WebVR Experiments, a showcase of the amazing experiences developers are already building.

WebVR Experiments: Virtual reality on the web for everyone

Each experiment shows something different you can try in WebVR. Play ping pong with a friend in Konterball.


Explore the world with your voice.


Play Spot-the-Bot, where one player searches for bots in VR with the help of another player outside VR.


Become a donut and try to wrap your fashionable scarf around hungry enemies.


These are just a few of the experiments you can try. If you don’t have Cardboard or Daydream, you can still play on desktop or on any phone in 2D. WebVR support on Chrome for desktop headsets like Oculus Rift and HTC VIVE is coming soon.

In addition to the experiments, developers can find resources and open source code to help get started building in WebVR. If you build something cool, submit it to be featured in the gallery.

We hope these experiments make it easier for more people to experience VR, and inspire more developers to create new VR worlds on the web.

Start playing at

          100 announcements (!) from Google Cloud Next '17        

San Francisco — What a week! Google Cloud Next ‘17 has come to an end, but really, it’s just the beginning. We welcomed 10,000+ attendees including customers, partners, developers, IT leaders, engineers, press, analysts, cloud enthusiasts (and skeptics). Together we engaged in 3 days of keynotes, 200+ sessions, and 4 invitation-only summits. Hard to believe this was our first show as all of Google Cloud with GCP, G Suite, Chrome, Maps and Education. Thank you to all who were here with us in San Francisco this week, and we hope to see you next year.

If you’re a fan of video highlights, we’ve got you covered. Check out our Day 1 keynote (in less than 4 minutes) and Day 2 keynote (in under 5!).

One of the common refrains from customers and partners throughout the conference was “Wow, you’ve been busy. I can’t believe how many announcements you’ve had at Next!” So we decided to count all the announcements from across Google Cloud and in fact we had 100 (!) announcements this week.

For the list lovers amongst you, we’ve compiled a handy-dandy run-down of our announcements from the past few days:


Google Cloud is excited to welcome two new acquisitions to the Google Cloud family this week, Kaggle and AppBridge.

1. Kaggle - Kaggle is one of the world's largest communities of data scientists and machine learning enthusiasts. Kaggle and Google Cloud will continue to support machine learning training and deployment services in addition to offering the community the ability to store and query large datasets.

2. AppBridge - Google Cloud acquired Vancouver-based AppBridge this week, which helps you migrate data from on-prem file servers into G Suite and Google Drive.


Google Cloud brings a suite of new security features to Google Cloud Platform and G Suite designed to help safeguard your company’s assets and prevent disruption to your business: 

3. Identity-Aware Proxy (IAP) for Google Cloud Platform (Beta) - Identity-Aware Proxy lets you provide access to applications based on risk, rather than using a VPN. It provides secure application access from anywhere, restricts access by user, identity and group, deploys with integrated phishing-resistant Security Keys, and is easier to set up than an end-user VPN.

4. Data Loss Prevention (DLP) for Google Cloud Platform (Beta) - Data Loss Prevention API lets you scan data for 40+ sensitive data types, and is used as part of DLP in Gmail and Drive. You can find and redact sensitive data stored in GCP, invigorate old applications with new sensitive data sensing “smarts” and use predefined detectors as well as customize your own.
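The DLP API itself is a hosted service, but the core scan-and-redact idea can be sketched locally. This toy `redact` function and its two patterns are purely illustrative stand-ins for the API's 40+ detectors, not its actual implementation:

```javascript
// Toy infoType detectors, loosely modeled on what a DLP scan looks for.
const DETECTORS = {
  EMAIL_ADDRESS: /[\w.+-]+@[\w-]+\.[\w.]+/g,
  US_SSN: /\b\d{3}-\d{2}-\d{4}\b/g,
};

// Replace every match of every detector with an [infoType] placeholder,
// mirroring the "find and redact sensitive data" workflow.
function redact(text) {
  let out = text;
  for (const [name, pattern] of Object.entries(DETECTORS)) {
    out = out.replace(pattern, `[${name}]`);
  }
  return out;
}
```

The real service would be called over REST with the text (or GCP storage location) to inspect, and returns findings per infoType rather than simple regex matches.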

5. Key Management Service (KMS) for Google Cloud Platform (GA) - Key Management Service allows you to generate, use, rotate, and destroy symmetric encryption keys for use in the cloud.

6. Security Key Enforcement (SKE) for Google Cloud Platform (GA) - Security Key Enforcement allows you to require security keys be used as the 2-Step verification factor for enhanced anti-phishing security whenever a GCP application is accessed.

7. Vault for Google Drive (GA) - Google Vault is the eDiscovery and archiving solution for G Suite. Vault enables admins to easily manage their G Suite data lifecycle and search, preview and export the G Suite data in their domain. Vault for Drive enables full support for Google Drive content, including Team Drive files.

8. Google-designed security chip, Titan - Google uses Titan to establish hardware root of trust, allowing us to securely identify and authenticate legitimate access at the hardware level. Titan includes a hardware random number generator, performs cryptographic operations in the isolated memory, and has a dedicated secure processor (on-chip).


New GCP data analytics products and services help organizations solve business problems with data, rather than spending time and resources building, integrating and managing the underlying infrastructure:

9. BigQuery Data Transfer Service (Private Beta) - BigQuery Data Transfer Service makes it easy for users to quickly get value from all their Google-managed advertising datasets. With just a few clicks, marketing analysts can schedule data imports from Google Adwords, DoubleClick Campaign Manager, DoubleClick for Publishers and YouTube Content and Channel Owner reports.

10. Cloud Dataprep (Private Beta) - Cloud Dataprep is a new managed data service, built in collaboration with Trifacta, that makes it faster and easier for BigQuery end-users to visually explore and prepare data for analysis without the need for dedicated data engineer resources.

11. New Commercial Datasets - Businesses often look for datasets (public or commercial) outside their organizational boundaries. Commercial datasets offered include financial market data from Xignite, residential real-estate valuations (historical and projected) from HouseCanary, predictions for when a house will go on sale from Remine, historical weather data from AccuWeather, and news archives from Dow Jones, all immediately ready for use in BigQuery (with more to come as new partners join the program).

12. Python for Google Cloud Dataflow in GA - Cloud Dataflow is a fully managed data processing service supporting both batch and stream execution of pipelines. Until recently, these benefits have been available solely to Java developers. Now there’s a Python SDK for Cloud Dataflow in GA.

13. Stackdriver Monitoring for Cloud Dataflow (Beta) - We’ve integrated Cloud Dataflow with Stackdriver Monitoring so that you can access and analyze Cloud Dataflow job metrics and create alerts for specific Dataflow job conditions.

14. Google Cloud Datalab in GA - This interactive data science workflow tool makes it easy to do iterative model and data analysis in a Jupyter notebook-based environment using standard SQL, Python and shell commands.

15. Cloud Dataproc updates - Our fully managed service for running Apache Spark, Flink and Hadoop pipelines has new beta support for restarting failed jobs (including automatic restart as needed), the ability to create single-node clusters for lightweight sandbox development (beta), and GPU support; the cloud labels feature, for more flexibility managing your Dataproc resources, is now GA.


New GCP databases and database features round out a platform on which developers can build great applications across a spectrum of use cases:

16. Cloud SQL for PostgreSQL (Beta) - Cloud SQL for PostgreSQL implements the same design principles currently reflected in Cloud SQL for MySQL, namely, the ability to securely store and connect to your relational data via open standards.

17. Microsoft SQL Server Enterprise (GA) - Available on Google Compute Engine, plus support for Windows Server Failover Clustering (WSFC) and SQL Server AlwaysOn Availability (GA).

18. Cloud SQL for MySQL improvements - Increased performance for demanding workloads via 32-core instances with up to 208GB of RAM, and central management of resources via Identity and Access Management (IAM) controls.

19. Cloud Spanner - Launched a month ago, but still, it would be remiss not to mention it because, hello, it’s Cloud Spanner! The industry’s first horizontally scalable, globally consistent, relational database service.

20. SSD persistent-disk performance improvements - SSD persistent disks now have increased throughput and IOPS performance, which are particularly beneficial for database and analytics workloads. Read these docs for complete details about persistent-disk performance.

21. Federated query on Cloud Bigtable - We’ve extended BigQuery’s reach to query data inside Cloud Bigtable, the NoSQL database service for massive analytic or operational workloads that require low latency and high throughput (particularly common in Financial Services and IoT use cases).


New GCP Cloud Machine Learning services bolster our efforts to make machine learning accessible to organizations of all sizes and sophistication:

22. Cloud Machine Learning Engine (GA) - Cloud ML Engine, now generally available, is for organizations that want to train and deploy their own models into production in the cloud.

23. Cloud Video Intelligence API (Private Beta) - A first of its kind, Cloud Video Intelligence API lets developers easily search and discover video content by providing information about entities (nouns such as “dog,” “flower”, or “human” or verbs such as “run,” “swim,” or “fly”) inside video content.

24. Cloud Vision API (GA) - Cloud Vision API reaches GA and offers new capabilities for enterprises and partners to classify a more diverse set of images. The API can now recognize millions of entities from Google’s Knowledge Graph and offers enhanced OCR capabilities that can extract text from scans of text-heavy documents such as legal contracts or research papers or books.

25. Machine learning Advanced Solution Lab (ASL) - ASL provides dedicated facilities for our customers to directly collaborate with Google’s machine-learning experts to apply ML to their most pressing challenges.

26. Cloud Jobs API - A powerful aid to job search and discovery, Cloud Jobs API now has new features such as Commute Search, which will return relevant jobs based on desired commute time and preferred mode of transportation.

27. Machine Learning Startup Competition - We announced a Machine Learning Startup Competition in collaboration with venture capital firms Data Collective and Emergence Capital, and with additional support from a16z, Greylock Partners, GV, Kleiner Perkins Caufield & Byers and Sequoia Capital.


New GCP pricing continues our intention to create customer-friendly pricing that’s as smart as our products; and support services that are geared towards meeting our customers where they are:

28. Compute Engine price cuts - Continuing our history of pricing leadership, we’ve cut Google Compute Engine prices by up to 8%.

29. Committed Use Discounts - With Committed Use Discounts, customers can receive a discount of up to 57% off our list price, in exchange for a one or three year purchase commitment paid monthly, with no upfront costs.
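As a quick arithmetic check of what "up to 57% off" means in practice (the list price below is invented for illustration; actual discounts vary by commitment length and machine type):

```javascript
// Effective monthly cost under a committed use discount:
// list price reduced by the discount rate, billed monthly.
function committedUseCost(listPricePerMonth, discountRate) {
  return listPricePerMonth * (1 - discountRate);
}

// A hypothetical $1,000/month list price at the maximum 57% discount:
const monthly = committedUseCost(1000, 0.57); // ≈ $430/month
```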

30. Free trial extended to 12 months - We’ve extended our free trial from 60 days to 12 months, allowing you to use your $300 credit across all GCP services and APIs, at your own pace and schedule. Plus, we’ve introduced new Always Free products -- non-expiring usage limits that you can use to test and develop applications at no cost. Visit the Google Cloud Platform Free Tier page for details.

31. Engineering Support - Our new Engineering Support offering is a role-based subscription model that allows us to match engineer to engineer, to meet you where your business is, no matter what stage of development you’re in. It has 3 tiers:

  • Development engineering support - ideal for developers or QA engineers that can manage with a response within four to eight business hours, priced at $100/user per month.
  • Production engineering support provides a one-hour response time for critical issues at $250/user per month.
  • On-call engineering support pages a Google engineer and delivers a 15-minute response time 24x7 for critical issues at $1,500/user per month.

32. Google Cloud Platform Community site - Google Cloud Platform Community is a new site to learn, connect and share with other people like you, who are interested in GCP. You can follow along with tutorials or submit one yourself, find meetups in your area, and learn about community resources for GCP support, open source projects and more.


New GCP developer platforms and tools reinforce our commitment to openness and choice and giving you what you need to move fast and focus on great code.

33. Google App Engine Flex (GA) - We announced a major expansion of our popular App Engine platform to new developer communities that emphasizes openness, developer choice, and application portability.

34. Cloud Functions (Beta) - Google Cloud Functions has launched into public beta. It is a serverless environment for creating event-driven applications and microservices, letting you build and connect cloud services with code.

35. Firebase integration with GCP (GA) - Firebase Storage is now Google Cloud Storage for Firebase and adds support for multiple buckets, support for linking to existing buckets, and integrates with Google Cloud Functions.

36. Cloud Container Builder - Cloud Container Builder is a standalone tool that lets you build your Docker containers on GCP regardless of deployment environment. It’s a fast, reliable, and consistent way to package your software into containers as part of an automated workflow.

37. Community Tutorials (Beta)  - With community tutorials, anyone can now submit or request a technical how-to for Google Cloud Platform.


Secure, global and high-performance, we’ve built our cloud for the long haul. This week we announced a slew of new infrastructure updates. 

38. New data center region: California - This new GCP region delivers lower latency for customers on the West Coast of the U.S. and adjacent geographic areas. Like other Google Cloud regions, it will feature a minimum of three zones, benefit from Google’s global, private fibre network, and offer a complement of GCP services.

39. New data center region: Montreal - This new GCP region delivers lower latency for customers in Canada and adjacent geographic areas. Like other Google Cloud regions, it will feature a minimum of three zones, benefit from Google’s global, private fibre network, and offer a complement of GCP services.

40. New data center region: Netherlands - This new GCP region delivers lower latency for customers in Western Europe and adjacent geographic areas. Like other Google Cloud regions, it will feature a minimum of three zones, benefit from Google’s global, private fibre network, and offer a complement of GCP services.

41. Google Container Engine - Managed Nodes - Google Container Engine (GKE) has added Automated Monitoring and Repair of your GKE nodes, letting you focus on your applications while Google ensures your cluster is available and up-to-date.

42. 64 Core machines + more memory - We have doubled the number of vCPUs you can run in an instance from 32 to 64, and you can now have up to 416GB of memory per instance.

43. Internal Load balancing (GA) - Internal Load Balancing, now GA, lets you run and scale your services behind a private load balancing IP address which is accessible only to your internal instances, not the internet.

44. Cross-Project Networking (Beta) - Cross-Project Networking (XPN), now in beta, is a virtual network that provides a common network across several Google Cloud Platform projects, enabling simple multi-tenant deployments.


In the past year, we’ve launched 300+ features and updates for G Suite and this week we announced our next generation of collaboration and communication tools.

45. Team Drives (GA for G Suite Business, Education and Enterprise customers) - Team Drives help teams simply and securely manage permissions, ownership and file access for an organization within Google Drive.

46. Drive File Stream (EAP) - Drive File Stream is a way to quickly stream files directly from the cloud to your computer. With Drive File Stream, company data can be accessed directly from your laptop, even if you don’t have much space on your hard drive.

47. Google Vault for Drive (GA for G Suite Business, Education and Enterprise customers) - Google Vault for Drive now gives admins the governance controls they need to manage and secure all of their files, including employee Drives and Team Drives. Google Vault for Drive also lets admins set retention policies that automatically keep what’s needed and delete what’s not.

48. Quick Access in Team Drives (GA) - Powered by Google’s machine intelligence, Quick Access helps surface the right information for employees at the right time within Google Drive. Quick Access now works with Team Drives on iOS and Android devices, and is coming soon to the web.

49. Hangouts Meet (GA to existing customers) - Hangouts Meet is a new video meeting experience built on Hangouts that can run 30-person video conferences without accounts, plugins or downloads. For G Suite Enterprise customers, each call comes with a dedicated dial-in phone number so that team members on the road can join meetings without wifi or data issues.

50. Hangouts Chat (EAP) - Hangouts Chat is an intelligent communication app in Hangouts with dedicated, virtual rooms that connect cross-functional enterprise teams. Hangouts Chat integrates with G Suite apps like Drive and Docs, as well as photos, videos and other third-party enterprise apps.

51. @meet - @meet is an intelligent bot built on top of the Hangouts platform that uses natural language processing and machine learning to automatically schedule meetings for your team with Hangouts Meet and Google Calendar.

52. Gmail Add-ons for G Suite (Developer Preview) - Gmail Add-ons provide a way to surface the functionality of your app or service directly in Gmail. With Add-ons, developers only build their integration once, and it runs natively in Gmail on web, Android and iOS.

53. Edit Opportunities in Google Sheets - with Edit Opportunities in Google Sheets, sales reps can sync a Salesforce Opportunity List View to Sheets to bulk edit data and changes are synced automatically to Salesforce, no upload required.

54. Jamboard - Our whiteboard in the cloud goes GA in May! Jamboard merges the worlds of physical and digital creativity. It’s real-time collaboration on a brilliant scale, whether your team is together in the conference room or spread all over the world.


Building on the momentum from a growing number of businesses using Chrome digital signage and kiosks, we added new management tools and APIs in addition to introducing support for Android Kiosk apps on supported Chrome devices. 

55. Android Kiosk Apps for Chrome - Android Kiosk for Chrome lets users manage and deploy Chrome digital signage and kiosks for both web and Android apps. And with Public Session Kiosks, IT admins can now add a number of Chrome packaged apps alongside hosted apps.

56. Chrome Kiosk Management Free trial - This free trial gives customers an easy way to test out Chrome for signage and kiosk deployments.

57. Chrome Device Management (CDM) APIs for Kiosks - These APIs offer programmatic access to various Kiosk policies. IT admins can schedule a device reboot through the new APIs and integrate that functionality directly in a third-party console.

58. Chrome Stability API - This new API allows Kiosk app developers to improve the reliability of the application and the system.


Attendees at Google Cloud Next ‘17 heard stories from many of our valued customers:

59. Colgate - Colgate-Palmolive partnered with Google Cloud and SAP to bring thousands of employees together through G Suite collaboration and productivity tools. The company deployed G Suite to 28,000 employees in less than six months.

60. Disney Consumer Products & Interactive (DCPI) - DCPI is on target to migrate out of its legacy infrastructure this year, and is leveraging machine learning to power next generation guest experiences.

61. eBay - eBay uses Google Cloud technologies including Google Container Engine, Machine Learning and AI for its ShopBot, a personal shopping bot on Facebook Messenger.

62. HSBC - HSBC is one of the world's largest financial and banking institutions and is making a large investment in transforming its global IT. The company is working closely with Google to deploy Cloud Dataflow, BigQuery and other data services to power critical proof of concept projects.

63. LUSH - LUSH migrated its global e-commerce site from AWS to GCP in less than six weeks, significantly improving the reliability and stability of its site. LUSH benefits from GCP’s ability to scale as transaction volume surges, which is critical for a retail business. In addition, Google's commitment to renewable energy sources aligns with LUSH's ethical principles.

64. Oden Technologies - Oden was part of Google Cloud’s startup program, and switched its entire platform to GCP from AWS. GCP gives Oden the ability to scale reliably while keeping costs low and to perform under heavy loads, and it consistently delivers sophisticated features including machine learning and data analytics.

65. Planet - Planet migrated to GCP in February, looking to accelerate their workloads and leverage Google Cloud for several key advantages: price stability and predictability, custom instances, first-class Kubernetes support, and Machine Learning technology. Planet also announced the beta release of their Explorer platform.

66. Schlumberger - Schlumberger is making a critical investment in the cloud, turning to GCP to enable high-performance computing, remote visualization and development velocity. GCP is helping Schlumberger deliver innovative products and services to its customers by using HPC to scale data processing, workflow and advanced algorithms.

67. The Home Depot - The Home Depot collaborated with GCP’s Customer Reliability Engineering team to migrate to the cloud in time for Black Friday and Cyber Monday. Moving to GCP has allowed the company to better manage huge traffic spikes at peak shopping times throughout the year.

68. Verizon - Verizon is deploying G Suite to more than 150,000 of its employees, allowing for collaboration and flexibility in the workplace while maintaining security and compliance standards. Verizon and Google Cloud have been working together for more than a year to bring simple and secure productivity solutions to Verizon’s workforce.


We brought together Google Cloud partners from our growing ecosystem across G Suite, GCP, Maps, Devices and Education. Our partnering philosophy is driven by a set of principles that emphasize openness, innovation, fairness, transparency and shared success in the cloud market. Here are some of our partners who were out in force at the show:

69. Accenture - Accenture announced that it has designed a mobility solution for Rentokil, a global pest control company, built in collaboration with Google as part of the partnership announced at Horizon in September.

70. Alooma - Alooma announced the integration of the Alooma service with Google Cloud SQL and BigQuery.

71. Authorized Training Partner Program - To help companies scale their training offerings more quickly, and to enable Google to add other training partners to the ecosystem, we are introducing a new track within our partner program to support their unique offerings and needs.

72. Check Point - Check Point® Software Technologies announced Check Point vSEC for Google Cloud Platform, delivering advanced security integrated with GCP as well as their joining of the Google Cloud Technology Partner Program.

73. CloudEndure - We’re collaborating with CloudEndure to offer a no cost, self-service migration tool for Google Cloud Platform (GCP) customers.

74. Coursera - Coursera announced that it is collaborating with Google Cloud Platform to provide an extensive range of Google Cloud training courses. To celebrate this announcement, Coursera is offering all NEXT attendees a 100% discount for the GCP fundamentals class.

75. DocuSign - DocuSign announced deeper integrations with Google Docs.

76. Egnyte - Egnyte announced an enhanced integration with Google Docs that will allow joint customers to create, edit, and store Google Docs, Sheets and Slides files right from within Egnyte Connect.

77. Google Cloud Global Partner Awards - We recognized 12 Google Cloud partners that demonstrated strong customer success and solution innovation over the past year: Accenture, Pivotal, LumApps, Slack, Looker, Palo Alto Networks, Virtru, SoftBank, DoIT, Snowdrop Solutions, CDW Corporation, and SYNNEX Corporation.

78. iCharts - iCharts announced additional support for several GCP databases, free pivot tables for current Google BigQuery users, and a new product dubbed “iCharts for SaaS.”

79. Intel - In addition to the progress with Skylake, Intel and Google Cloud launched several technology initiatives and market education efforts covering IoT, Kubernetes and TensorFlow, including optimizations, a developer program and tool kits.

80. Intuit - Intuit announced Gmail Add-Ons, which are designed to integrate custom workflows into Gmail based on the context of a given email.

81. Liftigniter - Liftigniter is a member of Google Cloud’s startup program and focused on machine learning personalization using predictive analytics to improve CTR on web and in-app.

82. Looker - Looker launched a suite of Looker Blocks, compatible with Google BigQuery Data Transfer Service, designed to give marketers the tools to enhance analysis of their critical data.

83. Low interest loans for partners - To help Premier Partners grow their teams, Google announced that capital investments are available to qualified partners in the form of low interest loans.

84. MicroStrategy - MicroStrategy announced an integration with Google Cloud SQL for PostgreSQL and Google Cloud SQL for MySQL.

85. New incentives to accelerate partner growth - We are increasing our investments in multiple existing and new incentive programs; including, low interest loans to help Premier Partners grow their teams, increasing co-funding to accelerate deals, and expanding our rebate programs.

86. Orbitera Test Drives for GCP Partners - Test Drives allow customers to try partners’ software and generate high quality leads that can be passed directly to the partners’ sales teams. Google is offering Premier Cloud Partners one year of free Test Drives on Orbitera.

87. Partner specializations - Partners demonstrating strong customer success and technical proficiency in certain solution areas will now qualify to apply for a specialization. We’re launching specializations in application development, data analytics, machine learning and infrastructure.

88. Pivotal - GCP announced Pivotal as our first CRE technology partner. CRE technology partners will work hand-in-hand with Google to thoroughly review their solutions and implement changes to address identified risks to reliability.

89. ProsperWorks - ProsperWorks announced Gmail Add-Ons, which are designed to integrate custom workflows into Gmail based on the context of a given email.

90. Qwiklabs - This recent acquisition will provide Authorized Training Partners the ability to offer hands-on labs and comprehensive courses developed by Google experts to our customers.

91. Rackspace - Rackspace announced a strategic relationship with Google Cloud to become its first managed services support partner for GCP, with plans to collaborate on a new managed services offering for GCP customers set to launch later this year.

92. Rocket.Chat - Rocket.Chat, a member of Google Cloud’s startup program, is adding a number of new product integrations with GCP, including Autotranslate via the Translate API, integration with the Vision API to screen for inappropriate content, integration with the NLP API to perform sentiment analysis on public channels, integration with G Suite for authentication, and a full move of back-end storage to Google Cloud Storage.

93. Salesforce - Salesforce announced Gmail Add-Ons, which are designed to integrate custom workflows into Gmail based on the context of a given email.

94. SAP - This strategic partnership includes certification of SAP HANA on GCP, new G Suite integrations and future collaboration on building machine learning features into intelligent applications like conversational apps that guide users through complex workflows and transactions.

95. Smyte - Smyte participated in the Google Cloud startup program and protects millions of actions a day on websites and mobile applications. Smyte recently moved from self-hosted Kubernetes to Google Container Engine (GKE).

96. Veritas - Veritas expanded its partnership with Google Cloud to provide joint customers with 360 Data Management capabilities. The partnership will help reduce data storage costs, increase compliance and eDiscovery readiness and accelerate the customer’s journey to Google Cloud Platform.

97. VMware Airwatch - Airwatch provides enterprise mobility management solutions for Android and continues to drive the Google Device ecosystem to enterprise customers.

98. Windows Partner Program - We’re working with top systems integrators in the Windows community to help GCP customers take full advantage of Windows and .NET apps and services on our platform.

99. Xplenty - Xplenty announced the addition of two new services from Google Cloud into their available integrations: Google Cloud Spanner and Google Cloud SQL for PostgreSQL.

100. Zoomdata - Zoomdata announced support for Google’s Cloud Spanner and PostgreSQL on GCP, as well as enhancements to the existing Zoomdata Smart Connector for Google BigQuery. With these new capabilities Zoomdata offers deeply integrated and optimized support for Google Cloud Platform’s Cloud Spanner, PostgreSQL, Google BigQuery, and Cloud DataProc services.

We’re thrilled to have so many new products and partners that can help all of our customers grow. And as our final announcement for Google Cloud Next ’17 — please save the date for Next 2018: June 4–6 in San Francisco.

I guess that makes it 101. :-)

          A picture is worth a thousand words: PALO for Excel        
Excel chaos. A frequent problem in some organizations. We know the reasons: everyone knows how to use Excel, everyone has it installed by default, it's intuitive to use,... We also know the problems: multiple versions of the truth, lack of data integrity, complicated management,...

Some solutions try to solve, or rather alleviate, this problem. There are multiple approaches. From the open source side: PALO for Excel. If you can't beat them,...

Rather than describing its functionality, the following image says it all.


          Talend Open Studio, MDM and Open Profiler 4.0        
The Talend team keeps working hard so that, every so often, all of its products gain new features and fix identified bugs. Version 4.0 is now available (both the community and the enterprise editions):

  • Talend Open Studio 4.0.0: highlights include a new secure LDAP connection, support for parameters in JDBC connections, and the Openbravo and tHL7Input components.
  • Talend Open Profiler 4.0.0: notably, it now supports running queries against different versions of a database.
  • Talend MDM Community Edition 4.0.0
Talend's products stand out for their high performance and scalability in data integration processes, and Talend offers the first open source MDM (Master Data Management) solution on the market.

More information:
Talend Forge:

          Google Summer of Code 2010        
As every year, the organizations taking part in the Google Summer of Code have been announced; the initiative sponsors mentored projects in the open source world.

This year, from the OSBI (Open Source Business Intelligence) angle, we find the following organizations (with the corresponding links to what they expect from participating in the Summer of Code, as well as other information available at the moment: mentors, proposals, contact, ...):

I would also like to highlight related organizations such as:

If you don't know what to do this summer and you enjoy programming, I recommend taking a look at the Summer of Code 2010 proposals.

Source: and

          DUTraffic_Reset&Run: a solution for those who browse with 3        

Customers of the carrier Tre who use the "data" options Naviga 3 30 giorni, Naviga 3 7 giorni, Tre.Dati or Tre.Dati Plus usually face one problem: avoiding accidentally exceeding the available traffic threshold while browsing.
While it is fairly easy to find a program that monitors the traffic "consumed" (3 itself provides one), it is not as easy to find a program that automatically drops the Internet connection as soon as the traffic limit is reached.
I developed a custom solution, a script that, working together with the DUTraffic software, meets all the needs of someone who, like me, is a regular user of 3's prepaid data connection.
The program, which I called DUTraffic_Reset&Run, provides the following functions:
  • it lets you set automatic disconnection when a given daily or monthly traffic threshold is exceeded.
  • it lets you edit the counter values (you can therefore align the value measured by DUTraffic with the one shown in 3's customer area; particularly useful when the Internet connection is also used for browsing from a mobile phone)
  • it distinguishes between Dial-Up connections and does not mix the traffic of one connection with that of another (useful when you also have an alternative Internet connection).
  • it lets you use the remaining traffic even past midnight.
  • it lets you set an expiry date after which ONLY the metered Internet connection is blocked.
in più DUTraffic_Reset&Run.vbs, essendo uno script, ha questi vantaggi intrinseci:
  • è Open Source, e chiunque può leggerne il codice ed eventualmente correggerlo o migliorarlo.
  • può essere modificato per rispondere ad altre esigenze (come quella di impostare un limite di tempo anzichè di traffico).
  • non richiede installazione.
  • è piccolissimo (occupa 95Kbyte).
Per maggiori informazioni:

Anyone who wants to leave a comment on this work of mine can do so here.
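The core of the script's behavior (drop the connection once a daily or monthly quota is exhausted, and allow the counters to be realigned with the operator's figures) can be sketched in a few lines. This is a minimal Python illustration of the logic, not the actual VBScript; all function names and limits here are invented for the example:

```python
# Minimal sketch of the disconnect-on-threshold idea described above.
# The real tool is a VBScript working with DUTraffic; everything here
# (names, limits) is illustrative only.

def should_disconnect(daily_used_mb: float, daily_limit_mb: float,
                      monthly_used_mb: float, monthly_limit_mb: float) -> bool:
    """Return True when either the daily or the monthly quota is exhausted."""
    return daily_used_mb >= daily_limit_mb or monthly_used_mb >= monthly_limit_mb

def align_counter(measured_mb: float, operator_reported_mb: float) -> float:
    """Align the local counter with the value read in the operator's customer
    area (useful when the SIM is also used for browsing from a phone)."""
    return operator_reported_mb

if __name__ == "__main__":
    # 95 MB used today against a 100 MB daily quota: still connected.
    print(should_disconnect(95, 100, 400, 1000))   # False
    # Counter drifted: trust the operator's figure instead.
    print(align_counter(95, 120))                  # 120
```

In the real tool this check runs against DUTraffic's live counters and triggers a Dial-Up disconnect rather than returning a boolean.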
          Open Source Security Inc. Announces World-First Fully CFI-Hardened OS Kernel        

The test patch for grsecurity® released today demonstrates a fully-featured version of RAP, a high-performance and high-security implementation of Control Flow Integrity (CFI). RAP is available commercially with a number of added benefits, including the ability to protect userland applications.

(PRWeb February 06, 2017)

Read the full story at

          Open Source Security Inc's grsecurity Project Gains Sponsorship from OVH        

Open Source Security Inc. is proud to welcome OVH as a 2016 Silver-Level sponsor for the open source project grsecurity®.

(PRWeb June 14, 2016)

Read the full story at

          Getting Started with MEAN.js         

A getting-started guide to an open source MEAN stack development environment.

          Nikto : Open Source Web Server Scanner for Ubuntu / Debian Linux         

Nikto is an Open Source (GPL) web server scanner which scans your web server for more than 6,500 potentially dangerous files/CGIs, checks for outdated versions of more than 1,250 servers, and looks for version-specific problems on more than 270 servers. It also has very good plugin support.

          Brazil abandons Open Source in favor of Microsoft        
          Finally, the openSUSE 12.1 Promo DVDs Come to Indonesia        
Talking about bureaucracy, Indonesia may be one of the worst cases in the world. Sometimes officials cannot tell a "commercial thing" from a "social thing"; their heads are full of ways to monetize something. Long story short: when openSUSE 11.4 came out, SUSE sent me 300 promo DVDs in August 2011, which I was then to distribute for free, on my own time and money, to spread free open source software here in Indonesia.

At that time, the combination of an incompetent person on the forwarder's side and corrupt-minded officials meant I could not get the DVDs out of Customs.

When openSUSE 11.1 came out a couple of years earlier, I easily received my openSUSE DVDs from SUSE in Jakarta. So we are talking about the declining quality of Indonesian Customs, even while the government keeps talking about good governance. Shame on them, isn't it?

In March 2012 openSUSE sent me 500 promo DVDs of openSUSE 12.1, and this time SUSE changed the incompetent forwarder. The new forwarder asked me to prepare documents to be submitted through government offices, the Ministry of Justice and the Ministry of Commerce, before going to the Customs office. The previous forwarder had never clearly explained this to me. The new forwarder also would not let me bring the documents to the Customs office myself; they did it for me, and I only came to Customs at the end of the process to claim the goods. So on April 26, 2012, three months after the release date, I got the openSUSE 12.1 Promo DVDs here in Jakarta. Huh...
I will distribute the DVDs to school teachers and facilitators in Yogyakarta province in Indonesia, and also to the Indonesian Translation Team for the openSUSE documentation.

Officials, if you read this rambling "shit", don't get me wrong: I am 100% Indonesian, I love this beautiful country very much, so please don't ruin it. Please serve Indonesian citizens better, because that is your only duty as civil servants.
As for the capitalist company: please train your staff and fire your incompetent people!
          Apply now for Arduino Core Developer Workshop!        
We will be hosting a three-day master class in our Turin office, September 29th to October 1st, designed for students, hackers, and engineers ages 18 to 28 with a deep interest in microcontrollers, IoT, and open source development. Led by Martino Facchin, senior developer at Arduino, the class will focus on three main topics: Teamwork and Open Source […]
          Building an Institutional Repository for Your Institution        
Institutional repositories (IRs) have been successfully populated at higher education institutions, where users not only get open access to scholarly publications but can also create collections. This opens the door to sharing research information within and beyond the community. IRs are also extremely useful at research companies.

Research companies could set up different communities to share information at different levels. This reduces duplicate paperwork, such as lab reports, lab records and datasheets. It also minimizes requests for the same information and leads to a greener business environment. An easily customized workflow could be designed to let researchers deposit their data in a moment. IRs could also serve as a platform for records management: the descriptive and administrative metadata can be shared or transferred as part of the management records.

Currently, open source software such as DSpace, Greenstone and Fedora is widely used at academic libraries. Commercial products such as CONTENTdm and Inmagic Presto are also used in business environments. How do you create an institutional repository and promote it in your community? The article "Building an Institutional Repository at a Liberal Arts College" might give you some thoughts and inspiration.

          SlideWiki is now Open Source        
We are pleased to announce that we have just released the source code under the permissive Apache open-source license. It is now available for download from the AKSW Github repository at: The SlideWiki database dumps are also available … Continue reading
          Digital Foosball        

Created by German ad agency SinnerSchrader, Digital Foosball uses some nifty software and hardware to turn the analog game into a digital one. Everything is explained by the rather annoying voice-over in the video, and the software can be downloaded for free here.

It's great to see that they have respected the "rules" of open source technologies such as Processing and Arduino (the coding language and hardware used in the project) and made their code available to download for free. This project is about spreading a platform and an idea. The focus is not profit; it's ideas. As a promotional vehicle for an ad agency, this sends out the message that ideas are more important than profit, and that they can conceive, build, and promote their own "non-traditional" product. I predict they'll get a nice chunk of new business from this project.

Thanks to Ed for the link.
          Google v/s Facebook - Office Pictures        
Which giant would you prefer to work for: Google or Facebook?

It might be a question of personal preference. You can't help but love one company's work more than the other's, which leads to a desire to work for them. Beyond that, you could use some objective measures to choose between the two.

This might help: we give you a sneak peek at the offices of both Google and Facebook. You'll find that both are well designed to enhance the productivity of their workers, with absolutely no trace of standard office cubicles.

Google Office

About Google: (as if you needed this)

Google Inc. is an American public corporation specializing in Internet search. It also generates profits from advertising bought on its similarly free-to-user e-mail, online mapping, office productivity, social networking and video-sharing services. Advert-free versions are available via paid subscription.  

Google has more recently developed an open source web browser and a mobile phone operating system. Its headquarters, often referred to as the Googleplex, is located in Mountain View, California. As of March 31, 2012 the company had 53,546 full-time employees.

Facebook Office

About Facebook:

Facebook, Inc. is a company that operates and privately owns social networking website, Facebook. Users can add friends and send them messages, and update their personal profiles to notify friends about themselves. Additionally, users can join networks organized by city, workplace, school, and region. The website’s name stems from the colloquial name of books given at the start of the academic year by university administrations with the intention of helping students to get to know each other better.

Mark Zuckerberg founded Facebook with his college roommates and fellow computer science students Eduardo Saverin, Dustin Moskovitz and Chris Hughes while he was a student at Harvard University. The website’s membership was initially limited to Harvard students, but was expanded to other colleges in the Boston area, the Ivy League, and Stanford University. It later expanded further to include any university student, then high school students, and, finally, to anyone aged 13 and over. The website currently has more than 500 million active users worldwide.

          Truly embracing Open Source Software development by Microsoft        
Truly, MS has become the ultimate definition of an OSS company. There is no stronger evidence for this than the various hooks from VSTS to other OSS tools such as designers, code editors, and build and deploy tools. The following is a sample of the integration VSTS provides with OSS tools. You can see...
          Using Hugo to Open Source Tyk documentation        

Having explained why we have open sourced our documentation, we wanted to let you know a bit more about how we did it. History lesson Our docs were originally created within our WordPress powered website. This included the use of two plugins to get the content to display using a Table of Contents. It was […]

The post Using Hugo to Open Source Tyk documentation appeared first on Tyk API Gateway and API Management.

          A shiny new look, improved (Open Source) docs, and touch down in Singapore        

Picture the scene. It’s late at night. You’re surrounded by takeout boxes and coffee cups. You’re not sure when it got dark and you really need the bathroom, but just… 10.. minutes… more… perfecting your distributed blockchain deep-learning-powered todo list app (that also controls the weather). Sound familiar? We get it: we’ve been there too. […]

The post A shiny new look, improved (Open Source) docs, and touch down in Singapore appeared first on Tyk API Gateway and API Management.

          Comment on Игры с neural style transfer by My first experiments with ML using Keras | ruX's mind        
[…] I started to experiment with it. First of them was Prisma-like chat bot for Facebook which uses open source implementation of neural style transfer algorithm. Then I made AI-powered […]
But it gets open sourced in the end, so I don't see any problem.
          RE[7]: Comment by shmerl        
Conjecture. Who cares? Only FSF zealots. Sorry, people are going to use what works best now, and developers will target the devices that real people are actually using, not something just because it is open source.
          You can easily install Ubuntu Linux inside Windows 10 soon        
If you’re the sort of technophile that follows Microsoft’s every movement on the Windows front, then you’re probably well aware of how many times Redmond froze hell over with its professed love for Linux and open source. Now it has done so again by making it almost too easy to install Ubuntu, one of the most popular Linux distributions around, … Continue reading
          Free Tickets for the MagentoLive Conference on November 12, 2013 in Munich        
In less than four weeks it will be time: with the MagentoLive conference, the first official Magento event takes place in Munich on November 12, 2013. It is designed as a business event for decision makers, and renowned speakers, including the Magento leadership team, have confirmed their attendance. As a media partner, the eStrategy magazine is raffling five free tickets with a total value of around EUR 1,500 among all registered readers, as well as discount codes for tickets at a special price.

After very successful, sold-out stops in Australia and the UK with enormously positive feedback, MagentoLive Germany is the opportunity to gain insightful views into Magento and its ecosystem and to hear feedback from executives in retail and industry. At the event, visitors can exchange ideas with the Magento community and successful shop operators, and learn first-hand about exciting trends and developments in the world of e-commerce.

MagentoLive is divided into four topic areas: Mobile, Conversion Marketing, E-Commerce Success and E-Commerce Best Practices. Below is a small selection of the session topics:

  • Magento: a look behind the scenes
  • Optimizing the checkout to increase sales: remove customers' fears and shopping barriers by improving your payment and checkout process.
  • Going Global: how its international expansion succeeded
  • Advantages and best practices for cross-border trade
  • A look at tomorrow's e-commerce trends

eStrategy editor-in-chief and TechDivision CEO Josef Willkommer will also be on site and give an interesting talk on "Managing Complex Catalogues". All current information about the speakers and sessions is now available on the MagentoLive website.
Tickets for MagentoLive are available at the early-bird price of EUR 249 until October 23. For e-commerce managers and decision makers who want to stay informed about trends and news from the world of e-commerce, or who are planning to enter e-commerce, MagentoLive should be a must-attend event.

Last but not least: as a participant of MagentoLive Germany, you can win a ticket for the Imagine conference 2014. For those who don't know this event yet: Imagine is the annual flagship Magento conference, held in the USA. More than 1,700 Magento enthusiasts from over 35 countries attend. At , you can get a first impression of past editions.

          Magento Is the Most Popular E-Commerce Software        
The February 2013 edition of Tom Robertshaw's quarterly e-commerce survey shows that Magento ranks among the leading e-commerce platforms for the third time in a row. Compared to other e-commerce platforms, Magento keeps gaining popularity; it has won even more customers than in the previous year, 2012. Magento is booming worldwide and enjoys steadily growing popularity, above all thanks to its security, versatility and numerous features.


The Magento community keeps growing
 The market share of the e-commerce shop system appears to remain in the fast lane. Over the last four years, Magento has developed into the market leader and today counts among the leading technologies and e-commerce platforms. After Magento (26 percent), VirtueMart (9.5 percent) and Prestashop (8.6 percent) take second and third place among the most popular e-commerce platforms. Zen Cart, at 7.2 percent, is now only fourth among the most popular shop systems.


After eBay bought Magento in 2011 and integrated it into X.commerce, Magento was able to develop and grow further. The following graphics illustrate Magento's growth compared to other e-commerce platforms and show the development of the shop system over recent years.

Source: Google Trends


Shop operators prefer Magento as their e-commerce solution
In a survey conducted by Novalnet AG in Q4 2011 and Q1 2012, several hundred shop operators were asked which shop system or e-commerce shops they use. Results are now available for 250 shop sites run by 130 online merchants. The survey found that for 24 percent of older shops and legacy solutions, the tendency is still toward home-grown systems. For new projects and second or third shops, however, the surveyed merchants tend to use ready-made, established system solutions and shop kits. Here Magento takes the top spot with 31 percent of all deployed solutions.


Magento has made it to the top of the e-commerce platforms and has become an indispensable shop system in the e-commerce world. Anyone who wants to stay up to date would do well to take a serious look at Magento sooner or later.

(This post originally appeared on the TechDivision blog.)
          New Issue of the Free eStrategy Magazine Released        
Issue 02/2013 of the eStrategy magazine is available again as a free download at . As in past issues, the eStrategy team has once more bundled plenty of expert knowledge on e-commerce and online marketing into more than 120 pages.

The focus of issue 02/2013 is Big Data. Among others, the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS contributes a comprehensive article describing and analyzing the potential and possibilities of Big Data.

This is complemented by many other exciting topics from the online world: besides Big Data as the main topic, an expert article from eBay covers the "Commerce Revolution". The eStrategy editorial team also conducted an expert interview with , offering a look behind the scenes of the most successful online shoe retailer. The new issue additionally takes a closer look at the trend topic "sharing economy", and specialists share their know-how on current topics such as Google Shopping, future commerce, usability, etc.


The topics of the current eStrategy issue at a glance:

  • Big Data and the customer journey: dreams come true...
  • In Germany the potential of web shops is still underestimated. "Impossible" doesn't exist: how manufacturers and merchants can score beyond the niche
  • Sharing economy: the new era of the social web
  • Tante Emma 2.0, or Big Data
  • Emotional usability: succeeding in e-commerce with heart and mind
  • Big Data: a head start through knowledge. New technologies for the data-driven corner shop
  • eBay: Commerce Revolution
  • Big Data: buzzword or trend?
  •  – a look behind the scenes of one of the most successful online retailers

Online marketing
  • Google Shopping: product data as a critical success factor
  • Online Marketing Intelligence (OMI)
  • Google AdWords 2.0: enhanced campaigns and their possibilities
  • How to steer SEO with good data

  • Legal pitfalls of mobile advertising
  • Big Data: between copyright and data protection law
  • Open source software: legal foundations, opportunities and risks for companies

As usual, the current issue of the eStrategy magazine is rounded off with book recommendations and exciting surf tips.

Who is the magazine for?
The magazine is read mainly by shop and website operators, agencies, consultancies, and IT and marketing managers, and is of course also intended for anyone else interested in e-commerce, online marketing, web development and mobile.

Issue 03/2013 will be published on September 10. Topic planning for the next issue is already underway; guest authors are welcome to send topic proposals to the eStrategy editorial team at . We also welcome feedback and inquiries about partnerships.

Download here...

          TechDivision Releases a New, Free Magento Module        

The new module TechDivision_SystemConfigDiff makes it possible to compare the configuration of a Magento test instance with the corresponding live instance, and to synchronize the two systems with a mouse click.

Since spring 2008, Magento has been one of the most powerful open source shop systems available; it is now in use more than 150,000 times worldwide, and there is currently no end to the success story in sight.

Magento is very powerful and offers extensive functionality, which makes the shop software a good fit for more complex, medium to large shop projects.

Especially in this environment, a multi-stage software setup (often consisting of development, test and live environments) is standard practice, and rightly so, since working on the "open heart" is absolutely not advisable, particularly for heavily frequented shops.

The problem with such multi-stage environments, however, is that differing configurations can cause problems that are sometimes very hard to find, producing costs that are "uncool" for shop operators and service providers alike. Besides configurations, our module also compares CMS blocks and shows the differences, which can likewise be synchronized if desired.

To solve exactly this problem, which most Magento shop operators and service providers probably struggle with, we developed a Magento module that lets you compare the configuration of a test environment with the settings of the live environment in just a few clicks.

The module TechDivision_SystemConfigDiff simply has to be installed and activated on both environments. After that, the systems can be compared with a few clicks and, if required, synchronized right away.

The following video shows exactly how the module works:

The TechDivision SystemConfigDiff module is now available for free download on Magento Connect, Github and nr-apps:
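The comparison the module performs can be pictured as a diff over two key/value configuration stores. The following Python sketch only illustrates that idea; the real module is a Magento (PHP) extension, and the configuration paths and values below are invented for the example:

```python
# Illustrative sketch of the staging-vs-live comparison idea behind
# TechDivision_SystemConfigDiff. The real module is a Magento (PHP)
# extension; the dictionaries and keys here are made up for the example.

def diff_config(test_cfg: dict, live_cfg: dict) -> dict:
    """Return {path: (test_value, live_value)} for every setting that
    differs, including settings present on only one side (missing -> None)."""
    paths = set(test_cfg) | set(live_cfg)
    return {p: (test_cfg.get(p), live_cfg.get(p))
            for p in paths
            if test_cfg.get(p) != live_cfg.get(p)}

def sync(target_cfg: dict, diff: dict, source: int) -> None:
    """Copy the chosen side of the diff into target_cfg (0 = test, 1 = live)."""
    for path, values in diff.items():
        target_cfg[path] = values[source]

test_cfg = {"web/secure/base_url": "https://staging.example/",
            "dev/log/active": "1"}
live_cfg = {"web/secure/base_url": "https://shop.example/",
            "dev/log/active": "1"}

# Only the base URL differs, so only that path appears in the diff.
changes = diff_config(test_cfg, live_cfg)
```

Synchronizing then just means writing the chosen side of each differing path back into the other store, which is what the module's one-click sync does at the Magento level.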

           Magento Connect Marketplace: More Transparency and Quality at Last        
The Magento Connect Marketplace is being cleaned up to guarantee users more quality. Until now, arbitrary software extensions for the shop software could be offered on the Magento marketplace. With the newly introduced "Ratings & Reviews" system and the new quality criteria in Magento Connect, it will be easier for users to judge the usability and usefulness of an extension.

The Magento Connect Marketplace offers more than 500 extensions for the Magento Enterprise Edition. Until now, Magento admitted extensions almost entirely without review. While the number of Magento extensions grew, the opaque system also led to many duplicates, incompatible versions, and modules with serious security holes and/or poor source code. This state of affairs bred mistrust and criticism among Magento developers and users. Above all, marketplace users were annoyed that many extensions were admitted to Magento Connect without any review.

New Magento quality criteria: an end to arbitrary uploads

Magento has recognized the marketplace's weaknesses and wants to give the Magento Connect Marketplace more transparency. It has introduced quality guidelines for new Magento extensions to put an end to the uncontrolled upload of further extensions. If an existing extension does not meet the Magento criteria, it is removed from the marketplace. After removal, it can still be re-uploaded, but the new conditions must be met. In short, "quality over quantity" now applies at Magento.

The Magento Ratings & Reviews system brings clarity

With the help of the new Ratings & Reviews system, merchants and solution partners should be able to recognize the usability, quality and usefulness of an extension more quickly. The new system not only makes it easier for merchants to rate an extension's support, developer, quality and usability; it also helps developers promote their extensions.

Here are a few tips for using the new "Ratings & Reviews" system:

  • Pay attention to the additional ratings
: customers can now give an overall rating for an extension plus three further ratings for developer support, functionality and usefulness. The overall ratings can serve as a guide and orientation.

  • Use positive and negative reviews as a yardstick
: reviews make it easier for users to decide whether an extension is good or bad. The extension owner has the exclusive right to respond to customer criticism, so the reviews give a better advance insight into the quality of the extension and reveal its possible weaknesses. The frequency and quality of the extension owner's replies are also a good indicator of the extension's support level. User assessments are helpful, too, because they can either recommend an extension or advise against it.


  • Browse the Q&A section
: browse the Q&A even if you have no questions of your own. It is a good way to read about other users' experiences. What problems did users have with the extension? Did the extension owners respond? The Q&A section is certainly helpful for getting a first orientation and learning about different extensions.


  • Interpret the popularity score correctly: the popularity number indicates how often a free extension has been downloaded, or how often a Connect user has referred to a paid extension. The popularity score says nothing about the quantity or quality of the extension's reviews. Use it instead as a measure of the extension's relevance to users; to learn about the quality of the extension, it is better to look closely at the Ratings & Reviews system.


Reviews are checked within a few hours and published only after a successful check. The new system will certainly bring more clarity and better orientation to Magento Connect.

Especially for the German-speaking market, the Netresearch appfactory also deserves explicit mention here: it lists reviewed extensions of particular relevance to the German market.
          TechDivision Releases PHPUnit Tests for Magento        

One of Magento's biggest weaknesses (if one can speak of weaknesses at all compared to other shop systems) is the lack of a PHPUnit test suite like the one being introduced with Magento 2. Since Magento has not announced a final release date for Magento 2, and the current Magento versions will therefore certainly be used in projects for some time, Magento Gold Partner TechDivision has ported the Magento 2 unit tests to the current Magento versions. This Magento test suite is now available on Github and may be used and further developed by anyone interested.

Every developer working with Magento surely sorely misses a test suite that makes Magento's base functionality automatically testable with unit and integration tests. In addition, one's own developments should of course be testable both on their own and in relation to the base functionality. Looking at the Magento 2 test suite in more detail, it is easy to see which direction Magento will take in the future. Since quality is a central element of e-commerce applications in particular, Magento will provide its development partners with the test suite and the Magento Automated Testing Guide as the tools for the necessary quality improvements.
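The suite itself is PHPUnit-based; purely as an illustration of the kind of automated regression check such a suite provides, here is a tiny unit test in Python's unittest style (the cart logic and all names are invented for the example):

```python
# The actual suite is PHPUnit (PHP); this Python unittest sketch only
# illustrates the kind of automated check such a suite provides.
# The cart logic below is invented for the example.
import unittest

def cart_total(items):
    """Sum price * qty over the cart's line items."""
    return sum(i["price"] * i["qty"] for i in items)

class CartTotalTest(unittest.TestCase):
    def test_empty_cart_is_zero(self):
        self.assertEqual(cart_total([]), 0)

    def test_quantities_are_multiplied(self):
        items = [{"price": 10.0, "qty": 2}, {"price": 5.0, "qty": 1}]
        self.assertEqual(cart_total(items), 25.0)
```

A check like this, run on every change, is what surfaces regressions that are invisible in normal shop operation, which is exactly the benefit described below.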

Building on these existing foundations, TechDivision has adapted the Magento 2 test suite for use with existing Magento versions, primarily the Magento Enterprise Edition, and has been using it very successfully in practice for quite some time.

Despite the currently still rather modest test coverage, introducing the test suite has already contributed to a considerable improvement in quality. During the migration alone, numerous problems invisible in normal operation were found and fixed. The introduction of the suite, the daily work with it, and the switch to Scrum forced the TechDivision developers to change their way of working toward test-driven development almost overnight. While this initially created extra effort in the form of onboarding, workshops and writing tests, it became apparent after a short time that quality, and with it customer satisfaction, rises disproportionately to the effort.

"With the port of the Magento test suite and its first use in various customer projects, we have already significantly increased quality and stability and, in turn, customer satisfaction, which shows even now that this approach is absolutely right," says Tim Wagner, Head of Development at TechDivision.

The current state of the test suite is available via the Github repository of TechDivision GmbH and can be downloaded at the following link at any time and used free of charge in your own projects:

Since numerous tests are currently still disabled for compatibility reasons, TechDivision would greatly appreciate support from the community in porting all useful tests from Magento 2 and achieving the largest possible test coverage.

A detailed explanation of the test migration and of the correct use of the Magento test suite has been published on the TechDivision blog at the following link:

          Magento Benutzerhandbuch kostenlos zum Downlaod        

The success of the Magento user guide published by the Magento agency TechDivision, along with the positive feedback it received, was reason enough to release the publication as a free download. Until now the guide sold for EUR 34.90 in the TechDivision shop, and over the past few years it has become one of the standard references for Magento shop operators.
Effective immediately, the Magento guide is available for download free of charge to anyone interested on the TechDivision website. The guide is based on the Magento Community Edition and, across roughly 350 pages, covers all relevant backend functionality as well as all of Magento's new features, including the integrated CMS with widget support.

With more than 150,000 installations worldwide, Magento has grown into a fixture in the e-commerce software market and is by now certainly one of the driving forces in German-speaking e-commerce.

Magento impresses not only with extensive functionality and a very clean and, above all, flexible software architecture, but also with a huge worldwide developer community that provides important impetus and solid ongoing development. This makes Magento one of the top candidates when it comes to professional shop software.

To make getting started with Magento a little easier for interested parties and shop operators, long-standing Magento Gold Partner TechDivision published the first German-language Magento user guide back in late 2008 and has continuously extended and updated it since. "As a service provider we have benefited enormously from Magento in recent years. By releasing our guide as a free download we want to give something back," says Josef Willkommer, Managing Director of TechDivision.

The guide is aimed at all current and future shop operators who want to get to know Magento in detail and are looking for detailed instructions for daily work with the Magento administration interface.

A clear, easily understandable writing style along with numerous illustrations, examples, and tables helps readers quickly grasp the software's range of functions. No programming knowledge is required: the technical areas were deliberately left out, and the guide is aimed exclusively at shop operators.

To build a better understanding of the software, the guide opens with the philosophy and the basic ideas behind Magento. Building on that, it examines the advantages and disadvantages of this approach and explains the necessary hardware as well as sensible preliminary considerations. The subsequent chapters then describe all backend menu items relevant to shop operators, step by step.

The TechDivision user guide is an eBook available for download. Compared to a printed book, this form of distribution offers the following advantages:

  • The guide can be updated quickly and continuously.
  • The A4 format offers more room for content and images than a bound format, and the content is easy to search.
  • Embedded links are clickable, so there is no break between media.
  • Readers can make notes in their printout and reprint individual pages without notes at any time.
  • The Magento eBook was the first German-language book on the market, meaning the authors have been working long and intensively on the subject.

          Issue 04/2012 of the eStrategy Magazine Published        

Issue 04/2012 of the eStrategy magazine is now once again available as a free download. In this issue we have again compiled an exciting mix of topics around e-commerce and online marketing: ideal Christmas reading for the upcoming holidays!

In the final issue of 2012 our main focus is on KPIs in online business. Without the right metrics, success on the web becomes difficult in the medium and long term. Tracking solutions help make the business in question very transparent. We show what to watch out for and present a number of possible KPIs.

Below is an overview of all topics in the current issue:


  • Couch commerce: the era of post-PC commerce has begun
  • Customer experience in the online channel
  • Misuse of the right of withdrawal in online retail
  • Expert interview: "It is never too early for usability"
  • Effects of the SEPA changeover on online retail
  • The online and e-commerce world in facts and figures
  • The importance of payment methods for success in e-commerce
  • Revenue losses caused by external widgets


  • KPIs in online marketing
  • "Content is king," or how to build links with good content
  • Video SEO: tips for successful video marketing
  • Effective performance monitoring in email marketing
  • Google tools: the six most important helpers at a glance


  • Practical problems with website development contracts

As always, this issue of the eStrategy magazine is rounded off with interesting book recommendations and exciting surfing tips.

Who is the magazine for?
The eStrategy magazine is aimed primarily at shop and website operators, agencies, consultancies, and IT and marketing executives who want to keep up with current news, trends, and articles on e-commerce and online marketing. The content is written by experts and experienced practitioners, ensuring its practical relevance.

Issue 01/2013 will be published on March 12, 2013. Topic planning for the upcoming issue is already underway, and guest authors are welcome to send us topic suggestions. Anyone who wants to give feedback or is interested in some other form of partnership with the eStrategy magazine can of course contact us at any time.

          The Complexity of Online Retail        
Maintaining and optimizing your own online shop while at the same time advertising it, at minimum on social networks, via newsletter marketing, and through SEO. Meeting legal requirements on top of the actual day-to-day business of purchasing, selling, and shipping goods. For online retailers in particular, these are numerous challenges, every one of which affects business success.

Very few online retailers, however, have the necessary know-how in all of these areas at once. Acquiring that knowledge is a difficult and time-consuming undertaking, especially in the legal field and in marketing disciplines such as SEO. One could wish that the push of a button were enough, for example, to ensure the legal compliance of one's own shop.

Every second online retailer received a cease-and-desist letter in 2011

Which clauses belong in the terms and conditions? What absolutely must appear in the legal notice? Should I use a withdrawal policy in my shop, or rather a return policy? Online retailers have to deal with these and many other questions almost daily. That many pitfalls lurk here is borne out by the fact that in 2011, on average, every second online retailer received at least one cease-and-desist letter.
Many online retailers now outsource the drafting of their legal texts to lawyers specializing in online law. The time savings achieved this way are by no means negligible. However, the effort then shifts to placing the legally compliant texts in the online shop. If the retailer puts the texts in the wrong place, that too can be grounds for a cease-and-desist letter.

Worry-free with the terms-and-conditions interface 

The Händlerbund has meanwhile gone one step further in matters of legal compliance. Its terms-and-conditions interface ensures that the legal texts are always loaded into the designated place in the shop, at the push of a button. The texts are thus not only legally compliant but also always embedded at the position set up once beforehand in the online shop. The interface is already available for many shop systems. But what happens when the law changes? After all, online retailers will no longer be dealing with their shop's legal compliance to the same extent. "As part of the terms-and-conditions update service, we inform online retailers about changes in the law. The retailer then has to load the amended legal texts into the shop. Once again, a single click suffices, and all embedded legal texts are updated and thus legally compliant," answered Andreas Arlt, chairman of the Händlerbund, when asked.
          Issue 03/2012 of the eStrategy Magazine Available        
Just in time for the cooler season ahead, there is once again plenty of well-founded reading material for everyone interested in e-commerce.

Cover of the eStrategy magazine, issue 03/2012

Issue 03/2012 of the eStrategy magazine is now once again available as a free download. The focus of the current issue is email marketing, covered in a comprehensive article with foundational knowledge and a market overview of common systems. This is complemented by many other exciting topics from the online world.

>> Best to download it right away!

Below is an overview of all topics in the current issue:

  • Crowds in the web shop? Load tests secure "instant service" concepts
  • What alternatives are there to Amazon and eBay?
  • Interview with the Magento App Factory, the online marketplace for Magento extensions
  • Realistic usability testing
  • The 5 most important rules of success for your product descriptions
  • The customer life cycle in email marketing
  • A/B testing and multivariate testing
  • Successful email marketing: how to get newsletter delivery right!
  • What shop operators should learn from the latest Google updates!
  • Personal branding in online marketing
  • Seven golden rules for email deliverability
  • Performance optimization needs metrics
  • Tag management systems: benefits for online marketing and web analytics
  • Implementing online projects successfully
  • On the safe side?
  • The privacy policy in Web 2.0

As always, this issue of the eStrategy magazine is rounded off with book recommendations and exciting surfing tips.

The eStrategy magazine is a project of the e-commerce agency TechDivision.

          Customer Retention Measures in E-Commerce        
In times of rising CPC prices and customer acquisition costs in general, customer retention measures are increasingly in demand. Online retailers rely on measures such as newsletters, bonus programs, social media, and stronger customer service. Ultimately, given the increased competition, many shop operators simply can no longer afford to lose customers. Measures to increase customer satisfaction, such as improved shop search, optimized usability and user experience, additional payment methods, and better support, are the logical consequence.

The significance of CRM for online shops, and which customer retention measures make sense and should be used at which stage of the customer life cycle, is described very clearly in the article CRM for online shops.

 Click here for the full article!
          PSMext Module for Magento for Connecting Various Price Comparison Engines Available Again        
The popular Magento export module PSMext, which allows flexible connections to any price comparison engines and product portals, is available for download again in a revised version in the magentoappfactory. The current version of PSMext is also compatible with the latest Magento versions 1.6 and 1.7. The module was originally developed by Magento Gold Partner TechDivision. From now on, as part of a strategic partnership, PSMext will be maintained and supported by the Magento module specialist Netresearch.

With PSMext, export files for almost all common price comparison engines and product portals, as well as for internal analyses, can be created and managed quickly and easily via drag & drop. The shop operator can draw on all attributes defined in Magento.

The export files are simply assembled via drag & drop, and their layout lies entirely in the hands of the shop operator. In addition to the required information such as product name, price, and product URL, the necessary column headings, their order, the separator (e.g. semicolon or TAB), and the file format (CSV or TXT) can all be defined individually.
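Stripped of the drag & drop surface, such an export boils down to choosing columns, headings, a separator, and a file format. A minimal sketch in Python; the product fields and heading names are purely illustrative assumptions, not actual PSMext attribute codes:

```python
import csv
import io

# Hypothetical product records as they might come from the shop's attribute set.
products = [
    {"name": "Coffee Maker", "price": "49.90", "url": "https://shop.example/coffee-maker"},
    {"name": "Toaster", "price": "24.90", "url": "https://shop.example/toaster"},
]

def write_feed(products, columns, headers, delimiter=";"):
    """Write a feed with a configurable column order, header row, and separator."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter=delimiter)
    writer.writerow(headers)                       # custom column headings
    for p in products:
        writer.writerow([p[c] for c in columns])   # values in the chosen order
    return buf.getvalue()

# Semicolon-separated feed with custom column headings:
feed = write_feed(products, ["name", "price", "url"], ["title", "price", "link"])
# Tab-separated variant for portals that expect TXT/TSV files:
tsv_feed = write_feed(products, ["name", "price", "url"],
                      ["title", "price", "link"], delimiter="\t")
```

Each portal's expected heading names and separator would simply be stored as one such profile.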

"We are pleased to have found in Netresearch an experienced partner who will take care of the further development and support of TechDivision modules going forward. Netresearch is the world's only Magento Preferred Extension Development Partner, which guarantees the necessary reliability, professionalism, and credibility for Magento users. It was very important to us to put our existing product business, i.e. the support and further development of our modules, into professional hands, since we want to focus exclusively on development and consulting services in the Magento environment. In that respect this arrangement is an ideal approach for us and for Magento users," says Josef Willkommer, Managing Director of TechDivision.

In the Magento export module PSMext, export files are simply assembled via drag & drop. This means every price comparison engine that can process CSV or TXT files is supported, including:
  • Google Shopping
PSMext is subject to no restrictions whatsoever, which makes it suitable even for large shops with many thousands of products.

The PSMext export module has been used very successfully in daily practice by a large number of customers since autumn 2010 and has proven itself in shops of all sizes.

The following list gives a first impression of PSMext's range of functions:
  • Connect any price comparison engines and product catalogs via drag & drop
  • Preconfigured, editable export profiles for well-known price comparison engines
  • Modifiable and formattable column values
  • Support for tracking parameters (e.g. for Google Analytics)
  • Access to all Magento attributes
  • Control and transfer of shipping costs
  • Selection of the products to be exported via export filters
  • Full support for multi-websites, multi-stores, and multi-store views
  • Flexible creation of export files for internal analyses / affiliate programs
  • Automated export for selected price comparison engines via a configurable cron job
  • Complete integration following the Magento standard
Further information about PSMext and its possible uses can be found at the following links:

Read more about TechDivision and Magento at:
          Issue 02/2012 of the Free eStrategy Magazine Published        
The second eStrategy magazine issue of 2012 once again features top-class articles. This eleventh issue overall is available for free download now.

The cover topic of this issue is "The 2012 hosting market: providers, services & trends". Across more than 112 pages, the current issue again offers a balanced mix of topics from e-commerce, online marketing, web development, and mobile.

Download the eStrategy magazine now!

The complete topic overview:

  • Applying leverage in the right place: key success factors in e-commerce
  • Country-specific payment methods in online shopping: a look across the borders
  • Opportunities and risks of selling via Amazon
  • The 2012 hosting market: providers, services & trends
  • Fraud prevention and customer orientation: how to optimize both in checkout
  • From multi-channel to cross-channel: a challenge for retail
  • Noticeably reducing returns by using PIM
  • Enhancing search results with Google Rich Snippets
  • Content marketing for companies
  • Search engine optimization methods: state of the art
  • Pinterest: a must for shop operators?
  • Mobile frameworks
  • Terms-and-conditions pitfalls: 12 clauses that should not be used in terms and conditions
  • Loyalty programs and taxes: how retailers avoid nasty surprises in a tax audit
  • From August 1, 2012: new legal requirements for online retailers

Download the eStrategy magazine now!


          Rosenheim Online Forum on May 11, 2012 in Kolbermoor        
The 1st Rosenheim Online Forum is aimed at entrepreneurs and decision-makers from Rosenheim and the region and is tailored specifically to the needs of medium-sized companies. Leading internet experts give valuable practical tips in expert talks and report on current developments and trends on the internet. The Rosenheim Online Forum is organized by OVB24, the news portal for south-east Bavaria, and the internet agency TechDivision.

The internet offers medium-sized businesses the chance to make a name for themselves beyond regional borders with simple means, to use new sales channels, and to offer better service. As a neutral information event, the Rosenheim Online Forum reports on current developments and trends on the internet.

Find out in the video exactly who the Rosenheim Online Forum is for and what it is all about!

Program of the Rosenheim Online Forum:

12:30

Get-together

13:30


13:45

Keynote speaker: Philipp Riederle
"24h online, 0h free time? The communication behavior of Generation Y"

14:30

Dr. Matthias Orthwein
"Online law for medium-sized businesses: what you should keep in mind!"

15:15


15:45

Dipl.-Kffr. Aline Eckstein
"Social media: when customers become friends"

16:30

Prof. Dr. Mario Fischer
"Google: black box or source of profit for medium-sized businesses?"

17:15

End & get-together

Tickets for the Rosenheim Online Forum can be purchased online as well as via the ticket hotline 08031/40904-264!

          Price Comparison: An Overview of Price Comparison Portals        
When searching for products on the German-language internet, you quickly run into price comparison sites. They all try to funnel purchase-minded users into online shops with search results that are as good as possible and plenty of product information. For consumers, price comparison sites are therefore a good place to buy products online at good prices.

For the shop operator, the choice is enormous. Without experience it is hard to start with the right services. The influence of media coverage is too great, and there are simply too many price comparison sites. Not every comparison site that runs lots of banner and TV advertising is actually good. There are also sites that concentrate on a specific niche, for example offering product and price comparison only for medicines, and sometimes deliver the best quality precisely in that segment.


Services cost money. Often, though not always, the rule holds: the more money is spent, the higher the quality of the service. In principle the same applies to price comparison sites. Most comparison services bill the referrals from the comparison site to the online shop with a click fee (CPC). There are indeed comparison services that are currently (still) free, such as Google Shopping. Click fees typically start at 8 euro cents and go up to 38 euro cents per referral. Whether a listing in such portals pays off is easy to determine with today's analytics systems.
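Whether a CPC-billed listing pays off is, at its core, a simple break-even calculation once the conversion rate and margin are known. A sketch with hypothetical numbers; only the click price comes from the range quoted above, all other figures are assumptions:

```python
# Hypothetical numbers to illustrate the break-even check for a CPC-billed listing.
clicks = 1000            # visitors forwarded by the price comparison site
cpc = 0.08               # cost per click in EUR (quoted range: 0.08 to 0.38)
conversion_rate = 0.02   # share of forwarded visitors who actually order
margin_per_order = 12.0  # contribution margin per order in EUR

cost = clicks * cpc                                    # 80.00 EUR in click fees
margin = clicks * conversion_rate * margin_per_order   # 240.00 EUR contribution margin
profit = margin - cost                                 # 160.00 EUR: listing pays off here

# Break-even CPC: the click price at which margin exactly covers the click cost.
break_even_cpc = conversion_rate * margin_per_order    # 0.24 EUR per click
```

At these assumed numbers, a portal charging more than 24 cents per referral would already run at a loss for this shop.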


Modern shop systems should be able to export product data. This export is created in TXT/CSV format, and the generated data is placed at a URL on your own server. Depending on the configuration, the shop system can also update this data automatically, using the Task Scheduler on Windows servers or a cron job on Linux servers.

The path to the generated file is passed on to the price comparison service, which can then check the file for changes and process it accordingly. If both sides are automated, product or price updates in the shop are passed on to the comparison service and reflected there promptly.
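Such an automatic update can be wired up with a scheduled job. A hypothetical example for a Linux server; the script path, command, and flags are assumptions for illustration, not part of any particular shop system:

```shell
# Hypothetical crontab entry: regenerate the product feed nightly at 03:15
# and place it where the price comparison service polls it.
15 3 * * * /usr/bin/php /var/www/shop/bin/export_feed.php --format=csv \
    --out=/var/www/shop/pub/feeds/products.csv

# On a Windows server the equivalent job would be registered with the Task
# Scheduler (e.g. via schtasks) instead of cron.
```

The comparison service then only needs the public URL of `products.csv` to pick up changes on its own schedule.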


With Google Analytics (free), for example, you can trace very precisely how many users came to the shop from a given price comparison site. But caution is advised: if the comparison service passes the data on to further websites, discrepancies quickly arise between the clicks billed and the users actually forwarded to the online shop.

To avoid this, so-called campaign parameters should be appended to all product links (also known as deeplinks). Example for Google Analytics:

To monitor the quality of a price comparison site, this method of measurement is indispensable.

If, as a shop operator, you have no technical control over this product data list, you can simply email the part marked in bold to the price comparison service and ask them to append it to your product links. It is important to pass different values to the different platforms. In the example above, these are the values that would be passed to the portal in question.
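Since the concrete parameter string is shop-specific, here is a sketch of how such campaign parameters could be appended to a deeplink in Python. The `utm_source`/`utm_medium`/`utm_campaign` names are the standard Google Analytics campaign parameters; the portal name `portal_a` and the campaign value are placeholders:

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def tag_deeplink(url, source, medium="cpc", campaign="pricecomparison"):
    """Append Google Analytics campaign parameters to a product deeplink,
    preserving any query parameters the link already carries."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({"utm_source": source, "utm_medium": medium, "utm_campaign": campaign})
    return urlunparse(parts._replace(query=urlencode(query)))

# A different utm_source per portal keeps the traffic separable in the reports:
link = tag_deeplink("https://shop.example/product?id=42", source="portal_a")
```

Tagging each portal with its own `utm_source` is what lets you compare the billed clicks against the sessions Analytics actually records per portal.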


Tom Adebahr has been an active blogger for many years. Among other things, he writes for the blog and also works in e-commerce professionally. In his free time he enjoys sports, travel, and time with his family.

          New Issue 01/2012 of the eStrategy Magazine Published        
The first eStrategy magazine issue of 2012, the tenth overall since its first publication in 2009, is now available as a free download.

The focus topic of this issue is the extensive subject of shipping and fulfillment. Beyond that, the current issue again offers a balanced mix of topics from e-commerce, online marketing, web development, and mobile, and runs to more than 118 pages in total.

The complete topic overview:

  • E-commerce trends of the future
  • The ordering process as a success factor: standards, statistics, and good practices
  • Fulfillment: shipping & fulfillment providers, and what they have to offer
  • More revenue through an intelligent shop search
  • Intelligent logistics management
  • The German e-commerce market in numbers
  • Social media marketing: productivity drain or strategic necessity
  • Are your Google click-through rates optimal yet? (Click-through-rate optimization, part II)
  • Success factors in viral marketing campaigns
  • E-shop & blog: an ideal combination for customer acquisition
  • SEO trends 2012
  • In the sights of the Google quality guidelines: looking over the shoulder of Google quality raters
  • WordPress as a business CMS? Possibilities and limits compared to TYPO3
  • Everything you should know about agile project development with Scrum
  • Bye-bye pixels: the better QR code?
  • M-Days: mobile trends 2012
  • Brand protection in e-commerce: set your offering apart from competitors
As always, this issue of the eStrategy magazine is rounded off with book recommendations and exciting surfing tips.

Who is the magazine for?
The magazine is read mainly by shop and website operators, agencies, consultancies, and IT and marketing executives, and is of course also intended for anyone else interested in e-commerce, online marketing, web development, and mobile.

Issue 02/2012 will be published on June 12, 2012. Topic planning for the next issue is likewise already underway. Guest authors are welcome to send us topic suggestions until April 30. Anyone who wants to give feedback or is interested in some other form of partnership with the eStrategy magazine can of course contact us at any time.

> Download eStrategy issue 01/2012!
          eCommerce Day on February 10 in Kolbermoor        
The eCommerce competence center is holding an eCommerce Day in Kolbermoor on February 10. The event is aimed at medium-sized companies that want to enter e-commerce, or that already sell online successfully but are looking for further ways to optimize. The eCommerce Day also offers an ideal platform for exchanging experiences with other companies.

>> Register for the eCommerce Day (participation free of charge)


12:00
Registration, light snacks & conversation

13:00
Welcome & eCommerce trends

13:10

Magento: the new star in the shop software sky
Bas Nawijn, Head of Sales D-A-CH, Magento Inc.
Customer report: Christian Sommer,

13:50

Multichannel: opportunities & challenges for
& inventory management
Thomas Buck, Managing Director, Thomas Buck GmbH
Customer report: Daniel Popp, Popp-PC GmbH & Co. KG

14:30
Coffee break & exhibition with exchange of experiences

15:15
Legal pitfalls in eCommerce
Dr. Matthias Orthwein, SKW Schwarz

15:45
How a single sentence can become a revenue turbo,
or usability optimization in practice
Josef Willkommer, Managing Director, TechDivision GmbH

16:15
The Google effect: with the right visitors to

Adrian Gluchow, Managing Director, TechDivision eConsulting

16:45
Closing & get-together

Learn more about the eCommerce competence center!

          4th Magento Stammtisch on November 24, 2011 in Munich        
Not only is Magento itself developing rapidly and recording vigorous growth (see our report "Magento market share: the growth continues"); the Magento community, too, keeps getting bigger and more popular. The year 2011 is slowly drawing to a close, but there is one more Magento event worth attending: on November 24 at 7:00 p.m., SOPRADO is hosting the 4th Magento Stammtisch on the premises of PayPort, Stollbergstrasse 11, 80539 Munich.

The program features interesting talks on Magento & e-commerce:

  • Damian Luszczymak (icyapp): Magento 2, the current state
  • Josef Willkommer (TechDivision): the interplay of Magento and TYPO3 in the WMF relaunch
  • Thomas Buck (buck.making IT easy.): multichannel selling
  • Charlotte Borgato (PayPort): PayPort purchase on account / installment payment as a revenue booster
An eventful but very successful 2011 is coming to an end for Magento.
We would be delighted to welcome you at one of the last events of the year!

Further information is available at:
4th Magento Stammtisch, Munich
          Magento Market Share: The Growth Continues        
The latest e-commerce survey, from October 2011, found 4% more shop systems than the June survey, which had counted 26,594 shop systems in total. The analysis examines Alexa's top one million homepages and identifies them by detecting features of 32 different e-commerce systems.

Zen Cart and Magento both recorded enormous monthly growth of 18%. Magento, however, is the real winner, accounting for 20% of all shop systems found. We have observed continuous growth for Magento over the past 12 months, and there is no sign of a slowdown.

osCommerce remains the fourth most popular platform, but its downward trend continues, with 200 fewer sites than in the June survey.

A number of other e-commerce platforms have also grown over the past 12 months, including Interspire, OpenCart, PrestaShop, and UberCart.

The presence of Magento's Enterprise Edition has risen by more than a third, from 274 to 378 sites. For Magento developers it is of course highly encouraging to see more and more companies moving to this platform.

As a side note, the hosting location of each site was checked. Of the 26,000 sites, more than 10,000 were hosted in the United States. Germany ranked second with just under 2,000 sites.