Croquet: Virtual Worlds Platform
Croquet Software Demo Movie (August 2007)
This is an edited screen capture from a Croquet demo given by Julian Lombardi in August 2007. It shows some of the capabilities of the open source Croquet software development environment. Croquet is designed for use in creating and deploying large-scale distributed multi-user virtual 3D applications and metaverses.
The Croquet architecture supports synchronous communication, collaboration, resource sharing and computation among large numbers of users on multiple platforms and multiple devices.

          "We'll See If It Happens"   

NASHUA, N.H.—The metaphor of choice for Howard Dean's Internet-fueled campaign is "open-source politics": a two-way campaign in which the supporters openly collaborate with the campaign to improve it, and in which the contributions of the "group mind" prove smarter than that of any lone individual.

Dean campaign manager Joe Trippi has admitted on numerous occasions (including this Slashdot post) that his time in Silicon Valley affected his thinking about politics. "I used to work for a little while for Progeny Linux Systems," Trippi told cyber-guru Lawrence Lessig in an August interview. "I always wondered how could you take that same collaboration that occurs in Linux and open source and apply it here. What would happen if there were a way to do that and engage everybody in a presidential campaign?"

But tonight, at the end of a town hall meeting at Daniel Webster College, is the first time I've seen the metaphor in action. Even if it had nothing to do with the Internet.

At the end of tonight's event, Paul Johnson, an independent voter from Nashua who supported John McCain in 2000 and has supported Dean since May, tells Dean that he's "deeply troubled" by the idea that his candidate is going to turn down federal matching funds and bust the caps on campaign spending. Politics is awash in too much money, Johnson says. Why not take the moral high ground and abide by the current system? That sounds like a great idea until Bush spends $200 million, Dean says. Well, then "challenge him to spend less," Johnson replies. Tell him you'll stay under the spending limits if he does, too. Dean's face lights up. "I'll do that at the press conference on Saturday," he says. "That's a great idea." (Saturday at noon is when Dean is scheduled to announce the results of the campaign vote on whether to abandon public financing.)

I walk over to Dean's New Hampshire press secretary, Matthew Gardner, and tell him his candidate just agreed, in an instant, to announce on Saturday that he'll stay under the federal spending caps for publicly financed candidates, if President Bush agrees to do the same (which, admittedly, is more than a little unlikely). Gardner looks puzzled, then laughs. "That'll be interesting," he says. "We'll see if it happens."

          Linus Explains What Surprises Him After 25 Years Of Linux   
Linus Torvalds appeared in a new "fireside chat" with VMware Head of Open Source Dirk Hohndel. An anonymous reader writes: Linus explained what still surprises him about Linux development. "Code that I thought was stable continually gets improved. There are things we haven't touched for many years, then someone comes along and improves them or makes bug reports in something I thought no one used. We have new hardware, new features that are developed, but after 25 years, we still have old, very ba ...
          The Better Media Library ("Die bessere Mediathek")   
English synopsis: MediathekView is a great piece of open source software that makes it easy to scan the fragmented variety of video libraries run by German, Austrian and French public broadcasters. MediathekView is a free program (open source, for Mac, Linux, Windows), … Continue reading
          Cheese: Taking Photos and Videos with a Webcam on Ubuntu   
Are you looking for a program to take photos or record video using a webcam on Ubuntu? If so, you have come to the right place. In this article I will introduce an open source program that can capture photos and video through a webcam. The program is called Cheese. :)
You seem to be in quite a hurry, so let's get straight to the installation. :D

Installation is actually quite simple, because the package is already available in the Ubuntu repositories. So we start by typing the command below to update our package lists first, in case there are newer versions of our programs, or of Cheese:
$ sudo apt-get update
After that, install Cheese with the following command:
$ sudo apt-get install cheese
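A quick sanity check after the install (a generic shell idiom, not from the original post; it assumes the binary name matches the package name, `cheese`):

```shell
# Verify that the 'cheese' binary is now on PATH (hypothetical follow-up;
# the binary name is assumed to match the package name).
if command -v cheese >/dev/null 2>&1; then
  echo "cheese is installed"
else
  echo "cheese is not installed; run: sudo apt-get install cheese"
fi
```

`command -v` is POSIX, so the same check works in dash, bash, and zsh.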
Easy, isn't it? ^^

That's all from me; I hope it is useful for all of us. ^_^

          Radio Station Broadcast Software Podcasting   
All the software you require, plus instructions on how to set up your computer with the online servers: audio logger, playout automation, jingle players. It's all here in a serious collection of software and tools. All the radio software has been used for the last 10 years, 24 hours a day, seven days a week, on Global Radio, World Dance Radio, Starpoint Radio, LWR Radio and Solar Radio London with no issues. I have added another folder, free of charge, with more up-to-date software; I have not tested it because I did not like the layouts. These are for Windows computers only. This is open source software and an instant download! If you have ordered this collection on DVD, it will come printed and in a protected sleeve; please check that you have ordered the correct format from the pop-up menu before committing to buy. The download files are compressed with RAR.
          Red Hat launches a 100% open source hyperconverged infrastructure solution   
          Senior Oracle DBA with operational and R&D experience   
Senior Oracle DBA with operational and R&D experience. 3+ years of experience as an Oracle DBA, from both an operational and an R&D perspective. 1+ years of experience as an "open source" DBA with either MySQL or PostgreSQL. Appetite for learning new technologies and getting your hands dirty with cutting-edge database solutions. Self-sufficient and a hard worker. Fluent in spoken and written technical English. Experience in: proven hands-on work operating and managing Oracle 12c databases and de...
          Microsoft .NET’s JPEG encoder makes crappy JPEGs   
Microsoft .NET has been making quite a bit of headway in the developer community recently with both the open source efforts and the upcoming ASP.NET 5 modern web framework. With so much attention on making .NET a “hip” platform (and hopefully breaking into the startup ecosystem), I would like to draw attention to a very […]
          Outliners and hierarchic thinking   

Scott Rosenberg responds to a rant claiming that outliners are harmful because they force hierarchic thinking.

I've spent most of my adult life thinking about this, at least part-time, and with all due respect, the people who are criticizing outliners have vastly over-simplified them.

Think of an outliner as text-on-rails. It does exactly the opposite of forcing you to live with a hierarchy. It allows you to edit the hierarchy.

The equivalent criticism of unstructured text would be to say that word processors are harmful because printing forces a rigidity to thinking. But the power of a word processor is that it doesn't force your ideas onto paper; it makes revision easy, and that has led to many variants, from email to blogs and wikis. All were descended from the word processor, which was originally designed (misguidedly imho) to put words on paper.

I am one of the major proponents of outlining, along with Doug Engelbart, and I never imagined an outliner that forced you into a hierarchic box. For me, the puzzle wasn't solved until the hierarchy was perfectly malleable. We reached that milestone, again imho, a long time ago, in the mid-late 80s, with MORE, and since then, text-on-rails has been a solved problem. Now the trick is to introduce the idea more broadly. That's still waiting to happen.

My entry in this cause is the OPML Editor, which is open source, GPL, and is also a powerful text programming environment and content management tool, following in the tradition of previous programmable text tools, like Emacs on Unix.

          Welcome to the Laughing Horse Blog.   
This is a blog for the Laughing Horse Books and Video Collective. We are an all-volunteer-led and -run bookstore. We have a meeting space available for like-minded groups to rent out or use free of charge. There are two public-access terminals (computers the public can use for free) running and testing free, open source software (Ubuntu Linux), and we also broadcast a free, open Wi-Fi signal, so you can bring your own Wi-Fi-ready device and surf away. We do enforce a "Safer Space" policy and expect those who enter the bookstore to work with us on keeping the place a "safer space"; if you have questions about it, please ask the volunteer staffing the shift at the bookstore. So come on down, though you might want to call first to make sure someone is there to let you in. Since we are all volunteers, we are not always able to fill all shifts for the whole shift. Thanks for your understanding. Mike-d. (part of the Laughing Horse Collective).

          Currently Opening for PHP Developer | Bangalore   
TalPro - Bangalore, Karnataka - 1. Years of experience in Drupal or other PHP frameworks: 4+ years. 2. Years of experience in WordPress: 5+ years. 3. Years of experience... using Drupal or other PHP frameworks. 7. Technical skill: a. Architectural design: demonstrates the use of open source and commercial tools...
          Release Notes: OTRS Business Solution 5s Patch Level 19   
  June 06, 2017 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the nineteenth patch level release of its OTRS Business Solution™ 5s.   Release Details Release name: OTRS Business Solution™ 5s Patch Level 19 Release date: June 06, 2017 Release type:
          Release Notes: OTRS 5s Patch Level 20   
June 06, 2017 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the 20th patch level release of OTRS 5s. Release Details Release name: OTRS 5s Patch Level 20 Release date: June 06, 2017 Release type: patch level Security Issues Advisory 17-2 Advisory
          Release Notes: OTRS::ITSM Module 5s Patch Level 20   
June 06, 2017 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the 20th patch level release of the OTRS::ITSM Module 5s. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS 4 Patch Level 24   
June 06, 2017 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the 24th patch level release of OTRS 4. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS::ITSM Module 4 Patch Level 24   
June 06, 2017 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the 24th patch level release of the OTRS::ITSM Module 4. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS 3.3.17   
June 06, 2017 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the seventeenth release of OTRS 3.3. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5. Release Details Release name: OTRS 3.3.17
          Release Notes: OTRS Business Solution 5s Patch Level 18   
  May 09, 2017 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the eighteenth patch level release of its OTRS Business Solution™ 5s.   Release Details Release name: OTRS Business Solution™ 5s Patch Level 18 Release date: May 09, 2017 Release type:
          Release Notes: OTRS 5s Patch Level 19   
May 09, 2017 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the 19th patch level release of OTRS 5s. Release Details Release name: OTRS 5s Patch Level 19 Release date: May 09, 2017 Release type: patch level Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS::ITSM Module 5s Patch Level 19   
May 09, 2017 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the nineteenth patch level release of the OTRS::ITSM Module 5s. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS FAQ 5 Patch Level 9   
May 09, 2017 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the release of the module OTRS FAQ 5 Patch Level 9. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS Survey 5 Patch Level 4   
May 09, 2017 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the fourth patch level release of OTRS Survey 5. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS MasterSlave 5 Patch Level 9   
May 09, 2017 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the ninth patch level release of the OTRS MasterSlave module 5. Please note that from now on we will only officially support OTRS 4, OTRS 5 and following versions.
          Release Notes: OTRS Business Solution 5s Patch Level 17   
  March 28, 2017 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the seventeenth patch level release of its OTRS Business Solution™ 5s.   Release Details Release name: OTRS Business Solution™ 5s Patch Level 17 Release date: March 28, 2017 Release type:
          Release Notes: OTRS 5s Patch Level 18   
March 28, 2017 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the 18th patch level release of OTRS 5s. Release Details Release name: OTRS 5s Patch Level 18 Release date: March 28, 2017 Release type: patch level Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS::ITSM Module 5s Patch Level 18   
March 28, 2017 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the eighteenth patch level release of the OTRS::ITSM Module 5s. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS 4 Patch Level 23   
March 28, 2017 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the 23rd patch level release of OTRS 4. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS::ITSM Module 4 Patch Level 23   
March 28, 2017 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the 23rd patch level release of the OTRS::ITSM Module 4. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS Business Solution 5s Patch Level 16   
  March 07, 2017 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the sixteenth patch level release of its OTRS Business Solution™ 5s.   Release Details Release name: OTRS Business Solution™ 5s Patch Level 16 Release date: March 07, 2017 Release
          Release Notes: OTRS 5s Patch Level 17   
March 07, 2017 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the 17th patch level release of OTRS 5s. Release Details Release name: OTRS 5s Patch Level 17 Release date: March 07, 2017 Release type: patch level Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS::ITSM Module 5s Patch Level 17   
March 07, 2017 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the seventeenth patch level release of the OTRS::ITSM Module 5s. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS FAQ 5 Patch Level 8   
March 07, 2017 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the release of the module OTRS FAQ 5 Patch Level 8. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS Survey 5 Patch Level 3   
March 07, 2017 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the third patch level release of OTRS Survey 5. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS MasterSlave 5 Patch Level 7   
March 07, 2017 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the seventh patch level release of the OTRS MasterSlave module 5. Please note that from now on we will only officially support OTRS 4, OTRS 5 and following versions.
          Release Notes: OTRS 4 Patch Level 22   
March 07, 2017 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the 22nd patch level release of OTRS 4. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS::ITSM Module 4 Patch Level 22   
March 07, 2017 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the 22nd patch level release of the OTRS::ITSM Module 4. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          OTRS Asia Ltd. wins new Partner in China   
New Partner in China: Xi’an Dian Tong Software Co., Ltd.   Oberursel, JAN 30, 2017 +++ OTRS Asia Ltd., part of the OTRS Group, wins Xi’an Dian Tong Software Co., Ltd. as a new certified OTRS Partner in mainland China. The OTRS Group, the vendor and world’s leading provider of the open source helpdesk software “OTRS”
          Release Notes: OTRS Business Solution 5s Patch Level 15   
  January 24, 2017 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the fifteenth patch level release of its OTRS Business Solution™ 5s.   Release Details Release name: OTRS Business Solution™ 5s Patch Level 15 Release date: January 24, 2017 Release
          Release Notes: OTRS 5s Patch Level 16   
January 24, 2017 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the 16th patch level release of OTRS 5s. Release Details Release name: OTRS 5s Patch Level 16 Release date: January 24, 2017 Release type: patch level Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS::ITSM Module 5s Patch Level 16   
January 24, 2017 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the sixteenth patch level release of the OTRS::ITSM Module 5s. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS 4 Patch Level 21   
January 24, 2017 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the 21st patch level release of OTRS 4. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS::ITSM Module 4 Patch Level 21   
January 24, 2017 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the 21st patch level release of the OTRS::ITSM Module 4. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS Appointment Calendar 5 Patch Level 2   
December 20, 2016 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the release of the module OTRS Appointment Calendar 5 Patch Level 2. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS Business Solution 5s Patch Level 14   
  December 13, 2016 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the fourteenth patch level release of its OTRS Business Solution™ 5s.   Release Details Release name: OTRS Business Solution™ 5s Patch Level 14 Release date: December 13, 2016 Release
          Release Notes: OTRS 5s Patch Level 15   
December 13, 2016 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the 15th patch level release of OTRS 5s. Release Details Release name: OTRS 5s Patch Level 15 Release date: December 13, 2016 Release type: patch level Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS::ITSM Module 5s Patch Level 15   
December 13, 2016 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the fifteenth patch level release of the OTRS::ITSM Module 5s. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS 4 Patch Level 20   
December 13, 2016 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the 20th patch level release of OTRS 4. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS::ITSM Module 4 Patch Level 20   
December 13, 2016 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the 20th patch level release of the OTRS::ITSM Module 4. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          OTRS Group is launching a new version of its service management suite   
OTRS 5s supports service management with integrated scheduling and deployment planning, encryption, and video communication at the click of a mouse. Oberursel, NOV 1, 2016 – OTRS Group, the vendor and world’s leading provider of the open source service management suite “OTRS”, is launching the new version of its software today: OTRS 5s. OTRS 5s
          Release Notes: OTRS Business Solution 5s   
  November 01, 2016 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the release of its OTRS Business Solution™ 5s.   Release Details Release name: OTRS Business Solution™ 5s Release date: November 01, 2016 Release type: patch level     What’s
          Release Notes: OTRS 5s Patch Level 14   
November 01, 2016 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the stable release of OTRS 5s. Release Details Release name: OTRS 5s Patch Level 14 Release date: November 01, 2016 Release type: patch level Security Issues Advisory 15-3 Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS::ITSM Module 5s Patch Level 14   
November 01, 2016 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the release of the OTRS::ITSM Module 5s. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS Appointment Calendar 5   
November 01, 2016 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the release of the module OTRS Appointment Calendar 5. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS FAQ 5 Patch Level 7   
November 01, 2016 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the release of the module OTRS FAQ 5 Patch Level 7. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS 4 Patch Level 19   
November 01, 2016 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the 19th patch level release of OTRS 4. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS::ITSM Module 4 Patch Level 19   
November 01, 2016 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the 19th patch level release of the OTRS::ITSM Module 4. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS 3.3.16   
November 01, 2016 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the sixteenth release of OTRS 3.3. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5. Release Details Release name: OTRS 3.3.16
          OTRS AG and OTRS 5s bring the feature “EasyConnect” for voice and video communication   
Oberursel, OCT 11, 2016: OTRS AG (WKN A0S9R3), the vendor and world’s leading provider of the open source help desk software “OTRS”, including the supplementary IT-Service Management Module “OTRS::ITSM”, uses the established W3C standard WebRTC for its feature EasyConnect in order to make communication more professional in real time, directly in the OTRS browser window.
          Release Notes: OTRS Business Solution 5 Patch Level 12   
  September 20, 2016 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the twelfth patch level release of its OTRS Business Solution™ 5.   Release Details Release name: OTRS Business Solution™ 5 Patch Level 12 Release date: September 20, 2016 Release
          Release Notes: OTRS 5 Patch Level 13   
September 20, 2016 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the 13th patch level release of OTRS 5. Release Details Release name: OTRS 5 Patch Level 13 Release date: September 20, 2016 Release type: patch level Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS::ITSM Module 5 Patch Level 13   
September 20, 2016 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the 13th patch level release of the OTRS::ITSM Module 5. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS Survey 5 Patch Level 2   
September 20, 2016 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the second patch level release of OTRS Survey 5. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS TimeAccounting 5 Patch Level 4   
September 20, 2016 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the fourth patch level release of the module OTRS TimeAccounting 5. Please note that from now on we will only officially support OTRS 4, OTRS 5 and following versions.
          Release Notes: OTRS MasterSlave 5 Patch Level 3   
September 20, 2016 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the third patch level release of the OTRS MasterSlave module 5. Please note that from now on we will only officially support OTRS 4, OTRS 5 and following versions.
          OTRS AG presents: Ready2Adopt – The process-template and web-service feature of the new OTRS 5s   
Oberursel, SEPT 20, 2016: OTRS AG (WKN A0S9R3), the vendor and world’s leading provider of the open source help desk software “OTRS”, including the supplementary IT-Service Management Module “OTRS::ITSM”, presents another feature from OTRS 5s – available on Nov. 1st – with import-ready process templates and a series of useful connectors. Many of our daily
          Release Notes: OTRS FAQ 5 Patch Level 6   
September 20, 2016 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the release of the module OTRS FAQ 5 Patch Level 6. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS Business Solution 5 Patch Level 11   
  August 12, 2016 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the eleventh patch level release of its OTRS Business Solution™ 5.   Release Details Release name: OTRS Business Solution™ 5 Patch Level 11 Release date: August 12, 2016 Release
          Release Notes: OTRS Business Solution 4 Patch Level 7   
  August 12, 2016 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the seventh patch level release of its OTRS Business Solution™ 4.     Release Details Release name: OTRS Business Solution™ 4 Patch Level 7 Release date: August 12, 2016
          Release Notes: OTRS 5 Patch Level 12   
August 09, 2016 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the 12th patch level release of OTRS 5. Release Details Release name: OTRS 5 Patch Level 12 Release date: August 09, 2016 Release type: patch level Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS::ITSM Module 5 Patch Level 12   
  August 09, 2016 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the 12th patch level release of the OTRS::ITSM Module 5. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and
          Release Notes: OTRS 4 Patch Level 18   
  August 09, 2016 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the 18th patch level release of OTRS 4.   Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and following versions.
          Release Notes: OTRS::ITSM Module 4 Patch Level 18   
  August 09, 2016 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the 18th patch level release of the OTRS::ITSM Module 4. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and
          Release Notes: OTRS Business Solution 5 Patch Level 10   
  July 19, 2016 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the tenth patch level release of its OTRS Business Solution™ 5.   Release Details Release name: OTRS Business Solution™ 5 Patch Level 10 Release date: July 19, 2016 Release
          Release Notes: OTRS FAQ 5 Patch Level 5   
  July 1, 2016 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the release of the module OTRS FAQ 5 Patch Level 5. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5
          Release Notes: OTRS FAQ 4 Patch Level 5   
  July 1, 2016 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the release of the module OTRS FAQ 4 Patch Level 5. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5
          Release Notes: FAQ 2.3.6   
  July 1, 2016 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the stable release 2.3.6 of the FAQ module.     Release Details Release name: FAQ 2.3.6 Required OTRS release: OTRS Help Desk 3.3.6 or higher Release date: July
          Release Notes: OTRS Business Solution 5 Patch Level 9   
  June 28, 2016 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the ninth patch level release of its OTRS Business Solution™ 5.   Release Details Release name: OTRS Business Solution™ 5 Patch Level 9 Release date: June 28, 2016 Release
          Release Notes: OTRS 5 Patch Level 11   
June 28, 2016 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the 11th patch level release of OTRS 5. Release Details Release name: OTRS 5 Patch Level 11 Release date: June 28, 2016 Release type: patch level   Please note that
          Release Notes: OTRS::ITSM Module 5 Patch Level 11   
  June 28, 2016 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the 11th patch level release of the OTRS::ITSM Module 5. Please note that from now on we will only officially support versions OTRS 4 and OTRS 5 and
          Release Notes: OTRS TimeAccounting 5 Patch Level 3   
  June 28, 2016 — OTRS, the world’s leading provider of open source Help Desk software and ITIL® V3 compliant IT Service Management (ITSM) software, today announces the release of the module OTRS TimeAccounting 5 Patch Level 3. Please note that from now on we will only officially support OTRS 4, OTRS 5 and following versions.  
          Release Notes: OTRS Business Solution 5 Patch Level 8   
  June 8, 2016 — OTRS, the world’s leading provider of open source Help Desk and ITIL® V3 compliant IT Service Management (ITSM) solutions, today announces the eighth patch level release of its OTRS Business Solution™ 5.   Release Details Release name: OTRS Business Solution™ 5 Patch Level 8 Release date: June 8, 2016 Release
          AT&T Expands Ultra-Fast, Next Gen Wireless Trials in Austin   

AT&T today announced that the company is launching a second trial of ultra-fast fifth generation (5G) wireless broadband in Austin. According to an AT&T announcement, the company is expecting to use millimeter wave broadband to deliver speeds up to 1 Gbps in the trial, which will involve testing the company's DirecTV Now streaming service over these connections. The company says it had already been testing streaming video over 5G at the company's Middletown, New Jersey lab, and this is technically its second trial of millimeter wave in Austin.

Obviously it's too early to know what AT&T intends to charge for this ultra-fast service, or what kind of usage restrictions or caps will be imposed on the connections.

"In Austin, we're testing DIRECTV NOW over ultra-fast internet speeds at a variety of locations," AT&T says of the trial. "The network of the future will help redefine what connectivity means to both consumers and businesses. This trial helps show that the new reality is coming fast."

Again, it's worth reiterating that you won't just be able to run out and nab a 5G phone later this year and experience blistering speeds. The 5G standard hasn't even been formally created yet, and widespread deployment of 5G technology isn't expected to arrive until 2020 or later. There's also the lingering question of what exactly AT&T will charge for the honor of using this ultra-fast, lower latency network.

Still, the shift toward 5G is really a shift toward a number of collaborative technologies AT&T's calling "AT&T Network 3.0 Indigo" for branding purposes. This includes an overall push toward software defined networking (SDN) and the adoption of AT&T's open source ECOMP orchestration platform created to power the software-defined network. Collectively, these shifts should dramatically boost efficiency and lower costs for AT&T (but not necessarily you).

AT&T's full announcement has some additional detail on the company's latest ultra-fast 5G wireless broadband trials.

          (IT) Performance Engineering Lead - Testing/DevOps/Microservices/Cloud   

Rate: AUD   Location: Melbourne, Melbourne   

Performance Engineering Lead - Testing/DevOps/Microservices/Cloud

About Cognizant
Cognizant (NASDAQ: CTSH) is a leading provider of information technology, consulting, and business process services, dedicated to helping the world's leading companies build stronger businesses. Headquartered in Teaneck, New Jersey (U.S.), Cognizant combines a passion for client satisfaction, technology innovation, deep industry and business process expertise, and a global, collaborative workforce that embodies the future of work. Cognizant is a member of the NASDAQ-100, the S&P 500, the Forbes Global 2000, and the Fortune 500 and is ranked among the top performing and fastest growing companies in the world.

Our Culture
Your passion, integrity and experience are integral to Cognizant's success. You will be welcomed into a dynamic and expanding global leader in IT and business consultancy where you will be valued for who you are. We take pride in our partnership with our clients, so your ability to add value and provide exceptional service to our clients is fundamental to your success. In return, you will be empowered with opportunities to develop your career and collaborate with talented colleagues in a supportive, diverse environment. At Cognizant we recognize that companies that are open and welcoming to a multi-cultural, diverse workforce thrive on fresh perspectives and collaborative knowledge. Cognizant is focused on promoting and increasing gender diversity and on providing a workplace which encourages broad participation and an equal playing field, where merit and accomplishment are the only criteria for success.

The Role and its Responsibilities
Cognizant is seeking an experienced Performance Engineering Lead to drive one of our key client engagements based in Melbourne.
In this role your key responsibilities will include:
  • Working closely with the customer to drive delivery onsite
  • Being the face of Cognizant on the ground and providing technical direction to the team
  • Managing large teams, drawing on a strong background in process & governance and stakeholder management
  • Applying very strong technical knowledge of Digital, Cloud, microservices and DevOps
  • Performing NFR analysis and impact analysis of changes, and designing performance tests to validate NFRs
  • Reviewing the solution design and architecture in detail to provide improvement recommendations
  • Building tools and utilities to drive the test solution, including open source and licensed tools
  • Executing tests, performing analysis and providing recommendations for performance improvement
  • Analysing test results in detail and deep-diving to the root cause of performance issues
  • Working with offshore centres to drive delivery
  • Attending daily and weekly sync-ups with stakeholders
  • Participating in weekly training sessions to upskill team members
Great team management and stakeholder management skills are essential; experience in the communications and telecom sector will be an added advantage. Qualifications Skills and
Rate: AUD
Type: Full Time
Location: Melbourne, Melbourne
Country: Australia
Contact: Cognizant
Advertiser: Cognizant
Reference: JS579707845/233744424

          Bookmarks for May 16, 2011   

Cyberlibrary: CyberLibrary is an Open Source library management system that can be run from anywhere on the internet or an intranet. openbook4wordpress: When you insert an OpenBook shortcode with an […]


          Open source documentation in practice (Kubernetes case study), by John Mulhausen   
Lightning talk presentation given at Write the Docs San Francisco, March 29, 2016.
 4.0.1 PowerPC – A luxury suite that costs you nothing   

Gone are the days when you were forced to pay a prohibitive price (or get hold of an illegal copy) for an efficient suite of programs that lets you write a letter, create a presentation or work with a spreadsheet. There are now completely free, functional and legal alternatives, among which one stands out.

Based on the StarOffice code, released freely by Sun Microsystems, this package of open source applications includes Writer (word processing), Calc (spreadsheets), Impress (presentations), Base (databases), Math (mathematical formulas) and Draw (vector graphics). The suite works with many document formats, can export to PDF, and is fully compatible with the most widespread Microsoft Office formats (including .doc, .xls and .ppt).

All the applications have an intuitive interface, localized in Italian and very similar to that of the MS Office programs, so you will have no trouble feeling at home. They work, and they are stable, reliable and totally free. And the developer community is starting to turn out the first Mac extensions that expand their functionality. Seeing is believing? You will be converted.

Overall verdict
Everything the paid office suites offer you, not only for free but even with some more advanced features, excellent stability, full compatibility with standard formats and an intuitive interface in Italian.

Download 4.0.1 PowerPC from Softonic

          Types of Databases and Their Technologies    
In this era of computers and the internet, databases play a dominant role. Almost all administrative activity in offices and institutions is now integrated into computing systems built around a unified database model. Likewise, online services on the internet cannot be separated from the role of databases. So what kinds of technology are used to manage databases?

Database Server

Below is a list of database technologies, most of which are Relational Database Management Systems (RDBMS):
  • Apache Derby (formerly known as IBM Cloudscape), an open source database engine developed by the Apache Software Foundation. Commonly used in Java programs and for online transaction processing.
  • IBM DB2, a proprietary (commercial) database system developed by IBM. DB2 comes in three variants: DB2 for Linux - Unix - Windows, DB2 for z/OS (mainframe), and DB2 for iSeries (OS/400).
  • Firebird, an open source database engine developed by the Firebird Project. Commonly run on Linux, Windows and various Unix variants.
  • Microsoft SQL Server, a proprietary (commercial) database system developed by Microsoft, although a freeware edition is also available. Commonly used on the various versions of Microsoft Windows.
  • MySQL, an open source database engine developed by Oracle (previously Sun and MySQL AB). It is the most widely used database engine in the world and is commonly deployed for web applications.
  • Oracle, a proprietary (commercial) database system developed by Oracle Corporation. It comes in several variants aimed at different segments and uses.
  • PostgreSQL (or Postgres), an open source database engine developed by the PostgreSQL Global Development Group. Available on many operating system platforms, including Linux, FreeBSD, Solaris, Windows, and Mac OS.
  • SQLite, an open source database engine developed by D. Richard Hipp. Known for its very small program size, it is commonly embedded in other applications, for example in web browsers.
  • Sybase, a proprietary (commercial) database system developed by SAP. Targeted at mobile application development.
  • WebDNA, a freeware database engine developed by WebDNA Software Corporation. Designed for use on the web.
  • Redis, an open source database engine developed by Salvatore Sanfilippo (sponsored by VMware). Intended for networked use.
  • MongoDB, an open source database engine developed by 10gen. Available for many operating system platforms and known to be used by Foursquare, MTV Networks, and Craigslist.
  • CouchDB, an open source database engine developed by the Apache Software Foundation. Focused on use with web servers.
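The point about SQLite being embedded rather than run as a server can be shown concretely. A minimal sketch using Python's built-in sqlite3 module (the table and rows here are invented for the example): the whole database lives in the application's process, with no server to install or connect to.

```python
import sqlite3

# An in-memory database: SQLite needs no server process, which is why it is
# so often embedded inside other applications such as web browsers.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO tickets (title) VALUES (?)",
                 [("Printer broken",), ("VPN down",)])
rows = conn.execute("SELECT id, title FROM tickets ORDER BY id").fetchall()
print(rows)  # [(1, 'Printer broken'), (2, 'VPN down')]
conn.close()
```

Swapping `":memory:"` for a file path gives a persistent single-file database, which is the form most embedded uses take.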


Corey Robin has written a long article arguing that Austrian economic thought and marginalism in general are descended from Nietzsche. Hence, Hayek et al. are a bunch of aristocrats dedicated to oppressing society. I'd be happy to be persuaded that the marginalists -- at least as Robin uses the term, which isn't the only way -- are Nietzschean, but Robin's article didn't do it. Too many strong assertions based on tenuous evidence, and Robin is not exactly an impartial observer. This follow-on John Holbo post -- while exceptional in many ways -- doesn't do it either. Partially because it rests so heavily on a peculiar (I think) reading of this quote from Hayek in The Constitution of Liberty:

To grant no more freedom than all can exercise would be to misconceive its function completely. The freedom that will be used by only one man in a million may be more important to society and more beneficial to the majority than any freedom that we all use.

This does not have to mean that "some people’s freedom is a lot more valuable than other people’s freedom", as Holbo wrote in a previous post and quotes here. In light of the rest of Hayek's work (or even the rest of the passage from which this quote is pulled: see below) it is strange to argue that this passage means anything like what Holbo thinks it means: "Ideally, we would find that one man and even make all others his slaves, if that is what it took to let him exercise his freedom to the fullest. Hayek thus affirms a freedom monster argument somewhat analogous to the classic pleasure monster reductio."

The Hayek quote refers to the exercise of liberty, which may not be universal even if the liberty is extended universally. Hayek is arguing that it does not follow from the fact that the exercise may be non-universal that the liberty should be restricted. Instead, those who would exercise their liberty should be free to do so. In fact, there are a million liberties. None of us can act on all of them, but all of us will act on some of them. Restricting liberties that the majority isn't exercising may be tempting, but it would be wrong to do so since the exercise of liberties leads to the improvement of society. That's the argument.

To get from Hayek to Holbo's interpretation of Hayek you have to take a few steps. First you'd have to ignore the footnote on the very passage Holbo pulls, which contains this quote (from some people I've never heard of): "If there is to be freedom for the few who will take advantage of it, freedom must be offered to the many." How does that imply enslavement of the masses for the benefit of one? Suppose I proposed universal suffrage while acknowledging that many people will stay home on election day. Would that make me anti-democratic? It's a strange argument.

Hayek believes that social progress occurs partially through experimentation, the results of which are ex ante unknowable. The "unknowable" part is extremely important for Hayek. It's how he can simultaneously support a welfare state and public provision of public goods while opposing egalitarian redistribution and social ownership of the means of production. "Knowable" advances can be socially planned, and Hayek was fine with using the power of the state to do so. ("It is the character rather than the volume of government activity that is important.") Unknowable advances cannot be planned but are nevertheless desirable, thus experimentation must be allowed through constitutionalization and encouraged by the preservation of (market) reward for experiments that succeed. Holbo is correct that this is Millian -- J.S. Mill's "utilitarianism" is dynamic, not static: society benefits at time t+1 from the exercise of individual liberty at time t. Whether this is actual utilitarianism or something else is, I suppose, the question -- but wrong when he implies that Mill, by way of Hayek, was Nietzschean.

Hayek seems to believe that the majority of society will choose not to experiment because they are risk averse -- "The freedom that will be used by only one man in a million..." emphasis added -- or because they are exercising some other liberty, but it is socially optimal to have somebody doing some experimenting. Those who are interested in doing so, those who will exercise their liberties while others do not, should be allowed to do so, since society will benefit if they succeed. The rest of the passage in The Constitution of Liberty quote above goes:

The less likely the opportunity, the more serious will it be to miss it when it arises, for the experience that it offers will be nearly unique. It is also probably true that the majority are not directly interested in most of the important things that any one person should be free to do. It is because we do not know how individuals will use their freedom that it is so important. If it were otherwise, the results of freedom could also be achieved by the majority’s deciding what should be done by the individuals. But the majority action is, of necessity, confined to the already tried and ascertained, to issues on which agreement has already been reached in that process of discussion that must be preceded by different experiences and actions on the part of different individuals. 
The benefits I derive from freedom are thus largely the result of the uses of freedom by others, and mostly of those uses of freedom that I could never avail myself of. It is therefore not necessarily freedom that I can exercise myself that is most important for me. It is certainly more important that anything can be tried by somebody than that all can do the same things… What is important is not the freedom that I personally would like to exercise but what freedom some person may need in order to do things beneficial to society. This freedom we can assure to the unknown person only by giving it to all.

How can one (e.g. Holbo) read this and come away thinking Overman? It's more like the open source movement. Which makes sense, since Hayek's view on this goes back to his 1945 essay "The Use of Knowledge in Society". He argues there that because much knowledge is local, and centralized planning cannot incorporate local knowledge, centralized planning will fail. This argument is what he's drawing on to say that the exercise of liberty by some -- who are in possession of local knowledge not available to all -- is not suboptimal.

Now, possibly you could argue that Hayek's view of the masses is too dim (even though he includes himself in them) and that is where the aristocracy comes in and takes us to Nietzsche. Or possibly you could argue that the "knowable" advances are greater than the "unknowable" advances. Indeed, this is the argument against which Hayek dedicated The Constitution of Liberty, which was published in 1960 -- a time, after Sputnik, when Paul Samuelson was predicting that the USSR's GNP per capita would surpass the US's within a generation or so -- so you'd at least be meeting him where he is. The question Hayek is trying to address is whether society will benefit more from planning or from spontaneous emergence. He obviously believes it is the latter. He may be wrong, but not because he's a Nietzschean.

The better argument is the one made by Amartya Sen: it is not necessarily local knowledge which precludes some from exercising certain liberties, but rather material opportunity. It may not be that some will not exercise their ability but that they cannot do so. In this case they are not free. Hayek says almost nothing about this directly, but does say that just because some are free does not mean that all should be enslaved. Those who can exercise their freedom should do so, as this will provide benefit for society.

This is the point, I think, where Hayek goes astray. His argument about the importance of local knowledge and decentralization falls apart when those with local knowledge cannot employ it and opportunity is centralized. His claim that innovations by the few will benefit the many is empirical: sometimes they do, often they do not. His argument that order is spontaneous is contingent, in other words, not a law of nature. Complex systems can, and do, break down. Simply admitting this does not require giving anything else up.

But, again, this is not Nietzschean. Which is why Objectivists do not like Hayek. This is an argument that Hayek should revise his beliefs to include a role for a marginally bigger -- although not fundamentally different -- state, or some other redistributionary apparatus. This is doable using Hayekian language, even if Hayek himself and many of his supporters recommend a minimalist state. Hayek is reconcilable with somewhat-modest forms of social democracy, in other words, and social democracy is reconcilable with a deregulated-but-redistributionary political economy.

But to do that you'd have to admit that Hayek was not quite a moral monster. Corey Robin is dedicated to showing that the right wing is authoritarian in all its guises. He believes that when Hayek writes "Why I Am Not A Conservative" he simply cannot be trusted: he's a reactionary like all the rest, and all the rest are motivated first and foremost by a lust for exploitation and oppression. There is nothing inherently wrong with this intellectual project, and I've learned a lot by following it. But it is inherently limiting at the same time, and it commits one to answers before questions have even been asked.

          Keep Calm and Hack On   

KDE Project:

I've just got back from the Qt Contributors' Summit, and I had a really good time.

I arrived on Wednesday evening and we had arranged to meet in a bar called 'Brauhaus Lemke' in Hackescher Markt which is quite near Alexanderplatz. It did look easier to find on the map than it actually was, but Hackescher Markt is a great place. There is a big square with loads of bars that have seats outside. The Lemke is slightly off the main square.

When I got there we had about one and a half giant tables full of KDE people. I was really impressed with the beer menu; they had four brews which they make themselves. A 'Saison' which means seasonal and could be anything, a Pils, a Weizen beer and a very nice 'Lemke Original' which was amber coloured with an interesting depth of taste, and that was my personal favourite. You really need to go through all four at least twice I would say to get started, and you can get a 'test kit' with small samples of each to do that. They have vegetarian food, and I thought the Swiss Rösti - potato cakes with fried eggs on top - was particularly good.

So for the Desktop Summit I don't think you could go far wrong with arranging to meet people at one of the bars in Hackescher Markt. It's even got 'Hacke..' in the name after all..

The following morning we arrived at the Summit for a morning of keynotes.

The conference was held in an East German 'retro modernist' building (if that is the right word). The rooms were named after space things like 'Vostok' or 'Mars' and there was even a Sputnik sculpture above a mural of 'heroic workers' type stuff at the front. My only criticism was that there were so many attendees that the rooms were packed and the air conditioning couldn't keep up and it got really hot.

After Alex Leisse introduced the conference, the first keynote was from the Nokia guy running the Open Source software program - I didn't take a note of his name. He reassured us that an Open Source plan does actually exist contrary to what you might think from the public reports. One major theme of the conference is that what is really happening in private isn't quite the same as the public image of Nokia although they couldn't tell us anything about what they called 'confidential things.'

Then Lars Knoll gave an account of the plans for Qt5 in detail. This is what I had mainly been looking forward to finding out about, and I was very impressed. They want to make the QML scene graph 3d the main focus of UI development although imperative QPainter based apis for drawing will still be there if you need them (I assume that is in the QtGui module). The biggest change was that QWidgets were being moved out of QtGui into their own module. I don't think the idea was to make them 'legacy only', but instead to optimize memory usage for small devices which don't need them. Other than those major changes most other things were just clean ups and tidying that would be reasonably source compatible.

The aim was to try and move as fast as possible, but not include too much in the 5.0 release, so that neither would quality be compromised nor would feature creep cause delays. From the point of view of KDE we will need to just wait and see, and make sure we don't commit too early. We have plenty of other things to do at the moment, and I don't think we absolutely must switch when Qt 5.0 comes out, and instead just do enough research to have a thorough understanding of what it's about.

The first bindings guy I hadn't met before was Matti who runs the PySide project. He told me he wasn't too technical, but he could relay any issues we came up with for bindings back to the guys in Brazil.

After the keynotes, we had lunch which was a box with sandwiches and stuff in it. Half the sandwiches and food were vegetarian throughout the conference, which was pretty good, and I didn't end up being distracted by feeling really hungry because I couldn't eat any of the food, as had happened once or twice at the recent UDS conference in Budapest.

The first discussion sessions started in the afternoon, and I was particularly interested in one about slots/signals changes in Qt5. I had read a blog on Qt Labs by Olivier Goffart about proposals for new slots/signals (I can't seem to find it via Google at the moment). The proposals looked as though they might be a bit 'boosty', and that they would screw up language bindings projects by changing to entirely statically typed signals/slots. It turned out that wasn't the plan, and in fact the new functor style slots weren't replacements for Qt's QMetaObject based slots, but were in addition to them. Attaching a C++ lambda directly to a signal looked pretty neat, and pretty similar to what we've done with QtRuby with invoking ruby blocks attached to signals.
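The idea behind functor-style connections -- a signal holding a list of callables and invoking each one on emit, so a lambda or block can be attached directly -- can be sketched in a few lines. This is a hand-rolled Python stand-in to illustrate the concept, not Qt's actual implementation or API (the class and names here are invented):

```python
class Signal:
    """Minimal stand-in for a functor-style signal: a list of callables."""
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        # Functor-style connect: any callable works -- a plain function,
        # a bound method, or a lambda, with no metaobject machinery needed.
        self._slots.append(slot)

    def emit(self, *args):
        # Emitting the signal simply invokes every connected callable.
        for slot in self._slots:
            slot(*args)

clicked = Signal()
log = []
clicked.connect(lambda who: log.append(f"clicked by {who}"))
clicked.emit("user")
print(log)  # ['clicked by user']
```

In Qt 5 itself the equivalent is an overload of QObject::connect that accepts a functor or lambda as the receiver; the sketch above just shows why such a connect coexists easily with string-based, QMetaObject slots rather than replacing them.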

In the Thursday evening we had a party and listened to the Troll's house band who did some good covers of sometimes 'dodgy' original material. I probably liked Pierre's trumpet riffs best - they were appropriate, to the point and not overdone. Similar to how I like code I suppose. We had a robot dance from the legendary Knut Yrvin along with some lessons on how to do his basic moves - only feasible for normal humans after quite a few Becks I would say.

On the Friday there was a second QMetaObject session about dynamically generating them. There were some minor issues about how to handle the function that is used by qt_static_metacall(), but mainly it seems pretty good from a bindings point of view and I don't think we're going to need to drastically redesign the slots/signals stuff and it will carry on working much the same.

There was this guy there who talked about Python in the QMetaObjects session, but he had his name badge reversed so I couldn't actually confirm who I suspected it was. After the session I introduced myself and found out that he was indeed Phil Thompson of PyQt fame. I had wanted to meet Phil for a while - every now and then he posts a helpful mail to the kde bindings mailing list and I had been impressed with the PyQt code when I studied it.

We went downstairs and had lunch outside sitting on benches in the sunshine, which was very pleasant. We had a long chat about bindings issues and agreed that Qt4 was mainly fine from the point of view of bindings and Qt5 looked like it was carrying on being much the same.

A common issue was what on earth to do about QML integration. We have some problems about how to do custom QML types in bindings languages, but far more important is the effect of mixing two dynamic languages with different syntaxes in the same application. Phil's approach was to provide basic support for QML for those who wanted to use it, but he is working on a very ambitious sounding Python based declarative language that would be used instead of QML, not as well as QML. I had thought about that for Ruby, but decided that it wasn't really an option. So we'll have to see how it turns out for PyQt.

On the Friday evening I went back to the Lemke bar for some more 'research' and was pleased to find there were a pile of people from the conference already there.

We finished up on Saturday with more follow-on sessions, mainly to expand on what had already been discussed. I had a language bindings session and we went over a few things, especially QMetaObject stuff again with Olivier.

So that was it. I have been impressed with what I've learned about Qt5, I think Berlin is a great place, and I am looking forward to the Desktop Summit. I really, really must get my flight and hotel booked this week.

Oh, one last thing. Why is this blog called "Keep Calm and Hack On"? It is because we were all given t-shirts with that phrase in big green letters, with a pair of crossed swords at the top. Are we all the 'Right Stuff', in other words? I think we will be.

          Organize your multimedia with Banshee   

Banshee is a free, open source multimedia manager that will help you organize and centralize your media. It is loaded with options for sorting, organizing and playing your music and video files, and even has a nifty 'favorites' feature that tracks which files you play most often and remembers them for use in […]


          Using Telegram Channels as an Exclusive Newsletter on Your Smartphone   
Could Telegram Channels become a new form of mobile push newsletter in the future? Telegram is an instant messaging service, one of the few real competitors to WhatsApp in terms of functionality. WhatsApp recently announced that it has reached 1 billion monthly active users on its platform. Telegram currently has a little more than 60 million users worldwide. Compared to the giant WhatsApp, since acquired by Facebook, these are very low numbers. But let's not stop at the numbers; let's look at the potential. What is Telegram? Telegram is an instant messaging service founded in 2013 by the brothers Nikolai and Pavel Durov, the founders of the Russian social network VKontakte. Nikolai created the new MTProto protocol on which Telegram is based, while Pavel provided financial backing and infrastructure through his Digital Fortress fund.

                                          TELEGRAM   WHATSAPP
  Optimized transfer speed                👍         💀
  End-to-end encryption                   👍         💀
  Self-destructing timed messages         👍         💀
  All file types supported                👍         💀
  Access your messages from any device    👍         💀
  Desktop clients (Win, OS X, GNU/Linux)  👍         💀
  Maximum file size                       1.5 GB     12 MB
  Maximum users per chat                  1000       500
  Channel creation                        👍         💀
  Open source                             👍         ...
          Tom Igoe-Open Source Design: Camel or Unicorn?   

Open source development has taken hold in software design and is beginning to show up in electronics hardware design as well. Open source for design, however, lags behind. What incentives do designers need to work on open source projects?


Tom Igoe is an Associate Arts Professor at NYU's Interactive Telecommunications Program (ITP). Coming from a background in theatre lighting design, he teaches courses and workshops in physical computing and networking, exploring ways to allow digital technologies to sense and respond to a wider range of human physical expression. His current research focuses on ecologically sustainable practices in technology development and how open hardware development can contribute to that. Igoe has written two books on physical computing: Physical Computing: Sensing and Controlling the Physical World with Computers, co-authored with Dan O'Sullivan, and Making Things Talk; both have been adopted by digital art and design programs around the world. He is a regular contributor to MAKE magazine on the subject as well. He is also a member of the core development team of Arduino, an open source microcontroller environment built for non-technicians. He is currently realizing a lifelong dream to work with monkeys as well.

Cast: Interaction Design Association

          BetterLabs is looking for talented Ruby on Rails and Ajax developers   
If you have commercial development experience on the web, if you are involved in open source development and are passionate about it, if you can walk the walk in any direction and get stuff done alongside an agile development team, and if you are the right fit for an open source development culture, then we are […]
          Comment on Enlightening Technical Leadership by How Open Source transformed my career « Kartik Subbarao   
[…] Enlightening Technical Leadership […]
          Comment on Open Source and Interdependent IT by Kartik Subbarao   
One of the resources that I mention in the article is Patty Seybold's blog on Outside Innovation: Check it out and I think you'll get a good appreciation for the kinds of collaboration that are possible between employees of a company and its customers.
          The Hedgehog and the Fox   

The parable of the fox and the hedgehog

Excerpt from the article by Hubert Guillaud
In an age of permanent uncertainty, is the future less mapped out than we believe? How do we adapt to what is coming? Such was the subject of a session of talks at the latest Lift conference, held in Geneva from 6 to 8 February 2012.

Venkatesh Rao (@vgr) is the author of Tempo, a book on decision-making, and is preparing another book on the creation and destruction of value. He also blogs for Forbes and Information Week. On the Lift stage (video), he presented a parable about our capacity to adapt (see his presentation), drawing on the opposition between the fox and the hedgehog introduced by the Greek poet Archilochus, taken up by the novelist Leo Tolstoy, and developed by the philosopher Isaiah Berlin in his book "The Hedgehog and the Fox".

Venkatesh Rao
Image: Venkatesh Rao on the Lift stage, photographed by Ivo Napflin.
For Venkatesh Rao, the best definition of resilience is being able to get back up every time you are knocked down and thrown to the ground, as in a boxing match. The image is perhaps a little simplistic, but it has the advantage of being clear.
"How is one resilient? Why are some people more resilient than others? Why do some get back up when they are knocked down while others do not?" To understand this, you have to understand that we do not all react to the unexpected in the same way. Hence the parable. While the fox knows many tricks for adapting to a situation and turning it around, the hedgehog knows only one: rolling into a ball and putting out its spines. Their forms of resilience are different. After being knocked down, the fox gets up and sets off on a new adventure, to do something else, whereas the hedgehog goes back to what it was doing.
If the fox, the turkey and the hedgehog were stonecutters, the fox would build the cathedral, the hedgehog would be the best and most diligent stonecutter, and the turkey would confess that it doesn't really know why it is there. If you set a pack of dogs on these animals, the fox would start running, the hedgehog would roll into a ball... and the turkey would tell itself there is no point in running...
These parables, Venkatesh Rao argues, teach us important things about resilience and our reactions to what we do not understand, but also about leadership and collaboration. Still, it is not clear whether it is better to be a fox or a hedgehog (though presumably it is better not to be the turkey).
The fox's resilience requires knowing how to adapt. The fox creates by accepting contradictions, in the manner of Indian jugaad, those "improvised arrangements" or "workarounds", the mark of the creativity that Indians without resources use to solve the problems of everyday life, as when they adapt vehicles to their local needs, having a horse pull a car, for example, or hitching a plough to a motorcycle. If hedgehogs are more predictable than foxes, the real talent is knowing how to blend the two, like Tolstoy, who had the values of a hedgehog and the talents of a fox.
It is within a matrix of talents and values that these two forms of resilience must be understood, Venkatesh Rao explains. While the fox thinks it is open-minded, the hedgehog thinks the fox is unreliable. The hedgehog has a fairly simple model of thought that leads it to export its way of life everywhere, so that it tends to have a single identity whatever situation it faces. It creates by eliminating contradictions, which means it tends to be a poor forecaster, but it knows how to learn fundamentals. Finally, it is autonomous and has constant values.
The fox's mind, by contrast, has a multiple model of thought: it creates by accepting contradictions. It has a changing identity and adapts to wherever it finds itself. While it is better at prediction, it tends to flee from problems and rush toward opportunities. It prefers the transient, has values based on immediate utility, has a disposable mindset and enjoys hacking.
Every hedgehog is resilient in the same way, while every fox is resilient in its own way. If hedgehogs are predictable, foxes therefore are not.
To the hedgehog, the fox is selfish, irresponsible, scheming, gullible... Conversely, to the fox, the hedgehog is doctrinaire, hypocritical, naive, miserly, boring and predictable. For each of them, values and talents differ, and they do not perceive each other in the same way. The hedgehog sees itself as consistent, conscientious, responsible and skeptical, while the fox sees itself as open, adaptable, interesting, adventurous and imaginative.
To adopt the fox's resilience, Venkatesh Rao argues, you should build on contradictions rather than on values: enterprise (a reference to the starship Enterprise from Star Trek, whose mission was to go where no man had gone before) means an adventurous voyage before it means an institution. You should also preserve memory rather than identity: foxes prefer memories, even broken ones, to keeping things. You should also look for patterns rather than truth: globalization is better understood through the model of the shipping container than through the axiom that the world is flat. Finally, it is better to pursue adventure than love: better to favor an open source satellite project over "the most unassailable weapon in the world", a reference to two projects by the South Korean artist Hojun Song (who had been a guest at a previous Lift, where he spoke about precisely these two projects).
"The fox and the hedgehog are complementary twins, a bit like yin and yang. If you try to act like the fox, inevitably the hedgehog in you will emerge. That is, if you build on contradictions, the question of values will emerge. If you seek to preserve memory, the question of your identity will emerge. If you look for patterns, the question of truth will emerge. If you seek adventure, the question of love, of passion for what you do, will inevitably emerge."
A parable, at times wordy or grandiloquent, about managing people and organizations, but in the end more reflective than it might have seemed, and one which ultimately suggests that the values on which much of today's management discourse is founded may need to become a little more complex to face hard times.

          Android and Chrome   
As someone who follows browser development pretty closely, a friend sent along this perfect summation of Google's open source strategy, à la Android and Chrome, by Dan Lyons (formerly known as Fake Steve Jobs): [Android is] the desktop Linux...
          Teaching IronRuby math tricks   
The first release of IronRuby brought a lot of buzz, and in my opinion rightly so. However, if you expect it to run Rails while seamlessly integrating ASP.NET widgets today, I must say you're delusional. Have you tried to run it at all? People, there are reasons why it is versioned Pre-Alpha.

In the post titled "Is IronRuby mathematically challenged?", Antonio Cangiano rightfully complains of these fads. He writes:

Well, I was very interested in trying out IronRuby, but I immediately discovered that it is very crippled from a mathematical standpoint, even for a pre-alpha version. (...) However after running some simple tests, it is clear that a lot of work is required in order for this project to live up to the buzz that is being generated online about it, when you take into account that even some simple arithmetic functionalities are either flawed or missing altogether.

To be fair, the focus of this release is a working method dispatch core and the built-in Array and String classes, as John Lam himself wrote. But it is understandable for people to worry that these problems may be difficult to remedy. Fortunately, that is not the case, as I will demonstrate below.

Remember, IronRuby is open source, so you can fix problems yourself. Can't divide two floating-point numbers? It turns out to be as easy as adding a one-line method to the FloatOps class. Big numbers don't work? Conveniently, the DLR provides a high-performance class for arbitrary-precision arithmetic, namely Microsoft.Scripting.Math.BigInteger. This is how Python's long type is implemented in IronPython.
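For reference, here is what standard Ruby does for the operations in question; this is a quick illustrative sketch of my own (not part of the patch), showing the semantics the patched IronRuby should match:

```ruby
# Standard Ruby semantics that the patch aims to reproduce in IronRuby.
puts 1 / 3.0                  # Fixnum / Float promotes to float division
puts 2 ** 3                   # integer exponentiation
puts 1_000_000 * 1_000_000    # exceeds Fixnum, silently promoted to Bignum
```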

Without further ado, here's a small patch (34 lines added, 1 line deleted) to remedy the problems Antonio pointed out. I think you will be able to understand it even if you don't know C#! It's that simple.

If you are using a certain operating system which lacks such a basic tool as patch, I heartily recommend heading to GnuWin32 to get it, and adding it to your PATH. Let's assume that you extracted the zip file to C:\. You need to pass the --binary option to patch because of different line endings; I generated the patch on Linux.

C:\IronRuby-Pre-Alpha1>patch --binary -p1 < patch-math
patching file Src/Ruby/Builtins/Bignum.cs
patching file Src/Ruby/Builtins/FixnumOps.cs
patching file Src/Ruby/Builtins/FloatOps.cs

After that, you need to build ClassInitGenerator. This is necessary because the patch adds new methods to built-in classes.

C:\IronRuby-Pre-Alpha1>cd Utils\ClassInitGenerator

Now that it is built, you need to run it to regenerate Initializer.Generated.cs. There is a batch file to do this, GenerateInitializers.cmd, but for some inexplicable reason it won't work: it goes up one parent directory (..) too many. It seems that they haven't tested this.

C:\IronRuby-Pre-Alpha1\Utils\ClassInitGenerator>cd ..\..
C:\IronRuby-Pre-Alpha1>Bin\Debug\ClassInitGenerator > Src\Ruby\Builtins\Initializer.Generated.cs

Now to the main build.


Let's test! Did IronRuby learn the math we taught?

C:\IronRuby-Pre-Alpha1>cd Bin\Debug

IronRuby Pre-Alpha ( on .NET 2.0.50727.832
Copyright (c) Microsoft Corporation. All rights reserved.
>>> 1/3.0
=> 0.333333333333333
>>> 1.0/3.0
=> 0.333333333333333
>>> 2**3
=> 8
>>> 1_000_000 * 1_000_000
=> 1000000000000
>>> exit

It did!
          Global Big Data Infrastructure Market Growth, Drivers, Trends, Demand, Share, Opportunities and Analysis to 2020   

Global Big Data Infrastructure Market 2016-2020, has been prepared based on an in-depth market analysis with inputs from industry experts. The report covers the market landscape and its growth prospects over the coming years. The report also includes a discussion of the key vendors operating in this market.

Pune, Maharashtra -- (SBWIRE) -- 02/09/2017 -- The Global Big Data Infrastructure Market Research Report covers the present scenario and the growth prospects of the Global Big Data Infrastructure Industry for 2017-2021. The report has been prepared based on an in-depth market analysis with inputs from industry experts. It covers the market landscape and its growth prospects over the coming years, and includes a discussion of the key vendors operating in this market.

Big data refers to a wide range of hardware, software, and services required for processing and analyzing enterprise data that is too large for traditional data processing tools to manage. In this report, we have included big data infrastructure, which includes mainly hardware and embedded software. These data are generated from various sources such as mobile devices, digital repositories, and enterprise applications, and their size ranges from terabytes to exabytes. Big data solutions have a wide range of applications such as analysis of conversations in social networking websites, fraud management in the financial services sector, and disease diagnosis in the healthcare sector.

Report analysts forecast the Global Big Data Infrastructure market to grow at a CAGR of 33.15% during the period 2017-2021.

Browse more detailed information about the Global Big Data Infrastructure report at:  

The Global Big Data Infrastructure Market Report is a meticulous investigation of current scenario of the global market, which covers several market dynamics. The Global Big Data Infrastructure market research report is a resource, which provides current as well as upcoming technical and financial details of the industry to 2021.

To calculate the market size, the report considers the revenue generated from the sales of Global Big Data Infrastructure globally.

Key Vendors of Global Big Data Infrastructure Market:
- Dell
- HP
- Fusion-io
- NetApp
- Cisco


Other prominent vendors
- Intel
- Oracle
- Teradata

And many more……


Get a PDF Sample of Global Big Data Infrastructure Research Report at:  

Global Big Data Infrastructure market report provides key statistics on the market status of the Global Big Data Infrastructure manufacturers and is a valuable source of guidance and direction for companies and individuals interested in the Global Big Data Infrastructure industry.

Global Big Data Infrastructure Driver:
- Benefits associated with big data
- For a full, detailed list, view our report

Global Big Data Infrastructure Challenge:
- Complexity in transformation of procured data to useful data
- For a full, detailed list, view our report

Global Big Data Infrastructure Trend:
- Increasing presence of open source big data technology platforms
- For a full, detailed list, view our report

Purchase report @  


Geographical Segmentation of Global Big Data Infrastructure Market:
· Global Big Data Infrastructure in Americas
· Global Big Data Infrastructure in APAC
· Global Big Data Infrastructure in EMEA


The Global Big Data Infrastructure report also presents the vendor landscape and a corresponding detailed analysis of the major vendors operating in the market. Global Big Data Infrastructure report analyses the market potential for each geographical region based on the growth rate, macroeconomic parameters, consumer buying patterns, and market demand and supply scenarios.

Have any query? ask our expert @

Key questions answered in Global Big Data Infrastructure market report:
- What are the key trends in Global Big Data Infrastructure market?
- What are the Growth Restraints of this market?
- What will the market size & growth be in 2020?
- Who are the key manufacturer in this market space?
- What are the Global Big Data Infrastructure market opportunities, market risk and market overview?
- What is the revenue of the Global Big Data Infrastructure market in previous and coming years?

Get Discount on Global Big Data Infrastructure Research Report at:

The report then estimates 2017-2021 market development trends of Global Big Data Infrastructure market. Analysis of upstream raw materials, downstream demand, and current market dynamics is also carried out. In the end, the report makes some important proposals for a new project of Global Big Data Infrastructure market before evaluating its feasibility.

And continued….

About Absolute Report:

Absolute Reports is an upscale platform that helps key personnel in the business world strategize and take visionary decisions based on facts and figures derived from in-depth market research. We are one of the top report resellers in the market, dedicated to bringing you an ingenious concoction of data parameters.

For more information on this press release visit:

Media Relations Contact

Ameya Pingaley
Absolute Reports
Telephone: +14085209750
Email: Click to Email Ameya Pingaley

          Internship: Assignments for a PHP Programmer/Developer in Nieuwegein   
We have a product focused mainly on content management (CMS) and document management (DMS). It is fully web-based and is built in PHP, .NET and Java. The aim of the internship assignments is to improve the product and extend it with new technology so that it can be offered in a broader way. The assignment will consist of research and programming/implementing various package extensions. Before products and extensions of the package are offered to customers, they must be tested in a test environment.

Available assignments:
- Develop a tablet (mobile) app to view and review documents on an iPad or Android tablet, using technologies such as HTML 5 and Bootstrap, or possibly platform-specific development environments.
- Develop AutoCAD extensions for integration with OpenIMS DMS (saving to the DMS, links between drawings).
- Realize an integration with the API of to be able to provide documents with a digital signature via that platform.
- Integrate the open source statistics tool PIWIK with the OpenIMS platform to provide insight into the use of the system.
- Extend the existing Outlook integration so that full document management can take place within Outlook.
- Implement new field types based on Bootstrap, such as facilities to tag colleagues with @ or to add tags to messages with #. ...
          Open Source versus Free Software from a Marketing Perspective   
Via Sandro Grogans comes an interesting interview / discussion from about the use of the phrases “open source” and “free software” and the need to tailor the message to the audience. Bruce Perens (co-founder of the Open Source Initiative) and Shane Coughlan (from FSF Europe): Perens essentially calls the exclusion or downplaying of Richard...
          Samsung Continuum Source Code Available For Download   

samsung continuum

It's been less than two weeks since the phone launched, but the Continuum's source code has already been thrown up on Samsung's Open Source Release Center. Regular users won't get much use out of it, but developers may be able to, and it should certainly help out with the development of custom ROMs for the device. Perhaps the awesome folks over at XDA could even find some interesting uses for the second "ticker" display?

Read More

Samsung Continuum Source Code Available For Download was written by the awesome team at Android Police.

          Fake News From Fossbytes and Techworm   

          Artificial intelligence/Machine learning   
  • Is your AI being handed to you by Google? Try Apache open source – Amazon's AWS did

    Surprisingly, the MXNet Machine Learning project was this month accepted by the Apache Software Foundation as an open-source project.

    What's surprising about the announcement isn't so much that the ASF is accepting this face in the crowd to its ranks – it's hard to turn around in the software world these days without tripping over ML tools – but rather that MXNet developers, most of whom are from Amazon, believe ASF is relevant.

  • Current Trends in Tools for Large-Scale Machine Learning

    During the past decade, enterprises have begun using machine learning (ML) to collect and analyze large amounts of data to obtain a competitive advantage. Now some are looking to go even deeper – using a subset of machine learning techniques called deep learning (DL), they are seeking to delve into the more esoteric properties hidden in the data. The goal is to create predictive applications for such areas as fraud detection, demand forecasting, click prediction, and other data-intensive analyses.

  • Your IDE won't change, but YOU will: HELLO! Machine learning

    Machine learning has become a buzzword. A branch of Artificial Intelligence, it adds marketing sparkle to everything from intrusion detection tools to business analytics. What is it, exactly, and how can you code it?

  • Artificial intelligence: Understanding how machines learn

    Learning the inner workings of artificial intelligence is an antidote to these worries. And this knowledge can facilitate both responsible and carefree engagement.

  • Your future boss? An employee-interrogating bot – it's an open-source gift from Dropbox

    Dropbox has released the code for the chatbot it uses to question employees about interactions with corporate systems, in the hope that it can help other organizations automate security processes and improve employee awareness of security concerns.

    "One of the hardest, most time-consuming parts of security monitoring is manually reaching out to employees to confirm their actions," said Alex Bertsch, formerly a Dropbox intern and now a teaching assistant at Brown University, in a blog post. "Despite already spending a significant amount of time on reach-outs, there were still alerts that we didn't have time to follow up on."

          TensorFlow 1.0 Coverage   

          Scientific Linux 7.3 Released   
  • Scientific Linux 7.3 Officially Released, Based on Red Hat Enterprise Linux 7.3

    After two Release Candidate (RC) development builds, the final version of the Scientific Linux 7.3 operating system arrived today, January 26, 2017, as announced by developer Pat Riehecky.

    Derived from the freely distributed sources of the commercial Red Hat Enterprise Linux 7.3 operating system, Scientific Linux 7.3 includes many updated components and all the GNU/Linux/Open Source technologies from the upstream release.

    Of course, all of Red Hat Enterprise Linux's specific packages have been removed from Scientific Linux, which now supports Scientific Linux Contexts, allowing users to create local customization for their computing needs much more efficiently than before.

  • Scientific Linux 7.3 Released

    For users of Scientific Linux, the 7.3 release is now available based off Red Hat Enterprise Linux 7.3.

          Open source machine learning tools as good as humans in detecting cancer cases   
  • Open source machine learning tools as good as humans in detecting cancer cases

    Machine learning has come of age in public health reporting according to researchers from the Regenstrief Institute and Indiana University School of Informatics and Computing at Indiana University-Purdue University Indianapolis. They have found that existing algorithms and open source machine learning tools were as good as, or better than, human reviewers in detecting cancer cases using data from free-text pathology reports. The computerized approach was also faster and less resource intensive in comparison to human counterparts.

  • Machine learning can help detect presence of cancer, improve public health reporting

    To support public health reporting, the use of computers and machine learning can better help with access to unstructured clinical data--including in cancer case detection, according to a recent study.

          TensorFlow/Google: Latest Moves   

          FOSS and Artificial Intelligence   

          Accelerating Scientific Analysis with the SciDB Open Source Database System   

Science is swimming in data. And, the already daunting task of managing and analyzing this information will only become more difficult as scientific instruments — especially those capable of delivering more than a petabyte (that’s a quadrillion bytes) of information per day — come online.

Tackling these extreme data challenges will require a system that is easy enough for any scientist to use, that can effectively harness the power of ever-more-powerful supercomputers, and that is unified and extendable. This is where the Department of Energy’s (DOE) National Energy Research Scientific Computing Center’s (NERSC’s) implementation of SciDB comes in.

Read more

          Female SanDisk engineer who made it in Silicon Valley offers her words of wisdom   

At Milpitas-based flash memory storage and software company SanDisk Corp., Nithya Ruff, director of the company’s open source strategy, is a huge driver behind science, technology, engineering and math initiatives to get more girls interested in the field. After growing up in Bangalore, India, Ruff learned to code at North Dakota State University, where she earned her computer science master’s degree.

Read more

Also: 8 ways Portland tech companies can follow through on diversity talk

          NDU Holds “Challenge” to Increase Development, Awareness of Disaster Applications   

To address the growing development and use of mobile applications for emergencies, the Center for Technology and National Security Policy (CTNSP) at the National Defense University (NDU) has launched the "Disaster Apps Challenge." The challenge asks developers not necessarily to build a new application (although they won't turn new ideas away) but to expand upon an open source disaster relief application that's already being used by citizens or first responders.

          Eyeo 2015 – James George   

Imaging The Future – Ten percent of all photographs were taken in just the last year. As culture invents new customs around social image sharing, artists are excavating the deluge of pictures for insights and creative potential. We'll trace groundbreaking projects in an effort to expand the definition of photography and see the big picture of the future of imaging.


Cast: Eyeo Festival // INSTINT

Tags: eyeo2015, eyeo, eyeofestival, data, dataart, dataviz, data visualization, interactive art, computational photography, James George, DepthKit, open source, 3D cinema, CLOUDS, interactive documentary, creative code, Specular, artist and film

          Site Reliability Engineer (Software) - Indeed - Andhra Pradesh   
Serve as subject matter expert for multiple proprietary and open source technologies. As the world’s number 1 job site, our mission is to help people get jobs....
From Indeed - Fri, 23 Jun 2017 07:16:14 GMT - View all Andhra Pradesh jobs
          Divide et impera with C# and a message broker   

I want to share this. A while ago we were discussing the simple divide et impera (divide and conquer) paradigm in computing and its practical application. At its core there is nothing difficult about it: given a task X, you solve it by splitting it into smaller tasks, and those into smaller ones still, recursively. A real example of this paradigm is the famous quicksort: taking a middle value, called the pivot, the first pass moves the elements of the array to be sorted to the left if they are smaller than the pivot and to the right if they are larger. These two sub-arrays are then split again, the first around its own pivot, the second around another; the partitioning then starts over, moving elements to one side or the other according to the pivot value, so that at the end of this pass there are four arrays. If the sort is not yet complete, these four arrays are split into eight smaller ones, each with its own pivot and the corresponding moves... and so on until the array is sorted (more information on the Wikipedia page, where the algorithm is also illustrated graphically). Taking the example code from the Wikipedia page, I can build the C# version:

static List<int> QuickSortSync(List<int> toOrder)
{
    if (toOrder.Count <= 1)
    {
        return toOrder;
    }
    // Pick the middle element as the pivot.
    int pivot_index = toOrder.Count / 2;
    int pivot_value = toOrder[pivot_index];
    var less = new List<int>();
    var greater = new List<int>();
    for (int i = 0; i < toOrder.Count; i++)
    {
        if (i == pivot_index)
        {
            continue;
        }
        if (toOrder[i] < pivot_value)
        {
            less.Add(toOrder[i]);
        }
        else
        {
            greater.Add(toOrder[i]);
        }
    }
    // Recursively sort the two partitions and stitch the result together.
    var lessOrdered = QuickSortSync(less);
    var greaterOrdered = QuickSortSync(greater);
    var result = new List<int>();
    result.AddRange(lessOrdered);
    result.Add(pivot_value);
    result.AddRange(greaterOrdered);
    return result;
}

Even though it is not optimized, that does not matter for the purpose of this post: it does its dirty work and that is enough. Running it, you can see the array of integers before and after sorting:

To improve this version we could use asynchronous calls and multiple threads: after all, even the first split described above, which returns two arrays, can be processed by two separate threads, each working on its own sub-array. At the next split we could use yet more threads. With a multi-core processor available, we would immediately get significant performance gains compared with the single-threaded version above. I have already written at length about the multithreaded approach in another post of mine, and on this site you can find plenty more information. Of course, the supposed cure for every ill is the ability to use all the cores of your machine and a multitude of parallel threads. But how far can that be pushed? Threads are not infinite, and neither are the cores of a CPU. Some novices - allow me the term - often think that parallel processing is the panacea for every problem. I have many operations to run in parallel, how do I handle them in code? Easy, a flood of parallel threads - and before .NET Framework 4 with its Tasks, and async/await in Framework 4.5, that seemed one of the easiest techniques to use, and perhaps also one of the most abused. On a first reading, the newcomer often gets the following question wrong:

Suppose we have a single-core CPU (to keep things simple), and that to execute N operations our program takes exactly 40 seconds. If we modified this program to use 4 parallel threads, how long would it now take to run the whole computation?

Answering hastily, you might say 10 seconds. If you know how a CPU and its cores work and did not let the question fool you, you will have answered correctly: ~40 seconds! The computing power of a CPU stays the same, and splitting the work across more threads performs no miracles. Only with 4 cores would the computation finish in 10 seconds.

But why this digression? Because if we wanted to push the divide-et-impera paradigm even further for the little sorting program above, so that it could, in theory, overcome the limits of the machine (CPU and memory), which road could we - I repeat, could we - take? The solution is easy to guess: add another machine to share the work; still not enough? We can add as many machines, and as much computing power, as the job requires.

To solve these kinds of problems, and to then be able to scale a project almost without limit, splitting a process into microservices is one of the most popular solutions, as is knowing the famous scale cube of Martin L. Abbott and Michael T. Fisher.

Setting aside the little sorting program and extending the discussion to applications of some weight, let us define a small web application that acts as a blog: a list of posts, the detail of a single post, a search feature and the use of tags. At its base we would have a database where the posts are stored, and then a web application with the classic three layers: presentation, business logic and data access. In the cube, this kind of application sits at the bottom-left corner. It is a monolithic application, where all the functions are enclosed within a single process. If we wanted to move along the X axis, we would duplicate this web application across multiple processes and servers. The advantages would be immediately obvious: should the application become successful and the hardware of the machine it runs on no longer suffice, installing it on several servers would absorb the increased load (leaving aside scaling the database). The Y axis of the cube is the most interesting one: along it, we move the various functions of the web application into small independent modules. Sticking with the blog example, we could keep the presentation layer on the web server but split the business logic into several independent modules; the presentation layer would then query one module or another to request the list of posts and, as soon as the response arrives, return the requested data to the client. This point alone shows a considerable advantage: in an IT world increasingly devoted to event-driven and asynchronous programming - just look at the remarkable success of Node.js, toward which almost everyone is moving in one form or another, while async/await is now part of the .NET programmer's daily routine - where nothing must be wasted and no thread should sit idle waiting, this approach allows optimal loading of the servers.
Since I am in a quiz mood: what is "wrong" (note the quotation marks) with the following C# code?

var posts = BizEntity.GetPostList();
var tags = BizEntity.GetTagList();

Come on, it is two lines of code, what could possibly be wrong with them? Hypothetically, the first fetches the list of a blog's posts from the database, while the second fetches the list of tags (this code could be useful for the web application described above). The first list is used to display the long list of posts of our blog, the second to show the tags in use. If you have already shifted your mindset to writing asynchronous code, or if you use Node.js, you will have spotted what is wrong with these two lines: they simply execute two requests sequentially! The thread reaches the first line and blocks there, waiting for the database to respond; once the response arrives, it issues the second request and waits again. Why not launch both requests in parallel instead, and free the thread while waiting for the responses? In C#:

var taskPost = BizEntity.GetPostListAsync();
var taskTag = BizEntity.GetTagListAsync();
Task.WaitAll(new Task[] { taskPost, taskTag });
var posts = taskPost.Result;
var tags = taskTag.Result;

Excellent, this is what we wanted: parallel execution, and threads free to process other requests.
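The same idea can also be expressed with async/await and Task.WhenAll, which frees the calling thread instead of blocking it the way Task.WaitAll does. A minimal, self-contained sketch - BizEntity and its methods are hypothetical stand-ins, simulated here with Task.Delay:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

static class BizEntity
{
    // Simulated data access: each call "takes" 100 ms without blocking a thread.
    public static async Task<List<string>> GetPostListAsync()
    {
        await Task.Delay(100);
        return new List<string> { "post-1", "post-2" };
    }

    public static async Task<List<string>> GetTagListAsync()
    {
        await Task.Delay(100);
        return new List<string> { "tag-a", "tag-b" };
    }
}

class Program
{
    static async Task Main()
    {
        // Both calls start immediately and run concurrently.
        var taskPost = BizEntity.GetPostListAsync();
        var taskTag = BizEntity.GetTagListAsync();

        // await releases the thread until both tasks complete.
        await Task.WhenAll(taskPost, taskTag);

        Console.WriteLine($"{taskPost.Result.Count} posts, {taskTag.Result.Count} tags");
    }
}
```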

Back to the blog example: suppose we now want to add the ability to comment on the posts of our blog. In the monolithic application described at the beginning, we would have to touch the code of the entire project, whereas with the split into smaller independent modules - microservices - we would write an independent module to be installed on one or more servers (remember the X axis), and then connect the other modules that need this data. Finally, the Z axis brings yet another kind of differentiation: we can partition data and functions so that requests can be split, for example, by the year or month a post was published, or by whether it belongs to certain categories, and so on... You did not think that all the pages Google has archived and searches over sit on a single replicated server, did you?

Having explained the famous scale cube in theory (within my limits), one last question remains: what is the glue between all these microservices? The .NET Framework provides a good technology, WCF, to allow communication between processes, whether they are on the same machine, on a battery of servers in a farm, or remote and communicating over the internet. With WCF you can easily move from standard WSDL web services to faster TCP communication and so on. This approach, however, faces an evident limitation, since these communications are direct: supposing we have one machine with the blog's presentation layer, in order to request the posts from the microservice running on a second server it must first of all know WHERE that service is (IP address) and HOW to communicate with it. Once this problem is solved in a simple way (storing the second machine's IP in the web.config, for example, and using a shared interface for the communication), we immediately face another problem: how can we move along the X axis of the cube, adding more machines with the same microservice, so that requests are balanced automatically? We would have to keep the caller continuously up to date on the number of machines offering the service it needs, with notifications of planned and unplanned outages: server maintenance taking the service offline, or a sudden machine failure. In the example above, the presentation layer would have to contain the logic to manage all of this... and so would every microservice of our application... Far too complicated and unmanageable. So why not delegate this job to an external component such as a message broker?

Azure offers its Microsoft Azure Service Bus, very efficient and a great fit when using Microsoft's cloud; in my case my preference goes to RabbitMQ, not least because it is the only one I have worked with in depth. First of all, RabbitMQ is a complete open source message broker that supports every kind of protocol (from AMQP to STOMP) and, above all, has clients for almost every technology: the .NET Framework, Java, Node.js and so on. It can also be installed as a server on the main operating systems: Windows, Linux and macOS. If you do not want to install anything on your development machine just to experiment, you can rely on some free (and limited) services available on the internet. Currently CloudAMQP offers, among its plans, a free tier with a limit of 1,000,000 messages per month:

For very high-end needs there are also clustered plans handling hundreds of thousands of messages per second, but for simple tests the free tier is more than enough. Once registered, you get a complete RabbitMQ service with all the access parameters, usable both through the REST API and from a classic web page:

(The user and password shown here are not real.)

Clicking the orange "RabbitMQ management interface" button gives you the full control panel for any configuration, such as creating queues and exchanges (optional, because all of this can also be done from code):

If you prefer not to use a public service, you can download the version suited to your operating system directly from the RabbitMQ site:

On Windows, every time I have had to install it I have run into a problem starting the service. To check that everything works, just open the Start menu, select "RabbitMQ Command Prompt" and run the command:

rabbitmqctl.bat status

If the response is a long status report, everything is fine; otherwise you will see errors about the RabbitMQ node failing to start and the like. In these cases the first step is to check the content of the cookies created by Erlang (which is installed along with RabbitMQ); the first is at the path:


The second:


If they are identical and the problem persists, from the same terminal started as administrator, run these three commands in sequence:

rabbitmq-service remove
rabbitmq-service install
net start rabbitmq

If even that does not work, all that remains is to trust Saint Google. If we want the web interface to be available in the locally installed version, we must run these commands:

rabbitmq-plugins enable rabbitmq_management
rabbitmqctl stop
rabbitmq-service start

It will now be possible to open a browser and reach the web interface at:


Username: guest, password: guest.

Now, whether you used a free service on the internet or installed everything on your own machine, for a quick test you can go to the Queues tab and create a new queue, into which you can insert and read messages directly from this interface. Fine, but what are queues and exchanges? In RabbitMQ (and in any other message broker) there are three main components:

  • The exchange, which receives messages and routes them to a queue; this component is optional.
  • The queue, which is the actual queue where messages are stored.
  • The binding, which ties an exchange to a queue.

As said above, once a queue is created, you can insert messages and read them from code. That is all. Nothing complicated. A queue can have several properties; the main ones:

  • Durability: we can make RabbitMQ persist messages to disk, so that if the machine restarts, any messages still waiting to be processed are not lost.
  • Auto-delete: a queue can be configured so that, as soon as all the connections attached to it are closed, it is automatically deleted.
  • Exclusive: the queue accepts only one process as its consumer, but anyone can add items to it.
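These properties map directly onto the flags of QueueDeclare in the .NET client. A minimal sketch, assuming a broker on localhost; the queue name is invented for the example:

```csharp
using RabbitMQ.Client;

class Program
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare(
                queue: "orders",
                durable: true,      // persisted to disk, survives a broker restart
                exclusive: false,   // true = only the declaring connection may consume
                autoDelete: false,  // true = dropped when the last connection is closed
                arguments: null);
        }
    }
}
```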

As written above, from code we can connect directly to a queue and send messages, while other processes take them off the queue and, if needed, process them. Of course, the real strength of message brokers does not stop here. Using an exchange lets us write rules for delivering messages to the queues attached through the relevant bindings. The exchange gives us four delivery modes:

  • Direct: by specifying an exact routing key, our message is delivered to that and only that queue bound to the exchange with that binding, as in the figure below:
  • Topic: wildcard characters can be used in the routing key definition so that a message is delivered to one or more queues matching the topic. A simple example: an exchange can be bound to several queues; if there were two, both with the routing key #.message, then sending a message in topic mode with either of the routing keys a1.message or qwerty.message would make both queues receive it.
  • Headers: instead of the routing key, the header fields of the message are examined.
  • Fanout: every bound queue receives the message.
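As a hedged sketch of how these modes look in code with RabbitMQ.Client (exchange and queue names are invented for the example, and a broker on localhost is assumed):

```csharp
using RabbitMQ.Client;

class Program
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.ExchangeDeclare("ex.direct", ExchangeType.Direct);   // exact routing-key match
            channel.ExchangeDeclare("ex.topic", ExchangeType.Topic);     // wildcard keys (* and #)
            channel.ExchangeDeclare("ex.headers", ExchangeType.Headers); // match on header fields
            channel.ExchangeDeclare("ex.fanout", ExchangeType.Fanout);   // copy to every bound queue

            // A binding ties a queue to an exchange, optionally through a routing key:
            channel.QueueDeclare("q.info", false, false, true, null);
            channel.QueueBind("q.info", "ex.direct", "log.info");
        }
    }
}
```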

Another detail not to underestimate when using message brokers is the reliability of message delivery and reception. While a lost log entry may be of minor importance and tolerable (a machine restart or any external cause), losing the transaction for a booking or a payment causes serious trouble. RabbitMQ supports message acknowledgement: the broker sends a message to one of our processes, which handles it, but does not delete it from the queue until the process sends back a command confirming it can be removed. If, in the meantime, that process dies and its connection to the broker drops, the message is delivered to the next available process.

For a quick test from the interface, go to the "Queues" section and create a new queue named "TestQueue":

After clicking "Add queue", our new queue appears in the list on the page. You could also change the durability and the other queue properties mentioned earlier, but you can leave everything as it is and move on. Now create an exchange named "ExchangeTest" from the "Exchanges" section, with type "Direct":

Now let us connect the exchange and the queue created earlier. In the table on the same page you will notice that our exchange has appeared. Clicking on it gives us the chance to define the binding:

If everything is correct, a new image appears showing the link.

Now, on the same page, open the "Publish message" section, enter the routing key defined earlier and some test text, then click "Publish message":

If everything went well, a message on a green background confirms that the message was delivered to the queue. To verify, go to the "Queues" section, where the queue now shows one message:

In the lower part of the page, under "Get Message", you can read and delete the message.

Fine, all simple and pretty... but what if I wanted to do it from code? The simplest communication mode is one-way: one process sends a message to a queue and another process reads it (in the attached solution these are the projects Test1A and Test1B). First of all, you need to add a reference to the RabbitMQ.Client library, available on NuGet. Here is the code that waits for messages on the queue (it automatically creates the queue Example1; in the solution linked at the end of this post the project is named Example1A). First, the code that reads and drains the queue:

const string QueueName = "Example1";

static void Main(string[] args)
{
    var connectionFactory = new ConnectionFactory();
    connectionFactory.HostName = "localhost";
    using (var Connection = connectionFactory.CreateConnection())
    {
        var ModelCentralized = Connection.CreateModel();
        ModelCentralized.QueueDeclare(QueueName, false, true, false, null);
        QueueingBasicConsumer consumer = new QueueingBasicConsumer(ModelCentralized);
        string consumerTag = ModelCentralized.BasicConsume(QueueName, true, consumer);
        Console.WriteLine("Wait incoming message...");
        while (true)
        {
            var e = (RabbitMQ.Client.Events.BasicDeliverEventArgs)consumer.Queue.Dequeue();
            string content = Encoding.Default.GetString(e.Body);
            Console.WriteLine("> {0}", content);
            if (content == "10")
            {
                break;
            }
        }
    }
}

ConnectionFactory lets us create the connection to our RabbitMQ server (here localhost; the username and password guest are used automatically). In the QueueDeclare call on ModelCentralized we specify the queue name; the three boolean values that follow state whether the queue is durable (messages are saved to disk and recovered after a restart), exclusive (only the creator of the queue may read its content) and autoDelete (the queue deletes itself when the last connection to it is closed). Finally, the consumer object's Dequeue call blocks the process's thread and waits for queue content; when the first message arrives it takes its body (this library returns it as a byte array), converts it to a string and prints it on screen.

The sending code (Example1B):

// The code identical to the previous example is omitted,
// up to the creation of ModelCentralized:
var ModelCentralized = Connection.CreateModel();
Console.WriteLine("Send messages...");
IBasicProperties basicProperties = ModelCentralized.CreateBasicProperties();
byte[] msgRaw;
for (int i = 0; i < 11; i++)
{
    msgRaw = Encoding.Default.GetBytes(i.ToString());
    ModelCentralized.BasicPublish("", QueueName, basicProperties, msgRaw);
}
Console.Write("Enter to exit... ");
Console.ReadLine();

The rest of the code is just the instantiation of the objects needed to send messages (without setting any particular properties); our message is converted to a byte array and BasicPublish actually sends it to the queue QueueName (the first parameter, an empty string, is the name of the exchange to use, if any; here we send straight to the queue, so no exchange is needed). The code sends a sequence of numbers to the queue, and if after running it you check the web application seen earlier, you will see that the queue "Example1" contains 11 messages.

The result:

Let us now introduce the exchange, sending messages through a private queue. We also configure the queue so that it is up to us to send the message acknowledgement. The code gets only slightly more complicated for the console application waiting for messages (Example2A):

// Define the exchange name:
const string ExchangeName = "ExchangeExample2";
// The queue name is no longer needed: RabbitMQ creates it dynamically with a random name.
// The code stays the same as before, up to the creation of ModelCentralized:
var ModelCentralized = Connection.CreateModel();
string QueueName = ModelCentralized.QueueDeclare("", false, true, true, null);
ModelCentralized.ExchangeDeclare(ExchangeName, ExchangeType.Fanout);
ModelCentralized.QueueBind(QueueName, ExchangeName, "");
QueueingBasicConsumer consumer = new QueueingBasicConsumer(ModelCentralized);
string consumerTag = ModelCentralized.BasicConsume(QueueName, false, consumer);
// The rest of the code, waiting for messages and displaying them, is the same as before.

In the QueueDeclare call we left the name empty because RabbitMQ will assign a random one for us. Its name does not matter for receiving messages, because any other process that wants to send us messages will use the name of the exchange. That is exactly what ExchangeDeclare does: it creates an exchange if it does not already exist, and QueueBind ties the queue to the exchange. The exchange is also declared as Fanout: every queue bound to it will receive every message sent. There is one difference between this code and the previous one: now it is up to us to tell RabbitMQ that we have received and processed the message, and we set that up with this line:

string consumerTag = ModelCentralized.BasicConsume(QueueName, false, consumer);

The second parameter, false, tells the system that we will send the acknowledgement ourselves, which is completed by the following line:

ModelCentralized.BasicAck(e.DeliveryTag, false);

Sending messages does not change much compared with before; only the publish call changes:

ModelCentralized.BasicPublish(ExchangeName, "", basicProperties, msgRaw);

Here we specify the name of the exchange rather than the name of the queue. Starting the two processes, the result is the same as before. But now we can start two instances of the first program and see that the messages are received by both:

We can start as many instances as we like: all of them will receive our messages.

Besides the Fanout mode seen above, an exchange also gives us the Topic mode, in which the link between an exchange and one or more queues can be established through routing keys containing wildcard characters. There are two wildcards: * and #. But they do not allow all the freedom you might imagine. A mistake novices can make is to think that the wildcard characters are to be used when sending messages. That is wrong: they must be used in the binding definition on the exchange. A sent message must always carry a valid (or empty) routing key. First of all, routing keys must be defined as words separated by dots. Example:


If we define two routing keys binding an exchange to two queues like this:

basso.*.maschile *.marroni.*

And we send messages with these routing keys:

basso.azzurri.maschile alto.marroni.femminile alto.azzurri.femminile basso.marroni.maschile azzurri.maschile

The first will be delivered only to the first queue, the second only to the second queue, the third to neither, the fourth to both. The last one, not being made of three words, will match nothing and be discarded.
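To make the * semantics concrete, here is a tiny helper that mimics the matching rule for the asterisk only (RabbitMQ performs this matching server-side; this sketch is purely illustrative):

```csharp
using System;
using System.Linq;

static class TopicMatch
{
    // * matches exactly one word; word counts must therefore agree.
    public static bool Matches(string pattern, string routingKey)
    {
        var p = pattern.Split('.');
        var k = routingKey.Split('.');
        if (p.Length != k.Length) return false;
        return p.Zip(k, (pw, kw) => pw == "*" || pw == kw).All(ok => ok);
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine(TopicMatch.Matches("basso.*.maschile", "basso.azzurri.maschile")); // True
        Console.WriteLine(TopicMatch.Matches("*.marroni.*", "alto.azzurri.femminile"));      // False
        Console.WriteLine(TopicMatch.Matches("*.marroni.*", "azzurri.maschile"));            // False: only two words
    }
}
```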

Besides the asterisk we can use the hash character (#):


The difference is that the asterisk matches exactly one word, while the hash is a full wildcard that matches any word, or any number of words, in its place. The rule above would accept:

basso.azzurri.maschile marroni.maschile magro.alto.marroni.maschile

The example "Example3A" starts two threads with two different routing keys. The code is identical to the previous examples except for these two lines:

ModelCentralized.ExchangeDeclare(_exchangeName, ExchangeType.Topic); ModelCentralized.QueueBind(QueueName, _exchangeName, _routingKey);

In the first we specify that the exchange type is Topic; in the second, besides the queue name and the exchange name, we also pass one of the following routing keys:

*.red small.*

The sending code is the same as in the previous examples except for this line:

ModelCentralized.BasicPublish(ExchangeName, "", basicProperties, msgRaw);
ModelCentralized.BasicPublish(ExchangeName, "", basicProperties, msgRaw);
...

Here is the output screen:

So far we have focused on sending messages in almost all their flavors - the direct exchange is still missing (we will see it in the next example), and I will not cover the headers mode - so it is time to move in the opposite direction and pay more attention to how messages are read from the queue. With fanout and topic messages we saw that we can send messages to several queues at a time, each with a single process attached... but what if we attached several processes to a single queue? Here we reach the most interesting point of using a message broker: when the queue receives messages, it distributes them among all the attached processes:

Here we can see the fair distribution of all the messages among all the processes. Sending messages brings nothing new compared with what we have seen so far: you use the exchange name (in direct mode) and a routing key (not mandatory); with the message in msgRaw the send is simple:

ModelCentralized.BasicPublish(ExchangeName, RoutingKey, basicProperties, msgRaw);

A few novelties appear in the queue-reading example (Example 4A in the downloadable project). The names of the queue, exchange and routing key are defined:

const string ExchangeName = "ExchangeExample4"; const string RoutingKey = "RoutingExample4"; const string QueueName = "QueueExample4";

... and we connect in the usual way:

var ModelCentralized = Connection.CreateModel(); ModelCentralized.QueueDeclare(QueueName, false, false, true, null); ModelCentralized.ExchangeDeclare(ExchangeName, ExchangeType.Direct); ModelCentralized.QueueBind(QueueName, ExchangeName, RoutingKey); ModelCentralized.BasicQos(0, 1, false);

In the queue declaration we have now specified the name: not durable, not exclusive, but with auto-delete. The exchange is declared as Direct. The novelty is the call to BasicQos: here we specify that each process will read one and only one message at a time. Reading the messages works the same way:

var e = (RabbitMQ.Client.Events.BasicDeliverEventArgs)consumer.Queue.Dequeue();

After this parade of RabbitMQ's possibilities, having seen both sending and receiving, it is time to come back down to earth with real-world examples. One process sending a message and another, completely independent, taking it off the queue and displaying it is fine only in demonstrations and demos: at most it can be useful for sending log messages and little else. In the real world, one process calls another to request data. The request/reply pattern is what we need. To recreate it with RabbitMQ, building on the examples seen so far, we must first create a public queue where requests are sent. And now the problem: how can the service reply to the process that requested the data? The solution is simple: the requesting process must have its own queue where the replies are deposited. So far we have not looked in detail at the messages received from and sent to RabbitMQ. We can set several properties that are useful for processing the communication afterwards. Here is the sending code seen so far, with a few extra properties:

IBasicProperties basicProperties = ModelCentralized.CreateBasicProperties();
basicProperties.MessageId = ...
basicProperties.ReplyTo = ...;
msgRaw = Encoding.Default.GetBytes(...);
ModelCentralized.BasicPublish(ExchangeName, "", basicProperties, msgRaw);

MessageId and ReplyTo are two freely usable string properties. It is easy to guess that ReplyTo can be used to specify the queue of the requesting process. And MessageId? We can use it to specify which request we are replying to. The examples "Example5A" and "Example5B" put together everything said so far. "Example5A" is the process that will process our data, in this case a trivial addition. The most important part is the one that waits for the request and sends the reply:

var e = (RabbitMQ.Client.Events.BasicDeliverEventArgs)consumer.Queue.Dequeue();
IBasicProperties props = e.BasicProperties;
string replyQueue = props.ReplyTo;
string messageId = props.MessageId;
string content = Encoding.Default.GetString(e.Body);
Console.WriteLine("> {0}", content);
int result = GetSum(content);
Console.WriteLine("< {0}", result);
var msgRaw = Encoding.Default.GetBytes(result.ToString());
IBasicProperties basicProperties = ModelCentralized.CreateBasicProperties();
basicProperties.MessageId = messageId;
ModelCentralized.BasicPublish("", replyQueue, basicProperties, msgRaw);
ModelCentralized.BasicAck(e.DeliveryTag, false);

In this code we take the name of the requester's queue and the MessageId that identifies the call. Using a new IBasicProperties object (we could have reused the same one, but this way the usage is clearer), we set the MessageId property and send the reply to the queue name taken from the request.

Nothing too complicated so far. The trickier part is the process that calls this service, because it must at the same time create its own private, exclusive queue and send the requests to the public queue. Since a synchronous call is not possible (and would be absurd), I will use two threads: one sending the requests and a second one for the replies. To manage the requests we will use a dictionary storing the MessageId and the request:

messageBuffers = new Dictionary<string, string>();
messageBuffers.Add("a1", "2+2");
messageBuffers.Add("a2", "3+3");
messageBuffers.Add("a3", "4+4");

Then a made-up name is defined for the private queue where the service must send the replies:

QueueName = Guid.NewGuid().ToString();

Sending the requests looks like this (as already seen):

foreach (KeyValuePair<string, string> item in messageBuffers)
{
    IBasicProperties basicProperties = ModelCentralized.CreateBasicProperties();
    basicProperties.MessageId = item.Key; // a1, a2, a3
    basicProperties.ReplyTo = QueueName;
    msgRaw = Encoding.Default.GetBytes(item.Value); // 2+2, 3+3, 4+4
    ModelCentralized.BasicPublish(ExchangeName, "", basicProperties, msgRaw);
}

And now the thread for the replies:

while (true)
{
    var e = (RabbitMQ.Client.Events.BasicDeliverEventArgs)consumer.Queue.Dequeue();
    string content = Encoding.Default.GetString(e.Body);
    string messageId = e.BasicProperties.MessageId;
    Console.WriteLine("{0} = {1}", messageBuffers[messageId], content);
    ModelCentralized.BasicAck(e.DeliveryTag, false);
}

Very simple: once the reply sent by RabbitMQ is read, we take its MessageId, use it to look up the text of the original request, and pair the request with its correct answer (here just for display).

In this case, too, we can start several processes waiting to be called. A process may be on the same machine, or it could be on the other side of the planet: the only rule for it to answer a request is that it is reachable and connected to RabbitMQ. At this point it is easy to see the potential of this road: a message broker in the middle and one or more machines connected to it, running dozens of microservices, each responsible for one or more functions. Nothing prevents us from putting, alongside the addition function shown above, a service that returns the list of articles for an e-commerce site. We can create another microservice to manage users and their shopping carts. And the beauty is that we can install them on a single machine or across a web farm with dozens of servers. Moreover, as needs grow, we can install the same microservice on several servers - remember the X axis of the cube?

Let's be honest: this technique has incredible potential, but it also has the classic charm of a customer demo: beautiful as long as it stays simple. Raise the bar even slightly and you discover that just moving from one queue to several queues in an application makes everything terribly confusing and hard to manage. Looking at the code of Example5A you will notice that, to keep it as short as possible, I left everything in a single thread, which is not the optimal version and does not entirely help comprehension; for better performance, separate threads for requests and replies would be advisable, as would they be for handling multiple queues. My advice is to encapsulate all these functions, as I have tried to do in the final example in the solution, named "QuickSortRabbitMQ". Quicksort? Where did we talk about that before? Of course, at the beginning of this post. That is where this whole discussion started, taking us from process distribution all the way to using a message broker. Even if only for teaching purposes, imagine creating a microservice that sorts an array of integers. As we saw, quicksort splits the array into two sub-arrays around a pivot value. What if we passed those two sub-arrays back to the same microservice, and so on, until the array is sorted? The microservice in question waits, thanks to RabbitMQ, for a request addressed to it, or rather to the queue from which it takes the arrays to sort. It then has a second, private queue, where it waits for the sorted sub-arrays that it has itself sent to the main queue. Messy, right? Yes, this is how to call microservices through a message broker recursively, and recursion is exactly what quicksort needs.

To explain with an example: given this array to sort, we send it to the public queue of our sorting microservice:

[4,8,2,6] -> Public queue

On the first pass, our microservice might split the array into two sub-arrays around the pivot 5, which are in turn sent to the public queue:

[4, 2] -> Public queue
[8, 6] -> Public queue

Now the same microservice is invoked twice; it sorts the two numbers in each sub-array and has to return them... yes, but to whom? Simple: to itself... And how? We could use the public queue again, but that raises a far from trivial problem. If you remember the quicksort algorithm, the same method waits for the two arrays it sent, now sorted, and must return their union to whoever called it. So we have to keep track of the two arrays we sent so they can be merged: and how could we do that if this microservice had several active instances, and one array ended up in a process on machine A while the second went to a process on machine B? The process that sends a request MUST BE the one that receives its replies, and we can guarantee that only by creating a private queue in that same process and, using the message properties, routing each reply to the correct queue.

[2,4] -> Private queue of the calling process
[6,8] -> Private queue of the calling process

The calling process also waits on its private queue for both replies, merges them, and returns the result in turn to its own caller, which might again be itself or another process.
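The whole recursive round trip (a public queue for work, a private queue per caller for replies, and a cache that pairs up the two halves before merging) can be simulated in-process. This is a hedged Python sketch of the scheme, not the RabbitMQHelperClass API; all names here are invented.

```python
import itertools
import queue
import random

public_q = queue.Queue()    # shared work queue: arrays waiting to be split
private_q = queue.Queue()   # this process's private queue for sub-results
final_q = queue.Queue()     # private queue that receives the final array
pending = {}                # child id -> parent info plus the halves received
ids = itertools.count()

def publish(data, msg_id, reply_to):
    public_q.put({"id": msg_id, "reply_to": reply_to, "data": data})

def on_public(msg):
    """The microservice: split around a pivot, or reply if trivially sorted."""
    data = msg["data"]
    if len(data) <= 1:
        msg["reply_to"].put({"id": msg["id"], "data": data})
        return
    pivot, rest = data[0], data[1:]
    child_id = next(ids)
    pending[child_id] = {"parent": (msg["id"], msg["reply_to"]),
                         "pivot": pivot, "halves": []}
    # Both halves share one id, so the replies can be paired up later.
    publish([x for x in rest if x < pivot], child_id, private_q)
    publish([x for x in rest if x >= pivot], child_id, private_q)

def on_private(msg):
    """Merge once the second of the two sibling halves has come back."""
    entry = pending[msg["id"]]
    entry["halves"].append(msg["data"])
    if len(entry["halves"]) < 2:
        return
    pivot = entry["pivot"]
    less = next((h for h in entry["halves"] if h and h[0] < pivot), [])
    greater = next((h for h in entry["halves"] if h and h[0] >= pivot), [])
    parent_id, parent_q = entry["parent"]
    parent_q.put({"id": parent_id, "data": less + [pivot] + greater})
    del pending[msg["id"]]

arr = [random.randrange(1, 101) for _ in range(100)]
publish(list(arr), "root", final_q)
while final_q.empty():              # single-threaded stand-in for the threads
    if not public_q.empty():
        on_public(public_q.get())
    else:
        on_private(private_q.get())
result = final_q.get()["data"]
assert result == sorted(arr)
```

A real deployment would replace the three Queue objects with RabbitMQ queues and the event loop with consumer threads, but the bookkeeping stays the same: sibling halves share a message id, and a pending entry is merged when its second reply arrives.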

You can already imagine the complexity of writing code that manages this mess across multiple threads. To simplify things, I created a small library that autonomously creates the threads it needs and, through events, delivers the messages coming from the message broker to the process. In the sample solution it is in the "RabbitMQHelperClass" project. This library is used by the "QuickSortRabbitMQ" project. We have reached our destination: here is the console application that uses the message broker to exchange the "slices" of array to be sorted with quicksort. The first part is simple: after creating an array of 100 elements and filling it with random integers from 1 to 100, we instantiate the class that will make our job easier (or at least tries to).

using (rh = new RabbitHelper("localhost"))
{
    rh.AddPublicQueue(queueName, exchangeName, routingKey, false);
    var privateQueueThread = rh.AddPrivateQueue(); // "QueueRicorsiva"
    privateQueueName = privateQueueThread.QueueInternalName;

Here is the call that creates the public queue (where requests for arrays to be sorted will arrive). Then a private queue is created, which will be used to return the sorted array to the caller.

    var privateQueueResultThread = rh.AddPrivateQueue(); // "QueueFinale"
    privateQueueNameResult = privateQueueResultThread.QueueInternalName;

Here one more private queue is created: this is the one that will hold the final result at the end of the recursive cycle (unlike the public queue, which is unique, there can be more than one private queue). Time to wire up the events:

string messageId = RabbitHelper.GetRandomMessageId();
rh.OnReceivedPublicMessage += Rh_OnReceivedPublicMessage;
privateQueueThread.OnReceivedPrivateMessage += Rh_OnReceivedPrivateMessage;
privateQueueResultThread.OnReceivedPrivateMessage += Rh_OnReceivedPrivateMessageResult;

messageId holds a random unique GUID. OnReceivedPublicMessage is the event that fires when a message arrives on the public queue created earlier; the same goes for the two private queues with OnReceivedPrivateMessage. Time to start our sorting procedure:

var msgRaw = RabbitHelper.ObjectToByteArray<List<int>>(arrayToSort);
rh.SendMessageToPublicQueue(msgRaw, messageId, privateQueueNameResult);

As we saw earlier, everything sent through RabbitMQ is serialized into an array of bytes. The ObjectToByteArray function performs this operation (we will come back to it later), and SendMessageToPublicQueue sends the array to be sorted to the public queue, together with the name of the private queue that will wait for the final reply once sorting is complete. Now the class that created the queue-processing threads for us receives the message and passes its content, along with other information, to the event defined earlier, OnReceivedPublicMessage. Here the sorting function seen at the beginning of this post has been rewritten:

private static void Rh_OnReceivedPublicMessage(object sender, RabbitMQEventArgs e)
{
    string messageId = e.MessageId;
    string queueFrom = e.QueueName;
    var toOrder = RabbitHelper.ByteArrayToObject<List<int>>(e.MessageContent);
    Console.Write(" " + toOrder.Count.ToString());
    if (toOrder.Count <= 1)
    {
        var msgRaw = RabbitHelper.ObjectToByteArray<List<int>>(toOrder);
        rh.SendMessageToPrivateQueue(msgRaw, messageId, queueFrom);
        return;
    }

Having read the content of the array, plus the accompanying information (the unique message id and the queue we must reply to), we check whether its size is greater than one; if not, the same array is sent straight back to the private queue as the reply. The rest of the code splits the array's elements into those smaller and those larger than the pivot value and sends the two resulting arrays to the public queue:

var rs = new RequestSubmited
{
    MessageParentId = messageId,
    MessageId = RabbitHelper.GetRandomMessageId(),
    QueueFrom = queueFrom,
    PivotValue = pivot_value
};
lock (requestCache)
{
    requestCache.Add(rs);
}
var msgRaw1 = RabbitHelper.ObjectToByteArray<List<int>>(less);
rh.SendMessageToPublicQueue(msgRaw1, rs.MessageId, privateQueueName);
var msgRaw2 = RabbitHelper.ObjectToByteArray<List<int>>(greater);
rh.SendMessageToPublicQueue(msgRaw2, rs.MessageId, privateQueueName);

RequestSubmited is a class containing only the properties needed to identify a reply sent back by another (or the same) process through the message broker.

Only when the arrays have been reduced to a single element is everything sent to the private queues: Rh_OnReceivedPrivateMessage. This event must be raised twice, once for each half of the array split around the pivot value. The first part of the function simply waits until both halves have arrived before merging them. The RequestSubmited object is used to look up the message id and the pivot value:

private static void Rh_OnReceivedPrivateMessage(object sender, RabbitMQEventArgs e)
{
    string messageId = e.MessageId;
    string queueFrom = e.QueueName;
    // ... code that collects both halves ...
    // finally the sorted array is sent to the private queue
    var msgRaw = RabbitHelper.ObjectToByteArray<List<int>>(result);
    rh.SendMessageToPrivateQueue(msgRaw, messageParentId, queueParent);
}

I will not dwell on the code that performs the sorting (we have already seen it) or that retrieves the two halves (it is a trivial check on a List<...> object); besides, the source code is easy to browse and you can test it yourself. Let's look at the final result, though:

The nice part is that we can start this process several times to sort in parallel across multiple processes, even on different machines:

In the window below, the only visible debug information is the number of elements sent.

If there were a prize for complicating an inherently simple procedure, after this last bit of code I could compete for first place without any trouble. In fact, this quicksort example has a serious problem that prevents it from being used profitably for distributed computing: above all, the array slices to be sorted are transmitted in full when, with much less (just references to the array to be sorted), everything would run far faster. But this was the example that came to mind...

Let's start drawing some conclusions. The simplest is that I need to work on better examples; the second is that a message broker (RabbitMQ in this case) really does its job very well if we decide to embrace the world of microservices. Going back to the cube, we can make processes communicate along any axis quickly and reliably. We can also make inter-process communication easier for propagating configuration changes. Think back to the earlier example where we used fanout communication (no filters: every queue bound to that exchange receives the message). Now picture a multitude of microservices running on one or more servers. By default each might have its own configuration file read at startup: but what happens when we need to change one of those parameters? Every process might, say, be configured with the name of an FTP server where files are to be uploaded. If the URI of that server changes, what should we do? Edit the configuration files of every process? And what if we forget one? A more practical solution could be a microservice dedicated to exactly this: every process, once started, would read the default configuration file shipped with the executable, and then ask that microservice for the latest configuration (which would override the previous one). Alternatively, each microservice could have a private queue bound to an exchange that pushes configuration changes in real time. This would even let us publish the new FTP server URI, wait until every process has updated itself, and then shut down the old server, confident that nobody is still using it.
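The configuration broadcast idea can be sketched the same way: a fanout exchange copies every message to all bound queues, so a single publish updates every service. A minimal Python simulation follows (invented names, no broker involved; a real system would use a RabbitMQ fanout exchange instead of this toy class):

```python
import queue

# A minimal fanout exchange: every bound queue gets a copy of each message.
class FanoutExchange:
    def __init__(self):
        self.queues = []

    def bind(self):
        q = queue.Queue()
        self.queues.append(q)
        return q

    def publish(self, message):
        for q in self.queues:
            q.put(message)

config_exchange = FanoutExchange()

# Three hypothetical microservices, each with its own private config queue
# and a default configuration loaded at startup.
services = [{"name": f"svc{i}",
             "config": {"ftp": "ftp://old-server"},
             "queue": config_exchange.bind()} for i in range(3)]

# Publish the new FTP URI once; every bound service picks it up.
config_exchange.publish({"ftp": "ftp://new-server"})
for svc in services:
    while not svc["queue"].empty():
        svc["config"].update(svc["queue"].get())

assert all(s["config"]["ftp"] == "ftp://new-server" for s in services)
```

One publish, every subscriber updated: that is the whole appeal of fanout for configuration changes.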

To repeat ourselves: we can write a myriad of microservices for the most varied operations, from accessing a database table, to sending email, to generating charts; each service is reachable through an exchange and, as demand grows, we can attach more processes that the message broker will call, balancing the load across all available resources. And interoperability? The message broker does not care who or what calls it. It could be a client written with the .Net Framework, as in this post, or in Java... In that case, how do we exchange objects more complex than the strings used in the first examples, when, as in the quicksort example, we send serialized objects native to one particular technology?

Let's see what happens. In Esempio6A we try exactly that: first we create a simple object to serialize, such as an array of integers:

var content = new int[] { 1, 2, 3, 4 };

Then we send it to RabbitMQ with the code we already know:

IBasicProperties basicProperties = ModelCentralized.CreateBasicProperties();
var msgRaw = ObjectToByteArray(content);
ModelCentralized.BasicPublish("", QueueName, basicProperties, msgRaw);

ObjectToByteArray is a function also used by the quicksort example:

public static byte[] ObjectToByteArray<T>(T obj) where T : class
{
    BinaryFormatter bf = new BinaryFormatter();
    using (var ms = new MemoryStream())
    {
        bf.Serialize(ms, obj);
        return ms.ToArray();
    }
}
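For comparison, most runtimes have an equivalent of BinaryFormatter, and they all share the same limitation: the bytes only make sense to the technology that produced them. A Python sketch of the same round trip using pickle, its closest analog:

```python
import pickle

content = [1, 2, 3, 4]
raw = pickle.dumps(content)           # opaque, technology-specific bytes
assert pickle.loads(raw) == content   # round-trips fine in the same runtime

# A consumer written in another technology sees only an incomprehensible
# byte array, the same problem a non-.Net reader has with the bytes
# produced by BinaryFormatter.
print(type(raw), len(raw))
```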

Now all that remains is to see what happens when, having put this object into a queue, we try to read it from another technology. Let's try node.js (I confess that my interest in message brokers was born with that technology, and only later did I "convert" it to the .Net Framework world). With npm and nodejs installed on your machine, from a terminal just install the package:

npm install amqp

Then, in a text editor:

var amqp = require('amqp');
var connection = amqp.createConnection({ host: 'localhost' });

// Wait for connection to become established.
connection.on('ready', function () {
  // Use the default 'amq.topic' exchange
  connection.queue('Example6', function (q) {
    // Catch all messages
    q.bind('#');
    // Receive messages
    q.subscribe(function (message) {
      // Print messages to stdout
      console.log(message);
    });
  });
});

Node.js makes everything simpler thanks to its asynchronous nature. Once connected to the RabbitMQ server on localhost, the ready event fires, we attach to the "Example6" queue (the same queue used by Esempio6A) and subscribe to wait for messages. Let's launch this mini app:

node example1.js

Then we start Esempio6A.exe:

Naturally, node.js has no idea what to do with that incomprehensible array of bytes. The solution? I do not know which approach is the fastest or the best, but the simplest and most reusable across any technology is JSON. The sending code seen above can be transformed like this:

var json = new JavaScriptSerializer().Serialize(content);
msgRaw = Encoding.Default.GetBytes(json);
ModelCentralized.BasicPublish("", QueueName, basicProperties, msgRaw);

Once this has been put into the queue and read by the node.js app, we get:

Perfect: using JSON from node.js is trivially easy, and so it is with the .Net Framework, since just as we can serialize an object to JSON we can also deserialize it back into its original form. Are there better solutions? As always, I am open to any suggestion.
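The round trip the paragraph describes takes only a few lines; here is a Python sketch (any JSON library in any language would behave the same way):

```python
import json

content = [1, 2, 3, 4]

# Serialize to JSON text, then to bytes: this is what goes into the queue.
wire = json.dumps(content).encode("utf-8")
assert wire == b"[1, 2, 3, 4]"

# Any consumer (.Net, node.js, Python, ...) can rebuild the original object.
assert json.loads(wire.decode("utf-8")) == content
```

The payload on the wire is plain text, which is exactly why every technology can read it.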

Time to wrap up. Conclusions? There are none. A message broker simplifies the structure of microservice-based apps. Is it the only choice? No. I would have liked to continue this discussion with microservice communication through Redis, by the Italian Salvatore Sanfilippo, but my knowledge there is still lacking and I immediately ran into problems I have not yet solved (free time is what it is). One advantage I noticed straight away compared to RabbitMQ is its frighteningly higher speed. Maybe I will tackle the topic on this blog in the future... again, if desire and time allow. Another option is Akka.Net: here too performance is higher and message passing is simpler; the big problem I ran into immediately is the difficult interoperability between different technologies, but my novice-level knowledge did not take me past the basics. Ok, that's enough.

All the sample code is available here:


Continue reading Divide et impera with C# and a message broker.

(C) 2017 Network - All rights reserved

          Spark Your Startups #2: Web Crawler Showdown   
Microsoft BizSpark has long kept a close eye on Taiwan's startup ecosystem. After the first event in the Spark Your Startup series, "Five Ways Startups Die in the Eyes of Venture Capitalists," held on September 30, was very well received, sign-ups for the second event, the "Web Crawler Showdown" held on December 20, were extremely enthusiastic. The event featured talks from several well-known speakers, and the fireside chat in the second half was lively as well. The notes below should give readers a better picture of the day and of what the speakers shared.

First, a short introduction to web crawling. Crawling is a common technique for extracting data from the web, and in everyday browsing users constantly encounter information and services built on it. Simply put, a crawler browses the content of particular websites in depth, collecting their links and data so that users can search them; the collected data can also be organized and analyzed to support decision-making in the face of uncertainty. Most familiar search engines, price-comparison sites and rental-listing platforms are built on data gathered by crawlers. Crawling not only cuts the time spent collecting and comparing data across different sites and helps monitor specific people and topics, it also lets users gather all kinds of data from the web quickly and in real time: a technique that saves time, effort and money.

The speakers invited to this Crawler Showdown were all masters of the craft: besides Microsoft technical evangelist 王凡, there were 丘祐瑋 (David), CEO of 大數軟體 and known as the "Crawler King"; 曾建勳 (Daniel), system development engineer at 優拓資訊; and 徐向賢 (Mark), CEO of 異域科技, which has just closed a new funding round for its budget-airline fare search. Highlights from each talk follow:

大數軟體: David, the Crawler King

David, the first speaker, previously worked at Trend Micro on BI and CRM; 大數軟體 builds data services on top of crawlers and helps customers construct business models around them, serving clients including government agencies, telecoms and financial companies, and has won a number of awards. David is also a well-known author with several published titles. Early in his startup journey, David used crawlers to build two web services, a price-tracking tool and InfoLite; the InfoMiner real-time sentiment analysis platform launched in 2016 made the company's name and won a place among the Top 100 Innovative Products at IT Month. Besides sharing the ups and downs of founding a company, David encouraged the developers present to build their own web crawlers in Python, and demonstrated building an end-to-end service with it on the spot. He also showed how, with the powerful cloud services provided by Microsoft Azure, developers can easily build crawler services on a distributed architecture, improving service quality and adding capabilities such as machine learning that widen the range of crawler applications and strengthen the competitiveness of the business model.

優拓資訊: Daniel, the Crawler Prince

Daniel of 優拓資訊, a co-founder of HackNTU, walked through a real example of crawling and analyzing Taiwanese news media sites. Unlike the Crawler King, who works mainly in Python, Daniel's language of choice is Java; with the flexibility Java and its descriptors provide, developers can chart the collected data, analyze trends, and even build training models to suit different needs. As long as developers, or the customers of a crawler service, have some understanding of, or imagination for, the collected data, the potential for follow-on applications is considerable. Beyond the golden rule that crawling must never compromise the integrity of other sites' services, Daniel closed by urging site developers to open public APIs wherever possible, proposed a "Developer Friendly Principle" asking site operators not to change their site structures arbitrarily, and stressed that error handling and alerting must not be neglected when building a crawler service.

異域科技: Mark, the Crawler God

Mark, CEO of the freshly funded 異域科技, shared how the Hellowings budget-airline fare search engine was built, and noted that five of the company's developers are women, as encouragement to the women in the audience. Hellowings is likewise built mainly on Java crawlers, though its queue server and database choices went through several iterations. Mark gave a witty tour of the different ways budget airlines design their websites and the crawler design principles each calls for; some of the airline site designs he described were so amusing they drew plenty of laughter. While agreeing that crawlers bring users real convenience, Mark asked developers to respect each site's design and not let crawler abuse interrupt or degrade its service.

台灣微軟: James, the Tech Prince
Microsoft Taiwan technical evangelist James shared how, as a student, he built crawlers in PHP to collect citation data in the digital humanities, then analyzed the data and visualized the trends hidden in it. More recently, to serve his own need to keep up with the many technical documents on MSDN, James built a crawler service that picks out the articles matching his interests and needs; he walked through building such a service in Python, and through producing smart reports with the open source services available on Azure and with Power BI. From a non-specialist's point of view, the whole process looked remarkably simple and convenient.

Fireside chat: the discussion after the four talks was just as lively, with the audience quizzing the speakers on legal and infringement issues, the languages most often used to build crawlers, and the hardest-to-crawl websites each speaker had encountered.

Summary: having taken part in two events of the Spark Your Startup series, you can really feel BizSpark's dedication to Taiwan's startup community. More and even better events are on the way, and we hope friends from the startup scene will come and take part.
          The other half of the Jolla story   
They say there are two sides to every coin, and that holds true for the story of the history leading up to Jolla and its Sailfish OS. The Jolla story usually starts out with Nokia, but it's really a convergence, with Nokia as the center point.

This side of the story starts in Norway, not Finland. Oslo, in fact. Not with Nokia, but with a small company named Trolltech.

I won't start at the very beginning but skip to the part where I join in, and include a bit about myself. It was 2001, and I was writing a Qt based app called Gutenbrowser. I got an email from A. Kozak at Trolltech, makers of Qt, saying that Sharp was planning to release a new PDA based on Qt, and wouldn't it be cool if Gutenbrowser were ported to it? I replied, yes, but as I have no device it might be difficult. He replied back with the name/email of a guy that might be able to help. Sharp was putting on a Developer Symposium where they were going to announce the Zaurus and hand out devices to developers. I jumped at the chance.

It was in California; at that time I was in Colorado. Jason Perlow was working for Sharp at the time, and said he had an extra invite to the Developer Symposium. WooHoo! The Zaurus was going to run a Qt based interface originally named QPE, later named Qtopia (and even later renamed Qt Extended). The SDK was released, so I downloaded it and started porting even before I had a device to test it on.

Qtopia was open source, and it was available for developers to tinker with, and put on other devices. There was a community project based on the open source Qtopia called Opie that I became involved with. That turned into me getting a job with Trolltech in Australia, where Qtopia was being developed, as the Qtopia Community Liaison, which luckily later somehow turned into a developer job.

Around the time that Nokia came out with the Maemo tablets, I was putting Qtopia on them. N770, N800, N810, and N900 all got the Qt/Qtopia treatment. (Not to mention the OpenMoko phones I did as well).

Then I was told to flash a Qtopia image onto an N810 because some Trolls were meeting with Nokia. That became two or three images I had to flash over the course of a few weeks. I knew something was up.

Around this time, one of the Brisbane developers (A. Kennedy, I'm looking at you!) had a Creative Friday project to make a dynamic user interface framework using XML. (Creative Friday was something Trolltech did that let developers spend every Friday, unless there was the impending doom of bug fixes or a release, on research projects.) It was really quite fluid, and there was a "prototype" interface running on that N810 as well. It only took a few lines of non-C++ code to get dynamic UIs. This would have become what the next generation of Qtopia's interface was made with. It was (and still is) quite amazing.

Then came the news that Nokia was buying Trolltech! Holy cow! A HUGE company that makes zillions of phones wanted to buy little ol' Trolltech. But they already had a Linux based interface - Maemo that was based on Gtk toolkit, and not Qt. WTH!?

Everyone speculated they wanted Trolltech for Qtopia. Wrong. Nokia wanted Qt, and decided to ditch Qtopia. We had a wake for the Qtopia event loop to say our goodbyes. All of us in Brissie worried about our jobs.

So our little Trolltech got assimilated into this huge behemoth phone company from Finland. Or was it that Trolltech took over Nokia...? Nokia had plans for Qt that would provide a common toolkit for their massively popular Symbian and new Linux based phones.

The Brisbane office started working on creating the QtMobility API's. Yes, there are parts of Qtopia in QtMobility.

Meanwhile, that creative friday xml interface was still being worked on. It got canceled a few times and also revived a few. That eventually evolved into QML, and QtQuick.
Then came the N9 and MeeGo, which was going to use this newfangled dynamic UI. MeeGo was also open source, and its community versions were called Mer and Nemo. Yes, there are parts of Qtopia in MeeGo.

The rest of the story is famous, or rather, infamous now. Nokia made redundant the people working on MeeGo. Later on, all of us Brisbane developers, QA and others were also made redundant. The rest of what I call the Trolltech entity got sold to Digia. The QA server room was packed up and shipped to Digia, who is doing a fantastic job of getting Qt Everywhere!

A few of those guys that were working on MeeGo got together and created a company called Jolla, and created a Linux based mobile OS based on Mer named Sailfish. Yes, there are a few Trolltech Trolls working for Jolla. and yes, there are parts of Qtopia in Sailfish.

          Nokia and open source   

Nokia makes a big news announcement

Ok, so they made an announcement that they are buying a controlling stake in Symbian, and will be putting it under an open source license, specifically the Eclipse Public License (EPL).

What remains to be seen is just how much source will be released as open. The EPL is very generous to proprietary interests, allowing contributors to relicense their derived works, which means "it" won't stay open source. If it is anything like their Maemo offering, it won't be much: just enough that PR can call it open source.

Unfortunately, the EPL is not compatible with all the GPL and LGPL works out there, so not very many of the thousands of really great applications for Linux will be available for this. Unless, of course, an application can be relicensed and uses Qt, which will surely be licensed to allow development on this new EPL Symbian.

As with Nokia news lately, I feel this is generally good, and it is great to see such a huge company try to be a better 'citizen'.

          Big business needs to learn open source   
There are certain open source rules businesses need to obey. 

Most of them I learned in kindergarten: "Share everything. Play fair. Don't hit people. Put things back where you found them. Clean up your own mess. Don't take things that aren't yours. Say you are sorry when you hurt somebody."

Open Source and the GPL put more power and rights into the hands of users than proprietary software does. Things such as DRM and IPR take rights away from the people, the users. Richard Stallman sought to empower users and take power back from the establishment that so often abuses its power and doesn't play fair like a good citizen. This is what open source is about.

To Big Businesses that use open source, you cannot be part of the solution, if you are part of the problem. You cannot move beyond old business models if you are standing still. 

I dare you to take not small steps, but giant leaps forward to bring back power to the users of your software,  hardware and services. 

          responding via blog!   
wooT gotta love blog conversations!

A lovely response to Lorn Potter of Trolltech

I just love that people hold Trolltech to some higher threshold. For some, nothing that TT can do is ever good enough, or that Trolltech has a special, evil GPL, which isn't truly open source somehow, since TT doesn't have public source development repositories.

TT releases the Greenphone with some closed source and closed kernel drivers and gets a backlash; the Neo is released with closed kernel drivers and a license that allows closed source, and is touted as the first open source Linux phone.

umm excuse me! LGPL is not about free and open source software! hello! "Dont lend your hand to raise no flag atop no ship of fools."

One of the differences is that Trolltech is not a hardware company, FIC is. TT contracted the hardware from a vendor and what you see is what Trolltech was given. Neo was designed by FIC.
          LGPL and the Neo 1973   
The Neo 1973 phone that runs Openmoko is being touted as the first open source phone, and that they are dedicated to open source...

For one, Trolltech's Greenphone was the first open source phone. Granted, the kernel is less than completely open, as are the phone libraries, but Qtopia is as open source as any.

For two: if they are so dedicated to open source, why choose a GUI library that is LGPL'd? Even RMS says not to use the LGPL!
Why you shouldn't use the Lesser GPL for your libraries

As you all know, the LGPL is not about open source; it's about letting commercial businesses keep proprietary code while still using open source. i.e. not give back to the community... i.e. rip you off.

OpenMoko: if you are so dedicated to open source and keeping it that way, why choose GTK? Qt and Qtopia would be a much better option; Qtopia is a mature and stable phone interface, and it is GPL. All you would have had to do is write the phone libraries, instead of everything.

But, judging by your choice of GUI toolkits, you like doing things the hard way.
          Extracting Data from Open Source Communities   
On Sunday at FOSDEM, I have a 5 minute lightning talk about extracting data from open source communities in the HPC, Big Data, Data Science devroom (slides). Open source communities are filled with huge amounts of data just waiting to be analyzed. Getting this data into a format that can be easily used for analysis … Continue reading Extracting Data from Open Source Communities
          Open Source: A Job and an Adventure   
At LinuxCon in Düsseldorf, I gave a talk about the many ways that you can turn your work in open source into a career, so that you can get paid for all of the awesome work that you do in open source. If you already have a job in open source, I also talked about … Continue reading Open Source: A Job and an Adventure
A social microblogging service similar to Twitter, built on open source tools and open standards. Allows users to send text-based posts up to 140 characters long.
Android phones are increasingly popular worldwide and have become serious competition for established handset vendors such as Nokia, Blackberry and iPhone.
But if you ask most Indonesians "What is Android?", the majority will not know what Android is, and those who do tend to be people who are geeks or keep up to date with technology.
This is because most Indonesians only know three phone brands: Blackberry, Nokia, and "other brands" :)

Several things make Android difficult for the Indonesian market to accept (so far), among them:

  • Most Android phones use touchscreen input, which is not very popular in Indonesia,
  • Android needs a very fast internet connection to be used to its full potential, while internet from Indonesian mobile operators is not very reliable,
  • And finally, the perception that Android is harder to operate than other phones such as Nokia or Blackberry.

What is Android

Android is an operating system used on smartphones and tablet PCs. It plays the same role as the Symbian OS on Nokia, iOS on Apple devices, and the BlackBerry OS.
Android is not tied to a single handset brand; well-known vendors already using Android include Samsung, Sony Ericsson, HTC, Nexus, Motorola and others.
Android was first developed by a company called Android Inc., which was acquired in 2005 by the internet giant Google. Android is built on a modified Linux kernel, and each release is code-named after a dessert.
Android's main advantages are that it is free and open source, which lets Android smartphones sell for less than a Blackberry or iPhone even when the (hardware) features Android offers are better.
Some of Android's main features include WiFi hotspot, multi-touch, multitasking, GPS, accelerometers, Java support, support for many networks (GSM/EDGE, iDEN, CDMA, EV-DO, UMTS, Bluetooth, Wi-Fi, LTE & WiMAX), as well as the basic capabilities of phones in general.

Android versions currently in circulation

Eclair (2.0 / 2.1)

The early Android version adopted by many smartphones; Eclair's main features were a complete overhaul of the structure and look of the user interface, and it was the first Android version to support the HTML5 format.

Froyo / Frozen Yogurt (2.2)

Android 2.2 shipped with 20 new features, including speed improvements, Wi-Fi hotspot tethering and Adobe Flash support.

Gingerbread (2.3)

The main changes in version 2.3 include a UI update, improvements to the soft keyboard and copy/paste, power management, and Near Field Communication support.

Honeycomb (3.0, 3.1 dan 3.2)

This is the Android version aimed at devices with large screens, such as tablet PCs. Honeycomb's new features were support for multicore processors and hardware-accelerated graphics.
The first tablet to ship with Honeycomb was the Motorola Xoom, released in February 2011.
Google decided to temporarily close access to the Honeycomb source code to keep handset makers from installing Honeycomb on smartphones.
With earlier Android versions, many companies had put Android onto tablet PCs in ways that gave users a poor experience and left Android with a bad image.

Ice Cream Sandwich (4.0)

Android 4.0 Ice Cream Sandwich was announced on 10 May 2011 at the Google I/O Developer Conference in San Francisco and officially released on 19 October 2011 in Hong Kong. Ice Cream Sandwich can be used on both smartphones and tablets. The main features added in Android 4.0 are Face Unlock, Android Beam, a major overhaul of the user interface, and a native screen resolution of 720p (high definition).

Android's Market Share

Around 630 million smartphones are expected to be sold worldwide in 2012, of which an estimated 49.2% will run the Android OS.
Google's own figures currently show 500,000 Android phones being activated every day around the world, a number that keeps growing by about 4.4% per week.
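Taken at face value, a 4.4% weekly growth rate compounds dramatically. A quick back-of-the-envelope sketch (the projection is mine, not a Google figure):

```python
# Back-of-the-envelope projection of daily Android activations,
# assuming the quoted 4.4% week-on-week growth holds steady.
activations_per_day = 500_000
weekly_growth = 0.044

after_a_year = activations_per_day * (1 + weekly_growth) ** 52
print(round(after_a_year))  # roughly 4.7 million activations per day
```

In practice growth like this saturates, but it illustrates why the figure attracted attention at the time.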
Platform                       API Level   Distribution
Android 3.x (Honeycomb)        11          0.9%
Android 2.3.x (Gingerbread)    9-10        18.6%
Android 2.2 (Froyo)            8           59.4%
Android 2.1 (Eclair)           5-7         17.5%
Android 1.6 (Donut)            4           2.2%
Android 1.5 (Cupcake)          3           1.4%
Distribution of Android versions in use worldwide, as of June 2011

Android Applications

Android has a large developer base building applications, which makes Android's functionality broader and more varied. The Android Market, managed by Google, is where Android applications, both free and paid, are downloaded.
Although it is not recommended, Android's performance and features can be extended further by rooting the device. Features such as wireless tethering, wired tethering, uninstalling crapware, overclocking the processor, and flashing custom ROMs become available on a rooted Android.

          Blocking Porn Sites   

Blocking porn sites is not difficult. Broadly speaking, there are two techniques:
• Installing a filter on the user's PC.
• Installing a filter on the server connected to the Internet.

The first technique, installing a filter on the user's PC, is usually applied by parents on the home PC so that children cannot surf to unwanted sites. Comprehensive lists of filters and child-friendly browsers for home use can be found in the parent's-guide sections on browsers for kids and on blocking and filtering.

Some fairly well-known filters include Net Nanny and I Way Patrol, among others.

Of course, filtering like this really only works for parents at home whose children do not yet know the Internet well. For a school with internet facilities, the techniques above are hard to apply. The most efficient way to block porn sites there is to install a filter on the proxy server used in the internet café (warnet) or office where the Internet is accessed collectively from a local area network (LAN). The second technique, installing the filter at the server, is likewise not difficult. Commercial content-filtering packages include, among others:

What may actually be hardest is obtaining a complete list of the sites to block, since the filter needs that list to know which sites to refuse. Lists of hundreds of thousands of sites to block can be downloaded for free, among other places at:

For schools or offices, the open source (Linux) alternative can be attractive because no software is pirated. On Linux, one of the most popular proxy servers is Squid, which can usually be installed as part of the Linux installation itself (on both Mandrake and Red Hat).

Setting up filtering in Squid is not difficult: we simply add a few lines to the file /etc/squid/squid.conf. For example:

acl sex url_regex "/etc/squid/sex"
acl notsex url_regex "/etc/squid/notsex"
http_access allow notsex
http_access deny sex

Then create the files /etc/squid/sex and /etc/squid/notsex, containing the URL patterns to deny and to allow, respectively.
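The example listings for these two files did not survive in this copy of the article. Purely as a hypothetical illustration of the url_regex file format (one regular expression per line; the patterns below are mine, not the original author's):

```
# /etc/squid/notsex: harmless words that would otherwise match a block pattern
middlesex
sexton
sussex

# /etc/squid/sex: patterns to deny
sex
porn
```

Because `http_access allow notsex` is evaluated before `http_access deny sex`, URLs matching the first file are let through even though they would also match the second.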

Blacklists obtained from SquidGuard and similar projects can easily be merged into the lists above. Shown below are the Access Control List (ACL) entries in /etc/squid/squid.conf that I set up on my server at home:

acl sex url_regex "
acl notsex url_regex "
acl aggressive url_regex "
acl drugs url_regex "
acl porn url_regex "
acl ads url_regex "
acl audio-video url_regex "
acl gambling url_regex "
acl warez url_regex "
acl adult url_regex "
acl dom_adult dstdomain "
acl dom_aggressive dstdomain "
acl dom_drugs dstdomain "
acl dom_porn dstdomain "
acl dom_violence dstdomain "
acl dom_ads dstdomain "
acl dom_audio-video dstdomain "
acl dom_gambling dstdomain "
acl dom_proxy dstdomain "
acl dom_warez dstdomain "

http_access deny sex
http_access deny adult
http_access deny aggressive
http_access deny drugs
http_access deny porn
http_access deny ads
http_access deny audio-video
http_access deny gambling
http_access deny warez
http_access deny dom_adult
http_access deny dom_aggressive
http_access deny dom_drugs
http_access deny dom_porn
http_access deny dom_violence
http_access deny dom_ads
http_access deny dom_audio-video
http_access deny dom_gambling
http_access deny dom_proxy
http_access deny dom_warez

With the setup above I block not only porn sites but also sites related to drugs, violence, gambling, and so on. All of the data comes from the blacklist files obtained from SquidGuard and similar sources.

Blocking Sites on MikroTik via Winbox

1. Open Winbox from the desktop.

2. Click the ( … ) button, or type the MikroTik address into the Connect To: field.

3. A window like the one shown below will appear; select one of the entries listed.

4. Next, fill in the MikroTik Username and Password.

5. Then click Connect.

6. The MikroTik window will open, as in the image below.

7. To block a site, open the IP menu and choose Web Proxy.

8. Configure the Web Proxy by clicking the Settings button.

9. A window like the one below will appear.
Configure the Web Proxy as shown in the image below, then click OK.

10. Now set up the website to be blocked. Click the ( + ) button.
A new window will appear; configure it as shown in the image below.

11. Click OK, and an entry will appear in the Web Proxy window.

12. Test the configuration by typing the word “porno” into Google.

13. Press Enter; if a page like the one shown below appears, your site block is working.
Posted by Diandra Ariesalva Blogs at 10:39 | 0 comments

Cybercrime is a term referring to criminal activity in which a computer or computer network is the tool, the target, or the scene of the crime. Cybercrime includes online auction fraud, check forgery, credit card fraud (carding), confidence fraud, identity fraud, child pornography, and more.

Although cybercrime generally refers to criminal activity with a computer or computer network as its main element, the term is also used for traditional crimes in which a computer or network is used to facilitate or enable the offense.

Examples of cybercrime where the computer is the tool include spamming and offenses against copyright and intellectual property. Examples where the computer is the target include illegal access (defeating access controls), malware, and DoS attacks. An example where the computer is the scene of the crime is identity fraud. Examples of traditional crimes with the computer as the tool are child pornography and online gambling.

During the 2004 Indonesian election there was a case that caused quite a stir and hit the KPU, the institution running the election, hard. On 17 April 2004 the KPU website was defaced: the names of the participating parties were changed to joke names, although the vote tallies were left untouched. The defacer was a 25-year-old named Dani Firmansyah, an International Relations student at Universitas Muhammadiyah Yogyakarta.

The police initially had trouble tracing the perpetrator, especially since cases like this were new to them. Early in the investigation they were briefly misled because the perpetrator had routed his internet address (IP address) through Thailand, but with persistence, and with help from parties such as the Indonesian Internet Service Providers Association (APJII) and the ISPs involved, the police caught the suspect.

It later emerged that the suspect's motive was to show that the KPU's performance, particularly in information technology, was very poor. That did not excuse the act, and he was prosecuted under the applicable law.

          3D ANIMATION   

Creating 3D with Blender 3D
Pusatgratis – For all PG readers interested in the world of 3D modelling and animation: Blender 3D is free software you can use for 3D modeling, texturing, lighting, animation, and video post-processing. Free and open source, Blender 3D is the most popular open source 3D package in the world, and its features hold their own against expensive 3D software such as 3D Studio Max, Maya, and XSI.
With Blender 3D you can create animated 3D objects, interactive 3D media, professional 3D models and shapes, game objects, and many other 3D creations.
Blender 3D offers the following main features:
1. A user-friendly, well-organized interface.
2. A complete toolset for creating 3D objects, covering modeling, UV mapping, texturing, rigging, skinning, animation, particle and other simulations, scripting, rendering, compositing, post-production, and game creation.
3. Cross-platform, with a uniform GUI on every platform: Blender 3D runs on all versions of Windows, plus Linux, OS X, FreeBSD, Irix, Sun, and other operating systems.
4. A high-quality 3D architecture that enables faster, more efficient work.
5. Active support through forums and the community.
6. Small file sizes.
7. And, of course, it is free.
Here are some screenshots of 3D images and animations designed with Blender 3D:
Designs produced with Blender 3D
The 3D designs produced with the free, open source Blender 3D prove that it holds its own against expensive 3D software.
Download Blender 3D from the official Blender 3D site | License: free, open source | Size: 13-24 MB (depending on your operating system) | Supports: all versions of Windows, plus Linux, OS X, FreeBSD, Irix, Sun, and several other operating systems.
Happy designing!

Stop Dreaming, Start Action (Willing to Try and to Create)

The words "stop dreaming, start action" have stuck in my mind. We often dream or fantasize about getting something; our ambitions can reach the sky and our hopes tower like mountains. But do we get everything we hope for? Of course not, because action is still required, and that is what makes it hard to get what we want.
Remember: what in this world do we ever get from dreaming alone? There is probably no one who has become rich or successful just by sitting around, lying about, or being lazy. If there are, they are few: rich from an inheritance, born into a wealthy family, or simply lucky. That is the stuff of soap operas.
I have received many emails from Mr. Joko Susilo full of advice and motivation about success, and they have shaped my thinking. They fit well with the field I work in, computers and especially the internet. One lesson from his emails is that a blog needs a slogan, so my blog's slogan is about making Flash animation, which pushes me to be more creative in my work. He also sent me an email about six ways to grow creativity. I am very grateful to Mr. Joko Susilo; even if my work is not great, it matters to me, because a childhood wish of mine has now come true.
The things I create are just an outlet for a hobby, but at least I am no longer only dreaming of doing them, even though I am not good at drawing. Perhaps you can do the same and realize your own dreams, if only for a moment. Life is full of sacrifice and struggle, and from that effort we earn our reward.

          How to Install The Latest Mesa Version On Debian 9 Stretch Linux   
Mesa is a big deal if you're running open source graphics drivers. It can be the difference between a smooth experience and an awful one. Mesa is under active development, and it sees constant noticeable performance improvements. That means it's really worthwhile to stay on top of the latest releases.
          Open Tools Help Streamline Kubernetes and Application Development   
New tools are emerging that help streamline Kubernetes and make building container-based applications easier. Here, we will consider several open source options worth noting.
          Google Cloud Platform : Good Times Ahead   

The tech behemoths Amazon, Microsoft & Google are established players in one of the battles that will change how customers view and invest in computing. This is a space worth a potential hundred billion dollars plus, a lucrative market each vendor wants to corner: the cloud. For a quick recap of the dollars under consideration, read here. Cloud lets businesses tap processing, storage, and software on demand over the web. The tall, powerful servers with gargantuan memory installed inside enterprises today are gradually giving way to a tap-on-need model: use them when needed and shut them down otherwise. The vast data centers and governance these tech behemoths bring make them ideal partners for businesses wanting such services on demand, whenever and wherever required.

Amazon is by far the established leader here, with revenue exceeding that of all its competitors combined. Amazon achieved this partly by bringing single-minded focus to the space early, and the bet paid back handsomely: its first-mover advantage, coupled with slow reactions from the competition, has given it an almost insurmountable lead. That lead is the target of the next two players. Microsoft has pushed Azure hard for the last two to three years and is clocking impressive success. Google, the least visible of the trio in this space, is now flexing its muscle and intent on striking it big. Google is recognized as an early proponent of cloud computing: it built huge data centers in its early days and ran services like Search, Gmail, and Maps around the world at unthinkable scale, while developers built other applications on top, expanding the Google universe. Google has a strong reputation for running scalable, secure services and has delivered reliably for a long time. But by arriving late and remaining indifferent to this market, Google lost big opportunities that Amazon happily grabbed. Now Google wants to come back aggressively and be counted as a large player, and it is increasing its investments, market messaging, and outreach. Last week the company hosted an event to lay out its upcoming plans for the Google Cloud Platform and to highlight some notable successes so far. I listened to the webcast and followed the announcements keenly to see how Google plans to move things here, and I heard good, actionable things.

Google's overall messaging shows that business momentum continues, with the enterprise as the key segment for driving adoption of the platform at scale. Google is increasingly well positioned to take a sizable chunk of the ever-growing public cloud space over the next few years; the overall market is projected to see 50% of enterprise workloads move to the cloud over time. Google paraded customers and customer stories, the likes of Spotify, Coca-Cola, and Disney, as proof of successful adoption of its services.

Going forward, the emphasis is positioned around:

A. Machine learning as a cornerstone of the approach, driving the attendant benefits for customers.

B. Commercializing the security tools Google uses internally and making them available to customers.

C. Making deployment and migration easier.

In terms of upcoming innovation, and consistent with its enterprise focus, Google talked about a lofty "NoOps" goal for enterprises. That would be an ideal demonstration of the power of cloud computing; if Google and others can make it happen everywhere, it will be a true sign of a changed paradigm. Another important facet of the cloud's evolution is the extreme emphasis on machine learning and the Google Cloud Platform's leverage of it. Google has open-sourced a product called TensorFlow, and its core belief is that embracing machine learning will become non-negotiable for innovative startups focused on scaling globally and offering sophisticated services. Add cloud monitoring tools that work across clouds, and enterprises can hardly resist massive cloud adoption. To keep helping enterprises adopt and scale faster, Google wants to focus on three important aspects of cloud computing: data centers, security, and containers.

I drilled into these a little more to see what could differentiate such services. From what I could recollect of the conference webcast, Google sees the following as the drivers for increasing enterprise adoption of its services:

1. Better value: GCP can cost up to 50% less than competitors. Google provides automatic discounts as customers consume higher volumes, and GCP also offers custom machine types (pick your own cores and gigabytes), which saves customers money compared with the static instance types from other vendors that often lead to overprovisioning.

2. Accelerate innovation: Google's approach here is to let customers run applications with no additional operations staff. As an example, Google showcases Snapchat, which grew from zero to 100 million users without hiring an operations team (just two people).

3. Risk management: Google will focus on providing best-in-class security for customers' data and digital assets, protecting privacy, and helping customers meet compliance and regulatory needs.

4. Open source adoption: leading to better manageability for customers with products like Kubernetes (focused on managing containers).

I got the feeling that GCP is comparable to Amazon's fabled AWS services for the purposes of enterprise adoption. While the engineering battle under the hood is one part of the equation, the real difference between success and also-ran lies in shaping market forces: go-to-market, solutions, partnerships, support, and ease of doing business, an area Google will have to focus on heavily. With Google's stated plan to triple its data center regions, and some good early success already demonstrated, the market should begin to warm to Google. Enterprise success depends on more than what a service provider offers: deciding what and how to move to the cloud, transforming the IT landscape, flipping the governance model, and managing change all have a say in the eventual success of any cloud initiative. With the tech giants competing aggressively for their share of this fast-growing space, competition expands the market, services mature quickly while growing more sophisticated, and, courtesy of Moore's law, customers get superior services at lower cost. It's a win-win situation for all.

          Open Secret   

Today on the 5: This opinion piece by John C. Dvorak reminded me of the unfortunate reality that most people have no idea how fantastic open source software is.

          Citizens vs. geniuses   


It is a characteristic of our culture that we glorify scientific genius. Galileo Galilei, Albert Einstein, Richard Feynman and Stephen Hawking are just a few of the illustrious names from the canon of physics saints. Other disciplines have their own haloed ones. The role of the amateur scientist, in comparison with these greats, seems to vanish into insignificance.

The power of citizen cyberscience
Figuratively, citizen cyberscience today is primarily about harnessing the power of dray horses: not just a hundred but, thanks to the Web, sometimes a hundred thousand, to tackle a problem that involves a lot of intellectual hauling.

Of course there are many examples of volunteer computing, such as the recently released LHC@Home 2.0 project, but I'm thinking here in particular of that strain of citizen cyberscience, volunteer thinking, represented by projects such as Stardust@Home, GalaxyZoo or FoldIt, where it is human brainpower that is being aggregated, and not just processor power on volunteer PCs.

But citizen cyberscience can be a lot more than this. Because the whole premise that there are only steeds and drays, brilliant professional scientists and ordinary citizens, and a factor of a hundred or more between them in intellectual capacity, is fundamentally flawed.

Fast drays and blinkered steeds
We live in a world where a staggering number of people have studied science at a very high level, many even obtaining a PhD, without becoming professional scientists.

And we live in a world where, due to specialization, professional scientists can only filter a fraction of the information that might be relevant to their research. In other words, a world where many drays can gallop fast, and most steeds are blinkered.

The famous American hat-throwing celebration of graduation. Today, there are many people who are highly educated in science, some may even hold a PhD, without being a scientist or actively engaged in research. Nevertheless, they are specialists who could contribute to their field. Image courtesy Wikimedia.
We can add to that the many steeds that have been put out to pasture - retired scientists with time on their hands and years of experience, who still have much to contribute. And then there are those passionate amateurs who spend all their free time practicing the science they love, the growing ranks of autodidact drays.

The long tail of talent
The whole point of this rather labored equine analogy is just this: the difference between professional scientists and amateurs is blurring.

This is happening for myriad reasons: thanks to the Internet, but also to radical changes in educational opportunity and life expectancy across the developed and much of the developing world. The net result is not a binary world of geniuses and ordinary mortals but rather, to use an Internet analogy, a long tail of talent.

This long tail ranges from the many volunteers who can do things such as catalogue galaxy images, the task set by the project GalaxyZoo, to the very few who might be able to spontaneously team up and develop completely new strategies for folding proteins, as scientists behind the computer game FoldIt documented, to their own surprise. (See "Citizen cyberscience: the new age of the amateur" in the CERN Courier for more examples).

The audience joins the show
What will this mean for citizen cyberscience? The most insightful analogy for understanding the implications of this gradual blurring between professional and amateur scientist is not horses, I would argue, but journalists.

The world of journalism has been turned upside-down in recent years by social media technologies which allow a much wider range of people to take part in gathering, filtering and distributing news. Though some professional journalists at first resisted this trend, most now appreciate the likes of Facebook, Twitter and myriad blogs in expanding the sources of news and opinion and accelerating dissemination: the audience has become part of the show.

Could the Internet one day wreak the same sort of social change on the world of science, breaking down the distinction between amateur and professional? In my view, it is not a question of whether, but of when.

A prediction for the future of science
I'm going to venture a guess. By 2020, we will see a significant amount of real, breakthrough science being carried out by online communities - similar to the open source communities that develop complex software packages.

By far the largest fraction of the work will be done by amateurs. Not only that, the amateurs will have the biggest say in exactly what questions are tackled by the community. They will actively help to define the research agenda.

Professional scientists will still play a role, as professional journalists do today, of going into the field - or rather into the lab - in search of new data. This is something that cannot easily be distributed, especially for research that involves expensive experimental equipment.

Scientists will still play a role in vetting results, as journalists and their editors do today, when dealing with information that has been crowdsourced. Scientists will still play a role in shaping the research agenda, much as the benign dictators in open source projects do. But they will have to compromise with the aspirations of the rest of the community in order to get results.

In science as in journalism
That this will happen, I am in little doubt. Even the Royal Society, a pillar of the traditional scientific establishment, has been promoting a major policy discussion on the role of science as a public enterprise, which touches on the issue of citizen participation in science.

How this will impact the scientific establishment in the long run, however, is another question entirely. Judging by the major upheavals that the world of print journalism is going through, I expect the impact to be enormous. And yes, it may not all be nice. A lot of scientists - especially expensive ones in industrialized countries - may find themselves out of work, in the same way that the livelihoods of many journalists have become more precarious in the Web 2.0 era.

I am not saying Galileo's elitist view is wrong - it accurately describes the past. Nor am I suggesting that in future, individual geniuses will become superfluous. Many scientific problems will no doubt remain easier for a gifted individual or a small team of professionals to tackle, in much the same way as there will probably always be problems that supercomputers can better tackle than networks of ordinary computers. Horses for courses.

But in the next 10 years, I contend, the scope for citizen contributions to real science is going to expand radically. And in many cases human genius - embodied in the individual brains of exceptional people such as Galileo - will be supplanted by the genius of interconnected humans. In science as in journalism, the audience will become part of the show.

This opinion is based on a recent entry on François Grey's blog.

          Machine Learning on Heroku with PredictionIO   
Last week at the TrailheaDX Salesforce Dev Conference we launched the DreamHouse sample application to showcase the Salesforce App Cloud and numerous possible integrations. I built an integration with the open source PredictionIO Machine Learning framework. The use case for ML in DreamHouse is a real estate recommendation engine that learns based on users with ... Read more
          SAP slams Open Source and Oracle   
UPDATE: Clearly Shai's comments were taken out of context. He provides a much more balanced view in the discussion (here is the recording; the open source discussion starts at 35:30) than was reported by Vnunet. I encourage you to judge for yourself. … Continue reading
          Video Vortex Conference 2008 (1 of 2): Open Source Ways Of Producing, Distributing And Promoting Online Video   
Text of my presentation at the conference incl. all links and a few additions (27.01.08 – note that in my presentation the part about copyright was much shorter and did not include some of the arguments I mention in this text version!): • As a film maker I am now more interested in online video […]
          Visual Studio Code extensions demystified   
When I tried Visual Studio Code for the first time on my Mac, I was quite impressed by its performance. The investment Microsoft has made in this editor over the last few years is really remarkable, considering also that it is open source software and not a commercial product. As you know, with Visual Studio Code you … Continue reading Visual Studio Code extensions demystified
          Cavemen Don't Know Shit   

This week, we're covering a few big topics and a few small ones. We start off with an artist that Lando wanted to spotlight, Ashley Wood, and from there we move into some comics discussion before one of our big slices: Google's Android mobile OS. After discussing our predictions and feelings on this open source mobile OS, we move into the current U.S. economic crisis and from there just a bit of politics in general. Enjoy!

Opening Music: "Alive WIP v2" by George Carpenter
Closing Music: "Blau.ton" by Rauschwerk

          This is what phylodiversity looks like   

Following on from earlier posts exploring how to map DNA barcodes and putting barcodes into GBIF, it's time to think about taking advantage of what makes barcodes different from typical occurrence data. At present GBIF displays data as dots on a map (as do I), but barcodes come with a lot more information than that. I'm interested in exploring how we might measure and visualise biodiversity using just sequences.

Based on a talk by Zachary Tong (Going Organic - Genomic sequencing in Elasticsearch) I've started to play with n-gram searches on DNA barcodes using Elasticsearch, an open source search engine. The idea is that we break the DNA sequence into every possible "word" of length n (also called a k-mer or k-tuple, where k = n).

For example, for n = 5, the sequence GTATCGGTAACGAACTT would look like this:

GTATC TATCG ATCGG TCGGT CGGTA GGTAA GTAAC TAACG AACGA ACGAA CGAAC GAACT AACTT

The sequence GTATCGGTAACGAACTT comes from Hajibabaei and Singer (2009), who discussed "Googling" DNA sequences using search engines (see also Kuksa and Pavlovic, 2009). If we index sequences in this way then we can do BLAST-like searches very quickly using Elasticsearch. This means it's feasible to take a DNA barcode, ask "what sequences look like this?", and return an answer quickly enough for a user not to get bored waiting.
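The shingling step itself is easy to sketch. A minimal illustration in Python (the function name is mine, not something from Elasticsearch):

```python
def ngrams(seq, n=5):
    """Break a DNA sequence into every overlapping 'word' of length n."""
    return [seq[i:i + n] for i in range(len(seq) - n + 1)]

words = ngrams("GTATCGGTAACGAACTT")
print(words[0], words[-1], len(words))  # GTATC AACTT 13
```

Elasticsearch's n-gram tokenizer does essentially this at index time, so the query side only has to shingle the probe sequence the same way.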

Another nice feature of Elasticsearch is that it supports geospatial queries, so we can ask for, say, all the barcodes in a particular region. Having got such a list, what we really want is not a list of sequences but a phylogenetic tree. Traditionally this is a time-consuming operation: we have to take the sequences, align them, then feed that alignment into a tree-building algorithm. Or do we?
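As a sketch of what such a geospatial query could look like, here is an Elasticsearch geo_bounding_box filter expressed as a Python dict; the index layout and the field name "location" are my assumptions, not taken from the demo:

```python
def barcodes_in_box(top_left, bottom_right):
    """Build an Elasticsearch query body returning all documents whose
    'location' geo_point falls inside a lat/lon bounding box.
    (Field name 'location' is hypothetical.)"""
    return {
        "query": {
            "bool": {
                "filter": {
                    "geo_bounding_box": {
                        "location": {
                            "top_left": {"lat": top_left[0], "lon": top_left[1]},
                            "bottom_right": {"lat": bottom_right[0], "lon": bottom_right[1]},
                        }
                    }
                }
            }
        }
    }

# e.g. a box over western Scotland
body = barcodes_in_box((56.0, -6.0), (55.0, -4.0))
```

This body would be POSTed to the index's `_search` endpoint; combined with the n-gram index, the same engine answers both "what's near here?" and "what looks like this sequence?".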

There's growing interest in "alignment-free" phylogenetics, a phrase I'd heard but not really followed up. Yang and Zhang (2008) described an approach where every sequence is encoded as a vector of all possible k-tuples. For DNA sequences with k = 5 there are 4^5 = 1024 possible combinations of the bases A, C, G, and T, so a sequence is represented as a vector with 1024 elements, each element being the frequency of the corresponding 5-tuple. The "distance" between two sequences is the mathematical distance between their two vectors. Hence we no longer need to align the sequences being compared; we simply chunk them into all "words" of 5 bases in length and compare the frequencies of the 1024 different possible "words".
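A minimal sketch of this encoding, using Euclidean distance between the frequency vectors (function names are mine, and Yang and Zhang's actual distance measure may differ in detail):

```python
from itertools import product
from math import sqrt

K = 5
# All 4^5 = 1024 possible 5-tuples over the bases A, C, G, T.
ALL_TUPLES = ["".join(p) for p in product("ACGT", repeat=K)]

def tuple_frequencies(sequence, k=K):
    """Encode a sequence (length >= k assumed) as a 1024-element
    vector of k-tuple frequencies."""
    words = [sequence[i:i + k] for i in range(len(sequence) - k + 1)]
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    total = len(words)
    return [counts.get(t, 0) / total for t in ALL_TUPLES]

def ktuple_distance(seq_a, seq_b):
    """Alignment-free distance: Euclidean distance between the
    two frequency vectors."""
    va, vb = tuple_frequencies(seq_a), tuple_frequencies(seq_b)
    return sqrt(sum((a - b) ** 2 for a, b in zip(va, vb)))

print(len(ALL_TUPLES))  # 1024
print(ktuple_distance("GTATCGGTAACGAACTT", "GTATCGGTAACGAACTT"))  # 0.0
```

Identical sequences are at distance zero, and no alignment step is needed anywhere: each sequence is chunked and compared independently.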

In their study Yang and Zhang (2008) found that:

We compared tuples of different sizes and found that tuple size 5 combines both performance speed and accuracy; tuples of shorter lengths contain less information and include more randomness; tuples of longer lengths contain more information and less randomness, but the vector size expands exponentially and gets too large and computationally inefficient.

So we can use the same word size for both Elasticsearch indexing and for computing the distance matrix. We still need to create a tree, for which we could use something quick like neighbour-joining (NJ). This method is fast enough to be implemented in Javascript and hence run in a web browser (e.g., biosustain/neighbor-joining).
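The demo uses a Javascript NJ library; purely to illustrate the algorithm, here is a compact neighbour-joining sketch in Python that returns the tree topology as nested tuples (branch lengths omitted for brevity):

```python
def neighbour_joining(labels, dist):
    """Neighbour-joining on a dict-of-dicts distance matrix.
    Returns the unrooted tree topology as nested tuples."""
    d = {a: dict(dist[a]) for a in labels}
    nodes = list(labels)
    while len(nodes) > 2:
        n = len(nodes)
        # Net divergence of each node from all the others.
        r = {a: sum(d[a][b] for b in nodes if b != a) for a in nodes}
        # Pick the pair minimising the Q criterion.
        best, best_q = None, None
        for i in range(n):
            for j in range(i + 1, n):
                a, b = nodes[i], nodes[j]
                q = (n - 2) * d[a][b] - r[a] - r[b]
                if best_q is None or q < best_q:
                    best, best_q = (a, b), q
        a, b = best
        u = (a, b)  # join the pair into a new internal node
        d[u] = {}
        for c in nodes:
            if c not in (a, b):
                d[u][c] = d[c][u] = (d[a][c] + d[b][c] - d[a][b]) / 2
        nodes = [c for c in nodes if c not in (a, b)] + [u]
    return (nodes[0], nodes[1])

# Toy example: four taxa whose distances fit the tree ((A,B),(C,D)).
labels = ["A", "B", "C", "D"]
M = [[0, 5, 5, 4],
     [5, 0, 8, 7],
     [5, 8, 0, 3],
     [4, 7, 3, 0]]
dist = {a: {b: M[i][j] for j, b in enumerate(labels)}
        for i, a in enumerate(labels)}
print(neighbour_joining(labels, dist))  # (('A', 'B'), ('C', 'D'))
```

NJ is O(n^3) in the number of sequences, which is why capping the result set (as the demo does at 200 sequences) keeps in-browser computation feasible.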

Putting this all together, I've built a rough-and-ready demo that takes some DNA barcodes and puts them on a map. You can then draw a box on the map and the demo will retrieve the DNA barcodes in that area, compute a distance matrix using 5-tuples, then build a NJ tree, all on the fly in your web browser.

Phylodiversity on the fly from Roderic Page on Vimeo.

This is all very crude, and I need to explore scalability (at the moment I limit the results to the first 200 DNA sequences found), but it's encouraging. I like the idea that, in principle, we could go to any part of the globe, ask "what's there?" and get back a phylogenetic tree for the DNA barcodes in that area.

This also means that we could start exploring phylogenetic diversity using DNA barcodes, as Faith & Baker (2006) wanted a decade ago:

...PD has been advocated as a way to make the best-possible use of the wealth of new data expected from large-scale DNA “barcoding” programs. This prospect raises interesting bio-informatics issues (discussed below), including how to link multiple sources of evidence for phylogenetic inference, and how to create a web-based linking of PD assessments to the barcode–of-life database (BoLD).

The phylogenetic diversity of an area is essentially the length of the tree of DNA barcodes, so if we build a tree we have a measure of diversity. Note that this contrasts with other approaches, such as Miraldo et al.'s "An Anthropocene map of genetic diversity" which measured genetic diversity within species but not between (!).

Practical issues

There are a bunch of practical issues to work through, such as how scalable it is to compute phylogenies using Javascript on the fly. For example, could we do something like generate a one degree by one degree grid of the Earth, take all the barcodes in each cell and compute a phylogeny for each cell? Could we do this in CouchDB? What about sampling, should we be taking a finite, random sample of sequences so that we try and avoid sampling bias?

There are also data management issues. I'm exploring downloading DNA barcodes, creating a Darwin Core Archive file using the Global Genome Biodiversity Network (GGBN) data standard, then converting the Darwin Core Archive into JSON and sending that to Elasticsearch. The reason for the intermediate step of creating the archive is so that we can edit the data, add missing geospatial information, etc. I envisage having a set of archives, hosted say on GitHub. These archives could also be directly imported into GBIF, ready for the time that GBIF can handle genomic data.


  • Faith, D. P., & Baker, A. M. (2006). Phylogenetic diversity (PD) and biodiversity conservation: some bioinformatics challenges. Evol Bioinform Online. 2006; 2: 121–128. PMC2674678
  • Hajibabaei, M., & Singer, G. A. (2009). Googling DNA sequences on the World Wide Web. BMC Bioinformatics. Springer Nature.
  • Kuksa, P., & Pavlovic, V. (2009). Efficient alignment-free DNA barcode analytics. BMC Bioinformatics. Springer Nature.
  • Miraldo, A., Li, S., Borregaard, M. K., Florez-Rodriguez, A., Gopalakrishnan, S., Rizvanovic, M., … Nogues-Bravo, D. (2016, September 29). An Anthropocene map of genetic diversity. Science. American Association for the Advancement of Science (AAAS).
  • Yang, K., & Zhang, L. (2008, January 10). Performance comparison between k-tuple distance and four model-based distances in phylogenetic tree reconstruction. Nucleic Acids Research. Oxford University Press (OUP).

          On asking for access to data   

In between complaining about the lack of open data in biodiversity (especially taxonomy), and scraping data from various web sites to build stuff I'm interested in, I occasionally end up having interesting conversations with the people whose data I've been scraping, cleaning, cross-linking, and otherwise messing with.

Yesterday I had one of those conversations at Kew Gardens. Kew is a large institution that is adjusting to a reduced budget, a changing digital landscape, and a rethinking of its science priorities. Much of Kew's data has not been easily accessible to the outside world, but this is changing. Part of the reason for this is that Defra, which part-funds Kew, is itself opening up (see Ellen Broad's fascinating post Lasers, hedgehogs and the rise of the Age of Yoghurt: reflections on #OpenDefra).

During this conversation I was asked "Why didn't you just ask for the data instead of scraping it? We would most likely have given it to you." My response to this was "well, you might have said no". In my experience saying "no" is easy because it is almost always the less risky approach. And I want a world where we don't have to ask for data, in the same way that we don't ask to get source code for open source software, and we don't ask to download genomic data from GenBank. We just do it and, hopefully, do cool things with it. Just as importantly, if things don't work out and we fail to make cool things, we haven't wasted time negotiating access for something that ultimately didn't work out. The time I lose is simply the time I've spent playing with the data, not any time negotiating access. The more obstacles you put in front of people playing with your data, the fewer innovative uses of that data you're likely to get.

But it was pointed out to me that a consequence of just going ahead and getting the data anyway is that it doesn't necessarily help people within an organisation make the case for being open. The more requests for access to data that are made, the easier it might be to say "people want this data, let's work to make it open". Put another way, by getting the data I want regardless, I sidestep the challenge of convincing people to open up their data. It solves my problem (I want the data now) but doesn't solve it for the wider community (enabling everyone to have access).

I think this is a fair point, but I'm going to try and wiggle away from it. From a purely selfish perspective, my time is limited, there are only so many things I can do, and making the political case for opening up specific data sets is not something I really want to be doing. In a sense, I'm more interested in what happens when the data is open. In other words, let's assume the battle for open has been won; what do we do then? So, I'm essentially operating as if the data is already open, because I'm betting that it will be at some point in time.

Without wishing to be too self-serving, I think there are ways that treating closed data as effectively open can help make the case that the data should be (genuinely) open. For example, one argument for being open is that people will come along and do cool things with the data. In my case, "cool" means cross-linking taxonomic names with the primary literature, eventually to original descriptions and fundamental data about the organisms tagged with the taxonomic names (you may feel that this stretches the definition of "cool" somewhat). But adding value to data is hard work, and takes time (in some cases I've invested years in cleaning and linking the data). The benefits from being open may take time to appear, especially if the data is messy, or sufficiently niche that few people are prepared to invest the time necessary to do the work.

Some data, such as the examples given in Lasers, hedgehogs and the rise of the Age of Yoghurt: reflections on #OpenDefra will likely be snapped up and give rise to nice visualisations, but a lot of data won't. So, imagine that you're making the case for data to be open, and one of your arguments is "people will do cool things with it", eventually you win that argument, the data is opened up... and nothing happens. Wouldn't it be better if once the data is open, those of us who have been beavering away with "illicit" copies of the data can come out of the woodwork and say "by the way, here are some cool things we've been doing with that data"? OK, this is a fairly self-serving argument, but my point is that while internal arguments about being open are going on I have three choices:

  1. Wait until you open the data (which stops me doing the work I want to do)
  2. Help make the case for being open (which means I engage in politics, an area in which I have zero aptitude)
  3. Assume you will be open eventually, and do the work I want to do so that when you're open I can share that work with you, and everyone else
Call me selfish, but I choose option 3.

          300 Freeware and Open Source Programs for Every Use   
This is a list of over 300 free programs (open source or freeware licences), catalogued by specific use. Free Software For … … Protecting your computer [Free Antivirus – Firewall – Encryption …] analysing/checking the security of a network:
          Rilke CMS 0.95 beta released!   

The eighth release of Rilke CMS, version 0.95 beta has been made available at SourceForge. Rilke CMS provides easy content management for non-geeks. It allows you to easily publish a weblog, update a public website, or collaborate on a private Intranet site.

Rilke CMS is different from other open source content management systems in that it is easy to learn and use, and does not have a blocky look-and-feel.

It relies on five built-in CSS layouts/themes which can be changed with a single click.

Rilke CMS is currently available in English and German.

A live demo of Rilke CMS 0.95 beta is available at:

Rilke CMS 0.95 beta is a big feature release. It also fixes many long standing bugs.

New Features in 0.95 beta include:

* A redesigned default theme
* A built in permissions / user-level system (writer, editor, administrator)
* A complete user interface for administering users (only available to administrators)
* Customizable Links (via a styled drop down menu)
* Customizable Blogroll + Integrated Mini RSS Feed Aggregator
* A customizable contact info block

This release also fixes many bugs, including the (remaining) bugs that occurred with register_globals turned off in PHP.
The total number of major changes (including feature additions and bug fixes) is 20.

For a detailed list of changes, please review the Changelog [ ].

You can download version 0.95 beta here:

Your feedback is welcome. Any help in translating Rilke CMS to new languages is also welcome!

For detailed general, technical and setup information, please review the readme:

          Initial Public Release of Rilke CMS 0.8!   

I am happy to announce the initial public (open source) release of Rilke CMS 0.8!

Rilke CMS is a PHP / MySQL based content management system. It can be used to publish a variety of different websites, including personal and collaborative weblogs. While many open source content management systems are difficult for the average non-geek to use, this one strives to be easily usable by anyone who has used a word processor.

A live demo is available at:

The sourceforge page is available at:

It features:

* An easy to use WYSIWYG publishing screen. Anyone who has used a word processor before will be able to use Rilke CMS
* Easy look-and-feel adaptability, due to its reliance on CSS based layouts
* A PHP-MySQL based core, made available under the PHP license
* Integrated commenting system
* Encryption of visitor submitted email addresses (in comments) to prevent their harvesting by SPAMbots
* Approval system for visitor submitted posts
* Easy editing and/or deactivation of posts and comments
* Easy organization of posts through categories
* Easy syndication through XML based RSS feeds
* Extended functionality through plugins

A request for help:
If you fit any of the following profiles, the Rilke CMS project could use your help:

* Programmer : PHP / MySQL / JavaScript
* Designer: HTML / CSS
* Graphic Designer: Adobe Photoshop / Gimp / Flash
* Documentor: Technical Writing Skills (and familiarity with content management systems)

Please contact Jay Sheth (jayeshsh [at] …) if you are interested in helping out with Rilke CMS, or if you have suggestions for its improvement.

          Baidu's Political Censorship is Protected by First Amendment, but Raises Broader Issues   

Baidu, the operator of China’s most popular search engine, has won the dismissal of a United States lawsuit brought by pro-democracy activists who claimed that the company violated their civil rights by preventing their writings from appearing in search results. In the most thorough and persuasive opinion on the issue of search engine bias to date, a federal court ruled that the First Amendment protects the editorial judgments of search engines, even when they censor political speech. This post will introduce the debate over search engine bias and the First Amendment, analyze the recent decision in Zhang v. Baidu, and discuss the implications of the case for both online speech and search engines.

Search Engine Bias and the First Amendment

When users enter a query into a search engine, the search engine returns results ranked and arranged by an algorithm. The complicated algorithms that power search engines are designed by engineers and modified over time. These algorithms, which are proprietary and unique to each search engine, favor certain websites and types of content over others. This is known as “search engine bias.”

The question of whether search engine results constitute speech protected by the First Amendment is particularly important in the context of search engine bias, and has been the subject of considerable academic debate. Several prominent scholars (including Eric Goldman, Eugene Volokh, and Stuart M. Benjamin) have argued that the First Amendment encompasses results generated by search engines, thus largely immunizing the operators of search engines from liability for how they rank websites in search results. Others (primarily Tim Wu) have maintained that because search engine results are automated by algorithm, they should not be granted the full protection of the First Amendment.

Until now, only two federal courts had addressed this issue. See Langdon v. Google, 474 F. Supp. 2d 622 (D. Del. 2007); Kinderstart v. Google, 2007 WL 831806 (N.D. Cal. 2007). In dismissing claims against Google, Microsoft, and Yahoo brought by private plaintiffs dissatisfied with how their websites ranked in search results, both courts concluded after limited analysis that search engine results are protected under the First Amendment.

Baidu in Court

In May 2011, eight Chinese-American activists who described themselves as “promoters of democracy in China” filed a complaint against Baidu in the United States District Court for the Southern District of New York. The plaintiffs, who are residents of New York, alleged that Baidu had violated their First Amendment and equal protection rights by “censoring and blocking” the pro-democracy content they had published online from its search results, purportedly at the behest of the People’s Republic of China. While the plaintiffs’ content appeared in results generated by Google, Yahoo, and Bing, it was allegedly “banned from any search performed on … Baidu.”

Baidu responded by filing a motion for judgment on the pleadings. Baidu argued that the plaintiffs’ suit should be dismissed based on the longstanding principle that the First Amendment “prohibits the government from compelling persons to speak or publish others’ speech.” Baidu also accused the plaintiffs of bringing a meritless lawsuit “for the purpose of drawing attention to their views.”

Last month, United States District Judge Jesse M. Furman concluded in a thoughtful decision that the results returned by Baidu's search engine constituted speech protected by the First Amendment, dismissing the plaintiffs' lawsuit in its entirety.

Judge Furman began his analysis with a discussion of Miami Herald Publishing Co. v. Tornillo, a 1974 decision in which the Supreme Court held that a Florida statute requiring newspapers to provide political candidates with a right of reply to editorials critical of them violated the First Amendment. By requiring newspapers to grant the messages of political candidates access to their pages, the Florida law imposed an impermissible content-based burden on newspapers' speech. Moreover, the statute would have had the effect of deterring newspapers from running editorials critical of political candidates. In both respects, the statute was an unconstitutional interference with newspapers' First Amendment right to exercise "editorial control and judgment."

The court then cited Hurley v. Irish-American Gay, Lesbian & Bisexual Group of Boston, which extended the Tornillo principle beyond the context of the press. In that case, the Supreme Court ruled that Massachusetts could not require organizers of a private St. Patrick’s Day parade to include among marchers a group of openly gay, lesbian, and bisexual individuals. This was true even though parade organizers did not create the floats themselves and did not have clear guidelines on who and what groups were allowed to march in the parade. Once again, the Court held that requiring private citizens to impart a message they did not wish to convey would “violate[] the fundamental rule of protection under the First Amendment . . . that a speaker has the autonomy to choose the content of his own message.”

These decisions taken together, according to the court, established four propositions critical to its analysis. First, the government “may not interfere with the editorial judgments of private speakers on issues of public concern.” Second, this rule applies not only to the press, but to private companies and individuals. Third, First Amendment protections apply “whether or not a speaker articulates, or even has, a coherent or precise message, and whether or not the speaker generated the underlying content in the first place.” And finally, that the government has noble intentions (such as promoting “press responsibility” or preventing hurtful speech) is of no consequence. Disapproval of a speaker’s message, regardless how justified the disapproval may be, does not legitimize attempts by the government to compel the speaker to alter the message by including one more acceptable to others.

In light of these principles, the court reasoned that “there is a strong argument to be made that the First Amendment fully immunizes search-engine results from most, if not all, kinds of civil liability and government regulation.” In retrieving relevant information from the “vast universe of data on the Internet” and presenting it in a way that is helpful to users, search engines make editorial judgments about what information to include in search results and how and where to display it. The court could not find any meaningful distinction between these judgments and those of a newspaper editor deciding which wire-service stories to run and where to place them, a travel guidebook writer selecting which tourist attractions to mention and how to display them, or a political blog choosing which stories it will link to and how prominently they will be featured.

Judge Furman made clear that the fact that search-engine results are produced algorithmically had no bearing on the court's analysis. Because search algorithms are written by human beings, they "inherently incorporate the search engine company engineers' judgments about what materials users are most likely to find responsive to their queries." When search engines return results, ordering them from first to last, "they are engaging in fully protected First Amendment expression," the court concluded.

The court declined to see any irony in holding that the democratic ideal of free speech protects Baidu’s decision to disfavor speech promoting democracy. “[T]he First Amendment protects Baidu’s right to advocate for systems of government other than democracy (in China or elsewhere) just as surely as it protects Plaintiffs’ rights to advocate for democracy.”

Implications for Online Speech and Search Engines

As the amount of content on the Internet grows exponentially, search engines play an increasingly important role in helping users navigate an overwhelming expanse of data – Google alone processes 100 billion search queries each month. As such, there is a definite public interest in shielding search engines from civil liability and government regulation. The decision in Zhang v. Baidu promotes strong constitutional protections for some of the Internet's most heavily relied-upon intermediaries, making it clear that search engines cannot be compelled to include in their results the speech of others. Though not addressed in this case, these protections complement those guaranteed to search engines by Section 230 of the Communications Decency Act. CDA § 230(c)(1) immunizes search engines from most kinds of tort liability for publishing third-party content, while CDA § 230(c)(2) protects their decisions to remove it.

If search engines were subject to civil liability in the United States for the ways in which they display and rank content in search results, individuals would have the power to alter or censor those results via the federal courts. In addition to the obvious financial consequences of civil liability for search engine operators (the plaintiffs in Zhang v. Baidu sought more than $16 million in damages), such a course could result in significant compliance burdens. To better understand how this might play out, one must look no further than this order by a French court requiring Google to remove from search results at the request of a British executive certain images which had been deemed to violate his right of privacy in a United Kingdom lawsuit. The court seemed to take the position that Google’s argument that the First Amendment protected its search results was inconsistent with the “neutral and passive role of a host,” as required to claim the protection of French intermediary law. Marie-Andree Weiss did an excellent write-up on this controversial decision for the Digital Media Law Project.

Though it has been rightfully heralded for reaching the conclusion that operators of search engines are exercising their First Amendment rights when deciding which websites to display in what order, the decision in Zhang v. Baidu has serious and potentially negative practical consequences for online speakers. Search engines play a critical role in helping online speech be discovered. Allowing search engines to prevent certain types of content from being indexed in search results could mean that some online speech will be nearly impossible to find without a direct link to where it exists online. A tremendous amount of power over what online speech can be easily located now rests in an ever-dwindling number of private entities. Proposals for a publicly-controlled, open source search engine belonging to “The People” have yet to gain traction.

Attorneys for the plaintiffs in Zhang v. Baidu have announced plans to appeal the decision to the U.S. Court of Appeals for the Second Circuit. Should the Second Circuit adopt the line of reasoning laid out so clearly by the district court, plaintiffs across the country considering bringing a lawsuit over search engine bias would be hard-pressed to overcome the First Amendment hurdles put in place by this likely influential precedent.

Natalie Nicol earned her J.D. from University of California, Hastings College of the Law. During law school, she worked as an intern at the Digital Media Law Project, the Electronic Frontier Foundation, and the First Amendment Project.

(Image courtesy of Flickr user simone.brunozzi pursuant to a Creative Commons CC BY-SA 2.0 license.)

          CeBIT 2009: Webciety is the buzzword in HR management   
CeBIT – Hanover – The ubiquity of the Internet is also making itself felt in personnel management at CeBIT 2009. "Webciety" is the buzzword. The topics range from new possibilities in e-learning, through open source programs for running and managing HR solutions, to online communities, the possibilities of e-recruiting, and employer branding through company blogs / microblogs such as Twitter.
          PalmFocus Palmcast - September 12, 2005   
This palmcast is 5 minutes and 55 seconds long. It discusses a great example of an open source application in the Palm community called CryptoPad.
          Shopping for video editing software   

I want video editing software, under $100, for Win7. What does this have to do with woodworking? Nothing directly, but it's becoming more common to make videos of projects and builds. Please share your experience with any of the following software, or with software you feel I've overlooked that meets the requirements (under $100, Win7 compatible). I'm also looking for comments on Sony MS 12 vs. 13. Thanks.

What I’ve tried: [updated]

Premiere 2.0: good but outdated, no HD video, new version is too expensive

VideoPad trial ($40 to buy): I liked it but feel it’s expensive for what it is. There is a free version that perhaps has a few more features than MSMM.

MSMM Microsoft Movie Maker ($free): Simple to use, intuitive, not compatible with 3gp files (my phone), output is questionable. Basically this works if all you want to do is string clips together with no audio or visual tweaks.

Jahshaka (open source), Blender (open source); both way above my paygrade.

Sony Movie Studio 13 ($30): Interface is bland and Win98-ish; bare-bones editing software. A very minor step up from MSMM. Might as well use the free version of VideoPad.

Sony Movie Studio 13 Platinum ($80): Same as above but with a few more features and extras. Compared to Corel the interface is slightly clumsy but easy enough to use. Lacks auto-correct functions for audio and visuals. Buttons are childishly large even in the advanced setting, and the colour scheme choices are white (the worst possible background for video editing) and a medium grey. Blandness McBlandybland. However, it is rock solid stable.

Cyberlink Power Director 12 ($62): Kind of a middle ground, good interface, works, very resource hungry. Uses 2X the memory of Sony MS just sitting open with no project. Only one of the bunch that slows my computer. The interface is like a slightly improved version of Sony MSP.

Corel Video Studio Pro X7 ($65): Beautiful modern interface, very intuitive, loads of extras (transitions, effects, etc). I initially had trouble editing audio but with help realized it was a mismatch between the source video and project settings, seems to be sorted now. There is also an Ultimate version which is only a few extra bucks and has a few more effects. This one is the most fun to use but maybe not quite as fast as Sony.

Pinnacle Studio 17 ($83): Decided not to install it. Recent reviews talk of a lengthy and very difficult installation process that takes hours; and reviews in general of the newer versions were not very favorable.

          Bookmarks for 18 mag 2015 through 22 mag 2015   
These are my links for 18 mag 2015 through 22 mag 2015: kanbanik – Free and open source kanban board – Google Project Hosting – Kanbanik is a free and open source kanban board which can be used for personal kanban as well as for managing of small teams. Kanboard – Simple and open source … Continue reading "Bookmarks for 18 mag 2015 through 22 mag 2015"
          aTunes, the cross-platform, open source media player that will amaze you   

Are you looking for a program that beats the performance of iTunes, Winamp or Amarok? It doesn't matter which operating system you have: take a look at aTunes and you'll be convinced. This media player makes use of…
          magento website design Bundaberg (30,Dandar Drive,Southport QLD 4215.)   
magento website design Bundaberg. What is the use of Magento? What is Magento? Magento is one of the most powerful and fastest-growing ecommerce scripts, created by Varien. It is an open source platform using Zend PHP and MySQL databases. Magento offers great flexibility through its modular architecture. It is completely scalable and has a wide range of control options that its users appreciate. How much is Magento? This will give you a very clear idea of how much will i...
          oregon gold and gem hunting   

Elks live in large herds in the woods, and the Rocky Mountains and hills of North America and Europe. One of the most important accessories for hunters is a good pair of binoculars.

First of all, by far the most important factor with long range hunting, whether it be long range deer hunting, bird hunting, or virtually any other kind of target, is the ability to be able to read the wind correctly. Also inevitable is the hunting rifles and other weapons that might prove essential for safety measures. This will give you some idea of what other hunters have found that work well.

Also remember that if you are taking dogs on a hunt they too need to be considered. Also, don't hurry in your decision of on which rifles to choose, this is a very important decision, and you need to weigh all the options before you make your decision. You could use a life-like coyote decoy or a rabbit decoy as well as others, the choice is up to you.

Many courses cover hunting laws in your area, which you should be familiar with before you go hunting. The tips included in this article are only the basic ones, but, as deer hunting is probably the most popular form of hunting activity, there are more than enough open sources for you to learn more advanced deer hunting tips.


          Why should you work in the open?   
Recently, I have been reflecting on, discussing, and writing about open source. After the publication of an article on the Wired web site, one of my colleagues, Kristofer Joseph, came to me and essentially said: "I think there is something …"
Open Source Continuous Replication / Cluster Synchronization Thing
          Advertisement: BE LINUX   

Today, a short post from Stéphane, one of my classmates. I have invited him to contribute to Narcissique Blog from time to time, so you will see a few posts from him soon. Enjoy.



We had already seen Microsoft ads, the instantly recognizable Apple ads, and recently even ads for eBay!
But here is something new: advertisements for Open Source software!

This ad was made for LINUX!
See for yourself:

Personally, the TUX penguins won me over a long time ago! ;-)

More concretely, I find this ad very well done and its style very appealing!
What about you?


          On Standard Ebooks you can download free, public-domain books in a beautiful format   

Standard Ebooks: free and liberated ebooks, carefully produced for the true book lover.

While there are many websites where you can download books for free and without copyright conflicts, such as the well-known Project Gutenberg, which collects works already in the public domain, not all of them offer the "prettiest" books.

By that we mean that when you load these ebooks onto your e-reader, the formatting is often not the best. You may run into ugly, blurry, or inconsistent typography, errors, and almost always missing covers and poor metadata.

This is the problem Standard Ebooks sets out to solve: a website dedicated to carefully producing free, public-domain ebooks for true book lovers.

Free and Beautiful Ebooks

Standard Ebooks is a nonprofit, open source project. The site takes books from sources such as the aforementioned Project Gutenberg, formats them, improves their typography, and, applying a professional and carefully designed style guide, modernizes the editions of many books and even corrects them.

The project aims to take better advantage of the advanced technology of modern ebook readers and their many features. Hence the special attention to modern typography, detailed metadata with links to encyclopedia sources for the most curious readers, and support for tables of contents, footnotes, high resolution and vector graphics, and quality covers.

You can browse the Standard Ebooks library by release date or alphabetically. Downloads are available in epub, azw3 (for Kindle devices), kepub (for Kobo devices), and also epub3 (the advanced ebook format that is still not widely supported by most readers).

At Genbeta | The complete guide to mastering Calibre (and organizing your e-book library)


The article "En Standard Ebooks puedes descargas libros gratis y de dominio público en un hermoso formato" was originally published on Genbeta by Gabriela González.

          Mark Shuttleworth confirms Ubuntu is not abandoning the desktop, but will focus more on Cloud/IoT   

After Ubuntu shelved its big desktop and mobile plans, many questions arose about where Ubuntu goes from here. CEO Mark Shuttleworth has now given an interview on the subject.

Shuttleworth explained that today's computing world splits into three legs: personal computing, data center/cloud, and edge/IoT. Ubuntu has already become the standard operating system of the data center/cloud world, and it should play a significant role in edge/IoT as well.

He admitted that he misjudged the personal computing market in trying to converge PC, phone, and tablet, but insisted that the desktop remains an important market Ubuntu will not abandon. In business terms, however, Ubuntu will focus on the strong data center/cloud market and the nascent edge/IoT market.

Source: OMG Ubuntu

          The day has come: Fedora will play MP3 files by default, now that the patents have expired   

Linux users know well that Linux distributions cannot play MP3 files right after installation; a codec has to be installed separately afterwards. The reason is that MP3 was covered by patents: any operating system that wanted to support it had to pay license fees to the Fraunhofer institute, the patent holder.

But the MP3 patents expired in November 2016, and some distributions have already begun adapting. Fedora has announced that it will add MP3 support by default (technically, by including the gstreamer1-plugin-mpg123 package out of the box) in the near future, though it has not given an exact date (likely in time for Fedora 26, the next release).

Source: Fedora Magazine


          Ubuntu 17.10 wraps back around to the letter A with the name Artful Aardvark   

Ubuntu 17.04 used the codename Zesty Zapus, reaching the letter Z. What fans wondered was which codename Ubuntu 17.10 would use next: would it wrap back around to A?

Mark Shuttleworth, the Ubuntu project lead, has not announced it yet, but this time the name has surfaced in Ubuntu's bug tracker: Artful Aardvark.

The aardvark is a mammal native to Africa; its name means "earth pig" in Afrikaans.

Source: Ubuntu Launchpad, OMG Ubuntu; image from Wikipedia


          Ubuntu GNOME releases 17.04, announces it will merge into mainline Ubuntu   

The Ubuntu GNOME project has released version 17.04, built on the latest GNOME 3.24, and announced its future direction: merging into mainline Ubuntu.

The Ubuntu GNOME and Ubuntu Desktop (Unity) teams will be combined into a single team. The Ubuntu GNOME team is now working out a plan with the Canonical side for what to do and when, and will announce more later.

Users of Ubuntu 16.04 LTS or Ubuntu GNOME 16.04 LTS will upgrade straight to Ubuntu 18.04 LTS next year, marking the end of the two separate distribution lines.

Source: Ubuntu GNOME

          JISC Elluminate Wednesday Session   

We have been invited by JISC to talk about Cloudworks, and more specifically about the open source version of Cloudworks, which we have called 'CloudEngine', as part of their Elluminate Wednesday sessions. The presentation is embedded below (and available to download from Slideshare).



          Sprite and Image Optimization Framework & DotNetNuke   


The folks over at Microsoft pushed an open source release of their Sprite and Image Optimization Framework onto CodePlex last week. After a few tweaks I was able to get it running within DotNetNuke; here is how…

Download the Sprite and Image Optimization Framework, unzip it, and open the ImageOptimizationFramework.sln file using Visual Studio 2010.

Getting it 3.5 Compatible

If your DotNetNuke installation is still running on .NET 3.5, you will need to change the "Target Framework" for both the "Web Forms Control" and the "Image Optimization Framework" projects down to 3.5. Right-click each project and, under the "Application" tab's Target Framework, choose ".NET Framework 3.5", then save and close both property windows.

Next, when you attempt to build (now that we are targeting 3.5), you will get a compiler error. In ImageOptimization.cs I changed the following method to look like:

private static void RebuildFromCacheHit(string key, object value, CacheItemRemovedReason reason) {
  var data = (object[])value;
  string path = (string)data[0];
  IEnumerable<string> cachedDirectoriesBelowCurrentFolder = (IEnumerable<string>)data[1];
  // ... remainder of the method is unchanged ...
}

That is, I replaced their use of the Tuple class with a plain object[] array.

The second method that needed to be changed is:

private static void InsertItemIntoCache(string path, IEnumerable<string> directoriesBelowCurrentFolder) {
  string key = Guid.NewGuid().ToString();
  //var value = Tuple.Create(path, directoriesBelowCurrentFolder);
  var value = new object[] { path, directoriesBelowCurrentFolder };
  HttpRuntime.Cache.Insert(key, value, new CacheDependency(path), Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration, CacheItemPriority.NotRemovable, RebuildFromCacheHit);
}

Notice again that the change converts the Tuple into an object[]. I am sure there is a more elegant way, but this is good enough for this round of testing.

The solution should compile now. 


If you navigate to the folder %My Download Directory%\sprite-image-optimization-framework\WebFormsControlSample\Bin\ you will see the two compiled DLLs and their PDB files. XCopy-deploy these into your local (testing) DNN installation's bin folder.

You will now need to crack open the web.config for your DNN installation and add the following line under the "system.web/httpModules" section:

<add type="Microsoft.Samples.Web.ImageOptimizationModule" name="Microsoft.Samples.Web.ImageOptimizationModule"/>

This HttpModule assumes there is a folder named App_Sprites at the root of your web site. Create that folder now; otherwise you will get a yellow screen of death on first load.

The HttpModule wires up a cache dependency on the App_Sprites folder after "indexing" its content. It takes all of the images in that folder and creates the sprite image and the associated CSS files. I also noticed that it will rebuild these files if they are deleted for any reason.


Notice the CSS files, a .dat file, and the new sprite0.png file; all of these were created by the HttpModule from the existing images in that folder.

Within your module's folder structure, create an "App_Sprites" folder and copy your images into it, like so:


Notice that I added references to both ImageOptimizationFramework.dll and ImageSprite.dll in this project.

In your module's code-behind file, we follow the same pattern as the HttpModule in order to add our cache dependency to the ImageOptimizer, like so:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using Microsoft.Samples.Web;

namespace DNN.Sprites.DesktopModules.SpriteSample
{
    public partial class Sprite : DotNetNuke.Entities.Modules.PortalModuleBase
    {
        private static readonly object _lockObj = new object();
        private static bool _hasAlreadyRun;

        static Sprite()
        {
            lock (_lockObj)
            {
                if (_hasAlreadyRun)
                    return;
                _hasAlreadyRun = true;

                ImageOptimizations.AddCacheDependencies(System.Web.HttpContext.Current.Server.MapPath("~/DesktopModules/SpriteSample/App_Sprites/"), rebuildImages: true);
            }
        }

        protected void Page_Load(object sender, EventArgs e)
        {
        }
    }
}


And finally, in our .ascx file we can now use the new ImageSprite control to refer to our images:


<%@ Control Language="C#" AutoEventWireup="true" CodeBehind="Sprite.ascx.cs" Inherits="DNN.Sprites.DesktopModules.SpriteSample.Sprite" %>
<%@ Register TagPrefix="asp" Namespace="Microsoft.Samples.Web" Assembly="ImageSprite" %>
<asp:ImageSprite runat="server" ImageUrl="~/DesktopModules/SpriteSample/App_Sprites/windowsLogo.png" />
<asp:ImageSprite ID="ImageSprite1" runat="server" ImageUrl="~/DesktopModules/SpriteSample/App_Sprites/xbox.png" />
<asp:ImageSprite ID="ImageSprite2" runat="server" ImageUrl="~/DesktopModules/SpriteSample/App_Sprites/office.png" />


Which emits:


With the HTML:


<div id="dnn_ctr6123_ModuleContent" class="SpriteContent">
    <img class="windowsLogo-png" src="data:image/gif;base64,R0lGODlhAQABAIABAP///wAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==" style="border-width:0px;" />
    <img id="dnn_ctr6123_Sprite_ImageSprite1" class="xbox-png" src="data:image/gif;base64,R0lGODlhAQABAIABAP///wAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==" style="border-width:0px;" />
    <img id="dnn_ctr6123_Sprite_ImageSprite2" class="office-png" src="data:image/gif;base64,R0lGODlhAQABAIABAP///wAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==" style="border-width:0px;" />
</div>

Pay close attention to the class attribute as well. The framework uses a fairly simple naming convention to emit the attribute predictably.
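Judging from the emitted markup, the class name appears to be just the image file name with its dot swapped for a hyphen (windowsLogo.png becomes windowsLogo-png). A guess at that rule, sketched in Python for illustration; the framework itself is C#, and its real implementation may differ:

```python
# Hypothetical reconstruction of the sprite framework's class-naming rule,
# inferred from the emitted HTML above; the actual C# implementation may differ.
def sprite_css_class(file_name: str) -> str:
    """Derive the CSS class emitted for an image file, e.g. "xbox.png" -> "xbox-png"."""
    return file_name.replace(".", "-")

print(sprite_css_class("windowsLogo.png"))  # windowsLogo-png
```

Because the class is derived mechanically from the file name, you can predict it and reference it from your own stylesheets.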

Although the project seems to be in its early stages, I can see it receiving very broad adoption in the near future.

          Why did Open Source Bounties Fail?   
I’m shocked. I thought bounties would supercharge open source development. You were the chosen one! (cringe) So today I wanted to post a bounty for Stasher. I did so on BountySource, but then I realised it was broken and abandoned. I looked further afield and it is the same story: a digital landscape littered with failures. …
          InvoicePlane Is it open source and can it be improved?   

@bekbek6 wrote:


Is the InvoicePlane system open source, and can it be customized for private use?

Posts: 3

Participants: 2


          MoioSMS: free SMS   
MoioSMS lets you send free SMS messages quickly and conveniently! It is also free of charge and Open Source, under the GPL license. MoioSMS is a program that integrates with your desktop and lets you send SMS messages using the free service
          Introducing Forio Contour: Interactive JavaScript Visualization Library    

Our new library, Forio Contour, is now available — free and open source. Use it to create interactive JavaScript charts for data visualization.

          Building a Web Link Community   

Building an Indonesian Web Link Community

web indonesia

There is a political philosophy that says, "There are no permanent friends and no permanent enemies; there are only shared interests." Perhaps this philosophy is what inspired IBM to partner with the open source community to face Microsoft's dominance in server applications.

Inspired by that philosophy, and by reading and trying to understand link building from Darren Rowse's post (12 Tools and Techniques for Building Relationships with Other Bloggers) and from a link-building ebook I found, I drew some conclusions and came up with the idea of inviting Indonesian bloggers to build an online community together: a place to get to know one another and to build win-win (not win-lose) cooperation on traffic. To face what? Well, you could say to face the fully commercial sites whose budgets are big enough to buy every traffic-generating facility; or, at the very least, it is a way to meet other Indonesian bloggers, and in time it could become a personal catalog of Indonesian websites and blogs.

Yes, this is about blog promotion, which I think is more effective than merely exchanging links and placing them in a sidebar blogroll, since visitors to our blogs generally never glance at the links in our blogrolls.

What I mean here is spreading this post as a chain, because the main post is what draws the most attention from a blog's visitors.

This is a voluntary invitation: join if you are interested, and ignore it if not. Senior bloggers whose traffic is already high are also welcome to take part. Here are the steps: 1. Create a post titled Web Indonesia. 2. Copy and paste the entire contents of this post into your own. 3. At the top of the block of code in the text area above, add your site's URL and title below mine, with the next sequence number after mine, and so on down the chain. 4. Then keep the post's URL somewhere permanent, such as your sidebar, so it is easy to find later.

One of my main goals is to spread blogging culture, to help us get to know one another, and to help beginners with traffic.

I am not tricking you into placing anchor text or links to my site or anyone else's; there is not a single link pointing to my site or any other, just plain-text addresses so we can get to know one another. The Web Indonesia logo above is just a shared logo, with no links or keywords embedded in it.

And if you still suspect a trick because my site sits above yours in the list, then just ignore this post.

If this goes well, I think this method of promotion is no less effective than exhaustingly hunting down links and RSS submissions; links from RSS or link exchanges can disappear for many reasons, but a post will generally remain forever.

It can also serve as a way to trace the chain of links, via wherever you got this post from.

The result of copying, pasting, and spreading this post onward is exactly this content, with the list of links under the Web Indonesia logo growing ever longer.

If you want to copy the HTML code directly,
click here
(whether to include this last paragraph is up to you)


2.O-OM.COM - Http://

3. Keseharian -

4. Pecinta Sejagad -

          Now the distro goes back to being a distro   
And so came the day when Mark Shuttleworth had to eat humble pie and announce this to the world: 'we will end our investment in Unity 8, the phone and convergence shell'. It took five years to figure it out, but in the end they got there. It was nice to believe in it, it was nice to propose open source "convergence", […]
          Donate a laptop!!   
Schools for India is accepting used laptops in working condition for training students in villages in Darbhanga, Bihar. If you are interested in donating your laptop, please send an email to with the make and model number. Please remove all your personal data; we will format the machines and install open source software. The shipping details will […]
          Computing the Lorenz curve with the open source statistics package R   
One of the concentration measures used to assess regional inequalities is the Lorenz curve. To apply it (Papadaskalopoulos, 2000), the individual geographic areas, each with its land area and corresponding population, are sorted in descending order of population density; for each of the two columns, area and population, the percentage distribution is computed, along with the cumulative series of those percentage shares. Then, in a rectangular coordinate system, the Lorenz curve is plotted, with the abscissa measuring the cumulative series of the percentage distribution of the population, and the ordinate measuring the
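The procedure just described (sort the regions by descending density, accumulate the percentage shares of population and land area, then plot the pairs) can be sketched as follows. The article works in R; this illustration uses plain Python with made-up sample regions:

```python
# Sketch of the Lorenz-curve computation described above. The article uses R;
# this is an illustrative Python version with fictitious sample figures.
def lorenz_points(areas, populations):
    """Return (cumulative population %, cumulative area %) points of the
    Lorenz curve, with regions sorted by descending population density."""
    regions = sorted(zip(areas, populations),
                     key=lambda ap: ap[1] / ap[0], reverse=True)
    total_area, total_pop = sum(areas), sum(populations)
    points = [(0.0, 0.0)]
    cum_area = cum_pop = 0.0
    for area, pop in regions:
        cum_area += 100.0 * area / total_area
        cum_pop += 100.0 * pop / total_pop
        points.append((cum_pop, cum_area))
    return points

# Three fictitious regions: the densest holds 50% of the population on 10% of the area.
pts = lorenz_points([10, 20, 70], [50, 30, 20])
```

With equal densities everywhere the points fall on the diagonal; the further the curve bows away from it, the greater the regional concentration.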
          198: With Ashe Dryden   
Intro This week we talked with Ashe Dryden about how companies, open source projects, and conferences, can become more diversified. Links @AsheDryden AlterConf Fund Club The Diverse Team The Inclusive Event Women of Color in Tech Chat Hands Up United Feminism is For Everybody Sponsors WP Migrate DB Pro 24:10 The easiest way […]
          Eight Extraordinary Young People Who Changed the World   
1. Mark Zuckerberg (now 25 / United States)
When he created the social networking site Facebook, Mark Zuckerberg was only 19 years old. He built Facebook to help create a social network for the students at his university at the time, Harvard University in the United States.
mark zuckerberg
Today, Facebook is the second-largest social networking site after MySpace. Under its founder's leadership, the site keeps growing day by day. Millions of new users sign up every month!

2. Steve Shih Chen (31 / Taiwan-US), Jawed Karim (30 / Germany-US), Chad Hurley (32 / US)

They are the creators of the online video-sharing site YouTube, which they founded in 2005. At the time, Chad was 28 and Steve 27.
YouTube founders
In October 2006, YouTube was acquired by Google for US$1.65 billion (Rp16.9 trillion).

3. Jerry Yang (40 / Taiwan-US) and David Filo (42 / US)
In 1995, these two founded Yahoo!, the second-largest search engine after Google. At the time, Jerry was 26 and Filo 28.
Yahoo! founders
Last year, the giant Microsoft sought to buy Yahoo! for a discussed US$44.6 billion (Rp458.8 trillion). The plan fell through, but Microsoft and Yahoo! have not ruled out cooperating in the future.

4. Matt Mullenweg (25 / US)
Matt Mullenweg is the creator of the free blogging platform WordPress. He was just 19 when he started building its precursor.
WordPress became famous in a short time because it is easy to use and constantly updated. By 2008 it counted 230 million regular visitors and 6.5 billion WordPress page views, plus 35 million new posts, growing by an average of four million posts per month.
Matt, who visited Jakarta in January 2009, says he will not sell WordPress to a big company at a sky-high price, and that he does not seek profit from WordPress; he already earns his profits from several companies he owns.

5. Tom Anderson (38 / United States)
Tom Anderson launched MySpace in August 2003. Accounts of his age at the time vary, but various sources say Tom was under 30 when he created MySpace.

Tom Anderson

Today, MySpace is one of the largest social networking sites in the world, competing fiercely with Facebook. MySpace has been used by more than 100 million people, with its biggest user base in the United States.

MySpace's strength lies in music. When its newest music feature (free "audio streaming") launched on September 25, 2008, billions of songs were played by its users within just a few days. This strength has led many to predict that MySpace could influence the online music industry.

6. Blake Aaron Ross (23 / US)
Blake Ross is the young genius who created Mozilla, the web browser. Mozilla was released to the public in November 2004, when Blake was just 19!

Blake Ross

Mozilla was later combined with Firefox, a program he created together with Dave Hyatt, and so it became Mozilla Firefox.

Mozilla Firefox was quickly embraced by internet users around the world. Among other things, it is considered safer and easier to use than its competitors, and it has managed to capture part of the browser market long dominated by Microsoft Internet Explorer.

Many people have praised Blake Ross's success. The Mozilla Foundation's director of engineering, Chris Hoffman, said: "In the 'Open Source' world, a person's position depends on their skill. And Blake Ross has all the skills needed."

7. Pierre Omidyar (41 / France-US)
Pierre Omidyar launched eBay on September 4, 1995, when he was 28.

eBay is an online auction site. Pierre originally built eBay to help a close friend who wanted to sell a product. Before long, however, eBay grew rapidly into a highly promising business. Today, eBay is the largest online auction site in the world.

According to Pierre, in an interview, eBay's success rests on two things. First, the strength of its community of sellers and buyers, which numbers in the hundreds of millions. Second, the good values it upholds. In business, eBay believes that people are basically good and that everyone has something of value to offer others. It also believes that honesty and openness bring out the good in people. eBay's "golden" rule is thus to recognize and respect every person as a unique individual, and it hopes its members will follow the example it sets.

8. Larry Page (36 / US) and Sergey Brin (35 / US)
The two launched Google on September 4, 1998, when they were just 25 and 24. Their first "office" was a garage.

Google, the search engine that can surface every kind of information, is loved by many people, especially students. In just a few years, Google grew enormously and earned billions of dollars. Today, Google can be called the number-one search engine in the world.

larry and sergey

The success story of Larry Page and Sergey Brin in creating and growing Google has inspired many young people around the world, especially information technology enthusiasts, who hope to build new programs that are useful to society and financially rewarding.

"Thomas A. Edison once said: 'Genius is 1% inspiration and 99% perspiration. There is no substitute for hard work. Luck is what happens when opportunity meets preparation.' The young people above achieved success not only because they had talent but also because they worked hard. We too can succeed if we are willing to work hard, because God has given each of us extraordinary talent as well."

Become one of the people who will change the world... Good luck!
Source:

          Top Ten Business WordPress Themes 2015   
WordPress has added more convenience to our lives, particularly for those who blog and create content on the web. WordPress was launched in 2003 as an open source website creation and blogging tool. It is created by the people, for the people. In fact, it is known as a complete content management system (CMS). […]
          Xamarin/Twitter Article in Visual Studio Magazine   

Originally posted on:

Wally McClure wrote a new article for Visual Studio Magazine, Using OAuth, Twitter and Async To Display Data. He explains a couple ways to work with OAuth and perform Twitter queries, including using LINQ to Twitter. His examples include both Xamarin.Android and Xamarin.iOS. Follow @wbm on Twitter for more Xamarin development tips.



          J2EE Web Developer -   
This J2EE Web Developer position features:

• Great pay, up to $120K

Immediate need for a J2EE Web Developer at a big financial company. Great benefits. Apply for this great position as a J2EE Web Developer today!

Location: Arlington, VA

Length: Direct Hire

Rate: 120k

Our client is looking for an expert J2EE Web Developer to:
• Develop J2EE web applications using popular frameworks including JSF, PrimeFaces, Google Web Toolkit (GWT), AJAX, and others
• Develop Web Services using JAX-WS
• Develop EJBs using a J2EE application server
• Use open source development tools such as NetBeans, Eclipse, JasperReports, PMD, CruiseControl, and CVS
What qualifications are required?
• Eight-plus years of progressive experience in application systems design and programming
• Experience in application development using object-oriented principles and design patterns
• Experience with J2EE application servers
• Hands-on experience with J2EE technologies, specifically JSF, EJB, JDBC, and JMS
• Working knowledge of Unix, XML, FTP, Oracle, SQL Server, PL/SQL, Object-Oriented Analysis & Design (OOAD), and web-based technologies (HTML, CSS)
• Ability to investigate new technologies and identify their applicability to the enterprise and to address business needs
• Be self-directed and results/goal oriented
• Requires a B.S. in computer science or equivalent experience


J2EE, JSF, PrimeFaces, Google Web Toolkit (GWT), AJAX, EJB, Netbeans, JDBC, JMS , Eclipse, Jasper, PMD, CruiseControl, CVS, Unix, XML, FTP, Oracle, SQL Server, PL/SQL, HTML, CSS

We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Senior Java Developer with Web UI(DC) (Job #6041)   
Using modern open source Java and JavaScript frameworks in a continuous integration environment, you'll join our team of developers building the next generation of customer engagement systems for a federal agency.
• Work with product teams and product owners to define and develop UI requirements for large internet-facing, enterprise software applications
• Drawing on components from the project's open-source framework, use JavaScript libraries (AngularJS, Bootstrap, jQuery), HTML5, and CSS3 to design, build, and test compelling web applications
• Develop test automation solutions utilizing state-of-the-art open source testing frameworks;
• Perform automated unit, integration, functional and behavior-driven testing of Web UIs and backend services.
• Experience working in a fast-paced, client-facing environment with changing requirements
• Adhere to standards and best practices
• Constantly seek opportunities to learn and improve team processes
• Initiate and conduct manual/automated code reviews
• Participate in Agile development process
• 3 – 7 years of recent Java development experience with emphasis on web UI
• Experience with consuming RESTful and/or Web Services, JSON
• Knowledge of Spring Framework and Hibernate is highly desirable
• Experience with Bootstrap.js and AngularJS is highly desirable
• Experience with Section 508 Compliance
• Understanding of HTTP/S and related protocols
• Hands-on experience using version control systems (preferably Git) and build automation systems
• Demonstrated knowledge and hands-on experience with Linux/UNIX operating systems
• Knowledge of and experience with agile software development methodologies
Must have skills:
Java, JUnit
JavaScript frameworks

• Bachelor's degree in a relevant discipline is desired
• United States citizenship and the ability to obtain and maintain a Public Trust clearance is required
          Senior Web Designer (Job #6362)   
The ideal candidate is one that has desire to build great web apps using HTML5, leveraging CSS3 libraries and modern tooling. Also, the ideal candidate is able to adapt to different responsibilities, wear many different hats, shows respect for teammates, and has a passion for constantly improving.

Key Responsibilities:
• Responsible for implementing design decisions with modern HTML5 and CSS3 expertise
• Contribute to design decisions through collaborative study and rapid prototyping
• Understand and solve cross-browser compatibility concerns
• Build for complex application layouts with adaptive and responsive styles
• Manage, test, and develop against strict 508 requirements
• Participate in code reviews
• Constantly seek opportunities to learn and improve the team

• Bachelor's degree in Computer Science, Human Computer Interaction or related field, or comparable experience
• 3+ years of experience designing and implementing HTML5, CSS3, and jQuery
• 5+ years experience working with web applications deployed on multiple platforms for high volume traffic
• Excellent communication skills
• Experience using mobile technologies
• Experience with Twitter Bootstrap plugins and tooling
• Understanding of continuous integration
• Understanding of Javascript and Client-Side MVC
• Experience with advanced JSON
• Experience with contributing to an open source software project
• Experience with automated testing including javascript unit testing
• Knowledge of change management tools including git
• Understanding of Agile, including Scrum and XP

• Experience using mobile technologies
• Understanding of continuous integration
• Understanding of web services (JAX-WS, SOAP, JAX-RS, REST, Jersey)
• Experience with automated testing
• Knowledge of change management tools such as Git, Subversion, and TortoiseSVN
• Understanding of Agile, including Scrum and XP
          J2EE Web Developer -   
This J2EE Web Developer Position Features:

• Great Pay to $120K

Immediate need for a J2EE Web Developer at a large financial company. Great benefits. Apply for this great position as a J2EE Web Developer today!

Location: Arlington, VA

Length: Direct Hire

Rate: 120k

Our client is looking for an expert J2EE Web Developer to:
• Develop J2EE web applications using popular frameworks including JSF, PrimeFaces, Google Web Toolkit (GWT), AJAX, and others
• Develop Web Services using JAX-WS
• Develop EJBs using a J2EE Application Server
• Use open source development tools such as NetBeans, Eclipse, JasperReports, PMD, CruiseControl, and CVS
What qualifications are required?
• Eight-plus years of progressive experience in application systems design and programming
• Experience in application development using object-oriented principles and design patterns
• Experience with J2EE application servers
• Hands-on experience with J2EE technologies, specifically JSF, EJB, JDBC, and JMS
• Working knowledge of Unix, XML, FTP, Oracle, SQL Server, PL/SQL, Object-Oriented Analysis & Design (OOAD), and web-based technologies (HTML, CSS)
• Ability to investigate new technologies and identify their applicability to the enterprise and to address business needs
• Self-directed and results/goal oriented
• B.S. in computer science or equivalent experience required


J2EE, JSF, PrimeFaces, Google Web Toolkit (GWT), AJAX, EJB, Netbeans, JDBC, JMS , Eclipse, Jasper, PMD, CruiseControl, CVS, Unix, XML, FTP, Oracle, SQL Server, PL/SQL, HTML, CSS

We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Software Build/Configuration Management-Contract to Hire Job in Herndon VA 20170   
Position Responsibilities:
• The Configuration Management/Software Build Specialist will be responsible for developing and maintaining the Software Configuration Management plan and policies, and for configuration item identification
• Manage software application builds, develop and maintain branching strategies, provide configuration status accounting, and ensure configuration management procedures are followed
• Perform configuration management audits and support quality assurance in validating product artifacts and deliverables
• Perform product builds and create and maintain build scripts
• Be team oriented and contribute to the overall project objectives and configuration management functions

Position Requirements
• Education: Bachelor's Degree in Engineering or a Natural Science
• 3+ years of experience as a software engineer developing embedded, web-based, or server applications, with an understanding of full development lifecycle activities
• 3+ years of experience building and releasing software applications in a controlled environment, with an understanding of full lifecycle configuration management activities, while performing related system administration activities
• Strong knowledge of software development, build, and configuration management tools
• Linux, Windows, Microsoft Word, Excel, PowerPoint, MS Project, Visio, Jira

Strongly Desired Skills:
• Solid understanding of and experience with version control, change management, and documentation management systems, such as Git, CVS, Subversion, Mercurial, Perforce, ClearCase, Jira, Bugzilla, Crucible, Fisheye, Alfresco
• Experience working within and understanding an open source consumer model, with knowledge of the GPL
• Strong experience with Linux and related administrative activities
• Experience in an Agile development environment and with Continuous Integration
• Knowledge of build automation, as well as experience with proven CI systems such as Jenkins, Bamboo, Maven, or CruiseControl
• Good understanding of and experience with scripting such as bash / Perl / Python
• Experience with SQL, especially MySQL and PostgreSQL
• Experience with compilers and cross compiling, esp. gcc and the Altera tool suite
• Experience with Red Hat-based Linux, 32-bit and 64-bit; RPM spec file creation and administration; managing Red Hat-based package repositories
• Experience creating kickstart files from scratch
• Experience administering version control systems, especially modern distributed systems such as Git

Please send an updated Word resume for this contract-to-hire Software Build/Configuration Management Specialist position in Herndon, VA.
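
Build-script work of the kind this role describes often starts with version stamping. As a small, hedged sketch (the function name and dictionary format are ours, not from the posting), here is how a Python build script might parse the output of `git describe --tags` into the tag, the number of commits since that tag, and the abbreviated commit hash:

```python
import re

def parse_describe(describe: str) -> dict:
    """Split `git describe --tags` output such as 'v1.4.2-5-g3a7f9c1'
    into the last tag, the number of commits since that tag, and the
    abbreviated commit hash. A bare tag like 'v2.0.0' means the build
    is exactly at the tag (0 commits ahead, no hash suffix)."""
    m = re.fullmatch(r"(?P<tag>.+?)(?:-(?P<ahead>\d+)-g(?P<sha>[0-9a-f]+))?", describe)
    if m is None:
        raise ValueError(f"unrecognized describe output: {describe!r}")
    return {
        "tag": m.group("tag"),
        "ahead": int(m.group("ahead") or 0),
        "sha": m.group("sha") or "",
    }
```

A release script could then embed these fields in an RPM spec file or a generated version header before kicking off the build.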
          Other OSBI Options: SQLPower Software   
When we talk about OSBI it is easy to stick with the best-known names, such as Pentaho, Jaspersoft, Actuate's BIRT, or R-Project. But there are other interesting initiatives worth watching as they evolve. One example: SQLPower Software.

This company has created a collection of open source tools, Java-based and therefore cross-platform, that cover the usual needs of a business intelligence project, namely:

  • SQL Power Architect (Data Modeling & Profiling Tool): a data modeling tool that lets us design the data mart or data warehouse, and even profile the loaded data.
  • SQL Power Loader (ETL Tool): an ETL tool for feeding data into the data warehouse or data mart.
  • SQL Power DQguru (Data Cleansing & MDM Tool): for data cleansing processes and master data management.
  • SQL Power Wabit (The Intuitive BI Reporting Tool): for self-service BI and ad hoc queries.
  • SQL Power Dashboard (Executive Dashboard): for designing scorecards and reports for senior management.
  • SQL Power XBRL forms: for submitting and managing XBRL data.

As we can see, it offers a breath of fresh air compared with other solutions, covering both MDM (which, for now, only Talend includes) and XBRL (considered for the first time in an open source solution). In short, good ideas from Canada.

It is worth noting that some of these tools also come in a subscription version with premium features.

          #9: Getting Started with Arduino: The Open Source Electronics Prototyping Platform   
Getting Started with Arduino: The Open Source Electronics Prototyping Platform
Massimo Banzi, Michael Shiloh

Buy new: CDN$ 26.63 CDN$ 18.64
36 used & new from CDN$ 10.62

          Announcement: Techblog will be live soon   
In a few days you will find a tech blog here with information, guides, how-tos, and other material on open source and security-related subjects. To start off with good content, I'm finishing two articles at the moment, so please come back and check it out!
          Asterisk Now 1.7.1   
Asterisk is the world's leading open source PBX and telephony engine.
          Web/Cloud Services Developer (Front End/Back End) - Newforma Inc - Manchester, NH   
Experience with Amazon Web Services infrastructure, and Open Source technology deployment in a cloud services context....
From Newforma Inc - Fri, 10 Mar 2017 00:50:40 GMT - View all Manchester, NH jobs
          SOAP: An Alternate Reality Game Engine   
I am pleased to announce the release of SOAP v1.0. The [S]UNY [O]swego [A]RG [P]ackage is an alternate reality game (ARG) engine. Built on the open source platforms WordPress and BuddyPress, it is a bundle of pre-existing plugins and custom modifications that allow anyone who can install and host WordPress to run “what if” simulations […]
          FLOSS Weekly 440: Cockpit   

FLOSS Weekly (MP3)

Cockpit makes it easy to administer your GNU/Linux servers via a web browser. Cockpit makes Linux discoverable, allowing sysadmins to easily perform tasks such as starting containers, administering storage, configuring networks, inspecting logs, and so on.

Hosts: Randal Schwartz and Aaron Newcomb

Guest: Stef Walter

Download or subscribe to this show at

Here's what's coming up for FLOSS in the future.

Think your open source project should be on FLOSS Weekly? Email Randal at

Thanks to CacheFly for providing the bandwidth for this podcast and Lullabot's Jeff Robbins, web designer and musician, for our theme music.

          SparkleShare, a Promising Open Source Alternative to Dropbox (With Commentary)   

HERE is the shared link:

SparkleShare, a promising open source alternative to Dropbox

          Top WordPress Plugins for 2017   

Does your company use WordPress for its website? If so, you’re hardly alone. The PHP-based open source content management system is the foundation for about 60 million websites worldwide. Fans of WordPress like its easy-to-use template system, which allows them to choose themes. The solution’s other great attraction, of course, is the wide availability of […]

The post Top WordPress Plugins for 2017 appeared first on Agile CRM Blog .

          Latest Open Source “Big Apps”   
Here are some download links for the latest open source "big app" equivalents:
  • GimpShop (Photoshop) – download / manual
  • Scribus (InDesign) – download
  • Inkscape (Illustrator) – download
    • XQuartz (for Inkscape) – download
          Purism aims to push privacy-centric laptops, tablets and phones to market   

A San Francisco-based start-up is creating a line of Linux-based laptops and mobile devices designed with hardware and software to safeguard user privacy.

Purism this week announced general availability of its 13-in. and 15-in. Librem laptops, which it says can protect users against the types of cyberattacks that led to the recent Intel AMT exploits and WannaCry ransomware attacks.

The laptop and other hardware in development have been "meticulously designed chip by chip to work with free and open source software."

"It's really a completely overlooked area," said Purism CEO Todd Weaver. "We also wanted to start with laptops because that was something we knew we'd be able to do easily and then later get into phones, routers, servers, and desktops as we expand."

To read this article in full or to leave a comment, please click here

Version 1 (8 Aug 2013) for Linux and Windows.
Dandelion is an Open Source 3D graph rendering application that can be controlled across the network. Its main purpose is to allow clear network graphs to be rendered in a window, which can be controlled by a separate application or the user. More info...
Download: binary, source, screenshot.
Version 0 (1 Jul 2013) for Linux and Windows.
Knot3D is an open source Celtic knot rendering application. It'll render standard knots on a 2D plane with a 3D weave. It also generalises these to allow proper 3D knots to be rendered. More info...
Download: binary, source, screenshot.
Version 0 (2 Jun 2013) for Linux and Windows (Eclipse).
ConSpecEdit is an open source (LGPL) Eclipse plugin that integrates with the Eclipse workbench and allows ConSpec XML files to be loaded in, edited, saved out and managed within other Eclipse projects. The editor provides a user interface for making changes to the ConSpec file. More info...
Download: binary, source, screenshot.
Version 0.22 (25 Jun 2009) for Linux and Windows.
Functy is an open source 3D graph drawing package built using OpenGL and GTK+2. The emphasis for the application is to allow Cartesian and spherical functions to be plotted and altered quickly and easily. This immediacy and the vivid results are intended to promote fun exploration of 3D functions. More info...
Download: binary, source, screenshot.
          Community Tour 2011: Present and Future of the Web and Applications   

Thursday, June 9 - SMAU Bologna

DotDotNet and Microsoft Italia are organizing an afternoon entirely dedicated to the web and applications:

  • Windows Phone
  • Internet Explorer 9 and HTML 5

as well as Windows Azure, the Web Platform Installer, open source ASP.NET applications, Silverlight 5, tablets, and slates.

To see the full agenda and to register for the event:


          Umbraco Event in Trentino   

Umbraco is an excellent CMS, available both as an open source project and as a commercial product, with several types of support on offer.

DotDotNet, together with the nascent Umbraco Italia community, has organized an event in Trentino for July 2, 2010.

The event, free of charge as always, is a stop on the Visual Studio 2010 @ Community Tour and will include, in addition to what's new in the latest Visual Studio, an introductory session on this CMS.

If this interests you, follow the link to register.

          How the World Builds Software   
Recorded January 17, 2017 at The Computer History Museum in Mountain View, CA. Launched in 2008, social-coding site GitHub supports over 15 million users who use the online platform to collaborate, build, and store software. Appealing to organizations with a large base of software developers, including Google, NASA, and even the White House, GitHub taps into the growing enthusiasm for open source projects and currently houses the world’s largest collection of public software. The site’s popularity among its user community has also attracted attention and dollars from major investors, including Sequoia Capital and Andreessen Horowitz. Last year, the company raised $250 million, valuing it at more than $2 billion. GitHub CEO Chris Wanstrath, who was named to Fortune's 40 Under 40 in 2015, likens the medium to Facebook but for programmers. “You log in, you’re connected to people, but instead of seeing photos of their baby, you see their code,” he says. Join us as GitHub CEO and Co-founder Chris Wanstrath discusses the fascinating story of GitHub’s growth, the most amazing pieces of software built on the platform, and his vision for coding education. Wanstrath sits down with Fortune Senior Writer Michal Lev-Ram, who covers technology for both Fortune magazine and its website. She is also co-chair...
          You Have Nothing To Lose But Your Chains - 8LU with Bodil Stokke   
This is a talk about the Open Source movement and the Free Software movement it grew out of, about its disregarded heroes and its flawed prophets, about what it's doing for us and what it's doing to us. I'd like to examine how it empowers us, and how it exploits us, and to show you why it's really, really important that we figure out a way to make sure nobody can ever take it from us. Born into an aristocratic Russian-German family, Bodil traveled widely around the Soviet Union as a child. Largely self-educated, she developed an interest in computer science during her teenage years. According to her later claims, in 1989 she embarked on a series of world travels, visiting Europe, the Americas, and India. She alleged that during this period she encountered a group of mathematical adepts, the "Haskell Language and Library Committee," who sent her to Glasgow, Scotland, where they trained her to develop her powers of category theory. Both contemporary critics and later biographers have argued that some or all of these foreign visits were fictitious, and that she spent this period writing JavaScript. Bodil was a controversial figure during her lifetime, championed by supporters as an enlightened guru and derided as a fraudulent charlatan by critics. Her doctrines influenced the spread of Homotopy...
          Comment on the post "Interview with an Adviser to the Minister: Dmitry Zolotukhin and Minstets" (naydu)   
I have known Dmitry for quite a long time. A good article; I'm voting for an Open Source Intelligence community.
          Now available: WebMatrix   

WebMatrix is now available in final release form. :)


What is WebMatrix?

I first posted at the debut of WebMatrix beta back in July, describing WebMatrix as a complete Web development stack for pragmatic developers.  WebMatrix elegantly integrates a Web server (IIS Express), database (SQL Compact) and programming frameworks (ASP.NET and PHP) into a single, integrated experience.


Learn More and See it in Action

You can learn more and see it in action on the WebMatrix home page.  A list of some of the cool features is also online.  Laurence Moroney also wrote a great tutorial for those who are new to Web development and want to get started with WebMatrix.  

WebMatrix makes it very easy to start a Web site using open source, including some of the most popular applications available like WordPress, Joomla, Drupal, DotNetNuke, Umbraco, ScrewTurn Wiki, and BlogEngine.NET.


Join the WebMatrix Launch Event

Today at 12:30 EST we’ll be live streaming from CodeMesh the launch of WebMatrix.  If you missed the event, on demand videos and post-launch interviews are available at the launch site


More to come…

The news is just starting to pour in on launch day, one of my favorites is Jack into the WebMatrix. ;)

WebMatrix was a labor of love… it is the kind of Web development tool I would have loved to have 15 years ago when I started Web development, and it will only get better from here.  The team behind it is committed to excellence, passionate about Web development and a bunch of hard working folks.  WebMatrix started one year ago this month…and it is awesome to see how much was accomplished in just one year.  And it is just the beginning…while we’re celebrating today, we’ll be starting v2 tomorrow. 

Today marks not only the launch of WebMatrix, but a series of updates to the Microsoft Web Platform my team builds.  Stay tuned, I’ll be blogging more today and throughout the coming week as we launch a ton of exciting new software for the Web. 


          Final beta releases of WebMatrix and MVC3 now available!   

I just got off stage at TechEd Europe in Berlin where I announced the availability of the final beta of WebMatrix and MVC 3!  What a blast!  If you haven’t checked out WebMatrix or MVC 3 yet, now is a great time to do it. 


WebMatrix Beta 3

I had the pleasure of giving the “Lap Around WebMatrix” talk and showing off a bunch of new features available in this, the last beta for v1.  It is amazing to think that development really started in earnest just this year, the first beta was just a few months ago, and here we are nearing feature complete.  My hat is off to the team for innovating so quickly and working so hard to bring such an awesome product to market.  My talk was almost all demos, but here are some highlights:

  • You can download beta 3 of WebMatrix and learn more about the product here:
  • WebMatrix beta 3 features enhanced publishing support including
    • Import publishing profile from XML (hosting providers can now send you a pre-configured account!)
    • Application compatibility checks for .NET version, PHP and MySQL compatibility
    • Download from hosting server – you can now sync your Web site both ways, from local development to server and from server back to local development! (this is super cool)
  • The updated ASP.NET Web Pages now includes support for NuGet, an open source packaging and distribution system for libraries and helpers
    • Browse to http://localhost/_admin to activate your ASP.NET Web Pages control panel, browse NuGet modules and install them
    • There are now almost a dozen new helpers for ASP.NET Web Pages including support for PayPal, Twitter, Facebook and more!
  • The built-in templates are now richer with more functionality
  • Enhanced configuration settings allow you to choose and activate specific ASP.NET and PHP versions
  • The built-in editor now supports HTML tag completion and better formatting
  • and much, much more!

Here are a few pics:

The Welcome Screen


WebMatrix can sync files and database from your desktop to server and back:


WebMatrix checks compatibility of your site on the server and can even set the right version of .NET Framework for you.


The new ASP.NET Web Pages panel makes it easy to find and install helpers that make Web development easier than ever before.  Already included are helpers for Twitter, PayPal, Facebook, OData and more!  Just browse to http://localhost/_admin on any ASP.NET Web Page site to activate the panel – no install required!


MVC 3 Release Candidate

MVC 3 now includes Razor tooling support inside Visual Studio for both Razor-based views as well as ASP.NET Web Pages.  Install MVC 3 with a new update to WebPI v3 beta now, and look for a blog post from ScottGu and Phil Haack coming soon.   To tide you over, here are some image previews of Razor tooling in VS:

File –> New –> Web Site


Content, Empty, Layout and Web Page in “Add New Item” dialog


          Announcing WebMatrix – a small, simple and seamless stack for Web developers   

Today my team launched an exciting new project called WebMatrix

Seamless, small and best of all free, WebMatrix includes a complete Web development stack that installs in minutes and elegantly integrates a Web server, database and programming frameworks into a single, integrated experience.

Use WebMatrix to streamline the way you code, test, and deploy your own ASP.NET and PHP Web site, or start a new site using the world’s most popular open source apps like DotNetNuke, Umbraco, WordPress, or Joomla.

WebMatrix is the easiest way to learn Web development and makes it simple to build and publish Web sites on the internet. Start with Web standards including HTML, CSS and JavaScript and then connect to a database or add in dynamic server code using the new simplified Razor syntax to build ASP.NET web pages simply and efficiently.
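
For readers who have not seen it, Razor lets you mix HTML markup with server code using the @ character. As a purely illustrative sketch (the variable name and page content below are our own, not taken from the WebMatrix documentation), a minimal ASP.NET Web Page in Razor syntax might look like:

```cshtml
@{
    // Code inside @{ } runs on the server; @expression writes a value into the page.
    var greeting = "Hello from WebMatrix";
}
<!DOCTYPE html>
<html>
    <body>
        <p>@greeting, the current server time is @DateTime.Now.</p>
    </body>
</html>
```

Saved as a .cshtml file in a WebMatrix site, this renders the greeting with the live server time on each request.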

WebMatrix includes IIS Developer Express, a built-in Web server that shares the same code base that ships in Windows Server and powers some of the world’s biggest Web sites. IIS Developer Express has been fine-tuned specifically for developer environments and supports all of the key features of the full version of IIS, without the complexity of a full Web server.

Using a database has never been easier! WebMatrix includes a small, embedded database with SQL Server Compact that can live with your Web site code and content. Use it to start building your next Web site and when you’re ready to publish simply copy the database file from your computer to any Web server, or seamlessly migrate the data to SQL Server when you’re ready for high-volume traffic.

With WebMatrix on your desktop, you’re using the same powerful Web server, database engine and frameworks that your Web site on the internet uses, which makes the transition from development to production smooth and seamless. As your Web development needs grow, WebMatrix provides a seamless on-ramp to Microsoft’s professional tools and servers including Visual Studio, ASP.NET MVC and SQL Server.

Learn more about WebMatrix on the home page and read about the cool new features available to try out today.  Download WebMatrix today and check it out!

          Lots of new software for IIS, ASP.NET, AJAX and PHP this week   

Wow, what a week of innovation for the Microsoft Web Platform.  This week we released a ton of new software which, if you haven’t already, you’ve got to check out.  Here is a quick overview:


IIS Search Engine Optimization v1 final release!

The IIS team shipped the final release of IIS SEO toolkit which makes it easier to optimize your Website for search engines.  It acts like a mini-search engine on your computer, scans your site and then provides useful tips for how to improve the relevance of your site to search engines.  This tool is now out of beta and available for download through Web PI


ASP.NET MVC 2 beta!

The ASP.NET team has been hard at work on the second release of MVC, which is now available to beta test.  Phil has a great blog post on the release with links to the download page, readme notes and the source code.  There are a bunch of new features in MVC 2 including AsyncController, expression based helpers, improvements with client validation, all new areas support, and more.  Read more on Phil’s blog


ASP.NET AJAX Library beta!

The ASP.NET AJAX team also has some exciting news with the release of the ASP.NET AJAX Library beta.  James has a terrific blog post with the news.  This is the first project accepted into the new CodePlex Foundation! (more on that later)  The ASP.NET AJAX Library has a new portal with tutorials, samples, and more.  Read James’ post about the news and check it out!


IIS Application Request Router 2 final release!

The IIS team also released the final version of the IIS Application Request Router v2.  This is a super powerful module that provides routing and load balancing capabilities for Windows and IIS.  It makes it easy to create and manage an entire cluster of Web servers.  Mai-lan has a lot of info on the release in her blog post and you can download ARR v2 using Web PI today


PHP WinCache module final release - faster PHP on Windows!

The PHP team announced today the final release of the Windows Cache Extension for PHP, or WinCache for short, which makes PHP run much, much faster on Windows.  The iBuildings guys released a benchmark showing how the WinCache extension speeds up PHP by as much as 2x over standard PHP.  The other exciting part of this announcement is that the sources for the extension are now available under an open source BSD license, and the source code is publicly maintained and hosted.  If you install PHP using Web PI, you automatically get WinCache.

          Microsoft Launches New Open Source CodePlex Foundation   

Microsoft’s strategy with open source has evolved over the past several years as we strive to make Windows the platform of choice for customers.  My team has participated in that process first hand, we’ve worked hard with the PHP community to ensure PHP runs great on Windows, integrated PHP installation into the Microsoft Web Platform Installer, and engaged some of the most popular PHP applications like WordPress, Drupal, and SugarCRM to ensure customers have a great experience running these applications on Windows and IIS.  We’ve also worked closely with the jQuery project to make it a natural part of building applications with ASP.NET. 

Today I am happy to be a part of the announcement that Microsoft is sponsoring an open source foundation aptly named CodePlex Foundation, whose mission is to “enable the exchange of code and understanding among software companies and open source communities”.  I believe the foundation will make it easier for Microsoft and other commercial software companies to participate in open source.  You can read more about the announcement in my interview with Peter Galli on Port25 and learn more about the foundation at

The CodePlex Foundation is a completely separate organization from Microsoft.  To help form the foundation, we have formed an interim board of directors comprised of three Microsoft employees and three non-Microsoft employees, and elected Sam Ramji as the President of the Board.  Microsoft has also donated $1 million US dollars to help the foundation get started and is transferring the use rights to the “CodePlex” term, along with the domain name to the CodePlex Foundation. 

I feel lucky to be a part of the interim board of directors as we spend the next 100 days working together with the board of advisors, partners and you to structure how the foundation will work.  We don’t have all of the answers and need your help to make it a success.  You can read more about how to participate here:


As always, I look forward to hearing your comments and suggestions about the new foundation.

          Introduction to Installing and Configuring PHP   


PHP (Hypertext Preprocessor) is a server-side scripting language designed to create dynamic web pages. PHP was created in 1994 by Rasmus Lerdorf, and today it is widely used in web applications. Why is PHP used so widely?
+ Free and open source.
+ Easy to learn and easy to write.
+ Runs on Windows, Linux, and Mac.
+ Works with many database management systems such as MySQL, MS SQL, PostgreSQL, and DB2.
+ Backed by a very large open source community.
+ A huge amount of sample code that is easy to find on the internet.
- Installation
+ To run PHP code you must install separate components such as Apache and PHP. With that approach everything is done by hand, so to avoid errors during installation you can instead use an integrated package such as AppServ, XAMPP, or WAMP... To keep things simple, we will use AppServ version 2.5.10 here.

+ Step 1: Download AppServ version 2.5.10 for Windows and run the file
appserv-win32-2.5.10.exe to start the installation. On the first screen, click Next.

+ Step 2: On the license screen, click I Agree.
+ Step 3: Choose the installation folder; you should switch this to drive D. Then click Next.

+ Step 4: Leave the options at their defaults and click Next.

+ Step 5: Enter the settings AppServ needs for the installation:
+ Server Name: the server name; always leave this as localhost.
+ Administrator Email Address:
+ Apache HTTP Port (Default): 80. If another web server on your machine already uses port 80, choose a different port such as 82 or 83 to avoid conflicts between web servers.

+ Step 6: To configure MySQL, type a password into the Enter Root Password and Re-enter Root Password boxes to set the MySQL root password.
+ Character Set: choose UTF-8 Unicode here so that Vietnamese text with diacritics can be stored correctly.
+ Step 7: Finish and wait for the installation to complete.

+ Step 8: Once installation is done, click Finish.
+ Step 9: To verify, open a web browser and go to http://localhost. If a page like the one below appears, the installation was successful.

          Seeed’s RePhone modular phone hits Kickstarter goal in two days   

Shenzhen, China-based Seeed has nailed its Kickstarter goal for its RePhone open source modular smartphone in just two days. The smartphone kit consists of 11 individual modules, which contain various parts of the phone, such as the display, GPS radio, and cellular radio. The company started the RePhone Kickstarter campaign on September 22 and reached its […]

          How to Install The Latest Mesa Version On Debian 9 Stretch Linux   

Mesa is a big deal if you're running open source graphics drivers.

          Adding to the team   
We've all felt the pain of the interview process.

It sucks. And it's not just the candidates out there. It's also the way we find them.

Typical Procedures:
  1. Recruiter or ad
  2. Phone interview
  3. Written questions or code samples
  4. In person interview
  5. Meetings with other team members
  6. References, background checks, drug tests, security clearance, street fighter challenge
There certainly is dysfunction here. Many interviews feel like a game, and it's not played well (or fairly).

I think of finding a new team member as a process of filtering:
  1. The process is a series of filters each more fine than its predecessor
  2. A filter should have definite output
  3. Good people are difficult to find. Too fine a filter early in the process is a risk
  4. Most filters have manual steps. Filters that filter too little are a risk.
  5. Finding the right person is personal to the team, so any filter that does not involve a team member is less accurate
  6. A good fit is unique to a team, so rarely will the same sequence of filters fit multiple jobs.
  7. Some criteria may give someone a pass around filters -- personal recommendations, proven methodology expertise and already working for company are a few examples
Before we can set up our filters, we need to know our goals for a candidate. This is a lot harder than it sounds. You may find you don't really know your "goals" beyond a "senior" developer who is a "team player" and "experienced" with our technology and methodologies.

Once we know what is important, we need to figure out how to filter for this behavior. We must also look at a person's ability to grow into a goal and the team's desire to teach (which will be biased depending on the person).

  • Goal: We want someone who is an enjoyable pair programmer
  • Importance: We only pair so friction here is best avoided
  • Ability to Learn: Developers, like all humans, are known to be stubborn in changing their personality. Though people can come around.
  • Ideal Filter: Spend a day pairing with them, switching people so at least a few team members weigh in
Clearly this is not a recommended starting point, unless you want a stranger on your team every day poking around in your project's code. Or maybe it would work if you have a lot of spikes?

Now what?

We can dig deeper into this goal:
  1. What kind of traits make someone a good pair?
  2. Are there ways we can define questions that will clue us in on these traits? Do any of these questions have answers that we can entrust an outside party to interpret?
  3. Can we use their past work experience to help?
If we find this goal is best saved for later in the process, we can move on to goals that are better applied at coarser levels. Do we require a certain amount of experience or schooling? Do we want people from a similar background (domain, technology, methodologies)? Do we care if they change jobs a lot? Too little? Are they active in learning new technologies? Are they interested in our methodologies? Do they live nearby?

We may not care about individual answers to pass judgment, though a combination of answers is sure to give us something.

How we find people to review is also a filter. If we want someone who is involved in the community, we can use those channels to find people either passively (tweeting that we're looking, placing ads on blogs we like) or more actively (reading blogs and tweets to find people, finding people working on open source projects).

Remember, it's not just what you want, it's what they want. Make sure your job description includes things that will attract the good fits (or send some people running for the hills), like weekly book readings or Star Trek Fridays.

Finally, there are unique things about every environment. What attracted the team there? What are the team members' strengths and weaknesses, and what is the state of the project -- how can a new team member help most? Do we need another good pair, or is it more important we get someone who can help us fix our data layer, and quick?

Finding a good fit is a lot of work. If we spend the time to figure out what we want, we can come up with better questions and better ways to find people. I'd say it's a good exercise, even if it's not quite possible to create a fully systematic filtering process.

Off topic, I'm in favor of a speed-dating style of interviewing: an open house at your company where each team member gets 7 minutes with each person who shows up. Anyone with all yays moves on to the real interviewing. It may take a day of everyone's time, but I bet it would be fun, and if it worked, it would save a lot of money on recruiters, ads, phone interviews, code reviews and figuring out which "questions" will help us find the best fit.
          Maketick - providing affordable and efficient IT solutions   

The advent of new technology and the growth of online business have triggered widespread use of software for successful business transactions. With the help of specially developed applications, online business owners can easily attract their target audience and grow their business comprehensively. Client-oriented applications are required to achieve such a position in the competitive business field. That's where Maketick enters the scene, with customized software development services.
Apart from providing high-end services like custom programming, data warehousing, data infrastructure design, finance and insurance application development, web 2.0 and enterprise collaboration, and enterprise mobility, the company also keeps affordability in mind for the benefit of its clients.
Focusing on Customer ROI
This Silicon Valley, CA-based software development company takes pride in serving many kinds of corporate and business organizations. Understanding the client's requirements and analyzing market demand in order to develop suitable software solutions is its primary concern. According to Maketick reviews, experts at the company make sure that the customer's ROI is protected throughout the whole process of development and project implementation. 
Protecting the business goals of clients and using expertise to build unique client-oriented application platforms are the two primary reasons for Maketick's success. 

A Leader and a Friend
Experience in outsourced product development and an association of highly skilled professionals under one roof make this company a leader in the application development field. However, Maketick reviews circulating in the online world point out the main feature that separates this company from similar service providers: it is more of a consultancy firm than just an application support provider. Here is a list of capabilities that help this company come forward with innovative solutions for online business complications in various fields.
·         Experience in developing large scale DBMS
·         Data warehousing and management
·         Online business strategy improvement
·         Open source integration and development
·         Developing customized web applications
We all know that asking a professional for help and having an expert look after your business growth are two different things with two different results. This company aspires to be the first and most efficient software support consultant for its clients. 
Selecting new technology for your business doesn't have to lay waste to your fortune. This company relies on conceptual strategy development before initiating a process. Saving clients' precious money and time is how it has gained fame and recognition in this highly competitive business world. Maintaining client affordability and controlling project quality come with every Maketick agreement.

          An Introduction to API-First CMS with Directus' Open Source, Headless CMS   

"Off with their heads!" The frontend developers' call to arms echoed throughout the realm. All across the Internet lands, monolithic, traditional CMS shivered. Seriously though, we're finally going to discuss API-first CMS—aka decoupled/headless CMS—on the blog. From GitHub forks to email inquiries, we've noticed an increasing interest in "going headless" in general, but also for e-commerce purposes. So today, we're going to: Discuss the what, why and when of API-firs...

Read the rest of this post here

          Amazing Online Roulette Systems   
A free open source roulette system: and how do you find it? I know you are all looking for the best roulette system, but instead of finding it you always get something acceptable only for a short period of...

Winning Roulette System Tested by People like You. For more information visit the official Money Maker Machine website!
          Online stores: the temptation of Open Source   
At a time when e-commerce is thriving, and faced with the multitude of existing solutions, it is worth asking which choice is best. Tempting as it is, Open Source is not necessarily a cure-all.
          iOS Programmer   
Iron Trainers - Carcavelos, Cascais - Minimum of 2 years of experience in Objective-C. - Experience with open source desirable. - Experience with other programming languages desirable. Send your CV and SALARY EXPECTATIONS. No offers will be made for a reduced salary... the candidate's choice and m...
          ZOOM Linux   
ZOOMLinux is a free educational DVD containing software for visually impaired students. The DVD gives teachers, parents, and training and rehabilitation professionals a set of tools for more functional teaching, also in accordance with Italian Law 4/2004:
  • tools for customizing the system (pointers, mouse, keyboard...)
  • generic and low-vision-specific assistive technologies (screen reader, magnifier...)
  • documentation on the topic of low vision
  • a guide to accessibility features
  • guidance for easier use in schools
In all, the DVD contains:
  • 35 open source educational programs on a variety of subjects, organized by school level
  • open source accessibility utilities
  • applications for managing non-teaching activities as well, such as web browsing, email, instant messaging, office tools, multimedia editing, games, and development environments

          Ruby on Rails based Applications Development   

The company is continually receiving admiration from its clients for its approach to adopting new technologies, having recently started providing Ruby on Rails application development services. The company has had great expertise in the field of web application development for many years, and its team of expert Ruby on Rails professionals is equipped to serve its global clientele.

What is Ruby on Rails?

Ruby on Rails uses the Model-View-Controller (MVC) architecture to give developers a friendly way to build web applications. It is an open source framework that provides cost-effective techniques for developing typical websites, and its many flexible properties help developers make it work alongside other technologies.

Most supportive characteristics of Ruby on Rails

          Comment on Open Source Illustrator Alternative by ssloansjca   
Thanks, I will check these out. I think we should offer open source alternatives to students as much as possible.
          Comment on Open Source Illustrator Alternative by Bill Shanter   
Actually, Scribus *does* have a Windows version, and it is quite good. Always nice to find alternatives to try out.
          Microsoft announces the availability of version 1.0 of the PowerShell extension for Visual Studio Code on Windows, macOS and Linux   
Microsoft announces the availability of version 1.0 of the PowerShell extension for Visual Studio Code
on Windows, macOS and Linux

For those who missed it, Microsoft's PowerShell team has moved to an open source development model for the majority of its projects, which lets it speed up development and publish regular updates to the scripting language rather than spending months polishing a single release. When Microsoft announced the move...
          VoIP Supply Earns Digium 2015 Pinnacle Partner Award for Direct Marketing Partner of the Year   

Annual awards from open source VoIP leader Digium recognize top partners who excel at providing VoIP solutions

(PRWeb March 06, 2015)

Read the full story at

          Sangoma Technologies Presents VoIP Supply with the Revenue Growth Award for North America   

Sangoma, a leader in open source VoIP hardware, recognizes partners committed to helping customers find flexible telephony solutions

(PRWeb February 10, 2015)

Read the full story at

          New Sangoma PBXcelerate PBX Appliance, Now Available at VoIP Supply, Can be Customized with Choice of Open Source VoIP Platforms   

Designed for small to medium offices or branch offices, the PBXcelerate offers SIP Trunking, Analog, or Digital configuration options.

(PRWeb April 12, 2014)

Read the full story at

          Java PDF Open Source/Free Library Error Rates   

Data quality is always a huge challenge in working with file formats. I wanted to find out the error rate of open source/free PDF libraries for Java, i.e. what percentage of PDF files they cannot read.

I downloaded 3583 PDF files from the intertubes and here are the results:

  1. ICEpdf

    ICEpdf couldn’t read 1.981579682 % of the files, though I’m not absolutely sure of that number because exceptions were horribly intermingled with logging output.

  2. jPod

    jPod couldn’t read 2.093217974 % of the files.

  3. Apache PDFBox

    Apache PDFBox couldn’t read 2.121127547 % of the files.

  4. PDFRenderer

    PDFRenderer (or PDF Renderer?) couldn’t read 8.317052749 % of the files.
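Since the corpus contained 3583 files, each quoted percentage corresponds to a whole number of unreadable files. As a quick sanity check (shown in Python rather than Groovy, purely for illustration; the per-library failure counts below are inferred by inverting the percentages, they are not stated in the post):

```python
# Back out the absolute failure counts behind the quoted error rates.
# Counts are inferred from the percentages above for a corpus of 3583 files.
TOTAL = 3583

failures = {
    'ICEpdf': 71,
    'jPod': 75,
    'Apache PDFBox': 76,
    'PDFRenderer': 298,
}

for library, count in failures.items():
    rate = count / TOTAL * 100
    print(f'{library}: {count}/{TOTAL} files unreadable = {rate:.9f} %')
```

Each ratio appears to reproduce the quoted long decimal (e.g. 71/3583 × 100 ≈ 1.981579682 %), which suggests the nine-digit precision is simply the unrounded ratio of a small failure count to the corpus size.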

Here are the code snippets I used to parse the files. They are all written in Groovy, which is really good.


// Apache PDFBox
@Grab(group='org.apache.pdfbox', module='pdfbox', version='1.3.1')
import org.apache.pdfbox.pdmodel.*

(new File('/home/johann/Desktop/pdf/').listFiles() as List).each {
 try { PDDocument.load(it)?.close() } catch (Throwable t) { println "${it}: ${t.message}" }
}


// PDFRenderer
import com.sun.pdfview.*
import java.nio.*
import java.nio.channels.*

(new File('/home/johann/Desktop/pdf/').listFiles() as List).each {
 RandomAccessFile raf = null
 try {
  raf = new RandomAccessFile(it, 'r')
  FileChannel channel = raf.channel
  ByteBuffer buf = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size())
  PDFFile pdf = new PDFFile(buf)
 } catch (Throwable t) {
  println "${it}: ${t.message}"
 } finally {
  raf?.close()
 }
}


// ICEpdf
import org.icepdf.core.exceptions.*
import org.icepdf.core.pobjects.*

(new File('/home/johann/Desktop/pdf/').listFiles() as List).each {
 Document document = new Document()
 try {
  document.file = it.absolutePath
 } catch (Throwable t) {
  println "${it}: ${t.message}"
 } finally {
  document.dispose()
 }
}


// jPod
import de.intarsys.pdf.pd.*
import de.intarsys.tools.locator.FileLocator

(new File('/home/johann/Desktop/pdf/').listFiles() as List).each {
 try {
  FileLocator locator = new FileLocator(it.absolutePath)
  PDDocument.createFromLocator(locator)?.close()
 } catch (Throwable t) {
  println "${it}: ${t.message}"
 }
}
          Back to Anthropology   
Product Reviews?

Regular readers of this blog may have been wondering about my brief foray into eyeglass reviews, like what it had to do with anthropology or academia or ethnography or any of the other usual content I post here. In fact, I have written product reviews on this blog before (see 'Product Reviews' tab above), mostly on hardware and software. There are two main reasons why I write online consumer reviews and how-tos. Firstly, I like being able to produce something useful that will draw in a wider audience, especially if I have had trouble finding something suitable or comprehensive on a topic myself.

Back when I was a PhD student, I often lamented the lack of practical hardware and software reviews for stuff I could actually afford (which wasn't much), so I gravitated towards reviewing free and open source software or hacks and workarounds to make basic computers/browsers more productive. My own field kit was a mixed bag of old technology put to new uses. Rather than buying a bunch of premium and proprietary software, I immersed myself in the belief that there is almost certainly a free/low-cost way to do most tasks using one or a combination of open source or gratis software/web-based applications. The learning curve is steep, but worth it when you can't afford more. That's more or less how I got on to tech reviews and how-tos in the first place.

In a similar vein, I still notice a lack of academic-oriented reviews for products and services, especially cross-over consumer items like tablets, digital recording devices, clothing or field gear. I had trouble finding a decent academic review of the Kindle DX graphite, for instance. Most of my reading is qualitative where extensive note-taking and highlights are imperative, but other academic styles of working are very different. Plus, anthropologists need to know what's going to work for them in the field as well as the office (or lack thereof).

I was sure that I had bored readers to death with eyewear reviews, but actually my 5-post series on glasses has become the most popular on this blog to date. I'm pretty confident that they've helped people to save a lot of time, energy and money. I intend future reviews to be of more direct interest to academics, anthropologists, students, geeks or social researchers, but not exclusively. My next planned review will also be of an optical nature, but with fieldworkers in mind.

Secondly, I am working on some new research to do with media and consumerism, so consider the product reviews that appear here as a minor form of participant observation. Details will follow in the future, but there are more pressing things on my agenda at the moment. Just to be clear: I will never post pre-written "sponsored" reviews (read: robot spam) to get ad revenues and won't ever post anything that I haven't written myself and don't honestly believe. I'll also clearly state when I've been given a complimentary product sample to review.

A brief Urban Firewalls update (finally)

I designated October as my month to return to my PhD thesis to prepare it for publication. Given my highly unstable personal circumstances at present (not to mention ending the month with Hurricane Sandy and a prolonged blackout), I am actually impressed that I managed to start getting down to work. I am currently drafting a plan for the new book version which includes re-working the chapter layout and refining the ethnographic contributions, potentially adding some comparative case studies from outside of Spain, and more original material that did not appear in the PhD version. The PhD manuscript as it stands presents a detailed story about a small Catalan town and its highly localized responses to technological and urban change. By re-organizing the contents, I hope to enable the local data to interweave with a more universal story of humans and technology and contribute to a more comprehensive anthropology of the digital age. I have a new website where I'll post updates of the progress of Urban Firewalls.

New at the OAC

There have also been quite a few items of interest over at the Open Anthropology Cooperative recently. We started shaking off the back-to-school malaise with a new e-seminar and some great blog posts. In case you missed it, catch up on the seminar for "In and Out of the State" by Patience Kabamba. In his featured blog, John McCreery asks what has changed about society and culture to make being a dick the road to failure instead of the key to success. I am surprised that no one has yet provided any ethnographic studies of bullying in the forum, but this is a question I will be returning to shortly in an upcoming blog post. The US presidential elections inspired this post about language and politics and this follow-up blog on election lessons learned. Speaking of openness, why don't anthropologists share what they know about households with economists?

Despite this fairly steady stream of new and interesting additions to the site, the subject of "stagnation" in our forums has surfaced yet again, leading us to re-question the state of affairs over at the OAC under the header The Rise and Fall of Social Networks. If you are interested in the politics of making a site like the OAC work and some of the ongoing obstacles we are facing, please join in the discussion. My response to that thread will give you an idea of where I stand on a number of issues as well as a hint at what I'm working on for the future of the OAC:

Some good points in this article, at least for thinking about a historiography of social networking sites. But then there are significant differences between social networks and academic networks, much of which has to do with return on time investment, volunteer labor and long-term objectives, not to mention power relations and status hierarchies that carry over from the academic world. Much of the activity on the social web need not concern itself with aims, intentions or long-term goals. It's easy. It can keep ticking over until boredom or newness - whichever comes first - forces change. Academic networks don't work exactly the same way. The OAC mixes both together, which may contribute to an identity crisis of sorts.

I don't agree with all the points made in the article about Facebook vs. Twitter. I actually think that Twitter is, on the whole, more active and powerful than Facebook. Facebook's modus operandi is outdated, the layout and structure muddled, its features are restrictive and its policies are confusing. Sure, for most users, a lot of this is irrelevant. Even Apple can convince people its products are inherently usable, which is patently untrue. Yet both of these companies are successful by closing off their markets and thereby normalizing clumsy technology and unintuitive interfaces. Twitter not so much. But I digress ...

There are probably more dead blogs on the internet than active ones. There are at least 83 million fake, unused or inactive Facebook accounts. I have emails that lapsed into oblivion over the years, websites that expired, and domains I never renewed. Is there any technology online that is not subject to simply running its course? This post, Why Are There So Many Dead Blogs, does a pretty good job of noting all the simple human factors involved. It's not only the technology that determines what network lives or dies.

Playing around on Twitter and/or keeping in touch with family on Facebook are not analogous to activity at the OAC. The first is fleeting and impermanent. The second is personal and intimate. The latter takes more time commitment, at least some critical thought, and the expectation of some kind of pointed exchange or response over time. We've tried to add site features that lower the barrier to participation (share buttons, twitter tab, RSS), but the returns on this are also quite low. The content that is uploaded without the requirement of reciprocity or response (e.g. "sharing a video", "liking" something, "listing an event"), is really incidental to any wider successes here, or so it would seem.

The more significant products of the OAC's concerted efforts - namely the Press - require investments of time and energy. They attract participants because they fit longstanding academic value models. Academics change slowly even if we'd like to think that new modes of communication make a qualitative difference to how we live and work. Hence why email has not imploded as the means for transmitting academic information. Mailing lists are still popular because they are semi-closed/private and simple. They do one useful thing well enough to stick around. In early OAC days, Twitter was a big deal for us: a real paradigm shift that led to the OAC's development in the first place. Today, no one seems that bothered to engage on Twitter. Perhaps that is a failure on our part as far as implementation, but it is more likely that Twitter no longer fills a communicative need for the OAC since circumstances have changed. The OAC Facebook page is now a bit more active, but still pretty separate from the main network.

We have had continual debates about what the site hopes to achieve or "do" - a mission statement - that would attract participants and be meaningful. Yet no one seems willing to take on a more permanent role in shaping the site. If the OAC is imploding, what's the precise cause and remedy other than lack of dedicated interest?

I have concentrated a lot on technical development at the OAC and I still believe that a deluge of content is preventing more adequate use and navigation of the site. I do agree with John that we need to streamline access to the most interesting content and like the idea of running a "best of" series that resurrects old posts to keep them alive. Instead of pushing for some "new" spark, we are likely not making best use of what we already have. I wish Ning made it easier to index and display old posts. I have sketches/ideas for site changes, but I am scrambling to keep on top of things at the moment. We don't have as strong a development team as we once did among the admins, and it really can't be done without wider interest.

We have been talking about these issues at the OAC in some form or another since the site's speedy launch in 2009. I am now committed to taking more drastic efforts to put an end to pervasive content-navigation woes in the hopes that related participation woes will also disappear. A few weeks ago, I began experimenting with site improvements for revamping the OAC's appearance, perhaps better termed "image". The OAC homepage hosted on Ning has been both a source of the OAC's successes as an academic/social network and a frustrating infrastructural barrier to expansion. I am working on some bold ideas that would involve making more dramatic changes beyond Ning. If the experimentation starts to look like an actual possibility, I will float the new ideas on-site for feedback. As I mentioned in the post above, any lasting effort cannot really be forged without wider community interest. If you can help in any way to make the Open Anthropology Cooperative a more effective, active and useful site for anthropologists to accomplish meaningful things, please volunteer your skills.

New to anthropology: PopAnth

The launch of PopAnth in September marks an exciting move forward for anthropology online. PopAnth presents snapshots of anthropological knowledge for popular audiences in online magazine format. It was formed out of a discussion about public anthropology over at the OAC. The team, including some OAC veterans, has really embraced the idea of opening anthropology and making it more publicly engaging. The articles are fun to read and really distill worthwhile talking points about what anthropology is and what it hopes to discover about people. Greg Downey over at Neuroanthropology sums up the motivation and intentions behind PopAnth, including samples of recently submitted articles and how to get involved.

Image from
          Reminder about a cool Open Source Roguelike Project   
For anyone interested in doing a little C# on the side, or even testing or documenting – a little project I set up earlier last year has grown and grown and is still growing thanks to a group of really … Continue reading
          Introduction to The Most Popular Commercial Open Source Backup Software - Amanda Enterprise   
This paper provides a technical overview of Amanda Enterprise. It describes design and operations of Amanda Enterprise and how it is unique in its ease of use, flexibility and scalability.
          Comment on There is no metadata just data by Peter van Boheemen   
The reasons are partly political. For example: the institution's output is not registered by the library. Most reasons are historical. Libraries have this 'closed' Integrated Library System. Now they want to register local publications and expose them via OAI-PMH. The ILS does not support such behaviour. Vendors sell a separate system where you have to register the same data, or enthusiastic developers build a repository system from scratch, expose it as open source, and libraries start using this. And now they are stuck with it. We are facing problems as well, since Metis is used for the initial ingestion of WUR publications. The objects themselves, however, are stored in our depot, with only technical metadata. Descriptions are registered in our Library CMS. It is challenging, since not only local publications are initially registered somewhere else. Catalog records are initially registered in the Dutch Union Catalog (GGC). The records that are described in the LCMS only are article descriptions by non-Wageningen authors, since article records are not registered in the Union Catalog. Life is hard (uhh, challenging)
          Backend Web Developer   
Manolibera Srl is a web agency that develops systems, custom platforms, ad hoc CMSes and adaptations of open source CMSes, WordPress and Prestashop development, custom e-commerce platforms or ones based on open source systems, cross-platform web apps, platf
          Re: LB - Episode 68 - Chad Likes Belts and Lord D Takes a Plop by Linux Basement   

Stealing from Peter to give to Paul might be a way to generate more revenue for yourself, but it's not going to make you trustworthy or garner you any good reputation amongst your peers. As Chad said, it sounds as though you didn't really listen to the show. There's nothing wrong with a company making money on open source software. I even mentioned how Linux Mint is trying to generate revenue for their project and how I thought that was admirable (despite someone I know saying it was rather underhanded...still not sure why). These projects have to survive in the real world. But the ends don't justify the means, and that's what Canonical does not see (or does not care to see). They aren't the only ones generating revenue (*cough*REDHAT*cough*), but they could be doing it in a way that respects the community that they are a part of, and not like Apple does to open source projects.

          Performance management (CPM) on an open source foundation with Bench Eco   
An interview with Claude-Henri MELEDO, co-founder of Bench Eco
          Good Times: How to Select a Library Automation System, Part 2   
By Mark Maslowski

In my first post on this topic, I shared a few guiding principles learned during our history as a preferred library automation and knowledge management application provider. If followed, these will ensure a successful implementation and a fruitful commercial partnership. The first set of factors for consideration was internally focused; in this post I’ll share some suggestions that focus on the external environment, and by the way:

If you’re thinking of building it yourself…beware!
  • Organizations document many failed attempts to build applications themselves. For example, one company spent over 100 person-months developing an application before giving up and buying SydneyPLUS software – they were up and running in a week.
  • Often, solutions built in house no longer meet requirements by the time they’re ready to launch. 
  • Internal cost of building an application can be high, and often the solutions aren’t fully documented.
  • There is often no support for enhancements or bug fixes, and given the time to results, the original builder has often left the company by the launch date.
Carefully evaluate commercial versus open source products 
  • Purchase price isn’t the same as total cost of ownership (TCO), which includes support and customization costs
  • Open source products have their own inherent costs, often difficult to itemize, anticipate, or estimate
  • Choose between commercial and open source products based on your goals, needs, and your available resources – don’t forget that you do need staff to implement and maintain open source platforms
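The gap between purchase price and total cost of ownership in the bullets above can be made concrete with a back-of-the-envelope comparison. This is a minimal sketch; all figures are invented placeholders, not vendor quotes:

```python
# Hypothetical 5-year total-cost-of-ownership comparison.
# Every figure below is an illustrative placeholder, not a real price.

def tco(license_cost, annual_support, customization, annual_staff, years=5):
    """Total cost of ownership over `years`: up-front costs plus recurring ones."""
    return license_cost + customization + years * (annual_support + annual_staff)

# Commercial: higher up-front license, vendor handles most maintenance.
commercial = tco(license_cost=50_000, annual_support=10_000,
                 customization=15_000, annual_staff=5_000)

# Open source: no license fee, but more in-house customization and staff time.
open_source = tco(license_cost=0, annual_support=0,
                  customization=40_000, annual_staff=25_000)

print(commercial, open_source)  # 140000 165000
```

With these made-up numbers the "free" option comes out more expensive over five years, which is exactly why the comparison should be run with your own organization's figures.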
Consider hosted versus licensed software
  • Vendor hosted software ensures that it will be maintained and upgraded – it’s their responsibility, not yours
  • You’ll always have the latest enhancements and bug fixes
  • Hosted software has minimal impact on your IT infrastructure and staff resources, lowering the TCO and removing complexity
Ask potential vendors what product support and enhancements you’ll receive
  • What support, upgrades and bug fixes are included?
  • What happens to support if you customize?
  • What does the quality of support tell you about the supplier’s attitude toward its customers?
Develop a realistic implementation timeline and share it
If there is an event-based launch date, such as a partner meeting, internal conference, etc., make sure that your vendors know about it and can confirm that meeting the date is realistic. When you think about timing, remember to include testing (and time for users to beta test) so that you have worked out any issues prior to launch, and finally, adjust for any events that will impact the schedule, such as holidays, staff PTO, etc. Share your preferred kick-off date and drop-dead launch date with your potential vendors; if they cannot meet those dates, it should be a non-starter.

Making the final decision
You will have learned a lot during the pre-selection process. It’s important to look, one more time, at your goals and requirements, and to make sure that the products on your short list meet them. As insurance, you should get demos of your favorite two products again, and don’t be shy! Vendors like you to ask questions and fully evaluate their products. (And if the vendors don’t like to help you, do you really want their products?)

You’ll need to contact a few references given to you by the vendor, and in addition, you should get informal input by checking listservs and the Internet to see what has been said about the supplier over time.

Go over the pricing and contract terms with a fine-toothed comb to make sure there are no surprises after the contract is signed. Make sure to observe your organization’s policy on contract review – involve someone from Procurement or Legal if required. Two sets of eyes are always better than one. Then, make your decision: confidently choose the product that is right for your organization, regardless of whether it’s the marketplace favorite.



It will take place on November 14 and promises to surpass the 2007 and 2008 editions

The Faculty of Systems Engineering, Computing and Telecommunications of the
Universidad Inca Garcilaso de la Vega, Lima, Peru, will hold the third edition
of the International Free Software/GNU-Linux Festival, known as FESOLI 2009,
with the participation of renowned national and international speakers, who
will gather at the Faculty's facilities on November 14.

The academic event is titled “Free Software in Business and Government amid
the Global Crisis: Success Stories,” and its aim is to present a range of
experiences in research, project development, success stories, and solutions
based on Free Software. These experiences are geared toward meeting the needs
of society, business, and the State.

FESOLI 2009 is an academic event aimed at the academic and scientific
community, leaders responsible for Information Technology (IT), professionals
in computing, systems, informatics, and telecommunications, as well as members
of the various free software communities across the country.

Topics to be addressed at FESOLI 2009 include: copyright, patents, and
licensing in the Free Software culture; free tools for application development
in the software industry; models for migrating to Free Software; a strategic
Free Software plan for developing communities; and the use of Free Software as
an alternative to exclusion and the digital divide; among other topics of
particular importance.

The event will feature talks and keynote lectures by national and
international Free Software specialists. There will also be a round table
debating the adoption of free software in government and business amid the
global crisis. In addition, various sponsoring companies and guests will take
part in the exhibition area, demonstrating success stories and a variety of
solutions built on Free Software platforms, while the workshops will cover
solutions from various Free Software deployments, where users can learn the
advantages of using them.

The international speakers who have confirmed their participation in FESOLI
2009 are: the President and Executive Director of Linux International,
Jon “maddog” Hall (USA); the developer and maintainer of the stable 2.4 kernel
series, Marcelo Tosatti (Brazil); the Managing Director of Dokeos Latin
America in Peru; the co-founder of Hispalinux and editor of ,
Juan José Amor; and the PHP expert with more than five years of experience,
Yannick Warnier (Belgium).

Meanwhile, the national speakers who have confirmed their attendance at FESOLI
2009 include: the General Manager of Antartec S.A.C, Alfredo Zorrilla; the
Director of the Centro Open Source, Alfonso de la Guarda; the Xendra ERP
project; Francisco Morosini, representative of the Ubuntu community; Nicolás
Valcárcel; and the specialist in IT use and applications for the Peruvian
State at ONGEI, César Vílchez; among other distinguished specialists.

It is worth mentioning that the members of the Garcilasina Free Software
Community (COSOLIG), made up of students of the Faculty of Systems
Engineering, Computing and Telecommunications of the UIGV, together with the
university's teaching staff, are in charge of organizing this major event.

Source: Office of Marketing and Market Research of the UIGV

          Thesis Defense: Creating a Digital Exhibit on the Colonial Fur Trade in Florida: A Public History/ Digital History Project   
Announcing the Final Examination of Mr. Benjamin DiBiase for the degree of Master of Arts in History

This thesis project incorporates podcasts and high-resolution digital imagery into a single online exhibit to democratize archival material on the web. It employs contemporary new-museology and digital-history methodological frameworks, and uses the burgeoning medium of podcasting to increase public understanding of, and interaction with, a particular historical narrative. For this project, I have partnered with the Florida Historical Society and have utilized original materials from their collection relating to the colonial fur trade in Florida. The digital exhibit is hosted on the Florida Historical Society website. By working within their existing open source website, I created a series of modules that the Florida Historical Society can reuse for other exhibits in the future.

The exhibit consists of three sections, each exploring a different aspect of the traditional discourse surrounding the colonial American fur trade in Florida. Each podcast has aired as part of the Florida Historical Society's weekly radio magazine, Florida Frontiers, which is broadcast throughout the state, and each is archived on the Society's website in its entirety.

The exhibit enhances the scholarly discussion on public history and digital history, while utilizing new media such as podcasts and interactive digital maps to create a more immersive user experience with primary source material. The project has combined new mediums of historical interpretation with traditional museum methodology to create a multi-faceted, unique digital experience on the web.

Committee in Charge: Robert Cassanello (Chair), Rosalind Beiler, Scot French

          Webinar with Talena: Migrate open source big data applications to HDInsight, add backup & restore to your existing apps running on Apache Hadoop & Spark   
“Are you building Big data applications using open source platforms such as Apache Hadoop or Spark, but unsure of how to add cloud to your architecture and manage your data assets better?” Join me for a webinar on June 29th, 2017 at 10am PST with Hari (CTO of Talena) as we explore how Talena can help customers migrate their big...
          HandBrake 0.9.9   
The open source tool HandBrake converts DVDs and videos for your iPod and iPhone.
          Comment on Enpass: Android beta of the password manager with autofill API, by Der Uwe   
@Richard: It is not often mentioned in the (unfortunately) usual round-ups, but as a solution I would recommend bitwarden. It runs on all the systems mentioned, is free, and has no ads. bitwarden also allows sharing passwords with several people via so-called "Organizations". On the free tier, sharing is only possible between two people, and that is about the only limitation I know of. Encryption is end-to-end with AES-256 and PBKDF2 SHA-256, and the data is stored by bitwarden on Microsoft Azure cloud servers. Encryption always happens locally, and only then is the complete package transferred (and the same on the way back). The data is thus always in sync on all devices. bitwarden is also open source and can be found on GitHub. I first came across this solution through a recommendation in a comment and have been using it very happily ever since.
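The encrypt-locally, sync-only-ciphertext model the comment describes can be sketched with Python's standard library. This is a minimal illustration, not bitwarden's actual client code: the iteration count and sample password are assumptions, and a real client would follow the key derivation with authenticated AES-256 encryption of the vault before uploading anything.

```python
import hashlib
import os

def derive_key(master_password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Derive a 256-bit key from the master password with PBKDF2-HMAC-SHA256.
    Derivation happens locally; only data encrypted under this key would
    ever leave the device for the sync server."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, iterations)

salt = os.urandom(16)   # per-user random salt, stored alongside the vault
key = derive_key("correct horse battery staple", salt)  # sample password only
assert len(key) == 32   # 32 bytes = 256 bits, suitable as an AES-256 key
```

The important property is that the server only ever sees the salt and ciphertext, never the master password or the derived key.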
          Lucid Planet Radio with Dr. Kelly: Open Source Reality: The Emergence of a Meta-Myth, with Mitch Schultz   
Episode: As complexity continues to grow, the evolution of consciousness imparts new relationships to our understanding of reality, revealing the emergence of a new human story. Let’s explore an open source approach to humanity’s collective knowledge, and remix our narratives to create deeply layered allegories that re-contextualize reality. How can we imagine an evolving meta-myth that influences systemic change through transmedia storytelling? Join renowned producer, writer and dire ...
          SwarmFest 2009, June 28 - 30 in Santa Fe   
SwarmFest is the annual international conference for Agent Based Modeling and Simulation - IMO, a necessary skill set for understanding complexity in systems. I just received notification that I have been accepted to present at this year's conference, and I am excited.

Why is this important to me?

My presentation details recent findings I've made regarding a form of Swarm in which the schedule and rules for the agents are not hard-coded in the experiment. In our case, agents "discover" the rules and patterns based upon whatever pre-existing structure and connections they find. This provides part of the solution space, called a context. The discovery mechanism that searches the context for solutions is a topology- and weight-evolving artificial neural network called NEAT, originally developed by Ken Stanley and Risto Miikkulainen at the University of Texas. (NEAT is actually the great-grandfather; WattsNEAT is our N-th generation implementation of HyperNEAT.)
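As a rough illustration of what "topology and weight evolving" means, a genome can be represented as a set of weighted connections that mutation can both re-weight and extend with new nodes. This is a toy sketch of the general NEAT idea, not the actual NEAT or WattsNEAT implementation; all names and parameters here are invented:

```python
import random

# Toy NEAT-style genome: a list of (src, dst, weight) connection genes.

def mutate_weights(genome, rate=0.8, scale=0.5):
    """Perturb existing connection weights (weight evolution)."""
    return [(s, d, w + random.uniform(-scale, scale)) if random.random() < rate
            else (s, d, w) for s, d, w in genome]

def mutate_add_node(genome, next_node_id):
    """Split a random connection with a new hidden node (topology evolution)."""
    s, d, w = random.choice(genome)
    genome = [g for g in genome if g != (s, d, w)]
    # The split roughly preserves behaviour: in-weight 1.0, out-weight carries w.
    return genome + [(s, next_node_id, 1.0), (next_node_id, d, w)]

genome = [(0, 2, 0.7), (1, 2, -0.3)]        # two inputs feeding one output
genome = mutate_weights(genome)              # weights drift
genome = mutate_add_node(genome, next_node_id=3)  # topology grows
print(len(genome))  # 3 connections after one add-node mutation
```

Real NEAT additionally tracks innovation numbers so genomes with different topologies can be crossed over and speciated; the sketch only shows the two mutation operators.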

Now for the cool part. In order to configure the compute fabric for the many experiment contexts we might create in which to evolve solutions, we utilize WattsNEAT to evolve configurations of the compute fabric itself. This is our solution to the problem of partitioning data and structures for massively parallel computation.

If you have followed along so far: we have just used WattsNEAT to configure a compute fabric in which to effectively and efficiently run WattsNEAT in massively parallel compute fabrics. For those of you who have used the Torque or SGE rolls in a Rocks cluster, the resulting advantage is an intelligent distribution and scheduling agent that configures the compute fabric according to the context (the schedules, priorities, and component configurations it discovers at the time).

This is not as static as it may initially appear.

I'm moving to get the specifics (of which there are many) written down and filed, at least, as a provisional patent. After that, we will decide the bifurcation strategy between proprietary and open source paths.
          Open source    
A fantastic open source, Tableau-like reporting and data exploration tool to look at: Superset, from the fantastic ladies and gentlemen at Airbnb.

          How to install Zeppelin in a few lines!   
tar xvfz apache-maven-3.3.9-bin.tar.gz
sudo apt-get install -y r-base
sudo apt-get install -y libcurl4-openssl-dev libssl-dev libxml2-dev libcurl4-gnutls-dev
sudo R -e "install.packages('evaluate')"
sudo R -e "install.packages('devtools', dependencies = TRUE)"
export JAVA_HOME=/usr/jdk/jdk1.8.0_60
git clone
cd zeppelin
../apache-maven-3.3.9/bin/mvn clean package -DskipTests -Pspark-1.6 -Pr -Psparkr
./bin/ start

So cool to finally have an easy open source tool for reporting and visualisation!

          White box: it's the right time for open networking   

A switch or a router is, at heart, a specialized computer that carries out well-defined tasks in routing network packets. It has its own operating system and ad hoc software that we do not manage in the conventional sense, but we know they are there.

Software Defined Networking – summarizing and simplifying a great deal – aims, among other things, to eliminate this tight bond between network hardware and network functions, in order to gain greater flexibility in managing and optimizing the latter.

So far, nothing revolutionary for networking vendors.

So much so that most of them have their own SDN initiatives, especially when it comes to solutions for large Internet providers or telecom operators, who are very sensitive to network flexibility and adaptability.

A few more problems arose when the SDN approach took a path not entirely foreseen at the outset: why not adopt classic x86 servers, perhaps not even brand-name ones, as network devices, betting more heavily on the software side?

The bare metal switch market according to IHS

This question gave rise to the market for network devices known as white boxes – unbranded – also called bare metal switches, plain metal, because the operating software is supplied by the user. They can even be built by independent manufacturers to the customer's own design.

In its various forms this segment, which in analyst figures ends up in the "other" category, or perhaps "ODM" for Original Design Manufacturer, is among the fastest-growing in the networking world. It is certainly still a small slice of it (less than roughly 10 percent as of mid-2016, according to IHS Markit), but with a growth rate of nearly 50 percent per half-year it promises interesting developments.

Who believes in white boxes

The decisive push for the white box market came from the big providers, who have the in-house expertise to design network devices and to develop their own operating system – and to tailor it to the functions their datacenters require. Facebook, for example, designed itself a switch called Wedge, which has had versions first at 40 and then also at 100 Gbps.

And which, above all, has also become an "open" project within the Open Compute Project (OCP). In this way companies other than Facebook's suppliers have been able to build switches on the same design and the same operating platform (a Linux derivative).

Facebook's is only the best-known and perhaps most "open" case. All the big providers are doing something similar, using more or less home-grown network software. Even Microsoft has developed an operating system for network devices – Azure Cloud Switch, also Linux-based – derived in part from other OCP projects. Its peculiarity is that it is not a product others can use: the software was developed to integrate precisely into Microsoft's datacenters. A made-to-measure network operating system, then, that cannot be ported to other infrastructures.

A modular version of Wedge, Facebook's "open" switch

A pity, perhaps, because an SDN-style switch/router bearing the Microsoft brand would have given some users a greater sense of reliability than an anonymous white box based on open source software. Dell knows this well: without much fanfare, it has for some time been offering its own Linux for network devices, dubbed OS10 (for Operating System 10). Dell is in fact credited with fathering a sub-segment of the white box market: brite boxes, or branded white boxes – in short, brand-name white boxes that try to combine the advantages of SDN with the strength of a brand. They are a niche, but traditional enterprises like them.

The vendors' reaction

The relationship between classic networking and white boxes is a bit like the one, years ago, between traditional software and open source: it takes expertise to manage the "independent" component, and for this reason the extreme SDN of bare metal switches is not yet a great threat to the big networking names.

But the explosion of the open source phenomenon in more recent years teaches us that certain trends should not be underestimated. Moreover, SDN is being worked on not only by networking companies but also by everyone involved in virtualization in some form. The networking of a virtual network can only be software-defined, after all.

So some traditional vendors have adopted software solutions based on open source networking projects such as those of the OCP (Brocade and Juniper do, for example), while others have chosen a different route: an operating system decoupled from the hardware but running only on "certified" devices (Avaya's route, for example).

The market's main player – Cisco, of course – for now holds much more traditional positions, but rumor has it that it is nevertheless developing a network operating system decoupled from its hardware. It is only an unconfirmed rumor, but the mere fact that it has circulated is already a symptom of a market in evolution.

The article "White box: è il momento giusto per il networking aperto" is original content from 01net.

          Search for a job in the UK   

Being a recruiter, the question I face most often is: how do you search for a job in the UK? And my answer to that is always... huh? Recently the UK has opened its borders to geeks and has virtually rolled out a red carpet for IT professionals. This move by the UK is being seen as a major way to boost its inflow of high-tech labour.

Many countries are yet to follow suit, but the UK has been the first to implement this kind of policy since Uncle Sam did back in the 90's. Over time the software industry has gone through some radical changes, from algorithms to outsourcing. People might argue that outsourcing is bad, but frankly speaking it is helping economies more than ever. According to one report, average unemployment in the US has been at a record low; I guess this is a suitable explanation for the above.

Nowadays skills like SAP and open source technologies are in huge demand. SAP is very hot these days, and of course traditional programming languages like C++ and Java are also on the wanted list. SAP is in huge demand in the United States market, and so it is in the UK. As the software industry is booming and cash is flowing like never before, companies have started investing more in ERP/CRM applications and even provide free training to their employees. If I were in a candidate's shoes, I would go for SAP or any other ERP/CRM package.

Now comes the question, "how to search for a job in the UK". Well, it isn't as easy as it seems: you can either post your resume on a job board or look for headhunters. Posting your resume on a job board is the easiest option, but it may not be the most effective one (correct me if I am wrong); the best way is through traditional means, a.k.a. networking. Job boards do provide a global reach for your resume, but the key is posting it on a niche or industry-specific job board... that will greatly increase your chances of being noticed.


This weblog is sponsored by iitjobs.
          DebConf15 Schedule Published and Additional Featured Speakers Announced   

DebConf15 Schedule

The DebConf content team is pleased to announce the schedule of DebConf15, the forthcoming Debian Developers Conference. From a total of nearly 100 talk submissions, the team selected 75 talks. Due to the high number of submissions, several talks had to be shortened to 20-minute slots, and a total of 30 such talks have made it onto the schedule.

In addition, around 50 meetings and discussions (BoFs) have been submitted so far, as well as several other events such as lightning talk sessions, live demos, a movie screening, a poetry night, and stand-up comedy.

The Schedule is available online at the DebConf15 conference site.

Further changes to the schedule can and will be made, but today’s announcement represents the first stable version.

Featured Speakers

In addition to the previously announced invited speakers, the content team also announces the following list of additional featured speakers:

The full list of invited and featured speakers, including the invited speakers' profiles and the titles of their talks, is available here.

          Comment on Open Source Web Design by Web Design Part #1in 2008 | Nikos Papagiannopoulos   
[…] Open Source Web Design […]
          Purism aims to push privacy-centric laptops, tablets and phones to market   

A San Francisco-based start-up is creating a line of Linux-based laptops and mobile devices designed with hardware and software to safeguard user privacy.

Purism this week announced general availability of its 13-in. and 15-in. Librem laptops, which it says can protect users against the types of cyberattacks that led to the recent Intel AMT exploits and WannaCry ransomware attacks.

The laptop and other hardware in development have been "meticulously designed chip by chip to work with free and open source software."

"It's really a completely overlooked area," said Purism CEO Todd Weaver. "We also wanted to start with laptops because that was something we knew we'd be able to do easily and then later get into phones, routers, servers, and desktops as we expand."

To read this article in full or to leave a comment, please click here

          New Open Source Platform Allows Anyone To Hack Brain Waves   
Video: For most people, how the human brain works remains a mystery, let alone how to hack it. A new Kickstarter campaign created by engineers Joel Murphy and Conor Russomanno aims to change this by putting an affordable, open-source brain-computer interface kit in the hands & minds of anyone. Brain-computer interfacing (BCI), [...]
          History and Origins of Open Source Software   
A post of cultural and general interest on the origins of Open Source and the evolution of the software market. It may not attract as much interest as other, more lighthearted posts, but I believe that if someone presents themselves as a champion and evangelist of free software and doesn't even know how it was born, well, what's the point? 🙂 This […]
          Broadleaf Commerce Brings eCommerce to OSCON 2017   

Broadleaf Commerce, the leading digital experience platform for customizable commerce, will showcase eCommerce solutions at the 2017 O’Reilly Open Source Convention in Austin, Texas.

(PRWeb April 04, 2017)

Read the full story at

          Broadleaf Talks Tech at O'Reilly OSCON 2016   

Broadleaf Commerce, the eCommerce platform solution provider for enterprise commerce brands, announces sponsorship of the upcoming O’Reilly Open Source Convention in May.

(PRWeb April 19, 2016)

Read the full story at

          Broadleaf Commerce Partners with Productive Edge   

Broadleaf Commerce, the open source software provider for building customized eCommerce solutions, is proud to announce its partnership with innovative enterprise software implementation firm, Productive Edge.

(PRWeb November 17, 2015)

Read the full story at

          Mobile Strategy: Responsive vs. Adaptive Web Design   

Broadleaf Commerce, the open source software provider for building customized eCommerce solutions, will be exploring mobile strategy during the ‘Mobile Strategy: Responsive vs. Adaptive Web Design’ web event on November 10, 2015 at 10am CST.

(PRWeb November 05, 2015)

Read the full story at

          Webinar Reveal: Customizing Search Results with Broadleaf Commerce   

Broadleaf Commerce, the open source software provider for building customized eCommerce solutions, announces the reveal of new search management functionalities for merchandisers during the ‘Customizing Search Results with Broadleaf Commerce' web event on October 20.

(PRWeb October 16, 2015)

Read the full story at

          Broadleaf eCommerce Solutions Now Provided by Ness Technologies in European Markets   

Broadleaf Commerce, the open source software provider for building customized eCommerce solutions, is proud to announce its partnership with world-class software development company, Ness Technologies.

(PRWeb March 19, 2015)

Read the full story at

          Sligro and EMTÉ: Smart Shopping with Broadleaf   

Sligro Food Group integrates the Broadleaf Commerce open source software solution to power their EMTÉ native app, My Gourmet Environment. Supermarket customers in the Netherlands are now able to manage their shopping lists, loyalty account information, and track reward points from their iOS and Android devices.

(PRWeb January 20, 2015)

Read the full story at

          Broadleaf Commerce Solves eCommerce Woes for B2B Aftermarket   

Broadleaf Commerce announces the release of their B2B webinar series and new B2B solutions white paper specifically for the aftermarket industry. As an open source website management platform, Broadleaf provides eCommerce services to market-leading enterprise businesses.

(PRWeb December 17, 2014)

Read the full story at

          Broadleaf Commerce Partners with Dunn Solutions Group   

Broadleaf Commerce, the open source software provider for building customized website management solutions, announces their partnership with Chicago-based technology consulting firm, Dunn Solutions Group.

(PRWeb November 25, 2014)

Read the full story at

          Get fastest speed with google chrome   

Google Chrome

Google Chrome is a browser that combines a minimal design with sophisticated technology to make the Web faster, safer, and easier. Use one box for everything – type in the address bar and get suggestions for both search and Web pages. Thumbnails of your top sites let you access your favorite pages instantly with lightning speed from any new tab. Desktop shortcuts allow you to launch your favorite Web apps straight from your desktop.

Chrome is the lightweight flagship browser that originated from an open source project by Google called Chromium and Chromium OS. It is now one of the more widely used browsers thanks to a vast ecosystem of extensions and add-ons, a robust Javascript engine, and a rapid-release development cycle that keeps it on the competitive end of the curve.

Chrome's overall UI has remained stable since version 1.0: a minimal two-row window with tabs resting above the address bar (Omnibox), three browser controls (Back, Forward, Stop/Reload), a star-shaped toggle for bookmarking, and a settings icon. Users coming from older browsers might have to get used to not having a dedicated File menu, but we found ourselves adjusting quickly.
As you install extensions, active icons will appear to the right of the address bar, but beyond that Google maintains strict restrictions on visible add-ons. That means no toolbars or other unwanted overlays, which at one point were a widespread practice. Despite the limited customisation options, Chrome is minimalist for a reason, and the result is a clean browsing experience with maximum use of screen real estate for websites.

Features and Support
In addition to tabbed browsing, Chrome can be used as simply or as complex as you want, thanks to an impressive number of built-in tools, modes, hotkey functions, and more.
One popular feature is, of course, Incognito mode: Chrome's response to Mozilla's Private Browsing feature. Incognito opens a new window that disables history recording and tracking cookies, reducing the traceable breadcrumbs left by your usage. Contrary to popular belief, it does not mean you can freely browse the web for illegal purposes, as your ISP can still see your traffic activity... so stay out of trouble.
Under the hood, Chrome has some features that make it very developer friendly: hardware acceleration for rendering 3D CSS effects, Google's own NaCl (Native Client) for secure execution of C and C++ code within the browser, and an in-house JavaScript engine whose load times improve with every release.
Pressing F12 opens a developer console that lets you view a page's code and quickly identify elements simply by hovering the mouse over each line. You can also add your own HTML and CSS to render a page with custom styling.
Chrome also lets Google users sign in and sync their accounts, which brings added benefits like restoring saved bookmarks and extensions from the cloud no matter what device you're on.

Chrome is fast. Really fast. Powered by Google's own V8 JavaScript engine, it renders pages at speeds that have set the standard for modern browsers. In addition, Google has been at the forefront of implementing HTML5 best practices, and though Chrome currently runs on the widely used open source WebKit engine, Google has announced plans to move to its Blink fork in the near future.

Wrap up
Google has relentlessly set the standard for speed, stability, and security, and Chrome's numerous version updates, as many as there are, have continued to complement its friendly, minimalist design. It's no surprise that its market share continues to rise, especially when combined with its mobile cousin on Android. Whatever is growing faster, user adoption or Chrome's own development pace, Google's browser is one for the masses: casual user and developer alike.



          Google drops out of JavaOne   
Very disappointed to hear that Google has cancelled all their talks at JavaOne. Kinda has that "I'm going to take my football and go home" feeling to it.

In the post, Josh Bloch says:
Like many of you, every year we look forward to the workshops, conferences and events related to open source software. In our view, these are among the best ways we can engage the community, by sharing our experiences and learning from yours. So we’re sad to announce that we won't be able to present at JavaOne this year. We wish that we could, but Oracle’s recent lawsuit against Google and open source has made it impossible for us to freely share our thoughts about the future of Java and open source generally. This is a painful realization for us, as we've participated in every JavaOne since 2004, and I personally have spoken at all but the first in 1996.

We understand that this may disappoint and inconvenience many of you, but we look forward to presenting at other venues soon. We’re proud to participate in the open source Java community, and look forward to finding additional ways to engage and contribute.

By Joshua Bloch, Google Open Source Programs Office

I'm aware of the patent infringement action, but don't remember Oracle suing the open source community. A little whiny.

I have seen Josh speak many times at JavaOne and his talks will definitely be missed. The puzzlers talk was great. I also see that Crazy Bob Lee dropped out in support. A quick Twitter search shows a lot of activity.

For the record, I will be at JavaOne.

          IoT environmental monitoring with ThingSpeak and ESP8266 [complete open source project]   

The project in this article uses the ESP8266 Wi-Fi module, which sends data from the DHT11 environmental sensor to the ThingSpeak server for logging and monitoring. An operating condition checks for a rise in temperature, triggering a notification from the Newtrify mobile app via the Pushingbox server. The project lends itself to further modifications and updates to suit your own needs. Introduction: the ESP8266 is a low-cost Wi-Fi chip with a full TCP/IP stack and management capability via an integrated microcontroller. The ESP8266 offers a complete, self-contained wireless networking solution capable of hosting the application firmware. The ESP8266 integrates the RF balun, the power amplifier and low-noise receive amplifier, filters, and power-management modules, […]
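As a sketch of the data flow just described, the firmware's logic reduces to a couple of simple HTTP GET requests. The ThingSpeak update endpoint and the Pushingbox trigger endpoint are real, but the write key, scenario ID, and 30 °C threshold below are placeholders; plain Python is used purely for illustration, since on the ESP8266 itself this logic would live in the device firmware.

```python
# Illustrative sketch: sensor readings go to ThingSpeak, and a temperature
# threshold fires a Pushingbox scenario (which in turn notifies the app).
# The write key, device ID, and threshold are placeholder assumptions.

THINGSPEAK_URL = "https://api.thingspeak.com/update"
PUSHINGBOX_URL = "http://api.pushingbox.com/pushingbox"

def thingspeak_update_url(write_key, temperature, humidity):
    # ThingSpeak logs one channel update per GET; field1/field2 carry the readings.
    return (f"{THINGSPEAK_URL}?api_key={write_key}"
            f"&field1={temperature}&field2={humidity}")

def temperature_alarm(temperature, threshold=30.0):
    # The operating condition from the article: notify once the DHT11
    # reading rises past a chosen threshold (30 °C is arbitrary here).
    return temperature > threshold

def pushingbox_trigger_url(device_id):
    # Pushingbox scenarios are fired by a GET carrying the scenario's devid.
    return f"{PUSHINGBOX_URL}?devid={device_id}"

# Example: a DHT11 reading of 31.5 °C / 60 % RH
print(thingspeak_update_url("MY_WRITE_KEY", 31.5, 60))
print(temperature_alarm(31.5))
```

In the actual firmware, each returned URL would simply be fetched over the ESP8266's Wi-Fi connection at a fixed polling interval.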

          Open-Xchange Releases Update for Express   

Open-Xchange Inc. today released a comprehensive feature update for Open-Xchange Express Edition. Customers can now download usability and performance improvements to the leading open source collaboration software.


          PostBooks 2.2.1 Released   

xTuple announced the release of Version 2.2.1 of its PostBooks Accounting, Enterprise Resource Planning (ERP), and Customer Relationship Management (CRM) solution. The release is a follow-up to the recent version 2.2.0, and offers the open source community a new data import and mapping tool called CSVimp, as well as enhanced documentation and examples supporting the new xTuple Application Programming Interface (API) for connecting to external systems. The release also contains bug fixes and features suggested and contributed by the xTuple community.


          Open Source E-Procurement Project   

The first freely downloadable e-procurement software solution continues to gain momentum, with more than 7,500 downloads and a new release.


          GnuCash 2.2.0 Released   

The GnuCash development team proudly announces GnuCash 2.2.0, the new stable release of the GnuCash open source accounting software. With this new release series, GnuCash is available on Microsoft Windows for the first time, and it also runs on GNU/Linux, *BSD, Solaris and Mac OS X.


          Matt Mullenweg, founder of WordPress   
Matt Mullenweg is the founder of WordPress, the open source content management system. Roughly 16% of the web's sites are said to be built with WordPress. Mullenweg was born in Houston, Texas, attended school in Houston through the University of Houston, and then moved to San Francisco, where he started working at CNET Networks…
          Comment on Microsoft Skills for the Open Source Guru by AFriend918   
Nice thank you.
          Building a unit conversion Android app in app inventor 2   

If you haven’t joined the Android app developer bandwagon yet, then welcome to App Inventor 2. If you are a first timer, we suggest ...

The post Building a unit conversion Android app in app inventor 2 appeared first on Open Source For You.

          Profanity: The Command Line Instant Messenger   

Profanity is a text-based instant messaging application with a command line interface (CLI), which beats many GUI (graphical user interface) applications of a ...

The post Profanity: The Command Line Instant Messenger appeared first on Open Source For You.

          Vdbench: A storage benchmarking tool   

Considered one of the most versatile benchmarks, Vdbench can be used as a GUI tool or a CLI tool. It is highly portable ...

The post Vdbench: A storage benchmarking tool appeared first on Open Source For You.

          Awaiting Mageia 2: Five Things You Can Look Forward To   
Getting ready for Mageia 2

The final release of Mageia 2.0 is due to arrive sometime today (see the release schedule). Based on the RC releases currently available for download, ...

The post Awaiting Mageia 2: Five Things You Can Look Forward To appeared first on Open Source For You.

          Developing Apps on Qt, Part 4   
ComboBox and ListWidget

In the last article, we worked on the backbone of GUIs, the signal and slot mechanism. In this article, we move ahead to the ...

The post Developing Apps on Qt, Part 4 appeared first on Open Source For You.

          Developing Apps on Qt, Part 3   
Developing apps on Qt

In the previous article, we covered some important Qt non-GUI classes; I hope you experimented with others, since the secret to learning lies in ...

The post Developing Apps on Qt, Part 3 appeared first on Open Source For You.

          Lisp: Tears of Joy, Part 9   
Caution: Lisp time...

Lisp has been hailed as the world’s most powerful programming language. But only the top percentile of programmers use it because of its cryptic ...

The post Lisp: Tears of Joy, Part 9 appeared first on Open Source For You.

          Developing Apps on Qt, Part 1   
It's Qt

This article introduces application development using the Qt GUI framework. There was a time when all desktop applications were developed from scratch. Then came ...

The post Developing Apps on Qt, Part 1 appeared first on Open Source For You.

          FOSS is __FUN__: Get the Basics Right   
A rant on DBAs

A few thoughts about databases in general… and some rants about the good old ways. There are a plethora of free/open source databases around, ...

The post FOSS is __FUN__: Get the Basics Right appeared first on Open Source For You.

          Device Drivers, Part 10: Kernel-Space Debuggers in Linux   
Debug that!

This article, which is part of the series on Linux device drivers, talks about kernel-space debugging in Linux. Shweta, back from hospital, was relaxing ...

The post Device Drivers, Part 10: Kernel-Space Debuggers in Linux appeared first on Open Source For You.

This is really cool. At first glance the code is both huge and respectably commented. I wonder if/when someone will attempt a port, and whether any layers will be violated. (Ducks and covers!) Also, I wonder if there are Veritas IP issues. I saw in some Compaq docs that LSM was VxVM-derived, so I wonder if pieces are missing? Regardless, I heard a rumor that VxVM would be open sourced. Since hell just froze over here, who knows? Would love to dig around all day. Too bad I have a real job!
          RE: Wow...   
It's because, sadly, this is too late. If HP had done this 3 to 5 years ago, say when they canned the HP-UX port, ZFS would have been less exciting as open source. ZFS passingly resembles NetApp's WAFL, too. The difference is availability: not the buzzword, but the fact that it is here, it works, and anyone is allowed to use it for free; the same argument people use for Linux all the time. Storage management is my favorite part of UNIX. But even I recognize two things: "works reliably" is the best feature, and selling closed storage management for UNIX is so 1995, when AdvFS and XFS were new and exciting. ZFS works and it's not closed. WAFL works, but it's closed; AdvFS is open, but it doesn't work (on Linux). This is really cool, I hope someone ports it, and I will enjoy perusing the code. But if HP really wanted to do something serious for Linux storage, they would release the PolyServe code they bought.
          RE[2]: Wow...   
Well, with as many PolyServe installs as I have done lately, I doubt we will see it open sourced anytime soon. However, AdvFS does have a DLM, as it was the core of TruCluster. So I am guessing the plumbing is there for a clustered file system (CFS) implementation on Linux....
          Pythonistas Meeting @RIT   
Hey all! Just returned from a meeting of the Pythonistas hosted at RIT's Innovation Center. I mentioned the group in an earlier post, so I won't get into it. The meeting went well: the RIT students presented the projects they have been working on and relayed code concerns to the Python enthusiasts. We also had a chance to talk about Fortune Hunter, which I will blog a lot more about soon. I would like to point any readers in the direction of all of the presentations so they can see some very neat open source projects and potentially help out in some way.

Here is the link to check out all of the developer's blogs at the Teaching Open Source Planet

and here is the direct link to download the Fortune Hunter PowerPoint presentation (pptx, 1 MB) given tonight. It outlines a few immediate goals we are aiming for over the next couple of months and very briefly describes the project.
          HFOSS Activity   
Background:
    At RIT, I am currently taking HFOSS (Humanitarian Free & Open Source Software), a course offered to anyone at the school who is interested in creating open source software for underprivileged people around the world. I took this course one year ago to the day, and now that it has a new course number, I can register again. In the fall of 2009, a few other students and I started a project called Mathematical Adventure: Fortune Hunter, an educational math game being produced for the Sugar XO. Long story short: we were warmly received in communities both local and online, generated a lot of excitement, and were offered a job continuing development of the game after the school term had ended. We took it. We made very decent progress during this time, and when the next school term ended, the job offer still stood. In the spring of 2010, some of the team, myself included, had to take a leave from the project to stay on their educational track and graduate on time. A few members took the job offer a second time and even got the chance to continue through the summer! I was working for a game design company during this period and again couldn't contribute to Fortune Hunter. Now that the new academic year has begun, I am back in action!

Plan:
    I am looking to continue development of the game for the next few months, if not longer. My plan is to tie up some loose ends and begin translating the game into Spanish. I will post more links to the project page (recently moved) and any pertinent information shortly, so check back soon. I am also getting back up to speed with the project after my absence.

    So please, anyone interested in joining the effort, let me know. The more, the better!

       Fortune Hunter Wiki on Sugar Labs
       Fortune Hunter Wiki on FedoraHosted
       YouTube Channel: maFortuneHunter


          Build your own e-learning site for free   

If you teach at a university, high school, or prep course, a good way to share your knowledge, make supplementary assignments available, interact with your students, and measure results efficiently and at zero cost is to use Moodle, the open source learning management system.
See the full article at softdownload:

          Site Reliability Engineer (Software) - Indeed - Andhra Pradesh   
Serve as subject matter expert for multiple proprietary and open source technologies. As the world’s number 1 job site, our mission is to help people get jobs....
From Indeed - Fri, 23 Jun 2017 07:16:14 GMT - View all Andhra Pradesh jobs
          Qt Installer Framework 2.0.5 released   

We’re happy to release Qt Installer Framework 2.0.5. This is a bug fix release; the full list of fixes can be found in the changelog. Installer Framework binaries can be installed from the online installer. Binaries can also be found, together with the sources, in the open source downloads or in your Qt Account. The binaries are built […]

The post Qt Installer Framework 2.0.5 released appeared first on Qt Blog.

          Funding Tales - The Missing Link of High404 (Part 1)   

As part of the HighWire MRes/PhD programme we take a module (High404) dedicated to introducing digital innovation to the cohort. This may seem a little odd; you'd expect everyone taking such a programme to already be well versed in such things. This is, after all, a PhD in Digital Innovation funded through the EPSRC's Digital Economy theme.

But what makes the HighWire crowd so unique is the incredibly wide and varied backgrounds of those in it. My cohort ranges from fine art to computer science, and inevitably this means some are not as well versed in the digital as others.

Hence the module. It's not designed to make experts of everyone, but rather to open eyes to histories and futures.

The module is split in two: a narrative, which focuses on the more traditional areas of digital innovation with topics such as mobile, cloud computing, ubiquitous computing and Web 3.0, and a meta-narrative, which takes a wider look at the role of innovation with respect to meaningfulness, sustainability and the like.

Overall the module has provided me with a tonne of material to ponder, primarily in the meta-narrative (what with my already having a pretty solid background in the narrative material), but there was one topic that was not raised, and yet it is vital to innovation…


Innovation is so much more than invention. It's not about coming up with a great idea but about being able to execute on that idea, develop and evolve it, and bring it to a market (not necessarily a commercial one). And this requires money, both to research and develop the invention and to bring it to market. But first, my own tale of funding digital innovation.

Cakehouse in the Bubble

Right at the outset of the first dot-com bubble in the late 1990s, I was proud to be a co-founder of a technology startup, Cakehouse Systems. We had a superb team of thinkers and arguably some of the best software developers in London, hand-picked and armed with proper ninja coding skills, with C++ and Java at the core.

At the time, website search engines were really not a lot different from an offline Yellow Pages. You filled out a form with the details of your fledgling website and submitted it, most likely to Yahoo!, who employed an army of staff to check you had correctly categorised your entry before allowing it into their carefully curated directory.

We had a superb idea. Why not automatically map the links between websites, not only the hyperlinks themselves but also the semantic links? Then let people manually augment those links with other information that had meaning to them. Effectively we were proposing to build a semantic search engine for the Web on top of a graph database, something truly novel at the time.

Unlike so many others in the bubble who were willing to work for equity, our crack team were not going to work for free. Thankfully we did not have the truly extravagant burn rates so common among other Internet startups at the time.

One example of the all-too-common dot-com boom and bust was a company I knew well: Sportal (formerly Pangolin), which in 1998 had raised something in the order of £55M. By 2001, BSkyB (an investor) refused to buy the ailing business for £1, the company having laid off the majority of its staff after using the money to pay incredible salaries and furnish lavish offices.

We were somewhat more modest in our habits, but even with equal shareholding and salaries across the company, some third-party funding was going to be needed for an office, hardware, software licenses and so on. Seed funding was readily obtained from a wealthy family member of one of the team: a large six-figure sum that, with our modest outlook, kept us stable for 18 months. We even employed three more staff, a salesperson and two more developers. Things were looking good.

We worked hard, developed a great proof of concept and wrote a rock-solid business plan, a plan we took to several leading IT accountancy firms looking for help raising the next round of funding: a round big enough to build an Internet-scale semantic search engine prototype. It was a slog. I honestly can't recall how many funding meetings I attended with the managing director, but it seemed like hundreds.

This slog was so difficult because computer, data and Internet technology funding was discontinuous from previous innovation-adoption S-curves. None of the traditional UK-based venture capitalists (VCs) knew how to approach it, and instead they adopted funding models from the curves of existing capital-intensive innovation. The importance of angel investing was largely unrecognised at the time, and with little or no angel funding ecosystem, such investors flew well below the radar.
As a note, Silicon Valley VCs were able to adapt better than others, simply because their risk profiles were vastly different and they had had more exposure to technology companies.
We eventually did find and engage someone, and went on to talk to dozens and dozens of potential VCs, but by then the market had changed. Three things had happened.

The first was that VCs, certainly in the UK, had been stung by many of those high-profile, high-burn-rate startups that held extravagant launch parties on the back of vapourware. Very, very few were realising the traditional S-curve growth, and therefore the potential for the levels of return desired. VCs in the UK had become more sceptical and far more wary. The dot-com bubble was starting to pop.

The second was brought about by that very bubble itself. Every Internet-scale startup needed Internet-scale database capability, something IBM and Oracle were only too happy to provide, at substantial cost. This software had always been expensive, but with so much money being thrown at startups, the database vendors seemed only too eager to grab a slice for themselves. I recall opening a large anonymous box one day only to find EVERY bit of Oracle software nestled within, along with a lovely letter that went something like…
“Thank you for your interest in Oracle. We have reviewed your plans and believe Oracle would be the only choice for such a project. Please find Oracle licenses and Oracle software within, so you too can start building another great product based on Oracle software. Please note that the enclosed Oracle development licenses are valid for 90 days, after which any deployment using Oracle products will require a paid license of £xx,xxx per CPU per annum.”
Hardly an encouraging welcome to building a relationship with a corporate supplier.  This was of course before Internet scale open source database products were readily available or indeed acceptable.

The third nail in our funding ambitions was another startup: Google. With a super-simple user interface belying an incredibly powerful and sophisticated algorithm for indexing websites, there was no longer any need for a Web property owner to fill out Yahoo! forms. Sites just got indexed automagically. It just worked. And it worked more than well enough for Web users and VCs alike.

Our semantic search engine hopes were dead in the water.


Our reaction was to change direction, repurposing the technology and selling it into organisations as a database integration toolset. That was still a hard sell, given that even by 2000 very few corporate people knew anything of semantics, navigation engines, graphs, objects and relationships, let alone the possibility that there was life beyond the relational database. We were lucky to count a large FMCG company and a government department among a handful of customers.

The IP was eventually sold on to another company who still use it today to underpin criminology and marketing software.

A Painful Reality

For me, Cakehouse was both an immensely proud achievement and a huge personal failing. We invented great stuff. We built great stuff. We were inventive. But we utterly failed to innovate.

We were blindsided by circumstance, blinkered by our own worldview, and we failed to recognise both the opportunity and just how unready the market was at the time for such novelty.

And to cap it all off, we were naive about funding: both our own need for it and, more importantly, the differing motivations and needs of funders.

But in the last ten years or so it has been interesting to sit back and watch other companies enter this space. We now have widely available graph databases such as Neo4j, graphs themselves are well understood and commonly referred to, and semantic markup is commonplace and used by search engines.

It seems we had the right ideas, just at the wrong time.

In part 2 I’ll have a look at how, in my opinion, funding for digital innovation has changed over the last decade and how the future for inventors looks somewhat less bleak.
          JBoss AS 5 Development   

Develop, deploy, and secure Java applications on this robust, open source application server

  • A complete guide for JBoss developers covering everything from basic installation to creating, debugging, and securing Java EE applications on this popular, award-winning JBoss application server

  • Master the most important areas of Java Enterprise programming including EJB 3.0, web services, the security framework, and more

  • Starts with the basics of JBoss AS and moves on to cover important advanced topics with the help of easy-to-understand practical examples

  • Written in a very simple and readable style, this book includes essential tips and tricks that will help you master JBoss AS development

In Detail

JBoss AS is the most used Java application server on the market, meeting high standards of reliability, efficiency, and robustness, and it is used to build powerful and secure Java EE applications. It supports the most important areas of Java Enterprise programming including EJB 3.0, dependency injection, web services, the security framework, and more. Getting started with JBoss application server development can be challenging; however, with the right approach and guidance, you can easily master it, and this book promises just that.

Written in an easy-to-read style, this book will take you from the basics of JBoss AS—such as installing core components and plug-ins—to the skills that will make you a JBoss developer to be reckoned with, covering advanced topics such as developing applications with JBoss Messaging service, JBoss web services, clustered applications, and more.

You will learn the necessary steps to install a suitable environment for developing enterprise applications on JBoss AS. Then, your journey will continue through the heart of the application server, explaining how to customize each service for optimal usage. You will learn how to design Enterprise applications using Eclipse and JBoss plug-ins. You will then learn how to enable distributed communication using JMS. Storing and retrieving objects will be made easier using Hibernate. The core section of the book will take you into the programming arena with tested, real-world examples. The example programs have been carefully crafted to be easy to understand and useful as starting points for your applications.

This book will kick-start your productivity and help you to master JBoss AS development. The author's experience with JBoss enables him to share insights on JBoss AS development, in a clear and friendly way. By the end of the book, you will have the confidence to apply all the newest programming techniques to your JBoss applications.

          Learning Jakarta Struts 1.2   

A step-by-step introduction to building Struts web applications for Java developers

  • Learn to build Struts applications right away

  • Build an ecommerce store step-by-step using Struts

  • Well-structured and logical progression through the essentials

In Detail

Jakarta Struts is an Open Source Java framework for developing web applications. By cleanly separating logic and presentation, Struts makes applications more manageable and maintainable. Since its donation to the Apache Foundation in 2001, Struts has been rapidly accepted as the leading Java web application framework, and community support and development is well established.

Struts-based web sites are built from the ground up to be easily modifiable and maintainable, and internationalization and flexibility of design are deeply rooted. Struts uses the Model-View-Controller design pattern to enforce a strict separation between processing logic and presentation logic, and enables efficient object re-use.

The book is written as a structured tutorial, with each chapter building on the last. The book begins by introducing the architecture of a Struts application in terms of the Model-View-Controller pattern. Having explained how to install Jakarta and Struts, the book then goes straight into an initial implementation of the book store. The well structured code of the book store application is explained and related simply to the architectural issues.

Custom Actions, internationalization and the possibilities offered by Taglibs are covered early to illustrate the power and flexibility inherent in the framework. The bookstore application is then enhanced in functionality and quality through the addition of logging and configuration data, and well-crafted forms. At each stage of enhancement, the design issues are laid out succinctly, then the practical implementation explained clearly. This combination of theory and practical example lays a solid understanding of both the principles and the practice of building Struts applications.

This book is designed as a rapid and effective Struts tutorial for Java developers. The book builds a fully-featured web bookstore application incrementally, with each stage described step-by-step. Concepts are introduced simply and clearly as the design and implementation of this sample project evolves. The emphasis is on rapid learning through clear and well structured examples.

Do you have strong sales skills, and would you like to launch a project in the medical consulting sector? Open Source Management operates in the field of consulting...
          Comment on Intel Discusses Open Data Center Usage Models – Intel Chip Chat – Episode 138 by Randy@DataBackup   
Is the Open Data Center model like open source?
          7 GitLab Alternative Tools for Hosting Projects & Collaboration   
best gitlab alternative sites

GitLab is one of the most popular tools for project hosting, code review, and bug tracking. However, people look for alternatives because of its pricing, because they want a site dedicated to open source project hosting, or because of some other downside. If that describes you, this article will help you out. Here I am covering the 7 best GitLab alternative sites for better project management and hosting. 7 GitLab Alternative Tools for Developers, Teams, and Businesses 1) GitHub GitHub comes first in the list of GitLab alternatives. It is the most popular git repository

The post 7 GitLab Alternative Tools for Hosting Projects & Collaboration appeared first on TechUntold.


Until now we may have known only the Windows, Linux, and Mac OS operating systems, all of them foreign products; now there is another open source operating system, modified by Indonesians. It is a Linux distro named Garuda OS, just released on May 20.
Judging from its trailer, I honestly want to try it. Even though it is open source, its look is impressively similar to Windows 7, and its features are fairly complete. Want to try it? Let's check it out.
Curious what it looks like?
Better check out these screenshots first:

3D desktop with transparency

Garuda office suite

Application start menu

Want to download it? Hold on, read the system requirements and features first.

· Operating system kernel:
· Desktop: KDE 4.6.3
· VGA driver support (Nvidia, ATI, Intel, etc.)
· Wireless support for a wide range of network devices
· Support for local and networked printers
· Support for many popular multimedia formats (flv, mp4, avi, mov, mpg, mp3, wma, wav, ogg, etc.)
· Indonesian and English language support, plus more than 60 other world languages (Japanese, Arabic, Korean, Indian, Chinese, etc.)
· Support for installing a wide range of Windows-based applications and (online) games
· Support for documents from popular Windows-based programs (such as Photoshop, CorelDraw, MS Office, AutoCAD, etc.)
· NEW: support for Indonesian script fonts.
· NEW: support for hundreds of Google Web fonts.
· Processor: Intel Atom; Intel or AMD Pentium IV class or better
· Memory: 512 MB RAM minimum, 1 GB recommended.
· Hard disk: 8 GB minimum, 20 GB or more recommended if you want to install additional programs
· Video card: nVidia, ATI, Intel, SiS, Matrox, VIA
· Sound card: Sound Blaster, AC97 or HDA cards
Office:
· LibreOffice 3.3 – bundled with thousands of clipart images, compatible with MS Office and supporting the SNI (Indonesian National Standard) document format
· Scribus – desktop publishing (replaces Adobe InDesign, PageMaker)
· Dia – diagrams / flowcharts (replaces MS Visio)
· Planner – project management (replaces MS Project)
· GnuCash, KMyMoney – finance programs (replace MYOB, MS Money, Quicken)
· Kontact – Personal Information Manager / PIM
· Okular, FBReader – universal document viewers
· and more …
Internet:
· Mozilla Firefox 4.0.1, Chromium, Opera – web browsers (replace Internet Explorer)
· Mozilla Thunderbird – email program (replaces MS Outlook)
· FileZilla – upload / download / FTP
· kTorrent – bittorrent client
· DropBox – online storage (2 GB free)
· Choqok, Qwit, Twitux, Pino – microblogging applications
· Google Earth – world explorer
· Skype – video conferencing / VOIP
· Gyachi, Pidgin – Internet messengers
· xChat – chat / IRC client
· Kompozer, Bluefish – web / HTML editors (replace Dreamweaver)
· Miro – Internet TV
· and more …
Multimedia:
· GIMP – bitmap image editor (replaces Adobe Photoshop)
· Inkscape – vector image editor (replaces CorelDraw)
· Blender – 3D animation
· Synfig, Pencil – 2D animation
· XBMC – multimedia studio
· kSnapshot – screen capture
· Digikam – digital photo manager
· Gwenview – photo viewer
· Amarok – audio player + Internet radio
· Kaffeine – video / movie player
· TVtime – television viewer
· Audacity – audio editor
· Cinelerra, Avidemux – video editors
· and more …
Education:
· Mathematics – algebra, geometry, plotting, fractions
· Languages – English, Japanese, language games
· Geography – world atlas, planetarium, quizzes
· Chemistry – periodic table
· Programming logic
System administration:
· DrakConf – computer control center
· Synaptic – software package manager
· Samba – Windows file sharing
· TeamViewer – remote desktop & online meetings
· Bleachbit – system cleaner
· Back in Time – system backup and restore
· and more …
Utilities:
· Ark – file compression (replaces WinZip, WinRAR)
· K3b – CD/DVD burner (replaces Nero)
· Dolphin – file manager
· Cairo Dock – Mac OS-style dock
· Compiz Fusion + Emerald
· DOS + Windows emulators
· and more …
Games:
· 3D Game Maker
· Mahjong, Tetris, Rubik, Billiard, Pinball, BlockOut, Sudoku, Reversi
· Solitaire, Heart, Domino, Poker, Backgammon, Chess, Scrabble
· Frozen Bubble, Flight Simulator, Tron, Karaoke
· City Simulation, Fighter, Doom, Racing, Tremulous FPS
· DJL, Play on Linux, Autodownloader – game managers / downloaders
· and more …

Beyond the preinstalled programs listed above, more than 10,000 additional programs in various categories are available in the Synaptic repository (program library).

If you want to try it, you can download it directly (3.6 GB).
Click the link below to download:

For more information about Garuda OS,
click

Special Thanks to DYTOSHARE

          DIY HEPA Air Purifier (Lasercut + 3D-print)   
After seeing IanVanMoruik's open source air purifier, I decided to make a similar device. I include some basic steps so you can replicate the air purifier. However, the main goal is not to provide a fully detailed construction overvie...
By: StevenVB

Continue Reading »
          Episode 088 - 2007 Project Donations   

In this mini-episode: a short discussion of the 2007 Project Donation Page and a request that, in the spirit of the holidays, Linux Reality listeners consider making a small donation to any free or open source project of their choosing.

          Episode 087 - Interview with Cory Jaeger   

In this episode: an interview with Cory Jaeger, Network Manager at D.C. Everest School District about Linux and open source software in education; three audio Listener Tips, one on Gutsy Gibbon tweaks, one on changing hypertext links in, and another on recording with command line tools such as sox.

Extra notes are located here.

          Episode 075 - BSD Wrap-Up   

In this episode: O'Reilly discount code for Linux Reality listeners available on the LR website; a new Linux Reality contest where one can win a listener-donated book, LPI Certification in a Nutshell, for the best audio Listener Tip sent in between now and the end of November; a new podcast client I am developing in Python; a petition to open source the Main Actor video editing software; a call for guest podcasts; a brief wrap-up discussion of my adventures with the BSDs; audio and email listener feedback.

          Episode 068 - Interview with Jonas Kron   

In this episode: an interview with attorney Jonas Kron, who practices in the area of corporate social and environmental responsibility, in which we discuss bringing open source software into a small business; audio question on open source groupware (see Citadel, phpGroupWare, and Zimbra); and audio and email feedback.

          Episode 066 - Interview with Andrew Smith   

In this episode: an interview with recent Seneca School of Computer Studies graduate, Andrew Smith, in which we discuss various projects he has worked on including the Freedom Toaster, ISO Master, and a 2006 Google Summer of Code project sponsored by Mozilla that has been incorporated into the Firefox web browser and Thunderbird email client, as well as Seneca's annual Free Software and Open Source Symposium; an email Listener Tip; audio and email listener feedback.

          Special Episode 002 - Site Updates   

In this short and sweet special episode: amazing number of donations by Linux Reality listeners to free and open source projects; four new LR feeds are available on the LR homepage, one with the mp3 files for all episodes, one with the mp3 files for the last 10 episodes, one with the ogg files for all episodes, and one with the ogg files for the last 10 episodes. Although these feeds are still in beta, they appear to be working fine.

          Episode 041 - Compiling from Source   

In this episode: donating to free and open source projects; a discussion of how to compile software from source code (more guides here and here); a Listener Tip; listener feedback.

          Episode 017 - SUSE Linux 10.1 Part 2   

In this episode: a brief mention of Ubuntu Dapper Drake; audio feedback; a look at the SUSE YaST configuration tool; how to fix the SUSE Linux 10.1 package management problems with the Smart package manager using the packages provided by a SUSE developer; additional resources on how to enable Smart in SUSE Linux 10.1 are here, here, and here; Serenity charity screenings. Note: in this episode, I forgot to mention that you should run “/sbin/SuSEconfig” and then “/sbin/ldconfig” in a terminal as your root user after installing the Smart packages.

          How to record the screen on the major operating systems   
Screen recording is a very common practice for a wide range of purposes, whether to show something to another user or to record the typical videogame session. Let's see how to record the screen on the major desktop and mobile operating systems: Windows: we can use the OBS tool, a very powerful open source program with which we ca...

SOURCE  »  rec
How to record the screen on the major operating systems
          6 hidden Moodle administrator tools which you may not be knowing at all   
Source: 6 hidden Moodle administrator tools which you may not be knowing at all #Moodletips – Moodle World. Moodle, the best open source LMS in the world, is modular in nature and provides a lot of flexibility to extend the available features with plugins. Managing a Moodle site as a Moodle Administrator can be…
          RTK Rover and Base Configuration Web Interface   

When I first started to mess around with RTKLIB I wanted a nice UI for it; while researching, I came across a product that already has a nice UI, and I was glad to find it's open source (!!).…

Read Full Story
          Creating Custom Fancy Address Labels in LibreOffice   
There are something like a squillion and one different Avery® and Avery-compatible address labels you can buy, and with the open source LibreOffice productivity suite you can easily create your own custom fancy return address labels. If you’re wondering what LibreOffice is, it’s an offshoot of the popular OpenOffice productivity suite. Development on OpenOffice has stagnated for the […]
          Value, Marketing and Freedom   
The Free Software Foundation speaks about Free Software, but the GPL gives less freedom to authors and users of the code than, e.g., the BSD license does. Why, then, is the GPL more successful in the eyes of many people than the BSD license?

One important reason, of course, is marketing: the GPL is marketed better as a result of the success of Linux.

The other reason is the value of the software from the FSF in the 1980s. GCC is of great value to people, and that value led people to accept the license even though it does not grant as much freedom as the BSDL does.

This acceptance was not there from the beginning. Initially, the whole of GCC was published under the GPL and thus could not be used to compile software that was not itself published under the GPL. For this reason, there was a heated discussion about the usability of GCC.

Later, the LGPL was created and parts of GCC (libgcc) were put under the LGPL.

As we see, people are willing to accept reduced freedom if the value of the software compensates for it.

Now, what has happened to GPLd software in the past few years? The Free Software Foundation heavily reduced its effort in extending Free Software and instead started a campaign to _talk_ about Free Software. Meanwhile, other software improved or became Open Source.

It seems that these ideas help to understand why Linux people started a campaign against OpenSolaris and the CDDL....

The BSD operating systems (although they give more freedom than Linux) do not look like a real threat to Linux, as there is not enough marketing for BSD-based operating systems.

OpenSolaris, however, _is_ a real threat to Linux. OpenSolaris gives more freedom than Linux, it offers impressive new features, and there is marketing behind it.

It seems that the FUD against OpenSolaris published by Linux people is caused by the fact that the product of value and freedom found in Linux is smaller than the product of value and freedom available with OpenSolaris.

A proof that OpenSolaris is on the right track?
In the long term, real freedom always wins....
          Work on SchilliX, the first OpenSolaris-based UNIX, is underway   
Now that a pure, self-compiled OpenSolaris boots, we have started working on completing our OpenSolaris-based UNIX. The first version will be text-only, but isn't the rule for Open Source projects to release early and release often? Isn't there even a text-only Linux (grml)?

When I started this project, I feared that I would not get enough help. Now, for a month, I have had a student employed at the expense of Fraunhofer Fokus and a few people helping as volunteers. It seems it is not too late for a real Open Source UNIX...
          Do we need different levels of OSI compliance?   
OSI (the Open Source Initiative) defines rules for Open Source
licenses. Licenses that want to call themselves OSI-compliant need
to match these rules and get approval from OSI. Do these rules
still serve the demands of today's Open Source community?

The most important issue in the Open Source community today is
collaboration. To support collaboration, people need to be able to
combine code from different authors, which today often means code
published under different licenses. So a real Open Source license
should allow the covered code to be used together with code under
other OSI-compliant licenses.

Taking a closer look at the licenses listed at OSI turns up some
that are compatible with each other, in the sense that code covered
under license a) may be included in a project covered by license b)
and vice versa. Other licenses do not allow this. Wouldn't it then
be a good idea to call the mutually compatible licenses "first-class
OSI compliant" and the others "second-class OSI compliant"?

Let me give an example: I frequently read that code covered by the GPL and
code covered by the BSD license are compatible and may be used together
within a single project. Is this really true?

The GPL requires every project that includes GPL-covered code to be
licensed under the GPL.

There is no problem with including a small part of BSD code inside a bigger
project licensed under the GPL. This is not because the two licenses are
compatible, but only because this kind of usage is tolerated by the authors
of the BSD-covered code. However, you cannot include a small part of GPLd
code in a bigger project licensed under the BSD license without losing the
freedom of the BSD project.

What the GPL tries to do is change the license of other people's code.
It is questionable whether this is compliant with European copyright law.

How can this problem be avoided?

The first step would be for OSI to publish a separate list of the licenses
that are compatible with each other in both directions.

The second step would be to change the incompatible licenses. If, for
example, the GPL did not require the whole project to be put under the GPL,
but merely required that the whole project use no code outside an
OSI-compliant license, the primary intention of the GPL would not be given
up, yet the GPL would become compatible with most other OSI licenses.

          Linus thinks Solaris is a joke   
Linus does not stop giving interviews about Solaris 10. The latest I noticed is at:

Among other things, Linus slams Solaris:

"Solaris/x86 is a joke, last I heard."

It is interesting to see that Linus only takes his knowledge from other (doubtless biased) people instead of trying it himself and judging for himself.

It seems that Linus cannot escape his "I only watch my own belly button" mentality; he even promises not to check Solaris after it has finally been announced and released as Open Source software.

So let us ask: Why is Linus constantly attacking Solaris?

If Solaris were really on a declining branch and dying, why on earth would Linus need to attack it? If Linus were right, he could just recline, relax, and wait for its death...

It seems that Linus is in great fear of Solaris and the kind of openness it will offer once OpenSolaris is available to more people than just the participants of the OpenSolaris pilot.

For me, Solaris is not a joke. I use it as my preferred development platform for many reasons: it comes with free and useful debuggers, it offers me stable and reliable interfaces, and (important for my SCSI tools) it correctly returns SCSI error codes from the drives to applications.

          Humanitarian Applications of Open Source at IEEE   

Q&A with Alfredo Herrera, IEEE Humanitarian Activities Committee The IEEE Standards Association (IEEE-SA) is exhibiting at OSCON 2017 in Austin, Texas, 10-11 May 2017. Stop by Booth 207 to learn about the role that open source plays in IEEE standards development. How is IEEE involved in humanitarian activities? The IEEE Humanitarian Activities Committee (HAC), for

The post Humanitarian Applications of Open Source at IEEE appeared first on Beyond Standards.

          Licensing and Intellectual Property Rights of Open Source   

Q&A with Andrew Updegrove, Gesmer Updegrove LLP, Partner The following is for general information purposes only, and is not intended as legal advice. The IEEE Standards Association (IEEE-SA) is exhibiting at OSCON 2017 in Austin, Texas, 10-11 May 2017. Stop by Booth 207 to learn about the role that open source plays in IEEE standards

The post Licensing and Intellectual Property Rights of Open Source appeared first on Beyond Standards.

          What Is Open Source, and Why Is IEEE Involved?   

Q&A with Gary Stuebing, IEEE Corporate Advisory Group Open Source Ad Hoc, Chair The IEEE Standards Association (IEEE-SA) is exhibiting at OSCON 2017 in Austin, Texas, 10-11 May 2017. Stop by Booth 207 to learn about the role that open source plays in IEEE standards development. How do you define the term “open source”? Basically,

The post What Is Open Source, and Why Is IEEE Involved? appeared first on Beyond Standards.

          Ex-CIA now Open Source Intel Luminary Robert David Steele; Ex-political prisoner and refugee Brendon O'Connell   
This show broadcasts LIVE 8 to 10 pm Eastern Friday, April 21st at - click on Studio B - then gets archived about 24 hours later.  For only $3.95 a month you can listen to shows on-demand before they are broadcast - and also get free downloads. Help Kevin keep these shows on the air – Click HERE! Or if you prefer, PAYPAL a one time donation, or a regular payment, to truthjihad(at)gmail[dot]com.

Robert David Steele exposes false flags in Orlando False Flag: The Clash of Histories
First hour: Former CIA Clandestine Services Officer Robert David Steele will be giving a "new stump speech ... at the Cosmopolitan Club in NYC on 18 May — it will be video-taped and posted to YouTube and is the official start of the 2nd American Revolution." Get a preview of the revolution on tonight's show. Donations welcome here.
       In this show, Steele predicts that a major pedophilia scandal, unleashed by a Wikileaks data-dump, will destroy the careers of several leading Republican political figures, leading to a Democratic sweep in the 2018 elections and the impeachment and conviction of Donald Trump.

Second hour: Brendon O'Connell spent three years in prison based on this YouTube video. Meanwhile, every single day, tens of thousands of examples of vastly worse hate speech – directed against victims, not oppressors – permeate all forms of Western discourse, and are accepted and internalized by hundreds of millions of people.
    Did O'Connell speak truth to power? Did he express or exaggerate that truth in offensive terms? What are the limits of discourse when speaking out against oppression to a member of the oppressor group who is essentially an apologist for genocide?
     Only a moral imbecile could think O'Connell ever deserved prison. Whether you think he deserves a medal for valor, or a verbal reprimand for going too far, is another matter. Watch the video, listen to this show, and decide for yourself.
     Maisoon Rice, a friend and supporter of Brendon O'Connell, will also join us....
     During this show, Brendon O'Connell references some of the following material:

Interview 1 -
Interview 2 -
Interview 3 -

These two videos of me at Perth and Sydney Gaza rallies would also be insightful for anyone wanting to know what I'm about.

Perth -
Sydney -

Also, this is the ACTUAL video I was arrested for. This is a modified one with commentary. The link to the blog contains excellent information also.



Important info too - Israel has "kill switched" the internet.

My main writing on Israeli intelligence activity


I'd say with those links you pretty much know what I'm about :-)

-Brendon O'Connell
          Comment on ATO2014: Open Source in Healthcare by How to train your doctor... to use open source   
[…] published on Nicole blog, ATO2014: Open Source in Healthcare. Posted here via Creative […]
          Comment on ATO2014: Lessons from Open Source Schoolhouse by Free Software in Education News – October « Being Fellow #952 of FSFE   
[…] Yet another interesting article from the Penn Manor School District […]
          Comment on ATO2014: Open Source in Healthcare by How to Train Your Doctor… to Use Open Source | The Open Standard   
[…] published on Nicole’s blog, ATO2014: Open Source in Healthcare. Posted here […]
          Comment on ATO2014: Open Source Schools: More Soup, Less Nuts by ATO2014: Lessons from Open Source Schoolhouse - What I Learned Today...   
[…] Charlie Reisinger from the Penn Manor School District talked to us next about open source at his school. This was an expanded version of his lightning talk from the other night. […]
          Comment on Keynote: We The People: Open Source, Open Data by It’s Been One Year Since We Published the Source Code for We The People | The White House   
[…] Keynote: We The People: Open Source, Open Data […]
          Comment on 10 Secrets to Sustainable Open Source Communities by 10 secrets to sustainable in open source communities – The most sensational news   
[...] posted on What I Learned Today…. Reposted using Creative [...]
          Comment on Using Open Source in the Classroom Every Single Day by OSCON 2013: Day 1 keynotes and sessions – The most sensational news   
[...] and other tools like Kmplot, Katrium, Kstars, Kgeography, and Scratch. Here’s a more detailed post about this session by Nicole [...]
          Comment on Using Open Source in the Classroom Every Single Day by OSCON 2013: We’re live blogging! – The most sensational news   
[...] and other tools like Kmplot, Katrium, Kstars, Kgeography, and Scratch. Here’s a more detailed post about this session by Nicole [...]
          Xml Europe Conference   
XML Europe provides the premier European forum for the XML community, covering electronic business, publishing, the Internet, government, software, and open development. X-Tech is the main European conference for developers and IT people working with XML and Web technologies, bringing together the worlds of web development, open source, the semantic web, and open standards. This year's program includes: - Core Technology - […]
          2016 by the numbers:   

After reading patio11's voluminous rundown of 2016, I decided to run some numbers of my own. Here's what I came up with as far as creative output outside of work and family time:

No wonder I feel soooooo tired.

          The Joy of Making Simple Edits to Microsoft's Docs   

When working on the previous blog post, I spent some time at microsoft's documentation site, where I noticed a small error: two similar articles had their titles swapped with each other. As in: Foo had Bar's title and Bar had Foo's title.

[screenshot: tiny bug]

Usually you see an error like this and just move on.

[screenshot: edit button]

But the edit button caught my eye, and I decided to try fixing the mistake. I was pretty apprehensive because these kinds of things so often lead to yak shaving, and my current schedule does not permit the shaving of yaks.

So I clicked 'edit'. This took me to a github repo containing the source file, written in markdown. Hey, I write everything in markdown these days, so this looked promising. I backed up and soon found a page describing how to contribute.

[screenshot: contribute link]

Reading the description there, a tiny bug fix like this doesn't require too much work.

I "forked" the repository:

[screenshot: fork the repository]

...and then cloned the fork of the repository onto my local machine, with:

git clone

UPDATE: There is a much simpler way to do this, that avoids the need to press 'fork', or to clone onto your local machine. It is detailed at the end of this article in a P.S.

I found the files that need to be corrected, made the correction, and then committed the changes locally:

git add *
git commit . -m "title of and iis-with-msdeploy were back-to-front."

I double-checked that the changes I'd made were the right ones, by inspecting them again...

[screenshot: diffs ok]

git diff --cached

Then pushed the changes back to github, so that my online forked version of the repository was correct.

git push
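The local half of this workflow (clone, commit, inspect, push) can be rehearsed offline; in this hedged sketch, a bare local repository stands in for the GitHub fork, and the file name and commit message are invented for illustration:

```shell
# Offline rehearsal of the clone -> edit -> commit -> push steps.
set -e
work="$(mktemp -d)"

# Stand-in for the forked GitHub repository.
git init -q --bare "$work/fork.git"

# Clone the "fork", as `git clone <fork-url>` does above.
git clone -q "$work/fork.git" "$work/docs"
cd "$work/docs"
git config user.email "you@example.com"
git config user.name "You"

# Make the fix (here: a hypothetical doc file), then commit it locally.
echo "title: deploying with MSDeploy" > iis-with-msdeploy.md
git add .
git commit -q -m "fix swapped article titles"

# Inspect what will be pushed, then push it back to the fork.
git show --stat --oneline HEAD
git push -q origin HEAD
```

From there the web flow takes over: GitHub notices the pushed change in the fork and offers the 'New pull request' button, as the post goes on to describe.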

At github, in my fork of the repository, I clicked on 'New pull request' which led me to a page where I could see how my fork differed from the original repo. I reviewed the changes once more, then clicked the 'Create pull request' button. I wrote a brief description of my change and then sent it into The Ether by clicking the first green button I saw.

[screenshot: create pull request]

Off it went. Wheeeeee! My pedantry, embodied in digital form, zipping around the globe.

Back in the original repository I saw a fleet of bots spring to life, checking and analyzing everything that was happening. One of the bots noticed I was a first time contributor, and asked me to digitally sign a contribution license agreement. It even gave this helpful guarantee "I promise there's no faxing" which was a nice touch.

I was then whisked expertly through a DocuSign process where I gave a website enough details about me that they could prepare the paperwork for this contribution license. It arrived in my inbox a minute later. I signed what I needed to sign, digitally, and was amazed that some team of people, somewhere in the world, had designed this well enough for legal hoops to be jumped in mere moments. This sort of baloney would've eaten up a week, easily, not too many years ago.

While I was busy doing the signing, the bots were hard at work verifying everything else, and I noticed from other emails that the bots were now happy, and had decided to forward my pull request onto their human masters.

I felt oddly nervous at this point. Battle scars from years of forum administrators shouting that a question has just been asked in the wrong forum, or from bureaucrats rejecting work because form 27B is not filled out in triplicate... a knot formed in my stomach as I prepared for a tiny little rejection, of this tiny little contrib.

Shortly thereafter there was a kind response from the original author, who was happy with the contribution, and the deal was done.

It was a good feeling, good enough that I felt inspired to write about it, at a time when I do not have time for writing about things.

You know me: I usually just take pot-shots at microsoft from afar, and have plenty of fun doing it. But now, this increasingly open microsoft lets me make positive contributions instead. And having done it once, it will be easier in future.

I hope that this extremely detailed walkthrough will inspire you to consider making a tiny contribution to an open source project for yourself.

(As usual, let me know the bits I did in a silly way, so others can learn.)

(Threw this together, including historically accurate screenshots, in 15 minutes, thanks to TimeSnapper)

P.S. There's also a simpler way that avoids the need to clone onto your local drive...



          The Future of Institutions in the Digital Age   
On educational institutions and the transformations they will face (or resist) in the coming years.

Some time ago, we were asked to reflect a little on the impact of the Network Society (we prefer that term to "Digital Age") on universities, extending our thinking (which generally focuses on secondary school).

With the university somewhere in our awareness, we began by recalling the video "A Vision of Students Today" by Michael Wesch (blog).

A few days later, while following the publications of the John D. and Catherine T. MacArthur Foundation (perhaps the leading institution on education and new technologies), we found among its Reports on Digital Media and Learning a publication called The Future of Learning Institutions in a Digital Age, by Cathy N. Davidson and David Theo Goldberg. The whole text is key input for anyone who wants to think about the new university, and we have freely translated some excerpts.

Although we have already talked about moving from an organization that transmits "knowledge" to an institution that learns (see "Institución educativa, una definición provisional"), we want to highlight an interesting decalogue called "Pillars of Institutional Pedagogy," where the authors suggest ten principles for the future of learning in learning institutions.

It strikes us as a useful summary, one that keeps us focused as we think about and drive change in our institutions (at both the secondary and university levels).
We suggest that the following ten principles be foundational for rethinking the future of learning institutions.

We see these principles simultaneously as challenges and as scaffolding on which to develop creative learning practices, at once transformative and transforming themselves, as new challenges arise and the new possibilities offered by technology take shape.
  1. Self-directed learning
  2. Horizontal structures
  3. From presumed authority to collective credibility
  4. A decentralized pedagogy
  5. Networked learning
  6. "Open Source" education
  7. Learning conceived as connectivity and interactivity
  8. Lifelong learning
  9. Learning institutions operating as mobilizing networks
  10. Flexible scalability and simulation
We also wanted to transcribe the development of point 9, focused on the changes mentioned above, which leaves us a challenge we must not ignore (but which must neither immobilize us nor make us rigid).
9. Learning institutions operating as mobilizing networks

Collaborative, networked learning also changes how we think about institutional learning, about "network culture," and about how we conceive of institutions in general.

Traditionally, we have thought of institutions in terms of the rules, regulations, and norms that govern interaction, production, and distribution within the institutional structure. Network culture and its associated learning practices and devices suggest that we think of institutions, especially those that promote learning, as mobilizing networks.

Networks enable a mobility that emphasizes flexibility, interactivity, and production. Mobilization, in turn, encourages and enables interactivity within a network, which endures as long as it is productive, opening onto or giving way to new interactive networks that offer new possibilities as the older ones become sclerotic. Institutional culture thus shifts from heavy to light, from prescriptive to open.

With this new construction of institutional understanding and practice, the challenge that emerges is how to build reliability and predictability alongside flexibility and innovation.

          Write your Amazon Alexa Skill using C# on AWS Lambda services   

After a sick day a few weeks ago spent writing my first Alexa Skill, I've been pretty engaged with understanding this voice-UI world of Amazon Echo, Google Home, and others.  It's fun to use and, as 'new tech,' fun to play around with.  Almost immediately after my skill was certified, I saw this come across my Twitter stream:

I had spent a few days getting up to speed on Node and the environment (remember, I've been working in client technologies for a long while) and using VS Code, which was fun.  But using C# would have been more efficient for me (or so I thought).  AWS Lambda just announced support for C# as an authoring environment for Lambda functions.  As it turns out, the C# Lambda support is fairly general-purpose, so the dev experience for creating a C# Lambda backing a skill doesn't yet match what exists for Node.js development.  I thought it would be fun to try, and was eventually successful, so hopefully this post finds others trying as well.  Here's what I learned in the < 4 hours (I time-boxed myself for this exercise) spent getting it to work.  If there is something obvious I missed to make this simpler, please comment!

The Tools

You will first need a set of tools.  Here was my list:

With these I was ready to go.  The AWS Toolkit is the key here as it provides a lot of VS integration that will help make this a lot easier.

NOTE: You can do all of this technically with VS Code (even on a Mac) but I think the AWS Toolkit for VS makes this a lot simpler to initially understand the pieces and WAY simpler in the publishing step to the AWS Lambda service itself.  If there is a VS Code plugin model, that would be great, but I didn’t find one that did the same things here.

Armed with these tools, here is what I did…

Creating the Lambda project

First, create a new project in VS, using the AWS Lambda template:

This project name doesn't need to match your service/function names, but it is one of the parameters you will set in the Lambda configuration, so naming it something sensible helps.  We're just going to demonstrate a simple Alexa skill for addition, so I'm calling it NumberFunctions.

NOTE: This post isn’t covering the concepts of an Alexa skill, merely the ability to use C# to create your logic for the skill if you choose to use AWS Lambda services.  You can, of course, use your own web server, web service, or whatever hosted on whatever server you’d like and an Alexa skill can use that as well. 

Once we have that created you may see the VS project complain a bit.  Right click on the project and choose to restore NuGet packages and that should clear it up.

Create the function handler

The next step is to write the function handler for your skill.  The namespace and public function name matter, as they are also inputs to the configuration, so choose them carefully.  For me, I'm just using the default namespace, class, and function name that the template provided.  Next, gather the input from the Alexa skill request.  A Lambda function can serve anything; it is NOT limited to Alexa responses.  But this post is focused on Alexa skills, which is why I'm referring to this specific input.  Alexa requests arrive as a JSON payload with a specific format.  If you accept the default function handler signature of string, ILambdaContext, it will likely fail due to issues you can read about here on GitHub.  So it really helps to understand that the request comes in with three main JSON properties: request, version, and session.  Having an object that exposes those properties helps, especially one that can automatically map the JSON payload to a C# object; after all, one of the main benefits of using C# is the strongly-typed development you may be used to.
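
To see the mapping idea outside of C#, here is a minimal sketch in Python: a typed wrapper exposing the three top-level properties (request, version, session) and a helper that maps the JSON payload onto it.  The SkillRequest class here is a hypothetical illustration of the concept, not the Slight.Alexa type:

```python
import json
from dataclasses import dataclass

# Hypothetical typed wrapper for the three top-level Alexa properties.
@dataclass
class SkillRequest:
    version: str
    session: dict
    request: dict

    @classmethod
    def from_json(cls, payload: str) -> "SkillRequest":
        # Map the raw JSON payload onto the typed object.
        data = json.loads(payload)
        return cls(version=data["version"],
                   session=data["session"],
                   request=data["request"])

req = SkillRequest.from_json(
    '{"version": "1.0", "session": {}, "request": {"type": "LaunchRequest"}}')
print(req.request["type"])  # LaunchRequest
```

Once the payload is typed, dispatching on the request type becomes ordinary attribute access instead of JSON path parsing, which is the same benefit the C# library provides via its annotations.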

Rather than create my own, I went hunting for options.  There isn't an Alexa Skills SDK for .NET yet (perhaps that is coming), but I found two options.  The first seemed to require more setup and understanding, and I haven't dug deep into it yet, but it might be viable.  I just wanted to deserialize/serialize the payload into known Alexa types.  For this I found an open source project called Slight.Alexa.  It was built for the full .NET Framework and wouldn't work with the Lambda service until ported to .NET Core, so I forked it, moved code to a shared project, and created a .NET Core version of the library. 

NOTE: The port of the library was fairly straightforward save for a few project.json things (which will be going away) and finding replacements for things that aren't in .NET Core, like System.ComponentModel.DataAnnotations.  Luckily there were replacements that made this simple.

With my fork in place I made a quick beta NuGet package of my .NET Core version so I could use it in my Lambda service (.NET Core projects can’t reference DLLs so they need to be in NuGet packages).  You can get my beta package of this library by adding a reference to it via your new Lambda project:

This now gives me a strongly-typed object model against the Alexa request/response payloads.  You'll also want to add a NuGet reference to the JSON.NET library (isn't every project using this now…shouldn't it be the default reference for any .NET project?).  With both in place you have what it takes to process requests.  Alexa requests come in primarily as Launch, Intent, and Session requests (again, I'm over-simplifying, but for our purposes these are the ones we will look at).  The launch request is when someone just launches your skill via the 'Alexa, open <skill name>' command.  We'll handle that and tell the user what our simple skill does.  To do this, we change the function handler input from string to SkillRequest from the newly-added Slight.Alexa.Core library:

public SkillResponse FunctionHandler(SkillRequest input, ILambdaContext context)

Because SkillRequest is an annotated type the library knows how to map the JSON payload to the object model from the library.  We can now work in C# against the object model rather than worry about any JSON path parsing.

Working with the Alexa request/response

Now that we have the SkillRequest object, we can examine the data to understand how our skill should respond.  We can do this by looking at the request type.  Alexa skills have a few request types that we’ll want to look at.  Specifically for us we want to handle the LaunchRequest and IntentRequest types.  So we can examine the type and let’s first handle the LaunchRequest:

Response response;
IOutputSpeech innerResponse = null;
var log = context.Logger;

if (input.GetRequestType() == typeof(Slight.Alexa.Framework.Models.Requests.RequestTypes.ILaunchRequest))
{
    // default launch request, let's just let them know what you can do
    log.LogLine($"Default LaunchRequest made");

    innerResponse = new PlainTextOutputSpeech();
    (innerResponse as PlainTextOutputSpeech).Text = "Welcome to number functions.  You can ask us to add numbers!";
}

You can see that I’m just looking at the type and if a LaunchRequest, then I’m starting to provide my response, which is going to be a simple plain-text speech response (with Alexa you can use SSML for speech synthesis, but we don’t need that right now).  If the request is an IntentRequest, then I first want to get out my parameters from the slots and then execute my intent function (which in this case is adding the parameters):

else if (input.GetRequestType() == typeof(Slight.Alexa.Framework.Models.Requests.RequestTypes.IIntentRequest))
{
    // intent request, process the intent
    log.LogLine($"Intent Requested {input.Request.Intent.Name}");

    // AddNumbersIntent: get the slot values
    var n1 = Convert.ToDouble(input.Request.Intent.Slots["firstnum"].Value);
    var n2 = Convert.ToDouble(input.Request.Intent.Slots["secondnum"].Value);

    double result = n1 + n2;

    innerResponse = new PlainTextOutputSpeech();
    (innerResponse as PlainTextOutputSpeech).Text = $"The result is {result}.";
}


With these in place I can now create my response object (to provide session management, etc.) and add my actual response payload, using JSON.NET to serialize it into the correct format.  Again, the Slight.Alexa library does this for us via the annotations on its object model.  Please note this sample code is not robust and handles zero errors…you know, the standard 'works on my machine' warranty applies here:

response = new Response();
response.ShouldEndSession = true;
response.OutputSpeech = innerResponse;
SkillResponse skillResponse = new SkillResponse();
skillResponse.Response = response;
skillResponse.Version = "1.0";

return skillResponse;
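
For reference, the JSON envelope those C# objects serialize to looks roughly like the sketch below (a minimal illustration of the Alexa response shape; the exact property casing is handled for you by the library's annotations):

```python
import json

# Rough shape of the Alexa response envelope produced by the code above.
skill_response = {
    "version": "1.0",
    "response": {
        "shouldEndSession": True,
        "outputSpeech": {
            "type": "PlainText",
            "text": "The result is 8.",
        },
    },
}

print(json.dumps(skill_response, indent=2))
```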

I’ve now completed my function, let’s upload it to AWS!

Publishing the Lambda Function

Using the AWS Toolkit for Visual Studio this process is dead simple.  You’ll first have to make sure the toolkit is configured with your AWS account credentials which are explained here in the Specifying Credentials information.  Right click on your project and choose Publish to AWS Lambda:

You’ll then be met with a dialog where you need to choose some options.  Luckily it should be pretty self-explanatory:

You’ll want to make sure you choose a region that has the Alexa Skill trigger enabled.  I don’t know how they determine this, but US-Oregon does NOT have it enabled, so I’ve been using US-Virginia, which works fine for me.  The next screen will ask you to specify the execution role (I am using the basic execution role).  If you don’t know what these are, re-review the Alexa Skills Kit documentation for Lambda to get started; these are IAM roles in AWS that you have to choose.  After that you click Upload and you’re done.  The toolkit bundles everything into a zip, creates the function (if you didn’t already have one; if you did, you can choose it from the drop-down to update it), and uploads it for you.  You can do all this manually, but the toolkit really, really makes it simple.

Testing the function

After you publish, the test window pops up:

This allows you to manually test your Lambda.  Among the pre-configured request objects you can see a few Alexa request objects.  None of them will be exactly what you need, but you can start with one and modify it for a quick test.  If you look at my screenshot, I modified it to specify our payload…you can see the payload I’m sending here:

{
  "session": {
    "new": false,
    "sessionId": "session1234",
    "attributes": {},
    "user": {
      "userId": null
    },
    "application": {
      "applicationId": "[unique-value-here]"
    }
  },
  "version": "1.0",
  "request": {
    "intent": {
      "slots": {
        "firstnum": {
          "name": "firstnum",
          "value": "3"
        },
        "secondnum": {
          "name": "secondnum",
          "value": "5"
        }
      },
      "name": "AddIntent"
    },
    "type": "IntentRequest",
    "requestId": "request5678"
  }
}

That is sending an IntentRequest with two parameters and you can see the response functioned correctly!  Yay!

Of course the better way is to use Alexa to test it so you’ll need a skill to do that.  Again, this post isn’t about how to do that, but once you have the skill you will have a test console that you can point to your AWS Lambda function.  I’ve got a test skill and will point it to my Lambda instance:

UPDATE: Previously this wasn’t working but thanks to user @jpkbst in the Alexa Slack channel he pointed out my issue.  All code above updated to reflect working version.

Well I had you reading this far at least.  As you can see the port of the Slight.Alexa library doesn’t seem to quite be working with the response object.  I can’t pinpoint why the Alexa test console feels the response is valid as the schema looks correct for the response object.  Can you spot the issue in the code above?  If so, please comment (or better yet, fix it in my sample code).

Summary (thus far)

I set out to spend a minimal amount of time getting the C# Lambda service + Alexa skill working.  I’ve uploaded the full solution to a GitHub repository: timheuer/alexa-csharp-lambda-sample for you to take a look at.  I’m hopeful that this is simple and we can start using C# more for Alexa skills.  I think we’ll likely see some Alexa Skills SDK for .NET popping up elsewhere as well. 

Hope this helps!

          Updated Flickr4Writer for new Flickr API restrictions   

Before Windows Live Writer was even publicly released, I was glad to be an early beta user/tester of the product.  The team thought early about an extensibility model, and it has been my content authoring tool ever since.  It has allowed me to use *my* preferred content workflow with my cloud providers/formatters/tracking and other such plug-ins, thanks to this extensibility.

One of the first plugins available was one of mine, called Flickr4Writer.  It was pretty popular (as most ‘firsts’ are) and I got a lot of good feedback that shaped the functionality and user interface.  Is it the best design/code?  Probably not, but it seems to have served the needs of many folks and I’m happy about that.  I put the code into the open source world around the same time; it never received much uptake there and only one actual code contribution (plenty of feedback, though). 

I depended on an early library called FlickrNet, and contributed a few small fixes to it during my development of Flickr4Writer.  It has been a very popular library, and I think it is even used in some close-to-official Flickr apps for the Windows platform.  It served my purpose fine for a LONG time…until two days ago.

Because Flickr4Writer was pretty much complete and ‘bug-free’ for the mainstream cases, it hadn’t been touched in years; there was never any need, and I felt no urge to fiddle with code that didn’t need to be messed with.  Another factor is that Live Writer plugins are pretty much locked to .NET 2.0 for loading, so there was no real incentive to move to anything else.  Two days ago I started getting emails that Flickr4Writer was not working anymore.  One user sent me a very kind note detailing what he felt the problem was, based on the recent API changes required by Flickr.  On 27 June 2014 the Flickr API went SSL-only, and pretty much all my code broke.  Well, to be fair, the version of FlickrNet I was using no longer worked.  It was time to update.

I spent a few hours today switching to the latest FlickrNet library (via NuGet, since it is published that way now) and took the time to switch over all the now-obsolete API usage in my app.  I hit a few speed bumps along the way but got it done.  I sent the bits to a few of the folks who emailed me, and they confirmed it was working, so I feel good about publishing it.  So here is the update to Flickr4Writer, version 1.5, and the steps:

  1. Close Windows Live Writer completely
  2. Uninstall any previous version of Flickr4Writer from Control Panel on your machine
  3. Run the new installer for Flickr4Writer by downloading it here.
  4. Launch Windows Live Writer again
  5. Go to the Plugin Options screen and select ‘Flickr Image Reference’ and click Options
  6. Step #5 should launch the authentication flow again to get new tokens. 
  7. Pay attention to the permission screen on Flickr web site as you will need the code provided when you authorize
  8. Enter the code and click OK
  9. Resume using Flickr4Writer

This worked for a set of folks and in a few tests on my machines.  Performing the re-authentication is key to getting the updated tokens for this plugin’s API usage.  I apologize for making folks uninstall/re-install, but the installer code was one thing that was really old, and I just didn’t want to spend too much time getting it working, so I created a new one.

I’m really glad people still find Flickr4Writer useful, and I apologize for not having an update sooner (I never actually got the notice Flickr says was sent out…it’s probably in my spam somewhere), but I appreciate the users who alerted me to the problem quickly!

Hope this helps!

This work is licensed under a Creative Commons Attribution By license.

          TuxMachines: OSS Leftovers   
  • AMD Plays Catch-Up in Deep Learning with New GPUs and Open Source Strategy

    AMD is looking to penetrate the deep learning market with a new line of Radeon GPU cards optimized for processing neural networks, along with a suite of open source software meant to offer an alternative to NVIDIA’s more proprietary CUDA ecosystem.

  • Baidu Research Announces Next Generation Open Source Deep Learning Benchmark Tool

    In September of 2016, Baidu released the initial version of DeepBench, which became the first tool to be opened up to the wider deep learning community to evaluate how different processors perform when they are used to train deep neural networks. Since its initial release, several companies have used and contributed to the DeepBench platform, including Intel, Nvidia, and AMD.

  • GitHub Declares Every Friday Open Source Day And Wants You to Take Part

    GitHub is home to many open-source development projects, a lot of which are featured on XDA. The service wants more people to contribute to open-source projects with a new initiative called Open Source Friday. In a nutshell, GitHub will be encouraging companies to allow their employees to work on open-source projects at the end of each working week.

    Even if all of the products you use on a daily basis are closed source, much of the technology world runs on open source software. A lot of servers run various GNU/Linux-based operating systems such as Red Hat Enterprise Linux. Much of the world’s infrastructure depends on open source software.

  • Open Source Friday

    GitHub is inviting everyone – individuals, teams, departments and companies – to join in Open Source Friday, a structured program for contributing to open source that started inside GitHub and has since expanded.

  • Open Tools Help Streamline Kubernetes and Application Development

    Organizations everywhere are implementing container technology, and many of them are also turning to Kubernetes as a solution for orchestrating containers. Kubernetes is attractive for its extensible architecture and healthy open source community, but some still feel that it is too difficult to use. Now, new tools are emerging that help streamline Kubernetes and make building container-based applications easier. Here, we will consider several open source options worth noting.

  • Survey finds growing interest in Open Source

    Look for increased interest - and growth - in Open Source software and programming options. That's the word from NodeSource, whose recent survey found that most (91%) of enterprise software developers believe new businesses will come from open source projects.

  • Sony Open-Sources Its Deep Learning AI Libraries For Devs

    Sony on Tuesday open-sourced its Neural Network Libraries, a framework meant for developing artificial intelligence (AI) solutions with deep learning capabilities, the Japanese tech giant said in a statement. The company is hoping that its latest move will help grow a development community centered around its software tools and consequently improve the “core libraries” of the framework, thus helping advance this emerging technology. The decision to make its proprietary deep learning libraries available to everyone free of charge mimics those recently made by a number of other tech giants including Google, Amazon, and Facebook, all of whom are currently in the process of trying to incentivize AI developers to use their tools and grow their software ecosystems.


    Unused features blur the focus of LibreOffice, and maintaining legacy capabilities is difficult and error-prone. The engineering steering committee (ESC) collected ideas for features that could be flagged as deprecated in the next release – 5.4 – with the plan to remove them later. However, without good information on what is actually used in the wild, the decision is very hard. So we ran a survey last week to get insights into which features are being used.

  • Rehost and Carry On, Redux

    After leaving Sun I was pleased that a group of former employees and partners chose to start a new company. Their idea was to pick up the Sun identity management software Oracle was abandoning and continue to sustain and evolve it. Open source made this possible.

    We had made Sun’s identity management portfolio open source as part of our strategy to open new markets. Sun’s products were technically excellent and applicable to very large-scale problems, but were not differentiated in the market until we added the extra attraction of software freedom. The early signs were very good, with corporations globally seeking the freedoms other IDM vendors denied them. By the time Oracle acquired Sun, there were many new customers approaching full production with our products.

    History showed that Oracle could be expected to silently abandon Sun’s IDM portfolio in favour of its existing products and strong-arm customers to migrate. Forgerock’s founders took the gamble that this would happen and disentangled themselves from any non-competes in good time for the acquisition to close. Sun’s practice was open development as well as open source licensing, so Forgerock maintained a mirror of the source trees ready for the inevitable day when they would disappear.

    Sure enough, Oracle silently stepped back from the products, reassigned or laid off key staff, and talked to customers about how the cost of support was rising while offering discounts on Oracle’s products as mitigation. With most of them in the final deployment stages of strategic investments, you can imagine how popular this news was. Oracle became Forgerock’s dream salesman.

  • Boundless Reinforces its Commitment to Open Source with Diamond OSGeo Sponsorship
  • A C++ developer looks at Go (the programming language), Part 2: Modularity and Object Orientation

read more

          Comment on Using Wikis as Pre-Packaged Knowledge Bases by Royan Braun   
" Hi, a wiki is definitely a flexible, powerful and easy-to-use platform that should be leveraged by enterprises for knowledge sharing. In fact, a well designed wiki makes knowledge sharing a breeze. I always wanted to try DokuWiki; however, I never had the budget for an IT team to set it up and customize it. After a lot of research I decided to go for Knowledgebase, as it is a SaaS product and was a little easier on the pocket. Do you think I am missing out on something by using a SaaS tool instead of an open source wiki?"
          Dissecting VLC – Windows 7 x32   
Author Name: Carlos A. Amorocho Acosta
Artifact Name: VLC media player 2.2.1 for win32
Artifact/Program Version: VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files as well as DVDs, Audio CDs, VCDs, and various streaming protocols. vlc-2.2.1-win32.exe [1] -> SHA-1 checksum: 4cbcea9764b6b657d2147645eeb5b973b642530e (verified with sha1sum) Value “CompanyName”, […]
The open source release of Frontier didn't shake the world, or boil the ocean, but it is steadily climbing the Daypop Top 40, indicating there is some interest.
          DavidJames commented on Chris Anderson's blog post How do modern open source autopilots compare to aerospace-grade IMUs?   

Wiki’s and open source applications are an incredible way to share information and use the combined knowledge of people across borders. Just look at the success of wikipedia and firefox. As you can expect however there are always people out there who will look to exploit the element of trust that is required. One of […]
          Wednesday, June 28, 2017   

Search Interactive Mind Maps to learn anything.

A team of developers just launched an open source search engine that will show you how to learn pretty much anything. It works by clustering resources into “mind maps.” (via

          CPAN issue tracker gets new features   
Best Practical have announced an upgrade and new features for the issue tracking system available for every one of the 27,000+ open source distributions released through CPAN. New features include: Mobile support (also accessible from Preferred bug tracker information displayed prominently – if the module author wants to use a different [...]
          The Overnightscape #96 (12/24/04)   
Tonight's subjects include: Christmas, arcade memories, wacky weather, dream review ("Time Travel Back to the Dot Com Era"), Jean Shepherd, mash-ups review ("The Beastles" by DJ BC), The iPod Oracle, Santa Claus, listener email ("Steve from England", "Israel Brown", "Anthony from California", "Beaty from England", "The Friendly Fascist"), Blufftoon ("Xmas 1973 Mike"), open source metaverses, completely random memories ("Northgate"), Asteria, earwarmers, Lost in Space, synchronicity report ("Staples", "Cool Whip", "Street Meetings"), motorized wheelchairs, computer pinball, and trucking companies. Hosted by Frank Edward Nora. (2 hours)
          Trace Compass   
Date Created: 
Thu, 2016-03-17 14:47
Date Updated: 
Wed, 2017-06-28 11:24

Eclipse Trace Compass is an open source application for viewing and analyzing any type of logs or traces. Its goal is to provide views, graphs, metrics, and more to help extract useful information from traces, in a way that is more user-friendly and informative than huge text dumps.

          Automation Test Engineer for open source frameworks (Investment Banking) - Pasir Ris   
Experience in Continuous Integration Tool – Jenkins / Hudson. Optimum Solutions (Co....
From Jobs Bank - Wed, 28 Jun 2017 09:54:55 GMT - View all Pasir Ris jobs
          Senior Software Test Automation Engineer - Jurong Island   
Familiarity with commercial and open source test automation and test case management technologies such as JMeter, Robot Framework, Selenium, Watir or Hudson etc...
From Jobs Bank - Tue, 27 Jun 2017 10:03:13 GMT - View all Jurong Island jobs
          T3CON16 – 6 Insights from the TYPO3 Conference 2016 in Munich   
On 26–27 October 2016, the TYPO3 community's conference T3CONEU16 takes place in Munich. Around 350 attendees experience an innovation showcase that impressively demonstrates the capabilities of the open source enterprise content management system. During the event we use the breaks to
          Perfectly secure Bitcoin wallet generation   
Generate your own Bitcoin wallet without a computer! Never mind

EDIT: I have been informed that BIP39 derives the last word from SHA hash of all the others, and thus needs a computer to generate the seeds. Thus, this post is moot and useless. I will leave the post here as a mahnmal, in the hope that someone will find something in it useful.

EDIT 2: After the initial failure, I decided to do the next best thing and write a short program for the ESP8266 that generates a random seed every time it boots and prints it to a screen. That should be a good compromise; scroll down to see it.

Being the geek that I am, I find Bitcoin fascinating (if only everybody focused on something other than the price!), and hardware wallets doubly so. If you haven’t heard of them, hardware wallets are small, flash-drive-sized devices that usually connect to a computer’s USB port and hold your wallet keys. That way, even if the computer you’re trying to send bitcoins from is riddled with viruses, you remain very secure, and nobody but you can pay on your behalf. Unsurprisingly, I bought one! I was torn between the Trezor and the Ledger Nano S, but decided on the Nano S in the end, as their platform looks more exciting and more secure, and I was quite satisfied with the two HW1s I had bought cheap in a sale.

However, since I’m in it for the technology and cryptoparanoia, rather than for any practical purpose, I find that hardware wallets have a few issues. For a short primer: a hardware wallet’s main advantage is that the keys are generated on the device and never, ever leave it, since whoever has the keys can spend your money. Because the keys never leave the device, though, you’re screwed if you ever lose it. To avoid that, wallet designers usually allow you to do a one-time export of the keys (many devices have a screen they show the keys on), right after creating them. The export usually follows a Bitcoin standard called BIP39 and takes the form of 12 or 24 everyday words, which you write down on a piece of paper and store in your safe; that’s all that’s needed to retrieve your keys if you lose the hardware wallet. No computer ever touches the keys, and you can sleep peacefully.
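
A sketch of why that word list can’t simply be picked by hand (this is the point of the first EDIT above): per the BIP39 standard, a checksum taken from the SHA-256 hash of the entropy is appended before the bits are split into 11-bit indices into the 2048-word list, so the final word depends on all the preceding entropy:

```python
import hashlib
import secrets

# BIP39 index derivation sketch: 128 bits of entropy -> 12 word indices.
# (The actual word lookup into the 2048-word BIP39 list is omitted.)
entropy = secrets.token_bytes(16)           # 128 bits of entropy
checksum_bits = len(entropy) * 8 // 32      # 4 checksum bits for 128 bits

digest = hashlib.sha256(entropy).digest()
bits = bin(int.from_bytes(entropy, "big"))[2:].zfill(len(entropy) * 8)
bits += bin(digest[0] >> (8 - checksum_bits))[2:].zfill(checksum_bits)

# 132 bits split into 12 groups of 11 bits, each an index in [0, 2047].
indices = [int(bits[i:i + 11], 2) for i in range(0, len(bits), 11)]
print(len(indices))  # 12
```

Because those last checksum bits come from SHA-256, you cannot complete a valid mnemonic with dice alone, which is exactly the problem the rest of the post runs into.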

The problem

My problem, though, is that I want to go full paranoia, and don’t want to trust the hardware wallet developers to generate the random keys. If they really wanted to (and they probably don’t want to, but I’m not going to bet my $100 worth of bitcoins on “probably”), they could add a backdoor to the keys that would allow them to know beforehand any key that would ever be generated with their devices, while still looking completely random to the unsuspecting user. This is “easily” avoidable if you supply your own entropy (i.e. basically give them a series of dice rolls, coin tosses, whatever source you have that’s actually random).

Now, getting randomness is easy (albeit slow) if you have a die or coin lying around, but inputting that randomness is not always easy. Most hardware wallets allow you to restore the words you have written down as your backup, which can also be used to create new keys (because they don’t actually know or care if you had these words before, they just know that the words correspond to valid keys), but they usually don’t let you roll some dice and generate the keys that way.

You can also get good randomness the easy way, by generating it on a computer, which has the added benefit that it can convert the randomness to a list of words you can import on your wallet directly; but that means you have to use a (potentially infected, bitcoin-key-stealing) computer. One solution there is to buy a small, cheap computer (like a $30 Raspberry Pi), install an operating system on it, download the proper software, disconnect it from all networks, generate the keys, and then physically destroy the computer and burn your house down with your loved ones in it. That’s the only way to be sure but, as you can tell, it’s a hassle.

The solution (nope, just lots of wasted time)

Finally, after that entire introduction, and for the two people that haven’t stopped reading already, I came up with a solution! Here it is:

Since there’s no manual way (that I know of) to convert dice rolls to keys, and since I didn’t want to use a computer for this at all, I created a way! (*Also Sprach Zarathustra plays*). I decided to simply map dice rolls to words directly, and created a PDF that you can download and print. It’s six short pages with tiny words on them (to save paper), and the way it works is easy:

You can either roll a six-sided die (my method only uses four of the sides), or a 32-sided die (which is not common, but you can use a d20 and a d12 together from your D&D dice). Each word has numbers next to it, and the numbers correspond to your dice rolls. For example, the word “pave” has the numbers 221141 in the 6-sided die column, so if you roll 2, 2, 1, 1, 4, 1 in a row, that’s the word you should use for your seed. Keep in mind that the order of the words matters.

The only rule needed, apart from the simple mapping above, is this: if you don’t see your number in the column you’re currently in, reroll. This means that, if you roll a 3 as the first number for the 6-sided die, you must reroll, since no word has a 3 in that first column. Just roll until you get either a 1 or a 2, then continue.

You need six rolls per word for the six-sided (really four-sided, as rolls of 5 and 6 will be discarded and rerolled) die, and only three rolls for the 32-sided die. Since the first roll of either method needs only a 1 or 2, you can just flip a coin for the first roll.
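For the curious, the arithmetic behind the table is simple: the BIP39 list has 2048 words, i.e. 11 bits per word. With four usable faces, each roll after the first encodes 2 bits and the first roll encodes a single bit, which is why it only accepts a 1 or a 2 (2 × 4⁵ = 2048; likewise 2 × 32 × 32 = 2048 for the d32 method). A hypothetical decoder, assuming words are addressed in wordlist order (whether that matches the printed PDF's table is an assumption on my part):

```python
def rolls_to_index(rolls, faces=4):
    """Map one word's worth of dice rolls to an index 0..2047
    into the 2048-word BIP39 list. The first roll may only be
    1 or 2 (it encodes a single bit); each later roll encodes
    log2(faces) bits. A d6 is used as a d4 here, since rolls
    of 5 and 6 are discarded and rerolled."""
    first, rest = rolls[0], rolls[1:]
    if first not in (1, 2):
        raise ValueError("first roll must be 1 or 2")
    index = first - 1
    for r in rest:
        if not 1 <= r <= faces:
            raise ValueError("reroll: face out of range")
        index = index * faces + (r - 1)
    return index
```

For example, rolls 1, 1, 1, 1, 1, 1 address the first word (index 0) and 2, 4, 4, 4, 4, 4 address the last (index 2047); with `faces=32`, the three-roll sequence 2, 32, 32 also gives 2047.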

Download the secure wallet generator PDF here.

Please let me know in the comments section or on Twitter if the PDF didn’t print well, if my explanation is too complicated, or if I’ve missed something obvious. I’d especially like to know if you used this method to great success. (EDIT: Spoiler alert: You didn’t.)

The actual solution

I figured that, since you definitely need a computer to generate a seed, and I couldn’t be bothered with an airgapped one, I would use a $3 microcontroller that doesn’t even have an OS, and only runs open source code. So, I used an easily obtainable ESP8266 and a $3 screen to show the seed on.

I wrote a MicroPython script that will generate BIP39-compatible words and display them on an OLED screen that you connect to it. The code is trivially auditable (under 70 lines) and doesn’t use networking or anything. If you feel like it, you can just burn the ESP8266 afterwards, it costs next to nothing.
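The BIP39 mechanics the script needs are small, which is what keeps it auditable: take 128 bits of entropy, append the first 4 bits of its SHA-256 hash as a checksum, and read the resulting 132 bits off as twelve 11-bit indices into the 2048-word list. A rough sketch in plain desktop Python (not the actual MicroPython script):

```python
import hashlib
import os

def entropy_to_indices(entropy):
    """Turn 16 bytes (128 bits) of entropy into twelve 11-bit
    BIP39 word indices: the entropy followed by the first 4 bits
    of SHA-256(entropy) as a checksum, split into 11-bit groups."""
    assert len(entropy) == 16
    checksum_bits = len(entropy) * 8 // 32            # 4 bits for 128-bit entropy
    check = hashlib.sha256(entropy).digest()[0] >> (8 - checksum_bits)
    bits = (int.from_bytes(entropy, "big") << checksum_bits) | check
    total = len(entropy) * 8 + checksum_bits          # 132 bits -> 12 words
    return [(bits >> shift) & 0x7FF for shift in range(total - 11, -1, -11)]

# Each index is then looked up in the 2048-word English wordlist.
indices = entropy_to_indices(os.urandom(16))
```

The well-known all-zero test vector maps to “abandon” eleven times followed by “about” (index 3), which is a handy sanity check for any implementation.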

The code uses the ESP8266’s (apparently high-quality) random number generator. I know, I know: if I don’t trust the wallet’s generator, why would I trust the ESP’s? But after spending many hours on the above (failed) solution, my patience is running thin. Getting entropy from 53 d32 dice rolls is left as an exercise for the reader.

The code and usage instructions can be found in the script’s GitLab repository:

I hope you at least find this useful!

          Spamnesty: Waste spammers' time   
Artificial Intelligence finally used for evil.

Have you ever received a spam email? If not, I would definitely recommend getting your own email address; the positives usually outweigh the negatives. For the rest of us, who have had an email address for more than two minutes, spam is a real problem. I’ve found myself wanting to reply to spam messages many times, just to see what would happen and to waste spammers’ time a bit.

That’s why reading Brian Weinreich’s post Two years spamming spammers back resonated with me. The summary is that he built an app for his personal use which would reply to spammers and engage them in a dialog of canned responses, trying to string them along for as long as possible, leading to some pretty funny exchanges. That struck me as a brilliant idea, and I wanted to use it, but he had built it for his own use and it wasn’t well-suited for use by other people.

To that end, and because I had a free Saturday, I decided to rewrite the service and make it freely accessible to anyone, and so Spamnesty was created.

The service

We can't stop spam, but we can make it more expensive.

Spamnesty is pretty much what Sp@mlooper is/was. I blatantly stole multiple ideas from Brian, including the half-domain. The root domain is styled to look like a legitimate maritime logistics company of a landlocked country, just in case one of the spammers decides to look. Visiting gets you the real site, where you can immediately see that I can’t design websites at all, and read conversations with real spammers.

You can use Spamnesty right now. Just forward a spam email to it, taking care to remove any personally-identifiable information from the body (but leave the body otherwise intact, or the spammer won’t know what you’re replying to), and Spamnesty will generate a persona and pretend to be interested in whatever the spammer has to sell.

You’ll almost immediately get an email that will let you read the conversation as it unfolds, and some interesting conversations will show up on the front page for you to read (hence the need to remove sensitive information from the original email).

The conversations

The conversations themselves are usually what you would expect if you had spent a moment thinking about spammers themselves, but they were surprising to me, because I hadn’t. Spammers will usually realize they’re talking to a bot (or at least to someone who isn’t interested or not going to give them any money) after around 3-5 messages, but some have sent up to 15 messages before giving up in frustration.

The service has only been running for a few weeks, but there are already quite a few conversations with replies. Some of my favorites are:

How you can help

It turns out that working with email is a pretty messy affair, because of how unstructured it is. It’s just plain text, and there are pretty much no standards for anything, including what an email address is supposed to look like. For that reason, the service does have a few glitches and other bugs, but it generally works pretty well.
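As a small illustration of that messiness: even picking the address to reply to takes defensive code, since Reply-To may be absent or garbage. A hypothetical sketch (not Spamnesty’s actual code) using Python’s standard library:

```python
from email import message_from_string
from email.utils import parseaddr

def reply_address(raw):
    """Pick the address to reply to: prefer Reply-To, fall back
    to From, and give up (return None) when neither header
    parses to something containing an '@'."""
    msg = message_from_string(raw)
    for header in ("Reply-To", "From"):
        _, addr = parseaddr(msg.get(header, ""))
        if "@" in addr:
            return addr
    return None
```

Even this tiny helper has to survive missing headers, display names, and malformed addresses, which is a taste of what the rest of the pipeline deals with.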

You can help improve it by forwarding a few of your favorite spam emails, as I mentioned above, or you can go the extra mile and help me out with development. In the spirit of openness, I have made the app open source. You can find it on GitLab:

Please report any issues you find, open merge requests to help me fix bugs, tweet your feedback to me, or leave a comment in the form below. Thanks for your help!

          RPTools: Open Source Tools for Pen & Paper RPGs   
RPTools is an open source tool set for PC designed to enhance pen-and-paper role-playing games. If you’re an RPG fanatic, you are probably already aware of these tools, or have at least heard of them from your fellow gamers. After experimenting with the tools in my own Pathfinder and D&D games, I decided to dig a little deeper and get an interview with the folks who have made these tools openly available to the general public!

NERD TREK interview with Frank Edwards & Keith Athey of RPTools.

Jonathan, Nerdtrek: Hello Keith! Please tell our readers a bit about your RPTools programs and your role within the company.

Keith Athey: RPTools is a community devoted to producing open source […]
          TYPO3 - a good choice for security   
Press release
Arguments confirming TYPO3's dominance among open source CMS systems.
          Magento, PrestaShop and VirtueMart among most popular eshop platforms processing product data   
Based on the Icecat sample of connected webshops in Q1 2017, the most popular standard webshop software platforms are Magento Commerce (9%), PrestaShop (8%) and VirtueMart/Joomla (4%). PrestaShop has made relatively more progress over the past years than its contenders, and might soon surpass Magento in popularity. All three are open source platforms, which are complemented …
          Node.js – Environment Set up   

  Before proceeding with more details of Node.js, I would like to give a basic definition of it. Node.js is a very powerful JavaScript runtime built on Chrome’s V8 JavaScript engine, used for web applications such as video streaming sites and other event-driven applications. It is open source and totally free. Now let us see […]

The post Node.js – Environment Set up appeared first on Bitware Technologies.

          What is CakePHP? Its Features   

Overview: CakePHP is a free, open source, rapid-development framework for PHP. It is a foundational structure for programmers to create web applications. Our primary goal is to enable you to work in a structured and rapid manner, without loss of flexibility. The CakePHP framework provides a robust base application. It can handle every aspect, from the […]

The post What is CakePHP? Its Features appeared first on Bitware Technologies.

          Opening of a help forum dedicated to Rust programming, to easily get the best answers to your questions about this language   
Opening of a help forum dedicated to Rust programming
to easily get the best answers to your questions about this language

Dear club members, I am pleased to announce the opening of a help forum dedicated to programming in Rust, the free and open source language developed by Mozilla.

This new forum is part of the club's drive to meet its members' needs by creating a wider space for sharing knowledge...
          Networks - The Next Challenge in Digital Transformation   

As digital transformation has been picking up momentum, leading analysts such as 451 Research have suggested that hybrid multi-clouds and automated DevOps will become key constituents powering enterprises in the new era. At the heart of these enabling technologies lies Lifecycle Service Orchestration (LSO) designed for near-autonomous application deployment across hybrid infrastructures consisting of traditional on-premise data centers and public clouds.

The business case for LSO is straightforward. To accommodate the peak loads that any digital service may experience, enterprises running their own data centers have been forced to invest in excess capacity to cover the 2-3× load peaks that occur from time to time. As this excess capacity sits idle most of the time, buying the peak capacity from cloud service providers considerably reduces the CAPEX investments an enterprise would otherwise have to make.

With the promise of slashing CAPEX spending in half, it is no surprise that hybrid IT has been gaining popularity of late. This has led to something of a gold rush in the market, with industry bellwethers such as Cisco Systems, Hewlett Packard Enterprise and Microsoft all making significant investments in the area. The open source community has also taken note of the potential, with DevOps communities and key commercial players such as Red Hat making their way into the hybrid world.

Hybrid IT Runs on Networks

The curious thing about hybrid IT innovation is that practically all the focus has gone into the application side of things. I find this somewhat troubling from an operational point of view because, for applications to be moved around automatically using technologies like LSO, both the application release parameters and the underlying networks should be managed within a single unified system.

By fusing the unified network management and the LSO together, enterprises will be able to develop streamlined processes that allow private Wide Area Network (WAN) segments to be activated automatically in Virtual Private Clouds (VPC). This enables seamless connectivity between private enterprise data centers and public clouds such as Amazon Web Services or Microsoft Azure, making the free movement of application workloads a reality.

In contrast, without a unified network management process in place, IT departments can easily end up with delivery times measured in months. That is typically how long it takes to manually assign and activate new network segments, requisition new virtual appliances, and install and configure all of this by hand. When a business user urgently needs additional application capacity, delivery times this long are simply not acceptable.


To solve the network part of the digital transformation, Lifecycle Service Orchestration (LSO) should be paired up with a unified network management system that is responsible for the underlying networks in the same way as the LSO is responsible for the business applications that run in them. You can call me an idealist, but I have a strong feeling that solutions like this are just around the corner. Otherwise, the ICT industry will have hard time unleashing the true power of Hybrid IT.

Written by Juha Holkkola, Co-Founder and Chief Technologist at FusionLayer Inc.

          NFV Orchestration Without Network Visibility: OS MANO Needs Operational Improvements   

Open Source (OS) Management and Orchestration (MANO) is a European Telecommunications Standards Institute (ETSI) initiative that aims to develop a Network Function Virtualization (NFV) MANO software stack, aligned with ETSI NFV. The main goal of MANO is to simplify the onboarding of virtual network components in telco cloud data centers. The initiative has gained impressive momentum among leading Communication Service Providers (CSPs) around the world as part of their NFV programs.

A major limitation of the initial MANO releases was that they only supported one data center. That of course is not acceptable for production NFV, because regulations alone require a distributed infrastructure to ensure service continuity. While there has been much debate as to why CSPs have been slow to roll out NFV into production, the limitations of the initial OS MANO releases have not come up that often.

In October 2016, the OS MANO community addressed the continuity issue with its new RELEASE ONE. More specifically, the latest version of the OS MANO allows the NFV infrastructure and, consequently, the Virtualized Network Functions (VNF) to be distributed across multiple sites. The new OS MANO functionalities making this possible include:

  • Multisite Support allowing a single OS MANO deployment to manage and orchestrate VNFs across multiple data centers.
  • Network Creation via Graphical User-Interface or automatically by a Service Orchestrator.
  • The ability to manage IP parameters such as security groups, IPv4 / IPv6 ranges, gateways, DNS, and other configurations for VNFs.

While these features enable centralized orchestration of highly available network fabrics that span across multiple data centers, the problem is that the OS MANO framework has no mechanism for managing these attributes properly. It is simply assumed that they will come from somewhere — either manually or magically appearing in the service orchestrator — which to me does not represent the level of rigor that is required when designing automated service architectures of tomorrow.

Since any workflow is only as efficient as its slowest phase, leaving undefined manual steps in the NFV orchestration process is likely to create multiple operational and scalability issues down the road. In the case of OS MANO RELEASE ONE, at least the following problems are easy to foresee:

  1. Agility. Automating the assignment of logical networks and IP parameters is mandatory to reap the full benefits of end-to-end service automation. Two possible approaches would be to either retrieve this information from a centralized network Configuration and Management Database (CMDB) by the Service Orchestrator, or alternatively by pushing the networks and IP parameters directly into their place. Either way, to ensure the integrity of the configured data and to automate this part of the workflow, the logical networks and IP parameters must be managed within a unified system.
  2. Manageability. As the NFV network fabrics span across multiple data centers, the CSPs running these environments need unified real-time visibility into all the tenant networks across all sites. As the multisite model in OS MANO assumes that each data center runs its own dedicated cloud stack for NFV-I, the unified visibility can only be achieved on a layer that sits atop the NFV-Is. Therefore, this is something that either OS MANO should do — or alternatively, there can be a separate layer for the authoritative management of all networks and IP parameters.
  3. Administrative Security. The problem with the current OS MANO framework is that it leaves the door open for engineers to manage the network assignments and IP parameters in any way they see fit. An ad hoc approach would typically involve a number of spreadsheets with configurations like security groups in them, which may be rather problematic from the security and regulation compliance perspective since it can easily lead to not having proper authorization and audit trail mechanisms in place.

In fairness to OS MANO, most CSPs are still mostly experimenting with NFV. It is therefore likely that these operational issues have yet to surface in most telco cloud environments. That said, we have already seen them emerge at early NFV adopters, creating unnecessary bottlenecks when the NFV environment is handed over to operations. My suggestion to the Open Source MANO community is therefore to establish a best practice for addressing these issues before they start slowing down NFV production.

Written by Juha Holkkola, Co-Founder and Chief Technologist at FusionLayer Inc.

Got a new laptop 2 days ago - a Lenovo ThinkPad X1 Carbon (3rd gen). Looks good, feels good. But there's a problem - it's a bit too good. I'm not used to it, and neither is my dear Linux.

The laptop came with Windows 10 installed. I got 16GB of RAM so I could set up a Linux Mint VM with Virtualbox with Win 10 as a host. I ran into issues. I'm listing them out in case someone else is desperately searching online for solutions, and also so that the readers can have a good laugh at my expense:

1. I started by installing VirtualBox 5.x and using a Linux Mint 17 ISO file to create the VM. I noticed that under the 'OS Type' option, all I could see were 32-bit options. I shrugged and thought maybe it was a VirtualBox 5.x thing and I'd be fine choosing Ubuntu 32-bit to run my Mint 64-bit VM. Yes, I'm stupid, but to be fair, I really didn't see any 64-bit options.

2. As one would expect, starting the VM didn't throw any explicit errors, but the VM screen just went blank after flashing the VirtualBox icon. It didn't take long to realize that the 32 vs 64 bit issue was causing this.

3. Turns out the laptop comes with virtualization options disabled in the BIOS by default. To turn them on, I went into the BIOS by restarting the machine and hitting Enter. Once in, I navigated to Security > Virtualization and enabled the two virtualization options (Intel VT-x and Virtualization Extensions). I'll add more details later.

4. Rebooted, and the OS dropdown in VirtualBox listed all the 64-bit OS versions this time. Yay! Chose one and restarted the VM. It came up fine, with just one issue - one I couldn't overlook - the resolution was way too high. Windows 10 is optimized for this high-res screen, but unfortunately Linux Mint isn't. "Well, let's try Ubuntu 14.04," I thought, and created another VM. Same issue. Resolution way too high.

5. At this point I played around with resolutions a bit - chose different options in VirtualBox Guest, but every other option only made the guest OS screen smaller. I didn't want that, no one would.

6. So now I decided I'd try dual-booting Linux. The first thing to do for that is to turn off UEFI Secure Boot, by going into the boot options at restart and finding the option under the Security tab.

7. Once that was disabled, I burnt the Mint disc image onto a 2GB USB drive with Universal USB Installer. Connected it to the laptop, restarted, hit F12 and entered the boot options page. Chose the USB option, but it would just keep loading the boot options screen again.

8. Thought that maybe, just maybe, it was a problem with the pendrive, and burnt the image again to my external HDD. But again, the laptop refused to boot from it.

9. Unfortunately, being an ultrabook, this laptop doesn't have a CD/DVD drive. I asked my colleague to bring his USB DVD drive the next day, bringing Day 1's struggles to a halt.

10. This morning, I got the USB DVD reader and continued with my dual-boot attempt. The good news was that the Linux Mint live (trial) session booted just fine. But there was no wireless detection. None. Nada. Nil.

11. Looked around online to see if others had this issue, and they did. In fact, there's a video on YouTube that shows how to enable a supposedly disabled wireless driver in Linux Mint.

12. Optimistically, I did exactly what the video said - searched for Driver Manager, clicked on it and waited until it appeared, just as shown in the video. Only on my screen, the window that appeared was blank, and there was a pop-up saying that I should install Mint first before making any changes. For those of you who have installed Mint in the past, having an internet connection is recommended. I don't know why it's recommended, but I didn't want to take a chance, especially since I had no Windows backup. Oh wait, I should explain why I didn't make a Windows backup first.

13. According to my colleague, Lenovo ThinkPads have a "Rescue and Recovery" utility that puts the OS image on a DVD. Only on this X1 Carbon 3rd gen, there is no such utility. The only recovery option is to let Windows back up everything, for which it asked for 43GB of space. There's no way a DVD would suffice, and from my experience the day before, a burnt image on a USB HDD was not being recognized by the laptop at all. So yeah, I was so frustrated by this point that I threw caution to the wind and proceeded with the dual boot anyway.

14. Coming back to the dual boot: I needed an internet connection. After some searching, I found an ethernet cable and a port that worked, then proceeded with the install. It asked whether it should install Mint on the whole disk, and I chose 'Do something else'. That brought me to a page where I was expected to partition the disk manually. This didn't scare me too much, until...

15. I noticed that the amount of free space was only 1 MB. You read that right... 1 MB. It turns out all the space was occupied by the Windows C: drive. I rebooted into Windows 10 and opened 'Disk Management'. Right-clicking the Windows C: drive showed a 'Shrink Volume' option. I clicked it and, for once, something was done for me automatically - Windows determined that it could shrink the C: drive to about half of its current size, from 512GB down to 250GB. Woot!

16. After making 250GB available, I rebooted into Mint, and the partition page showed that 'Free Space' now had exactly that - 250GB. Happily, I followed this awesome guide to create /, /home and swap partitions. Once done, I briefly looked into the 'Device for boot loader installation' option and made sure I didn't choose something that would overwrite the Windows loader. After some Googling, I was fairly sure the default internal SSD option, /dev/sda, was OK to proceed with. With this, my dual-boot woes ended. But this wasn't the end of all my problems.

17. Once I rebooted into Linux Mint, I noticed 3 things:

  • The resolution was absurdly high
  • Still no wireless detection
  • The keys atop the touchpad were just scrolling a line up or down when pressed, not actually clicking anything.
18. My Linux kernel version was 3.13, and according to the Linux Mint and Ubuntu forums, the Intel Wireless 7265 card wasn't supported. To make it work, I would have to upgrade the kernel. I followed this tutorial to upgrade the kernel to 3.14. Unbelievably, the steps all worked on the first try.

19. After rebooting post-upgrade, I went online, downloaded the Wireless 7265 driver for the kernel from this page, and copied the *.ucode file to /lib/firmware with sudo. And voila! Wireless started working.

20. Still ignoring the resolution issue that was the cause of all this to begin with, my stupid brain decided to tackle the mouse/touchpad button issue first. After again reading through forums, it appeared this bug was reported sometime in April 2015, and even though there was a temporary workaround ( echo "options psmouse proto=imps" > /etc/modprobe.d/psmouse.conf ) that involved disabling the Synaptics touch features (no finger scrolling, zooming, etc.), the buttons started working but the touchpad was essentially jumpy and useless.

21. After I undid the change by removing the  options psmouse proto=imps  line from the config file, I rebooted again and decided it was time for yet another kernel upgrade, to 4.0. After the 4.0 upgrade, the touchpad and mouse issues were fixed, and I only had to reinstall the Wireless 7265 driver.

22. Finally, all my issues except the resolution were resolved. Even with Linux as the base OS, changing the resolution only seemed to produce a smaller screen. At this point, I emailed the salesman who sold me the laptop, stating, "Both the laptop and Windows 10 are too new to be supported by open source software critical to my work and study." As a last resort, and because I knew I was exhausted and not thinking well, I asked my colleague whether this was normal behavior when changing screen resolutions. He suggested I try one particular option: 1920 x 1080. And lo and behold... everything was MUCH BETTER. Still not perfect, but very usable. I asked how he knew that particular resolution would work, and he said it was the standard resolution for most screens. At that point it dawned on me how many times I'd seen this particular resolution everywhere. One would think I'd know to choose it when looking at resolutions, but I just didn't realize.

And so, here I am typing out this blog post on my shiny new Thinkpad ultrabook with Linux Mint 17. Go ahead and laugh. I'm laughing too. :D

          Discussion with Jared Duval about the Future of Democracy   
Hear a talk with Jared Duval about his book "Next Generation Democracy: What the Open Source Revolution Means for Power, Politics, and Change."
          Absolutely random...   
A full round of the flu in the family. More snow, more snow pics. Tax prep time. Learning more about J2EE, VoIP, the Asterisk open source PBX, and about jobs and real estate in Canada. What a week!

The Oscars are tonight, and Indian news sites such as rediff and samachar have been rife with rumours about Aishwarya Rai presenting an Oscar - an honour, as noted by one of them. On the one hand these sites jump on every opportunity to criticize the USA, and on the other they gloat at every nod and smile the USA sends in the direction of India. It might be ages before I manage to figure out what their stance really is.

The NASA examination fiasco: news agencies all over the world, including the BBC, fell flat on their faces on this one. No one has clearly outlined, however, why this kid and his family concocted this out-of-this-world story.

Browsed the web for websites of IIT alumni associations based in Canada, and interestingly enough, the only one I found was a broken link on the IIT MAANA website. Nice!

John Wright has served notice of his intent to leave after the Pakistan tour. India's first foreign coach, and what a difference he has made! Speaking of the Pakistan tour, the hype continues to build - Inzamam now pleading with his team to avenge their home loss earlier this year, the photo sessions of Ganguly and Inzamam in battle armour, and who knows what else the ad media is cooking up back home.

The 42nd Mersenne prime has now been confirmed as found!
          Legal issue, please read   

As of Oct 15th 2003, MSN will block all clients using the MSNP7 protocol, including XCOMMSN. Although the client is easily updatable to MSNP9, the version used by MSN Messenger 6, we have doubts about whether this process is legal, and if it is not, we will not upgrade this client to MSNP9.

We are observing other third-party open source clients and the direction they are heading before doing any future development on MSNP9.

Even though we are not going to do any additional development, the full source for the client will be available for any interested party to download.

Sorry guys,

and thank you for being with us this far.

Which points to

          Some Major Software Company's Worst Nightmare   
Ok, so we just got done watching Revolution OS, and I have to say I think esr is now my role model. Maybe that's a bit extreme... or is it? Despite the obviously biased nature of Revolution OS (mentioned by our professor ahead of time), I think the video elicited some great discussion on where open source is today and where it could be going. Personally, I think it is valuable practically, but I also think that the free software movement's views are good ideals to work toward. We need idealists pushing us in the right direction, even if their true vision is never fully realized. Thoughts?
          Warning Radioactive Material!   
Welcome to the Super Magnet Room! Despite the title of the blog and this post, this has nothing to do with super magnets or radioactive material--well, maybe just a tad. I'm a university student on a mission to contribute to the OpenMRS project. Each week I will be blogging a little about the progress of the ticket my partner and I are working on, or about things we have been discussing in class about Open Source in general (Free Software, if you prefer (I prefer)). We are looking into tackling the mobile style sheet for OpenMRS.

So, what do super magnets or radioactive material have to do with anything? Well, our classroom is in the basement of the science building--right next to the super magnet room and about 50 feet away from the radioactive material closet. If you were looking for a site about blowing stuff up or crazy science experiments, maybe this is not for you. Then again, you might see something you like here. On the other hand, if you're looking for a bunch of "scruffy hackers," you came to the right place.
          Nokia's Qt libraries ported to the Samsung Omnia HD and other Symbian devices   
The Qt libraries are an integral part of Symbian's new open source operating systems and offer developers the chance to create their own software with little effort thanks to powerful...
          A client for true Peer-to-Peer connoisseurs: SmartDC for Windows Mobile   
Among the many protocols used for file sharing there is one, probably less known to the masses, called Direct Connect. Several clients exist for this system, some open source,...
          Contributing to Open Source   

I love open source

I recently finished the second of my articles reviewing open source frameworks for reversers: Radare2 and Viper. The first was published in the September 2014 issue of Hacker magazine, and the second will appear in the November issue. That is what prompted this post, although it would be more accurate to call it a small note to myself so I don't forget, since I don't send patches or improvements very often, and contributing your changes to open source projects has been written about many times already. All the commands are shown using my recent patch for one of the frameworks as an example.

Every project on GitHub has 3 buttons in its right-hand corner.

Fork project

Of these we need the third one, "Fork". Click it, and the project appears in your own GitHub account. Then clone your copy to your machine:

~/soft# git clone
~/soft# cd viper/

Create a new development branch in which all your changes will live:

~/soft/viper# git checkout -b fix-mistypes-vs-apk

Next, patch/add/delete files and tell git the path where the changes took place:

~/soft/viper# git add modules/

Commit and push the changes:

~/soft/viper# git commit -m "Fix mistypes in apk and virustotal modules"
[fix-mistypes-vs-apk be8d11d] Fix mistypes in apk and virustotal modules
2 files changed, 2 insertions(+), 2 deletions(-)
~/soft/viper# git push origin fix-mistypes-vs-apk

Now open a Pull Request with the “New Pull Request” button on our GitHub project's page against the real one, adding a description/screenshots/video. Then, on the page created in the real project's “Pull requests” list, talk with the developers until the changes are accepted, in some cases fixing things and sending the updates with further commits:

~/soft/viper# git checkout fix-mistypes-vs-apk
~/soft/viper# git add modules/
~/soft/viper# git commit -m "Fix"
~/soft/viper# git push origin fix-mistypes-vs-apk

After this page is closed, delete the branch from your repository:

~/soft/viper# git checkout master
~/soft/viper# git branch -d fix-mistypes-vs-apk

Since we work on the project from our own repository, we periodically (especially before adding new functionality) need to bring in the changes from the real one. For this there is a set of commands recommended in some guides; the idea is to create a branch in our project that points at the real repository, and to update from it:

~/soft/viper# git remote add upstream  git://
~/soft/viper# git fetch upstream
~/soft/viper# git checkout -b upstream-master upstream/master
~/soft/viper# git checkout master
~/soft/viper# git fetch
~/soft/viper# git rebase upstream-master 
~/soft/viper# git push origin master

For subsequent updates, use:

~/soft/viper# git fetch upstream
~/soft/viper# git checkout master
~/soft/viper# git merge upstream/master

To make the work more comfortable you can:

  • show, in the terminal to the left of the login, the name of the branch you are currently working in
  • use an IDE. JetBrains products, for example, include a Git integration module by default
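The first suggestion can be sketched for bash like this (a minimal example of my own; the prompt layout and function name are not from the original post):

```shell
# Print the current git branch, e.g. "(fix-mistypes-vs-apk)",
# or nothing when outside a repository. Add these lines to ~/.bashrc.
parse_git_branch() {
  git branch 2>/dev/null | sed -n 's/^\* \(.*\)/(\1)/p'
}
# \u@\h \W = user@host and current directory; the branch follows them.
export PS1='\u@\h \W $(parse_git_branch)\$ '
```

After re-sourcing ~/.bashrc, the prompt shows the active branch whenever the current directory is inside a git work tree.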

So don't be shy about contributing to the development of software, especially if it helps you in your own work ;-)

####UPD Good old HabraHabr has posted an article on how to also get paid for patches to open source products

Ants contributors

          They love Apple, but why?   

Is it me or is it weird that so many open source purists, people who swear by it, argue it to death, and would die for it, seem to like Apple, which isn't open source? Maybe I'm missing something. Or maybe it makes sense, if you need to charge for the software (so you can pay the engineers, for example), to hold on to the source. Hmmm. Sorry. ";->"

BTW, imho, "open source" is a vestige of dotcom mania. Sure, you can do anything with free money, but that's over, for good (fingers crossed) so let's get real, okay? Thanks. One more thing, open source zealots, like all zealots, checked their minds at the door when they joined the party. They're anti-intellectual, can't handle disagreement, are about anything but freedom.

A bonus BTW -- to people who are borderline open source zealots -- consider this. In the late 90s open source defined a club that excluded many well-intentioned hard-working developers. Now it no longer has the power to do that, because the hype is over, and the money that was funding it is gone. So if open source, as a cause, tries to pretend that the barriers still exist, you only cut yourself out of the mainstream, and become more and more irrelevant. And here's the most important point. There's lots of work to do. In Washington they're passing laws that any developer, whether or not he or she develops open source, should be working to stop. The fact is that if you use a computer, you probably depend on some of both kinds of software -- so stop seeing the world so black and white, stop seeing an enemy in anyone who dares criticize the most bizarre zealotry, relax, and see that the world is a lot bigger than it may have seemed, and let's work together for real freedom, for all of us. Now, really, have a nice day, no kidding.

A correspondent writes: "The answer to the question is 'Unix.'" Yes that's right. Much of the misplaced open source zealotry is really love of Unix. No problem with that. I grew up with Unix myself. Totally. You can see that in the design of Frontier and my outliners. I've written about that many times. That's how I learned to write code, by studying the source code of the original Bell Labs Unix. Now we're getting somewhere.

          More patents in the service of open source   
          On Standard Ebooks you can download free, public-domain books in a beautiful format   

Standard Ebooks Free And Liberated Ebooks Carefully Produced For The True Book Lover

While there are many websites where we can download books for free and without copyright conflicts, such as the ultra-famous Project Gutenberg, which collects all the works that are already in the public domain, not all of them offer precisely the "prettiest" books.

By this we mean that often, when you load these ebooks onto your e-book reader, the formatting is not exactly the best. You may find ugly, blurry or inconsistent typefaces, errors, and almost always missing covers and poor metadata.

This is the problem Standard Ebooks aims to solve: a website dedicated to very carefully producing free, public-domain ebooks for lovers of reading.

Free And Beautiful Ebooks

Standard Ebooks is a non-profit, open source project. The site takes books from sources such as the aforementioned Project Gutenberg, formats them, improves their typography and, using a professional and very carefully designed style guide, ends up modernizing the editions of many books, and even correcting them.

The project seeks to take better advantage of the advanced technology of modern ebook readers and their various features. Hence the special attention paid to modern typefaces, detailed metadata with links to encyclopedia sources for the most curious readers, and support for tables of contents, footnotes, high resolution and vector graphics, as well as quality covers.

You can browse the Standard Ebooks library by release order or alphabetically. Downloads are available in epub, azw3 (for Kindle devices), kepub (for Kobo devices) and also epub3 (the advanced ebook format that is still not widely supported by most readers).

At Genbeta | The complete guide to mastering Calibre (and organizing your e-book library)

          Automation Test Engineer for open source frameworks (Investment Banking) - Pasir Ris   
Experience in Continuous Integration Tool – Jenkins / Hudson. Optimum Solutions (Co....
From Jobs Bank - Wed, 28 Jun 2017 09:54:55 GMT - View all Pasir Ris jobs
          Senior Software Test Automation Engineer - Jurong Island   
Familiarity with commercial and open source test automation and test case management technologies such as JMeter, Robot Framework, Selenium, Watir or Hudson etc...
From Jobs Bank - Tue, 27 Jun 2017 10:03:13 GMT - View all Jurong Island jobs
          How to Create a CIPA Compliant Content Filter Using Free Software.   
In this post I will describe how to use free open source software to create a simple CIPA compliant web filter that can be used in a small to medium public library.  I am using Ubuntu server 10.04 with Squid, … Continue reading
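The post teaser above mentions Squid on Ubuntu; a rough sketch of the blocklist part of such a filter might look like this (the file names and domains here are hypothetical; `dstdomain` ACLs and `http_access deny` are Squid's standard mechanism for domain blocking):

```shell
# Build a domain blocklist and wire it into Squid's configuration.
# blocked_domains.txt: one domain per line; a leading dot also matches subdomains.
cat > blocked_domains.txt <<'EOF'
.bad-example.com
.another-blocked.example.net
EOF

# Append the ACL to a squid.conf (on Ubuntu the real file is
# /etc/squid/squid.conf, and the list would live under /etc/squid/).
cat >> squid.conf <<'EOF'
acl blocked dstdomain "/etc/squid/blocked_domains.txt"
http_access deny blocked
EOF
```

After editing the real config, Squid needs a reconfigure/restart for the ACL to take effect; a CIPA-oriented list would of course be far longer than this two-line illustration.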
          How to record the screen on the major operating systems   
Screen recording is a practice widely used for a large variety of purposes, whether to show something to another user or to record a typical videogame session. Let's see how to record our screen on the major desktop and mobile operating systems. Windows: we can use the OBS tool. It is a very powerful Open Source program, with which we can also do various kinds of streaming and casting. Admittedly, it can be quite complicated; but once you learn how it works, you have an excellent tool at your disposal. If you have Windows 10, there is also the Game DVR feature
          Sustaining Open Source and Ecosystems   

The topic of how to value open source, and how to make sure that it can continue to thrive and that the work can be rewarded, is an old one. It has been heating up again recently with waves of tweetconvos, the latest being:

It seems obvious that foundational work that companies make a ton of money from should be rewarded. However, rewards come in many flavors, and individuals create open source works for varied reasons.

Let’s bypass the topics of “you get other rewards…” (reputation, skill, community, job possibilities, and even rare golden tickets): what role should large companies play?

Large companies should invest in projects that they get value from. There are many ways to do this, and the issue of control often comes up. Are you giving money to a project, or looking to hire a particular committer and steer their direction? When does it make sense to hire vs. pay a contract? There are trade-offs for all parties when it comes to bringing someone on as an FTE: e.g. benefits and growth with the company vs. the bureaucracy of performance reviews :) Do you sponsor issues of interest or stay at arm's length?

If you look at the majority of large companies you will see all types of arrangements in place that depend on many particulars.

I want to focus in on a type of sponsorship that ties into the question around funding for the Babel library. I think large companies have a responsibility to help an open ecosystem, such as the Web, thrive. At Google or Facebook, we make a great living from the Web, and much of its usage touches both companies.

It is definitely our turn to help out. The government kicked off the incubation: Vint Cerf and friends had what they needed in resources and constraints to make the Internet, open for all to build on top of. This seeded an ecosystem that allowed for many platforms and business models, all leading to today.

Now that we have commercial success, we need to step in to support the ecosystem. The Web still suffers from an obesity epidemic, and anyone who is helping deserves much credit. As folks invest, though, we need to be very careful to do so without king-making and accidentally blocking future innovation. It is easy to do damage here, but that isn't an excuse to sit back and do nothing.

I am excited to see the conversation continue and to work together on how best to garden the special platforms that we now have. Open Collective even has a conference on the topic in June.

Whenever I think about the topic I have to admit that I end up pulling the strings that end with frustration around how we value work in the world at large, and how we do not reward for long term value (see: how we pay teachers).

Sustaining Open Source and Ecosystems was originally published in Ben and Dion on Medium, where people are continuing the conversation by highlighting and responding to this story.

          A web blog about software for cracking MikroTik wifi passwords   

How to Crack Wifi Hotspot Passwords

How to Crack MikroTik Hotspot Login: batam11: Software: 1: 03-19-2010 11:32 AM ... This is the software for cracking wifi hotspot passwords: Whisher

MikroTik Passwords

MIKROTIK software and Linux Redhat / Fedora Linux ... And a tutorial on breaking wifi passwords, how to break MikroTik passwords ... using linux and open source, cracking mikrotik passwords ...

Download Wifi Admin Cracking Software: AirGrab WiFi Radar, HP ...

Free wifi admin cracking downloads ... Pirated Software Hurts Software Developers. Using Membobol Admin Wifi Free Download crack, warez, password, serial ... Ping Mikrotik: Call Recader ...

Free Hotspot Password Cracking App Downloads: IE Password ...

Top free hotspot password cracking app downloads. ... IE Password Recovery 1.0 Top Password Software . Download ... be used as communication media: The Internet, WiFi ...

Free Hotspot Cracking Downloads: Antamedia HotSpot Software by ...

Top free hotspot cracking downloads. Windows-based software for WiFi Hotspot billing. ... Windows Password Breaker... See-and-Calc ... Hotspot Shield Elite; Hotspot Web Mikrotik

How to Crack MikroTik Hotspot Login

How to Crack MikroTik Hotspot Login, I hope the video ... > Forum Computer > Software: How to Crack MikroTik ... How to Crack Wifi Hotspot Passwords: batam11: Internet

Bypass Hotspot Mikrotik Software Downloads

The MikroTik Router will replace your hardware ... Windows-based software for WiFi Hotspot billing. ... Cracking Wifi Passwords: Ptr Import: Winrar Destroy Password

Cracking Wifi Hotspot Passwords Part 2 | SOFTWARE Download Site ...

... its security. This is a continuation of my previous post, Cracking Wifi Passwords ... MIKROTIK comes in 2 kinds, software and hardware: the MikroTik Routerboard.

          How to catalogue a collection and plan a rare book sale   
I'm looking for general advice about how to bring a garage collection of many hundreds of valuable books to market. Part of this process will likely involve software choices. My brother-in-law, Colin, has a garage full of about 7000 books and of those, he suspects maybe 500 would fall under the general heading of "rare or desirable". It's those 500 that he wants to sell, but the whole collection needs to be assessed and catalogued. It *might* be the basis for starting his own display/sale website. Or he might do that and list items in auctions or on Amazon too.

It's almost certain that the books won't be sold at a single time or on consignment; he's going to want to rid himself of them one by one, depending on market circumstances, ease of advertising and level of motivation when he (soon) starts to retire from his regular job and get into these books as a money-making hobby. He was once a part-time trader of books, so the remainder of his stock is books he's collected himself; the ones that interest him most.

He's considering making the assessment and unloading of his book collection a part-time paid project for someone like a book history student or a bookseller who wants some weekend work. Colin will be helping to some extent, but for the moment, he just wants to chart out a practical way forward. Colin is tech-illiterate and I'm only a modest dabbler at best (and won't be on-hand anyway), so please give simple explanations if possible. His only input on a computer will be data entry. But he might look for a tech wizard with rare book skills as a worker; so nothing is off the table. But simpler is still best.

With the above in mind (and noting that we are in Australia, so no regional esoterica please), here's some of the information sought from AskMefi:

- are there any recommended paid or open source database programs that would be good for cataloguing his collection? If there are many, what are the pros and cons of each? I guess Colin's going to want to be able to enter a photograph, title, author, publisher, year, description and maybe a few more categories. The more versatile and automated the better, and it might be a good thing if it could auto-load to a blog too (I'm simply riffing from a conversation I had with him - I have a bookish hobby, but it's not much help in planning a commercial venture.)

- is there a free or subscription site where book sellers can go to get help to value a book (I thought there might be some trade portal that includes auction results over a few decades, plus all amazon and rare book sales details) ??

- any other tips or ideas people have relating to getting a big stash of books into a digital metadata form, to ultimately bring it to market
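Whatever cataloguing tool gets picked, it will boil down to a table with roughly the fields listed in the question. A minimal sketch of that schema (field names are my own guesses from the question's list, shown as SQL that the free sqlite3 tool can load):

```shell
# Write a book-catalogue schema to a file, then create a database from it
# if the sqlite3 command-line tool happens to be installed.
cat > catalogue.sql <<'SQL'
CREATE TABLE books (
    id          INTEGER PRIMARY KEY,
    title       TEXT NOT NULL,
    author      TEXT,
    publisher   TEXT,
    year        INTEGER,
    description TEXT,
    photo_path  TEXT    -- file name of the photograph
);
SQL
if command -v sqlite3 >/dev/null; then
  sqlite3 catalogue.db < catalogue.sql
fi
```

Any of the dedicated cataloguing programs, a spreadsheet, or a simple database front end can represent this same structure; exporting it later to a website or auction listing is then just a matter of dumping rows.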
          Stream tv over wifi   
Is it possible to stream a tv signal from my computer over wifi to a mobile android device? I have a USB tv stick connected to an external antenna and I watch tv on a 2nd monitor at my desk through one of a few media programs (usu. either Arcsoft TotalMedia or MediaPortal).

What I'd like to do is stream the tv signal via wifi to my Nexus 7. I've been unsuccessfully trying to do this with the MediaPortal software - European open source software of evolving modules whose instructions are juuuuust a bit beyond my ken. It's not really the tech wording or the English wording per se, but the combination of the two of them makes the already complicated instructions and troubleshooting guide head-to-desk-bangingly incomprehensible for moi.

I've also tried AirStream, an app that essentially lets the Nexus 7 access anything on the pc via wifi. It does that job very well but I can't work out how to identify the tv signal --> I presume it's actually not able to be streamed in real time(?) I think AirStream has to ID an actual file like .jpg or .mp3. I decided that, with tv anyway, you're supposed to record the program +/- change from .avi to .mp4 and then d/load it onto the Nexus 7. Obviously too cumbersome. If that's as good as it gets I've got movies on the device I could watch and just watch tv when I'm next at my desk.

There are a couple of recent answers in this LifeHacker thread, but they seem aimed at watching tv via a flash work around on the tv stations' non-html 5 websites. I don't want to use the internet; I'd rather not waste that bandwidth. I just want to watch tv in bed if it's possible in some 'reasonably easy' fashion. If not, not great loss.

As a final thought: could I connect an hdmi cable from the pc to the microUSB connection on the Nexus 7 and have it work that way?? If so, what android software would I need??

I'm in Sydney and just want our local free-to-air HD channels (~15 of them I guess). I'm on 64bit Windows 7. The Nexus 7 has Android version 4.3.
I am not tech illiterate but am certainly no geek.
          OIC Creating IoT Standards and Open Source Project to Drive Development   

Here’s something that might surprise you: Many people assume that the connectivity challenges for the Internet of Things have already been solved. In fact, many obstacles remain, which is why Intel has teamed with Samsung, Atmel, Broadcom, Dell, and Wind … Read more >

The post OIC Creating IoT Standards and Open Source Project to Drive Development appeared first on IoT@Intel.



Typing Master is the world's best typing-practice software. The software is not open source and you normally have to pay for the full version. But I will give you the Typing Master full version for the world's best price: it's free. You can download it, install it on your PC, and practice to increase your typing speed. You can download it by clicking on the DOWNLOAD link.

          Open Source Band: Fresh talent every year   
The Open Source Band is an SVRocks tradition – available to the public for use and modification beyond its original design. Band members this year include: Jonah Matranga, Moniz Franco, Whitney Nichole, Greg Studley, Larry Marcus, Andrew Stess, Maxine Marcus and Alexandra Elliott.  Tell us about your band. How did you get started? How long […]
          Sony adds the Xperia XZs to its Open Device Program   

Sony has added yet another handset to its Open Device Program. This time it's the Xperia XZs. With older brothers, the Xperia XZ and the X Performance, already on the list, it was only a matter of time before the newer flagship made its way to the program. Sony's support for the open source community is commendable, and this is more great news for developers hoping to play with custom builds of Nougat on their Xperia devices.

Read More

Sony adds the Xperia XZs to its Open Device Program was written by the awesome team at Android Police.

          10 Open Source media centers under review   
Although I expected more after seeing what it was about, the truth is that the comparison put together by a body called the Telematics Freedom Foundation is somewhat disappointing. The idea is fantastic, and consists of comparing 10[...]
          Open Source Photoblog Workflow with phpGraphy   
As I am lucky enough to be able to travel a bit in the near future, I thought of blogging a few pictures en route. I wanted to do this from my Android phone (2.3) without the hassle of logging in … Continue reading
          Open Source event in Philadelphia tomorrow   

New Open Source event in Philadelphia tomorrow Thursday. Chariot Solutions is sponsoring the Philly Emerging Technology conference. It's a free event and there are apparently well over 200 that have signed up so far. Should be an interesting event.

Lots of interesting Open Source projects represented: Geronimo - Aaron Mulder, XFire - Dan Diephouse, Apache Software Foundation - Brian McCallister, Maven - Jason van Zyl, ServiceMix - Bruce Snyder, Open JPA - Patrick Linskey, Spring - myself

You can check out the schedule, and it's looking like it will help with the withdrawal symptoms for those of us who won't make The Server Side Java Symposium this year :)

          ActiveMapper - Part 1 - automatic mapping between database table and a Java class.   

Every time I define a simple mapping between a Java class and a corresponding database table, I tend to think that somehow this could be a lot easier. Why do we have to provide mappings for the really simple cases? Look at this table and Java class:

CREATE TABLE beers (
    id BIGINT PRIMARY KEY,
    brand VARCHAR(50),
    price DECIMAL(10, 2)
);

public class Beer {

    private Long id;

    private String brand;

    private BigDecimal price;

    // ... setters and getters
}


How hard is this really? I think that most of us could figure this one out. Now, granted, there are much more complex cases, and also foreign key relationship mappings to take into account. But if the easy stuff was taken care of, then we would only have to worry about the more interesting aspects.

Enter Ruby on Rails with its mapping support in Active Record. I really liked their approach of basing the mapping on some simple assumptions on naming tables and columns using a certain pattern. I happen to like their choice anyway, so the automatic mapping immediately looked like an attractive solution. The only part I had a problem with was the requirement that your persistent domain objects had to extend a certain Active Record base class. I have a hard time justifying tying any domain objects to a specific framework. It just seems like a bad and unnecessary dependency to me.

Anyway, I think that Active Record is a neat solution and I have been thinking that it is a shame that there is nothing like it in the Java space. To fill that void I have been prototyping a DataMapper that would provide the basic mappings, yet allowing us to extend it to provide any of the more complex cases if we needed to. I’m calling this solution ‘ActiveMapper’ and here is an example of how you would use it:

DataMapper beerMapper = new ActiveMapper(dataSource, Beer.class);
Beer b = (Beer)beerMapper.find(new Long(1));

I did not have to provide any mapping file -- it's all based on JDBC metadata and reflection, plus assumptions similar to what Active Record uses. I can also create new instances and save them to the database.

Beer newBrew = new Beer();
newBrew.setPrice(new BigDecimal("12.44"));

The id will be populated based on a primary key generation strategy that depends on the database. For Oracle/PostgreSQL we will use a sequence, and for MySQL we can use an identity column and pick up the generated key with the getGeneratedKeys method on a JDBC 3.0 compliant driver.

All of the existing functionality is built on top of Spring's JdbcTemplate, and at some point, when I have enough functionality nailed down, I will put the source under some open source project -- most likely as part of Spring Modules. I'll post some more about this over the next couple of days, and I will also provide a download of the prototype at some point.

          NYC Web Seminar   
I went to the "Developing Web Apps using Open Source Tools" seminar in New York. It was a great seminar organized by Vic Cekvenich. Here is the full lineup for this seminar:
  • Clinton Begin - iBATIS SQL Maps
  • Rod Johnson - J2EE without EJB
  • Ted Husted - Commons Chain
  • Matt Raible - Struts Menu and DisplayTag
  • Vic Cekvenich - basicPortal RIA
  • Christophe Coenraets - MacroMedia Flex
  • Jason Carreira - WebWork2
One interesting development was the discussion between Ted Husted (Struts), Jason Carreira (WebWork) and Rod Johnson (Spring) about borrowing ideas, features and code from each other. The Struts developers feel that they have lost some momentum. This is partly the result of factoring out lots of features into Jakarta Commons, which is now being reused by other frameworks. They are ready to start "borrowing" some back and are looking at Spring for ideas. You can read more about the seminar on Spring's news page. There is more blog coverage of the morning and afternoon sessions from Matt Raible. Jason Carreira has also posted some notes.
          Damn, the disclosure of PRISM cost my money;-)   
Time rolls on. It's been about two years since Mr.Sn0wden made the first disclosure of those documents back in June 2013. Everybody was shocked back then. In the security/hacker community, the news about what BIG BROTHER did to us was nothing new; guess most people already knew it. But what Mr.Sn0wden brought us is confirmation of the details of how BIG BROTHER has been doing the shit. More importantly, it has educational value for the public. The whole world fuc*ing changed because of the PRISM disclosure. People (I mean crypto-anarchists, professional paranoids, etc.) think differently now. To myself (as a FOSS cybersecurity dude), PRISM definitely changed my life.

I kept reading astonishing news about the leaked documents back in July 2013 and thought a lot during oSC2013, in a beautiful city by the Aegean Sea. "What should I do about it? Should I get involved with something? What kind of philosophical ideas better fit the post-PRISM era?" and so on... I asked myself these questions many times. Then I was thinking ......

1, Philosophical level. Well, the free software philosophy has been the same to me since 2007. The concept of free/libre is more important than ever before. In the post-PRISM era, BIG BROTHER and the big corps are too powerful and restrict individual freedom in the digital world. Although we've won the open vs. closed war, many people still misunderstand the differences between free software and open source. IMHO, supporting the FSF (Free Software Foundation) will always be on my TODO.

2, Technical level. Much research reveals that an open system is more secure than a closed one. Btw, Bruce Schneier agrees with that. After all these years, I finally realize there are two powerful weapons we can use against the enemy: system security & cryptography. Some people focus only on crypto while OS-level security is totally missing, which can cause a failure. It's like building a fortress upon the sand. Some 0ld sch00l hackers criticized this last year. In practical terms for GNU/Linux users, PaX/Grsecurity is the only option we have.

3, Law level. Speaking of law & public education, the EFF has been doing great work over the past two decades. Why would I support the EFF? The reason is simple: they speak for me, or for the kind of person I am.

I did the math a little bit today and found out I've donated around $5800 to the FOSS community, including FSF, EFF, Debian, Mempo, PaX/Grsecurity, HardenedLinux and HardenedBSD, since the disclosure of PRISM. I'm not trying to convince anyone to donate money to any organization here. But I am encouraging you to think for yourself: why are you here reading my fuc*ing annoying & noisy blog? Does free software matter to you? Or don't you think what the EFF is doing is worth supporting?

Long live 0ld sch00l!
Long live anarchy!
          Happy New Year 2015   
Time rolls on and brings us to another new year. Does this fuc*ing mean another fight? I've been sitting on my butt watching a lot of presentations from 31C3. Unfortunately, I couldn't be there physically. I'm fuc*ing jealous of you guys who were there;-)

I've learned a lot from these videos. So, I'd like to write down what I thought about some great topics.

31C3 Opening Event [31c3] mit Erdgeist und Geraldine de Bastion