National Business Expo 8-9th March   

Get startup and exporting advice for free! It is our privilege to announce that the Irish Export Cooperative will be taking part in getting SMEs exporting at the RDS for businessexpo.ie. Among the many opportunities available, the Export Coop will be in the 'Export Pavilion', helping attendees of the exposition find out more and get exporting.

The National Business Expo is an unmissable event for anyone thinking of starting up, or already involved in, a small, medium or micro enterprise in Ireland. The show ranges from start-up supports to finance workshops to incredible speakers and events. For two days, the RDS will be a showcase for everything positive about the sector and a guide for those who have questions about running a small business. And all of this is free! Registering for tickets couldn't be easier, and we would encourage all members to avail of the opportunity to view all that is on offer. Click here to register for your free tickets to the National Business Expo 2013.

If you are interested in the supports that are available, check out the list of those that will be on stand: InterTrade Ireland, Management Works, LIT, Enterprise Ireland, FR Kelly, Credit Review Office, County Enterprise Boards, Bank of Ireland, Microfinance Ireland, Vodafone, 11890, UPC, Kernel Capital, Sage, Grant Thornton, the Irish Times and RTE Radio One, amongst many other exhibitors and speakers.

The workshops and seminars on offer include: new market entry, e-commerce, mobile marketing, social media, business productivity, space as a service, selling online, product protection, cloud computing, digital mentoring, business model development, senior enterprise, women in business, exporting, credit clinics, setting up a company and R&D tax credits, amongst a host more over the two days. Also, why not check out and share our micro site on the businessexpo.ie main site.

The post National Business Expo 8-9th March appeared first on Irish Export Cooperative.


          I don't know whether Linus himself took the lead…   

I don't know whether Linus himself took the lead, but Linux adopted Unicode comparatively early. When he visited Japan in 1995, someone asked him about multilingual support in Linux; Linus answered that this was an application problem, not a kernel problem, but that once Unicode became widespread in the near future, most of the problem would be solved. The kernel itself was actually reworked for Unicode starting with 2.6, in 2002.

http://linuxjf.sourceforge.jp/JFdocs/kernel-docs-2.6/unicode.txt.html

Linus is a Swedish-speaking Finn, so I think that, like users of German or French or of Asian languages, he found it easy to understand the problems of character encodings that involve special characters.


          NILFS on Jaunty   
The latest release of Ubuntu includes the long-awaited Ext4 FS (works flawlessly on my system).
Ext4 is faster & more secure, but it still lacks the ability to manage FS snapshots (ZFS excels at that, but runs only under FUSE on Linux).
An interesting alternative is NILFS:
NILFS is a log-structured file system supporting versioning of the entire file system and continuous snapshotting which allows users to even restore files mistakenly overwritten or destroyed just a few seconds ago.


NILFS maintains a repo for Hardy; the Jaunty repos contain only the userland tools, which won't do us much good since we need the kernel module as well. This leaves us with the option of installing it from source (still quite easy).


# a prerequisite
$ sudo aptitude install uuid-dev
# install the kernel module; the resulting module resides in /lib/modules/2.6.28-11-generic/kernel/fs/nilfs2/nilfs2.ko
$ wget http://www.nilfs.org/download/nilfs-2.0.12.tar.bz2
$ tar jxf nilfs-2.0.12.tar.bz2
$ cd nilfs-2.0.12
$ make
$ sudo make install
# installing the userland tools
$ wget http://www.nilfs.org/download/nilfs-utils-2.0.11.tar.bz2
$ tar jxf nilfs-utils-2.0.11.tar.bz2
$ cd nilfs-utils-2.0.11
$ ./configure
$ make
$ sudo make install

Creating a file system on a file (ideal for playing around):

$ dd if=/dev/zero of=mynilfs bs=512M count=1
$ mkfs.nilfs2 mynilfs

The FS is only a mount away:

# mounting the file as a loop device
$ sudo losetup /dev/loop0 mynilfs
$ sudo mkdir /media/nilfs
$ sudo mount -t nilfs2 /dev/loop0 /media/nilfs/

Now let's create a couple of files:

$ cd /media/nilfs
$ touch 1 2 3
# list all checkpoints & snapshots; the listing on your system will vary
$ lscp
 CNO        DATE     TIME  MODE FLG  NBLKINC  ICNT
   7  2009-05-01 01:08:09   ss   -        12     6
  13  2009-05-01 19:05:34   cp   i         8     3
  14  2009-05-01 19:05:59   cp   i         8     3
  15  2009-05-01 19:07:09   cp   -        12     6
# create a snapshot
$ sudo mkcp -s
# 15 is now a snapshot (mode is ss)
$ lscp
 CNO        DATE     TIME  MODE FLG  NBLKINC  ICNT
   7  2009-05-01 01:08:09   ss   -        12     6
  13  2009-05-01 19:05:34   cp   i         8     3
  14  2009-05-01 19:05:59   cp   i         8     3
  15  2009-05-01 19:07:09   ss   -        12     6
  16  2009-05-01 19:08:59   cp   i         8     6

# our post-snapshot file
$ touch 4

Now let's go back in time into our snapshot. NILFS enables us to mount old snapshots as a read-only FS (while the original FS is still mounted):

$ sudo mkdir /media/nilfs-snapshot
$ sudo mount.nilfs2 -r /dev/loop0 /media/nilfs-snapshot/ -o cp=15 # only snapshots work!
$ cd /media/nilfs-snapshot
# as we might expect
$ ls
1 2 3
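
A related trick (an aside based on the nilfs-utils tools rather than on the walkthrough above, so treat it as a hedged sketch): plain checkpoints can be reclaimed by the filesystem, while snapshots are protected, and chcp converts between the two states:

# promote checkpoint 14 to a snapshot (and demote it back again)
$ sudo chcp ss 14
$ sudo chcp cp 14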

NILFS has some interesting features; it's not production-ready yet, but it is well worth following its development.
          Akademy Qt5 QtQuick course and Nokia N9 fun   

KDE Project:

I went to the day-long course on Monday given by KDAB, about QtQuick for Qt5, and it was excellent. I had used Qt4 and QML for a Symbian phone project earlier this year, and the combination worked very well.

The course started with the basics, and in most respects Qt5 QML is pretty similar to the Qt4 version. I learned a few things about the parts of QML that I hadn't tried, like state machines and transitions. It got interesting at the end when Kevin Ottens explained how you could include OpenGL fragment and vertex shaders in QML. The Pimp my video: shader effects and multimedia blog on Qt Labs shows some sample code and a video of what it looks like running on a phone. My personal OpenGL foo isn't up to writing my own shaders, but there is a pretty complete library of canned effects, such as drop shadows, that comes with Qt5.

At lunch time, Quim Gil announced that all the registered attendees would receive Nokia N9 phones, and when we got them it felt like Christmas had come early this year for me. It really is a very fine phone. The UI is polished with plenty of apps, and the industrial design is slick. I tried the SIM from my ancient dumb Nokia phone and unfortunately it was too big to fit. I will have to get the data transferred to another card, or, as someone told me, it may be possible to cut down an old large SIM to make it smaller. I am certainly looking forward to trying it out as a phone.

Today I've been getting the N9 working as a development device. It is quite straightforward: you just need to activate 'developer mode' via an option on the Security settings panel. Then you configure Qt Creator with SSH keys to connect via the N9's USB cable (or WLAN works too) and you're done. I got a Hello World app working, and then tried to port the large Symbian app to the N9. The port was a bit more trouble than I was expecting, due to a problem with upper-case characters in the app name and other problems related to doing development under Mac OS X. I had developed the Symbian app under Linux and Windows, and I think it would be best to give up on Mac OS X for the N9 and use Linux instead.

I ssh'd onto the N9 and fished around a bit to see what was there. It has perfectly standard Debian packaging, and when you build an app in Qt Creator a .deb is transferred to /tmp and installed from there. The root partition has 2GB free to install apps which should be plenty and there was another partition with 7GB free for pictures, documents, maps and so on.
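
As a rough sketch of that round trip (the developer user and the 192.168.2.15 USB-network address are the developer-mode defaults as I remember them, so treat both as assumptions to verify on your device):

# open a shell on the phone over the USB network
$ ssh developer@192.168.2.15
# standard Debian tooling from there, e.g. list the installed Qt packages
$ dpkg -l | grep -i qt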

I did some work on the QSparql library that is used for interfacing Qt apps with the Tracker Nepomuk store, and it was nice to see that the lib came with the phone. It didn't come with the driver for accessing SPARQL endpoints though, and I couldn't find that driver with an 'apt-cache search' query. I might have to build and install it myself.

I noticed that none of the apps that were pre-installed used QML, and they used the obsolete MeegoTouch libs instead. I asked about installing Qt5 on the N9 at the QtQuick course and apparently it is quite straightforward.

I can confirm what other people have said about the N9: the UI and physical design are great, and it comes with a very nice development environment. I also thought Qt/QML on Symbian works really well. So I feel puzzled by Stephen Elop's unsuccessful Windows Phone 7 strategy. If WP7 isn't taking off now, I can't see how WP8 based on the Windows NT kernel is going to be any more successful. I can't imagine why anyone would want to buy a WP7 device when Microsoft announced the other week that there would be no forward path for existing WP7 phones to run WP8.

I saw Aaron Seigo and Sebastian Kugler give a couple of great presentations about Plasma Active at the Akademy conference last weekend. The highlight was Aaron laying into so-called 'Tech Pundits' who said that Plasma Active can't possibly compete with Android. Aaron pointed out that Android and Plasma Active actually have nothing much in common, and that the clueless pundits were doing something like criticizing an Italian meal at a restaurant for not being a French meal. An activity-based tablet is made of very different stuff to the conventional 'Bucketful of Apps' approach used by Android and iOS. By not competing with that approach head on, and by having a very lean operation, they can construct a viable business even with relatively small sales.

If the Nepomuk store in Plasma Active can be used as a basis for the activity-based app integration, then so could the similar (although less powerful) Tracker store in the N9. That is another powerful capability of the N9 that has yet to be exploited by 3rd-party apps. The N9 Tracker app data integration is much more powerful than the Windows Phone 7 hubs and active tiles. Some kind of nicer notification center is perhaps the main N9 feature that could be improved, as far as I can see.

Oh well, I just hope that Nokia will be able to sort themselves out, and recover from the train wreck that they appear to be at present. And thanks again guys for the N9.


          Screen Locking in Fedora Gnome 3   

KDE Project:

I wanted to try out Fedora 15 with Gnome 3 running under VirtualBox on my iMac before I went to the Berlin Summit. I've already tried using Unity-2d on Ubuntu, and I thought that if I had some real experience with Gnome 3 as well, I could have a bit more of an informed discussion with our Gnome friends and others at the Summit.

Sadly it didn't go all that well. Installing the basic distro went fine, but I couldn't manage to install the VirtualBox Guest tools so that 3D graphics acceleration would work. The tools built fine, but the 'vboxadd' kernel module was never installed and there was no clue why in the build log. Then, while I made a first attempt at writing this blog, VirtualBox crashed my machine and I lost everything. So it looks like I'll stick with VMware for a bit yet, even though it doesn't have 3D acceleration for Linux.

I discovered that Gnome 3 locks the screen when it goes dim by default, just like I found Kubuntu and Mandriva did recently. I had a look at where that option is defined, and it was under 'Screen'. So screen locking was under 'Screen', and I managed to guess where it was first time. Score some points for Gnome usability vs KDE there! Even so, I still don't think it is a 'Screen' thing; it is a 'Security' thing. Interestingly, Ubuntu doesn't lock the screen by default. Does that mean Fedora and KDE are aimed at banks, while Ubuntu is more aimed at the rest of us?
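
For anyone who would rather flip that setting from a terminal than hunt through the panels, a hedged aside: in Gnome 3 it lives in GSettings under the org.gnome.desktop.screensaver schema (the key name is my assumption from the schema as I know it, so check with 'gsettings list-keys org.gnome.desktop.screensaver' first):

# check the current behaviour, then turn automatic locking off
$ gsettings get org.gnome.desktop.screensaver lock-enabled
$ gsettings set org.gnome.desktop.screensaver lock-enabled false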

In contrast, I had spent a lot of time going round the KDE options and failing to find how to turn off screen locking. Thanks to dipesh's comments on my recent blog about virtual machines and multi-booting USB sticks, which pointed out that it was under 'Power Saving', I managed to turn it off on my Mandriva install. There were also options under power saving to disable the various notifications that had annoyed me so much, like the power cable being removed. Excess notifications are a real pain, and it is very important to be disciplined about when to output them, in my opinion. It feels like some programmer has mastered the art of sending notifications, and they want to show that skill off to the world.

Another app that outputs heroic numbers of notifications is Quassel when it starts up. I get a bazillion notifications about every channel it has managed to join, which I really, really don't care about. I think developers need to ask the question: 'if the user was given notification XXX, how would they behave differently, compared to how they would have behaved if they never received it in the first place?'. For instance, I can't imagine what I would do differently if I am told the power cord is disconnected, when it was me who just pulled it out. Maybe it would be useful if you had a computer where the power cord kept randomly falling out of its socket. Or with Quassel, do I sit watching the notifications for the twenty different IRC channels that I join, waiting for '#kde-devel' so I can go in immediately? In fact I can't do anything with my computer because it is jammed up with showing me notifications.

Unlike Kubuntu, Mandriva was able to suspend my laptop when the lid was shut even when the power cord was connected.

The default behaviour on both Kubuntu and Mandriva with my HP 2133 netbook, when I opened the lid, was to wake up with lots of notifications that I wasn't interested in, force me to enter my password in a screen-lock dialog that I didn't want, and then immediately go back to sleep. This was actually the last straw with Kubuntu, and I was really surprised that Mandriva 2011 was exactly the same.

I had a look at my Mac System Preferences and couldn't find any way to lock the screen. The closest equivalent was an option in the 'Security' group that allowed the system to log you out after x minutes of inactivity. That option certainly isn't on by default. Macs go to sleep when you close the lid, and wake up when you open the lid, without a lot of fuss or bother.

Anyhow, I look forward to seeing everyone in Berlin.


          Computer File Recovery 11.01.01   
Recover lost computer files instantly with Kernel for FAT and NTFS.
          Mental Health Rehab Foods To Enhance Mental Health   
Broccoli
Broccoli is America's favorite vegetable, according to a recent poll. No wonder. A cup of cooked broccoli has a mere 44 calories. It delivers a staggering nutritional payload and is considered the number one cancer-fighting vegetable. It has no fat, loads of fiber, cancer-fighting chemicals called indoles, carotene, 21 times the RDA of vitamin C, and calcium.

When you’re buying broccoli, pay attention to the color. The tiny florets should be rich green and free of yellowing. Stems should be firm.
Cabbage
This Eastern Europe staple is a true wonder food. There are only 33 calories in a cup of cooked shredded cabbage, and it retains all its nutritional goodness no matter how long you cook it. Eating cabbage raw (18 calories per shredded cup), cooked, as sauerkraut (27 calories per drained cup) or coleslaw (calories depend on dressing) only once a week is enough to protect against colon cancer. And it may be a longevity-enhancing food. Surveys in the United States, Greece and Japan show that people who eat a lot of it have the least colon cancer and the lowest death rates overall.

Carrots
What list of health-promoting, fat-fighting foods would be complete without Bugs Bunny’s favorite? A medium-sized carrot carries about 55 calories and is a nutritional powerhouse. The orange color comes from beta carotene, a powerful cancer-preventing nutrient (provitamin A).

Chop and toss them with pasta, grate them into rice or add them to a stir-fry. Combine them with parsnips, oranges, raisins, lemon juice, chicken, potatoes, broccoli or lamb to create flavorful dishes. Spice them with tarragon, dill, cinnamon or nutmeg. Add finely chopped carrots to soups and spaghetti sauce - they impart a natural sweetness without adding sugar.
Corn
It’s really a grain - not a vegetable - and is another food that’s gotten a bum rap. People think it has little to offer nutritionally, and that just isn’t so. There are 178 calories in a cup of cooked kernels. It contains good amounts of iron, zinc and potassium, and University of Nebraska researchers say it delivers high-quality protein, too.

The Tarahumara Indians of Mexico eat corn, beans and hardly anything else. Virgil Brown, M.D., of Mount Sinai School of Medicine in New York, points out that high blood cholesterol and cardiovascular heart disease are almost nonexistent among them.
Here are some useful resources for mental health rehab and drug detox centers
          Ksplice gives Linux users 88% of kernel updates without rebooting   

Have you ever wondered why some updates or installs require a reboot, and others don’t? The main reason relates to kernel-level (core) services running in memory which either have been altered by the […]

The post Ksplice gives Linux users 88% of kernel updates without rebooting appeared first on Geek.com.


          Ubuntu 9.04 (Jaunty) on the Asus EeePC 901   
With the release of version 9.04 of Ubuntu Linux, this distribution has truly made giant strides in netbook support. Besides releasing a version designed specifically for small devices (which includes the Notebook Remix interface by default), all the modules needed to make the peripherals work have been included in the kernel […]
          Comment on Linux? Real time? I don’t think so… by Trevor   
I think you're mistakenly assuming that "real time" always means "hard real time". Don't forget about "soft real time". These new patches may not be viable for safety-critical hard real time systems, but for soft real time, they'll probably be perfect. Think about applications like audio and video processing, VoIP servers running Linux, etc. These are domains where dropping a packet isn't the end of the world, but you still want harder guarantees and stricter, more responsive scheduling than you could get with previous Linux kernels. I wouldn't be so quick to deride the idea of Linux and real time.
          Asus ZenFone 4 Drivers - Android Driver - Windows   

Asus ZenFone 4 Drivers - Android Driver - Windows


Asus ZenFone 4 - Android Driver

Download the drivers for the Asus ZenFone 4 - Android Driver.


The driver downloads for the Asus ZenFone 4 - Android Driver for Windows available from Driver Max Download are on servers with direct links to the file (if it doesn't download, you can choose another server or let us know at - Requests). 

You may find more drivers you need with our partners:

Partners: Giga Driver | Geek Driver | Kit Driver | BR Driver

Our Facebook club: Clube dos Drivers

We will be very grateful if you place a link to Driver Max Download on a forum, social network, or your web page. 

Request your driver at - Requests
or find it Here!


Download for the Asus ZenFone 4 - Android Driver:


 Supported OS:  Windows


Available downloads:

Android Driver

ASUS ZenFone4(T00I) software Image: V6.5.35 (Android 4.4) Download

ASUS Android USB Drivers for Windows Download

Unlock Device App: Unlock boot loader Download

ASUS ZenFone 4 Kernel source file for Android OS Download

ASUS ZenFone 4 English Version User Manual Download

          corn cakes (gone wrong)   

summary of ingredients:

1 cup of corn flour
1 can of corn kernels
a bunch of parsley
ground black pepper
4 egg whites
rocket salad
balsamic vinegar
oil

although the photos present the process and product as appetising, i say that the corn cakes have gone wrong because they should have turned out spongier than they did. they taste more like corn cookies than corn cakes. perhaps the amount of corn flour should have been reduced. or maybe i should have just used the regular consistency for pancakes and added the corn kernels to the batter.
          Sad experience with Debian on laptop...   

KDE Project:

Until a few weeks ago, I had Kubuntu running on my Acer Aspire 5630 laptop (as described here), and was more or less satisfied. It looked great, hardware support was satisfying, but I was missing the incremental package upgrades that I was used to on Debian (so that things break one small piece at a time, not everything at the same time when you do an upgrade). When, after upgrading to gutsy, the laptop would lock up every few minutes for a minute or so, I thought it was a Kubuntu problem and took it as the reason to setup Debian instead. BIG MISTAKE!!!!

After I had Debian installed, I realized how bad Debian's Laptop support really is:

  • KNetworkManager would not work with any WPA-encrypted WLAN networks (I can only connect to unencrypted networks), so after booting I now need to run wpa_supplicant manually as root with the proper settings (see the sketch after this list)...
  • The ACPI DSDT in the BIOS is broken on this laptop, so suspend and hibernate won't work. In Kubuntu, I could simply fix the DSDT.aml and put it into the initrd, where the kernel picked it up. Unfortunately, Debian developers decided not to include that patch, so I can't replace the DSDT with the fixed one in the stock kernel. The patch is also not upstream, as described on the ACPI page, because the kernel devs feel that, inter alia, "If Windows can handle unmodified firmware, Linux should too." I think so too, but currently that's simply wishful thinking and does not have a bit to do with reality!!! I have yet to see one laptop where ACPI simply works out of the box in Linux! As a consequence, it seems that I will need to patch and compile the kernel myself for every new kernel upgrade (and of course also the packages for the additional kernel modules, to satisfy the dependencies!). The kernel devs again argue that "If somebody is unable to rebuild the kernel, then it is hard to argue that they have any business running modified platform firmware." Again, I agree, but just because **I AM** able to compile a kernel does not mean that I should be forced to compile every kernel I ever want to use myself!
  • The Debian kernel also does not include the acerhk module, which is needed to support the additional hot keys on the laptop
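
For the record, the manual WPA workaround looks roughly like this (a hedged sketch; the interface name wlan0 and the config path are from my setup and may differ on yours):

# append a stanza with the pre-shared key for the network
$ sudo sh -c 'wpa_passphrase myssid mypassphrase >> /etc/wpa_supplicant.conf'
# associate, then get a DHCP lease
$ sudo wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf
$ sudo dhclient wlan0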

So, in short, I now have a laptop without properly working WLAN, no suspend and hibernate, and no support for the additional multimedia keys. Wait, what were my reasons for buying a laptop? Right, I wanted it for mobile usage, where I'm connected via WLAN and can simply open it, work for two minutes and suspend it again...

I'm now starting to understand why some people say that Linux is not ready for the masses yet. If you are using Debian, it really is not ready, while with Kubuntu, all these things worked just fine out of the box (after I simply fixed the DSDT).

If having to recompile your own kernel every time it is upgraded is the price to pay for running Debian, I'm more than happy to switch back to Kubuntu again (which will cost me another weekend, which I simply don't have right now). The Kubuntu people seem to have understood that good hardware support is way more important than following strict principles (since the kernel devs don't include the DSDT patch, the Debian people also won't include it, simply because it's not upstream... On the other hand, they are more than happy to patch KDE with self-tailored patches and cause bugs with those patches!!!).


          Native Americans Call for Rethinking of Bering Strait Theory   

To download the MP3 audio, PDF transcript, LRC synchronized subtitles, Chinese translation and other companion study materials for this article, please visit:
http://www.unsv.com/voanews/specialenglish/scripts/2017/06/26/6530/

Native Americans are questioning the leading theory of how the first peoples in North America arrived on the continent.

For years, scientists have been debating where the first Native Americans came from, and when they arrived in North America.

The scientific community generally agrees that a single wave of people crossed a land bridge connecting Siberia and Alaska around 13,000 years ago.

This theory is called the Bering Strait Theory, named after the waterway between eastern Russia and western Alaska. Yet some Native Americans feel that theory is too simple and culturally biased.

Theories from religion before science

The first European explorers to arrive in the Americas did not use science to explain the people they found. The explorers instead looked to the Bible. Christianity’s holy book suggested that human beings were created around 4,000 years ago. Biblical tradition holds that all humans are related to the first man, Adam. That would include native peoples whom Europeans considered primitive or simplistic.

'Dominant science believed in a concept of superiority,' said Alexander Ewen. 'And that created an idea that either people were genetically inferior or that there were stages of civilization, and Indians were at a lower stage,' he said.

Ewen is a member of the Purepecha Nation. He wrote a book called the 'Encyclopedia of the American Indian in the Twentieth Century.'

Early scientists felt the 'primitives' they discovered in the Americas did not have the technology to have sailed the oceans. So they decided that Indians had reached North America by some unknown land bridge. They found their answer in the Bering Strait.

Ewen says that scientific theory has lasted to this day, even with new discoveries and technology. Yet new findings suggest that Indians arrived much earlier and by using different methods.

Map of eastern Russia and Alaska with a light brown border depicting Beringia, where archaeologists believe ancient Americans crossed from Siberia into Alaska around 13,000 years ago. (U.S. National Park Service)

'In the first place, it's simplistic,' said Ewen. 'The people in this hemisphere were, and are, extremely diverse, more than any other place in the world.'

Conflicting theories

In the 1930s, scientists studied a number of bones from ancient mammoths. The bones were discovered in the American community of Clovis, New Mexico. Among them were several unusual spear points, which the scientists named “Clovis points.”

Since then, tens of thousands of the Clovis points have been found across North America. Some have even been found in South America, as far south as Venezuela.

This led scientists to decide the Clovis people must have been America's first peoples. They believed the Clovis people arrived about 13,000 years ago.

Additional discoveries in the 1970s led some scientists to push back the arrival date. Archaeologist James Adovasio dated artifacts found in Pennsylvania's Meadowcroft rock shelter to be up to 16,000 years old. But other scientists criticized the methods he used to arrive at that date.

The Meadowcroft Rockshelter in Washington County, Pa., where archaeologists found artifacts dating back 16,000 years.

All fields of science are in the debate

Other scientists have expressed their ideas on the subject. In 1998, University of California-Berkeley linguist Johanna Nichols argued that it would have taken up to 50,000 years for a single language to split into the many languages spoken by modern Native Americans. This theory meant that America’s first peoples would have arrived closer to 19,000 years ago.

Geologists have said that it would not have been possible to cross the Bering Strait by land until 10,000 or 12,000 years ago. This led to theories that early humans might have sailed down the Pacific coast into the New World.

In 2015, a Harvard University geneticist, Pontus Skoglund, noted genetic links between Amazon Indians and the native peoples of Australia and New Guinea.

An elderly member of Brazil's Surui Nation. Researchers found the Surui bear a genetic relationship to indigenous peoples of Australia and New Guinea.

Yet a Smithsonian Institution anthropologist was criticized for suggesting Stone Age Europeans sailed across the Atlantic thousands of years before Christopher Columbus.

In April of 2017, researchers in California studied crushed bones they say came from an ancient Mastodon. Mastodons are no longer alive, and were related to modern elephants. The researchers think the creature they studied was killed by humans 130,000 years ago. However most scientists reject this theory because the findings cannot be confirmed.

Native American accounts

Some Native American tribes have their own beliefs of how their people came to the continent.

Montana's Blackfoot tradition says that the first Indians lived on the other side of the ocean, but their creator decided to take them to a better place. 'So he brought them over the ice to the far north,' the story says.

The Hopi people of Arizona say their ancestors had to travel through three worlds before they finally crossed the ocean going east to a final new world.

And Oklahoma's Tuskagee people believe the 'Great Spirit' chose them to be the first people to live on the earth.

However, few scientists seem to take those beliefs seriously. Joe Watkins, supervisory anthropologist at the U.S. National Park Service, says scientists are uneasy about the time references and the possibility of more than one explanation.

Yet he does not feel the beliefs should be dismissed completely.

'…I do believe most of them carry within them kernels of truth of use to researchers,' he adds.

I’m Phil Dierking.

Cecily Hilleary reported this story for VOANews.com. Phil Dierking adapted her report for Learning English. George Grow was the editor.

How do you think the first people came to the Americas? We want to hear from you. Write to us in the Comments Section or on our Facebook page.

Words in This Story

artifact - n. a simple object (such as a tool or weapon) that was made by people in the past​.

bias - n. a tendency to believe that some people, ideas, etc., are better than others that usually results in treating some people unfairly​

geologist - n. a scientist who studies rocks, layers of soil, etc., in order to learn about the history of the Earth and its life

inferior - adj. low or lower in quality​

kernel - n. the small, somewhat soft part inside a seed or nut​

linguist - n. a person who studies the science of languages

primitive - adj. very simple and basic ​

reference - n. the act of referring to something or someone​

spear - n. a weapon that has a long straight handle and a sharp point​

superior - adj. high or higher in quality​


          Avast! Free Antivirus   

Avast! Free Antivirus is an antivirus solution whose main strengths are effectiveness and ease of use. The user interface, completely redesigned as of release 5.0, is particularly well suited to less experienced users, while still allowing easy access to the more advanced settings.

As for security, the effectiveness of the Avast! Free Antivirus engine is attested by the ICSA Labs and VB100 certifications, as well as by the interesting results recorded in comparative tests run by independent research labs such as AV-Comparatives.org and AV-Test.org.

Performance-wise, Avast! Free Antivirus uses a limited amount of system resources and is extremely lightweight, so it is a valid solution even for less powerful computers. Avast! Free Antivirus is free for non-commercial use; alternatively, two paid editions are available, Avast! Pro Edition and Avast! Internet Security. For further information, see the official Avast! site.

Here is a list of the software's main technical features:

  • Antivirus kernel
  • Heuristic scanning engine
  • Automatic updates
  • Integrated real-time anti-rootkit
  • Complete system protection against malware and spyware
  • Web protection
  • E-mail protection
  • P2P protection
  • Instant messaging protection
  • Integrated Virus Cleaner
  • PUP emulator (potentially unwanted programs)
  • Silent gaming mode
  • 64-bit operating system support
  • Windows 7 support

Here, in brief, are the new features introduced in this release:
Updates:
Smart virus definition updates
Incremental updating system minimizes the size of regular update files.

Fast application of updates
New format for the virus definition file speeds up application of updates into avast! 5.0 and reduces demand on CPU/memory, resulting in uninterrupted computer use.

Gaming:
New Silent/Gaming Mode automatically detects full-screen applications and disables pop-ups and other on-screen notifications without degrading security.

Optimized for latest Intel Core i7 CPUs:
Critical sections of the avast! scanning engine code have been optimized to deliver unrivaled performance on the latest Intel chips.

CPU optimization:
Multi-threaded scanning optimization
avast! runs faster on new multi-core CPUs. A new avast! feature allows the splitting of large individual files between cores, accelerating the scanning process.

Green computing:
Reduced demands on the disk drive result in lower energy consumption.

Miscellaneous:
avast! iTrack - Real-time graphic scanning reports.
Graphical user interface - Easy to navigate graphical interface.
Automatic processing - Infected files are processed automatically without requiring user instructions.

          Ideas for a cgroups UI   
On and off over the past year I’ve been working with Jason Baron on a design for a UI for system administrators to control processes’ and users’ usage of system resources on their systems via the relatively recently-developed (~2007) cgroups feature of the Linux kernel. After the excitement and the fun that is the Red … Continue reading
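
For context, the raw kernel interface such a UI would wrap is just a virtual filesystem; a minimal sketch (cgroups v1, with an illustrative mount point and group name):

# mount the cpu controller (many distros do this for you)
$ sudo mount -t cgroup -o cpu none /sys/fs/cgroup/cpu
# create a group, lower its CPU weight, and move the current shell into it
$ sudo mkdir /sys/fs/cgroup/cpu/demo
$ echo 512 | sudo tee /sys/fs/cgroup/cpu/demo/cpu.shares
$ echo $$ | sudo tee /sys/fs/cgroup/cpu/demo/tasks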
          Keeping the Faith (or Not)   

I think most families, especially large ones, have histories that are far more legend than factual.  Kernels of truth - sometimes as little as coincidence of surnames on the old family tree, or some ancestor having lived in the same locale as a celebrated figure - grow into luxurious vines of mythology: "We're related!"  "Great Grandfather knew him!"  Added to this tendency, I think, is the practice of parents or older relatives to sanitize or simplify complex situations into tales fit for young ears.  

It was an object of faith in my family - and still is among some of my cousins on the Warren side (my mother's family), especially the Protestant ones - that "Grandma was kicked out of her strict Catholic Irish family for marrying a Protestant."  This, too, was what I had believed - until in her later years, when I think she sensed she was sinking into the dementia which finally overcame her memory entirely, Mom told me what I assume is a more accurate version of events.  The Catholic/Protestant version of my mother's story paired nicely with a reverse story from my father's side whereby my Uncle Henry's wife was disowned by her strict Lutheran family when she married my Catholic Irish uncle (I wouldn't be surprised if the Irish aspect was worse than the Catholic one) and, when this Aunt Laura died giving birth to her second child, "they didn't even come to her funeral."  This second story, also, seems to be less than accurate.  For instance, it turns out that that child and Laura are buried in her (Laura's) family plot.  Incidentally, the child of this birth was named Hedwig and died eleven days after her mother, no doubt having realized what a handicap growing up with the name 'Hedwig' would prove to be.  

The story of my grandmother being disowned was true, and it apparently was also true that her picture was cut from her family's photographs, since we learned the latter detail from cousins who discovered our kinship when a cousin of mine and her employer noticed that they had similar names in their ancestry.  "We always wondered what she had done," they told my mother, when they finally met.  However, as my mother later explained, my grandmother Elsie met my Protestant grandfather Ephraim after she had already left home.  The new story has some suspicious details, which I will point out but, I think, it is substantially accurate.  

Elsie had graduated high school and had taken a job at a local hospital, which she evidently enjoyed very much.  She still lived at home and, I gather, was either the eldest daughter of the family or else she was the oldest girl still living at home when her mother died.  Not too long after she began working, her mother passed away.  Since the family was a strict Catholic Irish family, there was, of course, a passel of kids younger than Elsie who were still in school and in need of a parent substitute devoted to the domestic chores involved in raising children in the first decade of the 1900s.  My great grandfather, who by all accounts was a son of a bitch, was not about to take over these duties, nor to pay someone else to do it (I gather the family was comfortably off, though not wealthy).  The suspicious details (because they sound a tad melodramatic) of what followed are these:  it was just before Christmas, gifts were already wrapped and the names of the recipients were attached.  Great-grandfather removed Elsie's name from her gifts and readdressed them to her younger sisters.  He then led her to her late mother's closet and told her she was to quit her job and that henceforth these would be her clothes, and that she was to stay home taking over the duties of keeping house for the family and of raising her younger siblings.  

By all accounts, Elsie was a girl who, though an extremely strict parent later to her own daughters, loved a joke and loved a good time.  By this I don't mean to imply she was in any way loose, but just that she was not ready to give up her independence and probable future happiness to become a domestic slave.  She had vacationed the previous summer with a cousin in Geneva, NY and had had a marvelous time there.  Upon being faced with a dreary future at home, she packed her bag and as soon as she got the chance, left home and fled to the cousin, who took her in.  With a single exception, she never saw any of her family again; her siblings were forbidden to mention her name and her face was cut from all the family photos.  The one sister she did see again was Great-Aunt Daisy who, after she grew to adulthood, tracked Elsie down and re-established a relationship with her.  My mother remembers Aunt Daisy's visits as great treats; Daisy always came to visit laden with gifts for the children.  By leaving home and later marrying my grandfather, Elsie Warren left the middle class and became firmly embedded in the working class, in which every one of her daughters remained and among which they chose their spouses.  

Not too long after she left home, Elsie met Grandpa Ephraim at some social affair - a village dance or festival of some sort - and in short order the two wed.  Grandfather was from an Appalachian mountain family that was spread along the New York Southern Tier and the Pennsylvania Northern Tier and, believe me, even today that is country.  At some point in his youth, Ephraim lived in Elmira, NY and family legend has it that he was "friends with Sam Clemens", who is, of course, better known as Mark Twain.  I doubt they were friends, (there would have been quite an age difference) but he may have known Clemens, in passing, as a fellow Elmiran.  Perhaps more likely, he just knew Clemens by reputation as his city's most famed inhabitant at the time.  Or possibly they weren't even there at exactly the same time, merely about the same time.  However, I do recall that I once mentioned "Mark Twain" and Grandpa (who didn't like me a whole lot anyway), frowned and thundered, "His name is Sam Clemens!"

I suspect the basic truth about Elsie is that she was a rebel from an early age.  She was probably a bit of a 'handful', and I wouldn't be at all surprised if her father disliked her a bit.   These legends of people being cast off for marrying outside the faith may be technically true as to the specific timing of the family decree that they be removed from the family, but my guess is that more often than not, if the religion is not one of those few cults that practice shunning, the marriage is only the last in a long line of small rebellions against the parental strictures.  The child who is thus cast off naturally feels that he or she is on the right side of the equation and is likely to pass on to the following generations a tale told from her point of view.  The parent depicted as overly strict probably would, in turn, describe the child as overly wild or naughty or willful.

I knew Grandpa Ephraim Warren (my only grandparent who had not died before I was born), and as I say, he didn't care for me too much.  As a man who had brought up eight daughters, the younger ones of whom he had to raise without Elsie's help, Grandpa wasn't terribly fond of boys in general.  Elsie died at 49 from complications from epilepsy, just months after Mom's high school graduation; Mom's two youngest sisters either did not recall their mother at all, or had only one or two vague memories.  My mother was raised very strictly, and she herself was not at all a rebel, although a couple of her sisters were somewhat more rebellious against the family norms than she.  Mom and her sisters grew up in a series of small country towns; Grandpa worked in the lumber trade, which required him to move occasionally.  In addition to those requisite moves, Elsie had some variety of wanderlust which caused her to change houses every couple of years even if Grandpa's work did not require a move.  Elsie never returned to the Catholic church, but she made sure her daughters attended whichever Protestant church was nearby.  

My mother so hated moving about that she made owning their own home from the start a condition of marriage to my father, and he and she chose and purchased a house in the city before they married.   Mom always wanted to be a "city girl" and she absolutely hated being a stand-out in any way.  She was the farthest thing from a rebel, yet fate conspired against her.   She became a Catholic, the only one of her sisters to do so, although no less than four of the others married Catholics.  She grew up thinking boys were somehow nasty, and those of her sisters who had children before she did dutifully had only daughters.  Mom broke the family tradition by having me, and then compounded her apostasy by having seven more boys.  And the whole City Girl thing went by the wayside when my Dad's brother Bernard developed a heart condition that rendered him unable to continue working on the family farm which he had inherited.  When I was three, Dad swapped the house in the city, which contained a rental flat upstairs, for the family farm and thus Mom became a farmer's wife as well as mother of eight boys (and of my only sister, Lucy), for neither of which activities she'd had any practical preparation.  "No one will ever know how often I was faking it," she confessed to me a few years ago.   It was strange to hear, since I always remember her as a serene presence, and as calmly expert in any matter that arose.  And you better believe, with nine children and a bipolar, alcoholic husband, plenty of unusual matters did arise.  

I have been thinking about the unreliability of so much of what I "know" lately, as I find out more and more things I was sure were true are actually highly doubtful.  There is so little we actually know about the past; we often find that even the events we witnessed are remembered differently by others who were also present.  Although I really try to be truthful when telling about my past or my family's history, the fact is that much of the nuance, at least, could be better labelled, "my story" than "my history".   It really is true that the older one gets, the less one knows.  Or at least there is so much less about which one can be certain.  It gives me quite a different perspective on history, which, besides being written by the winners, is even more likely written in service of mythologizing and bowdlerizing the past to fit the tellers' prejudices.    

Put another way, there is so little of what actually happened that matters to any individual life.  What one believes is true is the sole determinant of the impact of the past upon one's life.  

Yikes! we are even more rudderless than I thought!

          Debian 9 Stretch is now available   

Debian 9 Stretch is now available

After several months of development, the Debian team has released the stable version of Debian 9 Stretch. The new version brings numerous interesting new features for users.

Debian 9 drops the PowerPC platform and adds the mips64el platform. It includes kernel version 4.9, which is not the latest (4.11) but is the most tested to date. This will not stop us from installing whichever kernel version we want.

As for ...

          Assembly 6510: vintage programming   

It hasn't been thirty years, but almost. During my school years my parents gave me a Commodore 64: some friends had one for gaming, and I wanted one too, for its incredible games. After an initial period of video-game indulgence, the desire to understand more took hold of me. At the newsstands, back then, you could find very few computer magazines (we're talking about the second half of the '80s); mostly there were cassettes full of games and a few magazines that went deeper into programming. Thanks to those magazines, one above all CCC, I developed a passion for programming, first in Basic, then in assembly.

It almost makes you smile to think of the power of the computers of that era. The Commodore 64, a little jewel for its time, had a CMOS 6510 CPU running at the incredible speed of 1MHz, a dedicated sound chip (the SID 6581), a full 64KB of RAM, 20KB of ROM (8K Basic + 8K Kernal + 4K character set), and RGB PAL video output at a resolution of 320x200 with 16 colors in VIC-II graphics mode. Quite a lot for its release year, 1982.

The Basic V2 it shipped with was truly minimal: useful for teaching purposes but laughable for anything more serious. Even then, to use the graphics modes and sprites you had to make heavy use of direct writes to particular memory cells; anyone of my generation who knows the Commodore 64 surely knows the poke command. To be honest, the move to assembly was not a free choice but a necessity.

The technological simplicity of the CPU could be seen in its 8-bit design and 16-bit memory addressing (2^16 = 65536 = 64KB... fancy that), and above all in its small, simple instruction set; the number of registers was also limited: Accumulator (A), X-Register (X), Y-Register (Y), Program Counter (PC) and Status Register. The first three could be used directly while programming (you can think of these three registers as variables whose value, strictly 8-bit, could be used to modify a memory cell); the Program Counter was the register pointing to the memory location of the machine-language instruction currently being executed; and finally the Status Register set certain bits depending on the execution of the code. Nowadays assembly programming is reserved for highly specialized purposes, given the complexity we have reached, but back then it was enough to know that memory location 53280 ($D020 in hexadecimal) set the color of the screen border to be able to write, in assembly:

lda #$00
sta $d020
rts

Having turned this code into pure machine language and run it, you felt like a super programmer watching the border turn black. I remember that some time later I also got into assembly programming on the Amiga and its Motorola 68000, but even though there the registers numbered 15 (if I'm not mistaken) and everything was far more advanced, I never again felt the same magic with assembly that I felt on the Commodore 64.

Maybe it's the effect of these being the last days of the year, when you find yourself thinking back on what happened in the one gone by, but going back thirty years is perhaps too much. And yet I often find myself thinking that back then not only was life simpler, but programming too! I remember coming home from school and, in the afternoon, once homework and other duties were done, trying, writing and modifying code I found in those magazines, and from there setting off on experiments of my own that often ended in failure, but on other occasions in discoveries that, at the time, seemed like science fiction. I remember, for example, a problem I set myself back then that kept me busy for quite a few days before I found a solution. One thing I hated (and still hate today when I use emulators) was the source code editors. Perhaps because I'm spoiled now, I hated the scrolling and the limited view of columns and rows that the computers of the era had (the Commodore 64, in text mode, had 40 columns by 25 rows). When you had to refer to particular blocks of code or data, you wasted a lot of time just going to look for them. For example, to display a block of text you had to write a few lines of code calling routines directly in the computer's ROM, and fill an area of memory with the text to be used. Let me explain with an example:

#pseudo code
     lda {"low" memory location of msg1}
     ldx {"high" memory location of msg1}
     jsr {memory location of the kernal routine that displays text}
     ...
msg1 .text "Text to display 1"
     .byte 0
msg2 .text "Text to display 2"
     .byte 0
msg3 .text "Text to display 3"
     .byte 0

To cut down on jumping around in the code, and purely for my own amusement, I remember trying to think of a way to gather the code together with the data it used. My final goal was something like this:

#pseudo code
jsr {my routine to display the text that follows}
.text "Text to display 1"
.byte 0
jsr {my routine to display the text that follows}
.text "Text to display 2"
.byte 0
jsr {my routine to display the text that follows}
.text "Text to display 3"
.byte 0
...

I never found out whether this was ever really useful to me, but I remember it was a very interesting challenge at the time. So as not to leave anyone hanging, I'll cut it short and say that I found the solution. I got there by knowing how the 6510 called subroutines (the jsr command seen above, and rts, which is like today's return). When the Commodore 64 encountered this command, it saved the current memory location on the stack in two bytes and then jumped to the new memory location; when it encountered the rts command, it fetched from the stack the memory location it had come from, added one, set the Program Counter to that location and continued execution. That's it, nothing complex: my solution was to modify that saved address so that, on rts, the program would continue where I wanted it to.

I still remember this challenge, and how I solved it back then, after all these years; so here I am with an emulator, repeating the challenge, though this time knowing the solution:

       *=25000
       jsr print
       .text "hello world!"
       .byte 0
       # rest of the code
       rts
print  pla
       sta $fb
       pla
       sta $fc
       ldy #0
print2 inc $fb
       bne print3
       inc $fc
print3 lda ($fb),y
       beq end
       jsr $ffd2
       bne print2
end    lda $fc
       pha
       lda $fb
       pha
       rts

First of all, the code is written directly in TurboAssembler (T-ASS). "*" specifies the memory location where the editor will place the code that follows. The ".text" and ".byte" tags are editor directives that allow inserting text or bytes. "print", "print2", "print3" and "end" are labels we can use to jump to other parts of the code. If I run this code, when the 6510 encounters the jsr print instruction, what I described above happens: it saves the current position on the stack and jumps to the instructions at the "print" label. pla is a 6510 command that pulls a value off the stack; I do it twice, saving the results into memory locations $fb/$fc. These then point right at the memory area where my text is ("hello world!"), so, roughly speaking, my code takes it byte by byte and displays the bytes on screen until it encounters the value zero. It then saves the new memory location back onto the stack, which will be used by the rts. It would be overly pedantic, and pointless in 2016 (almost 2017), to describe all these commands in detail; I'll refer you to this wiki, where you can find everything.

But does it work? First of all, here is the code inside the editor in the emulator (sadly, I no longer have a real Commodore 64):

And here it is running:

I know, the years go by; the thinning hair and all the rest remind me that I'm no longer 14. But one thing has remained from those days: curiosity about the world of programming. That's not nothing.

Tags:

Continue reading Assembly 6510: vintage programming.


(C) 2017 ASPItalia.com Network - All rights reserved


          Asp.net Core, Docker and Docker Swarm   

To me it feels like an eternity has passed. Around 2004 you could run code written for the .NET Framework on Linux with Mono. As a longtime enthusiast of the Linux world too, this seemed interesting to me right away (I had covered some aspects on this blog of mine as well), and I would never have believed that, out of the blue, Microsoft itself would one day decide to fully support a direct competitor, as it did when it began developing asp.net Core.

Developing web applications on Linux is now easier than ever, and online you can find plenty of guides showing how within reach of the average developer it is (even with Mono and Linux it was possible to develop web applications with asp.net, but personally I saw that as a demonstration of Mono's potential rather than something to actually use in production: I would never have dreamed of developing asp.net web apps on Linux). An excellent guide, still being improved, on configuring asp.net core on Linux can be found in this post by Pietro Libro.

Microsoft's opening toward other worlds has not stopped at Linux: it extends to the Apple world too and, not content with that, it has not stopped even at novelties like the container world, where Docker is by now the dominant player; Microsoft officially releases images for this world as well. Using it is simple: with Linux running, we can take a solution we have just built with Visual Studio and start it without installing anything (other than Docker itself, naturally). For example, having downloaded the project into a directory, just start docker like this:

docker run -it --name azdotnet -v /home/az/Documents/docker/aspnetcorelinux/src/MVC5ForLinuxTest2:/app -p 5000:5000 microsoft/dotnet:1.0.1-sdk-projectjson

Once the images needed for startup have been downloaded, we get a prompt inside the docker container, where we can type:

dotnet restore
dotnet run

The dependencies are downloaded and the project is compiled. At the end:

Project MVC5ForLinuxTest (.NETCoreApp,Version=v1.0) will be compiled because the version or bitness of the CLI changed since the last build
Compiling MVC5ForLinuxTest for .NETCoreApp,Version=v1.0
Compilation succeeded.
    0 Warning(s)
    0 Error(s)
Time elapsed 00:00:02.8545851

Hosting environment: Production
Content root path: /dotnet2/src/MVC5ForLinuxTest
Now listening on: http://0.0.0.0:5000
Application started. Press Ctrl+C to shut down.

From the browser:

Before going on, a few words about the parameters used with docker. The -it parameter connects the current terminal to the inside of the container. We could also have used:

docker run -d ...

Where d stands for detach: this way we would see nothing on the terminal and would stay in the current linux shell. On the other hand we would not see the build output and could not send the build commands immediately. It is always possible to reconnect to a container started in detach mode to check what is going on, for example:

docker logs azdotnet

This command shows the content of the terminal inside the container (adding the -f parameter, the command would not return to the prompt but would keep waiting for new messages). Finally, we could have reconnected with the command:

docker attach azdotnet

The -v parameter manages volumes inside docker containers. In short, docker supports two kinds of volumes. The first, the simplest and the one used in my example, creates a link inside the container to the disk of the host machine running the docker session. In my example there are two paths separated by ":": on the left is the path on the host's disk, on the right the path inside the container. In the example, /home/az/Documents/docker/aspnetcorelinux/src/MVC5ForLinuxTest2 is the host path that will be mapped as /app inside docker. Just for completeness, the second kind of volume is the one managed internally by docker: volumes mounted this way live in a private docker path; this kind is handy for sharing directories and files between several containers.
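
A quick sketch of that second kind of volume (the volume name here is mine, purely illustrative):

docker volume create --name shared-data
docker run -it --name azdotnet -v shared-data:/app microsoft/dotnet:1.0.1-sdk-projectjson
docker volume ls    # the volume survives in docker's private storage even after the container is removed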

The -p parameter defines which ports docker must open from the container towards the host; like the volumes parameter, it takes two values separated by a colon: the value on the left is the port exposed on the host, the value on the right is the port inside the container.
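
For instance, the same run command could expose the app on host port 8080 while the app keeps listening on 5000 inside the container:

docker run -it --name azdotnet -v /home/az/Documents/docker/aspnetcorelinux/src/MVC5ForLinuxTest2:/app -p 8080:5000 microsoft/dotnet:1.0.1-sdk-projectjson
curl localhost:8080/api/systeminfo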

Finally we must specify the name of the image to use; since it is stored on the official docker hub we can refer to it simply as microsoft/dotnet; if it were hosted on another docker image service we would have to write the full path including the domain: miohost.com/docker/hub/microsoft/dotnet. The part after the colon is the tag specifying which version we want to use. In this example we use a specific version; we could also have used latest to get the most recent one, but in real-world practice I advise against it because, as has happened to me several times, a version change can introduce problems that force you to rework everything. In an early test I had specified the latest tag, only to discover, when version 1.1 of asp.net core came out, that the project no longer compiled because of version differences in the dependencies. Another recent case: I was basing an image on ubuntu, and version 14.04.4 shipped a command for decompressing a particular format that was removed in the latest version, 16.04; after switching to that ubuntu version everything stopped with an initially incomprehensible error message that boiled down to that missing command.
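
A small sketch of the difference (the second pull is the moving target I advise against for real projects):

docker pull microsoft/dotnet:1.0.1-sdk-projectjson   # pinned tag: reproducible
docker pull microsoft/dotnet:latest                  # may change under you at any time
docker images --digests microsoft/dotnet             # shows the immutable digest behind each tag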

We have often used azdotnet as a parameter value: this is the name we gave our container thanks to the --name parameter; had we not assigned one on the command line, docker would have invented its own. If we are still in the terminal attached to docker, we can leave it with the sequence Ctrl+P Ctrl+Q. With the docker ps command we can see information about the containers running on our machine:

$ docker ps -a
CONTAINER ID  IMAGE                                   COMMAND      CREATED         STATUS         PORTS                   NAMES
df65019f69a5  microsoft/dotnet:1.0.1-sdk-projectjson  "/bin/bash"  15 seconds ago  Up 11 seconds  0.0.0.0:5000->5000/tcp  azdotnet

Had I not specified the name, I would have ended up with this random goofy_shaw:

CONTAINER ID  IMAGE                                   COMMAND      CREATED         STATUS        PORTS                   NAMES
04c3276cfd73  microsoft/dotnet:1.0.1-sdk-projectjson  "/bin/bash"  10 seconds ago  Up 7 seconds  0.0.0.0:5000->5000/tcp  goofy_shaw

Want to stop the process in this container (in either case)?

docker stop azdotnet
# or
docker stop goofy_shaw

Want to delete the container and the image?

docker rm azdotnet
docker rmi microsoft/dotnet:1.0.1-sdk-projectjson   # rmi takes the image name, not the container name

Want to be destructive and delete everything present in docker?

docker rm $(docker ps -qa)
docker rmi $(docker images -qa)

The convenience does not stop here. Did I spot a small mistake? Since the volume is mounted locally, I can just run (if I have Microsoft's VSCode editor, but any text editor does the same job):

code .

Then, once the file is saved, I can rebuild and restart inside the docker terminal, after stopping with Ctrl+C:

dotnet restore
dotnet run

And see the changes. We could also create a ready-made image with our app inside it. The simplest way is to download the image we are interested in, save our app's files inside it, then commit the changes and reuse the image whenever we want (a sketch of this appears a bit further below). Alternatively, we can add to our web app's source code a file that builds the docker image automatically. Here is an example that will also come up again later:

# Example from AZ
FROM microsoft/dotnet:1.0.1-sdk-projectjson
MAINTAINER "AZ"
COPY ["./src/MVC5ForLinuxTest2/", "/app"]
COPY ["./start.sh", "."]
RUN chmod +x ./start.sh
CMD ["./start.sh"]

With this file we can build an image with this command:

docker build -t myexample/exampledotnet .

myexample/exampledotnet will be the name of the image we can use to start a container with its contents. If we run this command, we will see that docker downloads the base dotnet image if it is not already present; then, after the maintainer information line, the local files from the ./src/MVC5ForLinuxTest2/ directory are copied into the image under the /app path. The same happens for the start.sh file. That file is then made executable, and when the image is started this is exactly the file that will be run. Its content? Here it is:

#!/bin/sh
cd app
dotnet restore
dotnet run

Of course we could have built an already compiled image, but this setup will help us understand an important point we will see later.
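
For completeness, a minimal sketch of the manual commit route mentioned above (container name and tag are mine, purely illustrative):

docker run -it --name throwaway microsoft/dotnet:1.0.1-sdk-projectjson /bin/bash
# exit the shell once done, then copy the app in and freeze the container as an image
docker cp ./src/MVC5ForLinuxTest2 throwaway:/app
docker commit throwaway myexample/exampledotnet:manual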

Creating images is not the point of this post, so I will move on, also because the official documentation is clear on the subject. I want to continue because the rest concerns a situation that is evolving right now, as I write this post. In previous posts I covered the world of microservices and the possibility of distributing them across several machines. In the latest one I looked at the advantages of Consul for service discovery and more. At the time I had also looked into whether docker could do this itself. With version 1.1x, I find that docker natively provides a cluster of hosts on which containers are deployed. Docker swarm provides the tools for this, but only since version 1.12 has it all been simplified. In earlier versions, from my point of view, it was madness: the machines managing everything had to be wired up with Consul, the whole configuration was convoluted and, from my own attempts, a trivial configuration mistake was enough to bring everything down - I know, the fault was mine, not docker's. Since version 1.12 everything has become trivial, although, I will say it right away, there is a subtle bug that I will describe shortly. First of all, what is Docker swarm? It is nothing other than docker management across a cluster. What we did earlier on a single machine with the basic docker commands, with Docker swarm we can do on several machines without worrying about how to configure it all. Is everything easy? Absolutely! The Docker developers have built an interesting, genuinely astonishing project for what it promises and delivers (bugs aside). Starting from the beginning, what do we need to set up a cluster with Docker swarm? One or more machines, called managers, that manage all the connected hosts (a majority of managers, N/2+1 out of N, must remain available to keep quorum, and the official documentation advises against going beyond seven managers). You can try all this with virtual machines, as I did for this post's example, and Linux is pretty much mandatory (any distribution is fine; Apple and Windows machines are not recommended). In my case I had two machines with these two IPs:

192.168.0.15
192.168.0.16

The machine ending in 15 will be the manager and the one ending in 16 the worker (by default the manager machine is also used to run containers, though a single command can make it act as a manager only, as sketched right below). On the manager, from a terminal, we start everything:
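
As a side note, the command that turns a node into a manager-only machine is a one-liner, run against the node's hostname (osboxes1 in the listings further below):

docker node update --availability drain osboxes1

Tasks already running on a drained node are rescheduled elsewhere, and no new containers are assigned to it.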

docker swarm init --advertise-addr 192.168.0.15

If everything works perfectly, the reply will be:

swarm initialized: current node (83f6hk7nraat4ikews3tm9dgm) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0hmjrpgm14364r2kl2rkaxtm9tyy33217ew01yidn3l4qu3vaq-8e54w2m4mrwcljzbn9z2yzxrz 192.168.0.15:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

It could not be any clearer. The swarm network has been created and we can add manager or worker machines as described in the text. So, on machine 16, to join it, we type:

docker swarm join --token SWMTKN-1-0hmjrpgm14364r2kl2rkaxtm9tyy33217ew01yidn3l4qu3vaq-8e54w2m4mrwcljzbn9z2yzxrz 192.168.0.15:2377

If the network is configured correctly and the necessary ports are not blocked by a firewall (check the Docker swarm documentation; I do not remember them off the top of my head), the answer will be:

This node joined a swarm as a worker.

Now, from the manager, 15, let's see if it is true:

docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
83f6hk7nraat4ikews3tm9dgm *  osboxes1  Ready   Active        Leader
897zy6vpbxzrvaif7sfq2rhe0    osboxes2  Ready   Active

Want more technical details?

docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 2
Server Version: 1.13.0-rc2
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 61
 Dirperm1 Supported: true
...
Kernel Version: 4.8.0-28-generic
Operating System: Ubuntu 16.10
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.445 GiB
...

Perfect. Now we can install our containers and distribute them across the cluster. For this example I created a simple web application with a web API that returns a unique code for each instance. The source code is public and available at this url:
https://bitbucket.org/sbraer/aspnetcorelinux

By signing up for free with docker we can create our own images; moreover, the ability to run automatic builds from our dockerfiles stored on github or bitbucket was introduced recently. Here is the link for this post's example:
https://hub.docker.com/r/sbraer/aspnetcorelinux/

The convenience is that I can change my app's source code locally, commit it to bitbucket where I have a free account, and a few minutes later have a docker image ready. Exactly what we need for our example.

docker service create --replicas 1 -p 5000:5000 --name app1 sbraer/aspnetcorelinux

The docker command is now slightly different. You immediately notice the addition of service: this tells docker we want to work in the swarm cluster. The behaviour is almost the same as what we have seen so far, but we could not attach a terminal to it in the usual way. Before digging deeper, let's see what happened:

docker service ls
ID            NAME  REPLICAS  IMAGE                   COMMAND
cx0n4fmzhnry  app1  0/1       sbraer/aspnetcorelinux

The image has been downloaded and is about to start. After a few moments we can call the API and see the result:

curl localhost:5000/api/systeminfo
[{"guid":"4160a903-dc66-4660-aafc-5ec8c9549869"}]

And we can do this from every machine in the cluster. What if we wanted to run more copies of this app?

docker service scale app1=5

Done: docker will now create four more containers, distributed between the two machines:

docker service ps app1
ID                         NAME    IMAGE                   NODE      DESIRED STATE  CURRENT STATE             ERROR
6x8j50wn9475jc70fop0ezy7s  app1.1  sbraer/aspnetcorelinux  osboxes1  Running        Running 6 minutes ago
1cydg0cr7re8suxluh2k7y0kc  app1.2  sbraer/aspnetcorelinux  osboxes2  Running        Preparing 51 seconds ago
dku0anrmfbscbrmxce9j7wcnn  app1.3  sbraer/aspnetcorelinux  osboxes2  Running        Preparing 51 seconds ago
5vupi73j7jlbjmbzpmg1gsypr  app1.4  sbraer/aspnetcorelinux  osboxes1  Running        Running 44 seconds ago
e5a6xofjmxhcepn60xbm9ef7x  app1.5  sbraer/aspnetcorelinux  osboxes1  Running        Running 44 seconds ago

Once they are up, we will see that Docker swarm is able to balance all the requests (the guid shows that a different process is answering):

osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"4160a903-dc66-4660-aafc-5ec8c9549869"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"6c9f1637-7990-4162-b69e-623afee378e6"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"853b6cbb-6394-4a2e-87b9-2f9a7fa2af06"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"4160a903-dc66-4660-aafc-5ec8c9549869"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"6c9f1637-7990-4162-b69e-623afee378e6"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"853b6cbb-6394-4a2e-87b9-2f9a7fa2af06"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"4160a903-dc66-4660-aafc-5ec8c9549869"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"6c9f1637-7990-4162-b69e-623afee378e6"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"853b6cbb-6394-4a2e-87b9-2f9a7fa2af06"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"4160a903-dc66-4660-aafc-5ec8c9549869"}]

Now I want to add an extra piece of information to the API response. I decide to add the date and time as well. I create a new branch, master2, make the change, commit, and build a new docker image with the dev tag. What if I wanted to update the five instances running on my machines? Docker swarm does all of this for me:

docker service update --image sbraer/aspnetcorelinux:dev app1

Docker will now stop the containers one at a time, update them and start them again:

docker service ps app1
ID                         NAME        IMAGE                       NODE      DESIRED STATE  CURRENT STATE            ERROR
6x8j50wn9475jc70fop0ezy7s  app1.1      sbraer/aspnetcorelinux      osboxes1  Running        Running 10 minutes ago
2g98f5qnf3tbtr83wf5yx0vcr  app1.2      sbraer/aspnetcorelinux:dev  osboxes2  Running        Preparing 5 seconds ago
1cydg0cr7re8suxluh2k7y0kc   \_ app1.2  sbraer/aspnetcorelinux      osboxes2  Shutdown       Shutdown 4 seconds ago
dku0anrmfbscbrmxce9j7wcnn  app1.3      sbraer/aspnetcorelinux      osboxes2  Running        Preparing 4 minutes ago
5vupi73j7jlbjmbzpmg1gsypr  app1.4      sbraer/aspnetcorelinux      osboxes1  Running        Running 4 minutes ago
e5a6xofjmxhcepn60xbm9ef7x  app1.5      sbraer/aspnetcorelinux      osboxes1  Running        Running 4 minutes ago

Giving it the time to update everything, here is the result:

curl localhost:5000/api/systeminfo
[{"guid":"55d63afe-0b67-47d4-a1d2-4fb0b9b83bef","dateTime":"2016-11-26T21:14:20.411148+00:00"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"e02997ad-ea05-418f-9be4-c1a9b71bff85","dateTime":"2016-11-26T21:14:25.617665+00:00"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"ff0b1dfa-42e5-4725-ab11-6fdb83488ace","dateTime":"2016-11-26T21:14:27.157971+00:00"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"0a7578fb-d7cf-4c6f-a1fa-07deb7cddbc0","dateTime":"2016-11-26T21:14:27.789131+00:00"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"ce87341b-d445-49f6-ae44-0a62a844060e","dateTime":"2016-11-26T21:14:28.303101+00:00"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"55d63afe-0b67-47d4-a1d2-4fb0b9b83bef","dateTime":"2016-11-26T21:14:28.873405+00:00"}]

Want to stop everything?

docker service rm app1

Also interesting is docker's ability to guard against external disasters. If, in my case, I switched off machine 16, docker would notice, stop sending requests to the APIs on that machine, and immediately start the same number of lost containers on the other machines in the cluster.
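
A quick way to watch this happen from the manager while the worker goes down, as a sketch:

docker node ls          # the lost node's STATUS flips from Ready to Down
docker service ps app1  # its tasks reappear, rescheduled on the surviving nodes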

And what if I wanted to do as in the first example and see the log of one specific container? Unfortunately this is not so easy in docker. First of all you have to go to the machine where it is running and type:

docker ps -a

Then, as seen in the case of the unassigned --name parameter, read this command's output to extract the name, and then connect. Not exactly convenient.
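
The least painful route I know of on this version, sketched here:

docker service ps app1           # from a manager: shows which node runs each task
docker ps --filter name=app1     # on that node: recover the actual container name
docker logs <container name>     # finally, read its output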

So, is everything magnificent? No, because as some may have guessed, there is a basic problem in managing clustered images in docker swarm. We have seen that we can scale an image and docker automatically deploys it on this or other machines; but, going back to our API example, when does an instance become reachable through docker's internal load balancing? Here the problem arises: docker keeps the container being created detached from the internal load balancer only until it is started, not until it is actually ready. If the service, as in this case, is slow to come up because it downloads dependencies and compiles, what happens? Simple: docker exposes instances that cannot yet handle a response:

curl localhost:5000/api/systeminfo
[{"guid":"55d63afe-0b67-47d4-a1d2-4fb0b9b83bef","dateTime":"2016-11-26T21:14:20.411148+00:00"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
curl: (6) Couldn't resolve host 'localhost'
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
curl: (6) Couldn't resolve host 'localhost'
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
curl: (6) Couldn't resolve host 'localhost'
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"ce87341b-d445-49f6-ae44-0a62a844060e","dateTime":"2016-11-26T21:14:28.303101+00:00"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
curl: (6) Couldn't resolve host 'localhost'

And here is the first problem... How to solve it? Luckily there is a simple solution. In the Dockerfile that builds the image we can use a dedicated instruction that makes the image available only when a command returns a positive answer. Here is the new Dockerfile:

# Example from AZ
FROM microsoft/dotnet:1.0.1-sdk-projectjson
MAINTAINER "AZ"
COPY ["./src/MVC5ForLinuxTest2/", "/app"]
COPY ["./start.sh", "."]
RUN chmod +x ./start.sh
HEALTHCHECK CMD curl --fail http://localhost:5000/api/systeminfo || exit 1
CMD ["./start.sh"]

HEALTHCHECK does nothing more than tell docker whether the container is working properly, and it does so by running a command that checks the service - in my case it makes a request to the API, and only if the reply is positive is the container attached to docker's load balancer. This is also handy for verifying that our API has not stopped working for reasons of its own; in that case it can alert docker to the problem. Perfect... Let's try it all out; here is the output after adding more instances of the same webapp:

curl localhost:5000/api/systeminfo
[{"guid":"55d63afe-0b67-47d4-a1d2-4fb0b9b83bef","dateTime":"2016-11-26T21:14:20.411148+00:00"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
curl: (6) Couldn't resolve host 'localhost'
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
curl: (6) Couldn't resolve host 'localhost'

But what is going on? Unfortunately in the current version, 1.12, this check does not work correctly in a swarm cluster. From my own tests, the healthcheck seems to send its request to the whole cluster, which obviously answers positively... Unfortunately there is no workaround, but luckily this bug has been fixed in version 1.13 (now in RC, and expected to be released by mid December). Indeed, with version 1.13 installed the problem disappears - I verified it myself. Speaking of this latest version, automatic image rollback has also been added, but I have not yet been able to check whether it works. Moreover - about time - an experimental command has been added to view logs inside the swarm.
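
When debugging this kind of thing on a single node, it helps to ask docker what the HEALTHCHECK is currently reporting; a small sketch, reusing the container name from the first part of the post:

docker inspect --format '{{.State.Health.Status}}' azdotnet   # starting, healthy or unhealthy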

Time for some personal conclusions: with version 1.13 everything is so easy that it makes any other choice for distributing your services pale in comparison; Google's kubernetes looks far too complicated for this, and even the solution I had shown with Consul seems much more convoluted. The future also appears to lie in containers (Microsoft too is implementing everything in Azure to make their use as easy as possible). Needless to say, the procedure described above works perfectly both for a small web farm and if you decide to move to the cloud.

Interesting... and then some...






          snappy sensors   
Sensors are an important part of IoT. Phones, robots and drones all have a slew of sensors. Sensor chips are everywhere, doing all kinds of jobs to help and entertain us. Modern games and game consoles can thank sensors for some wonderfully active games.

Since I became involved with sensors and wrote QtSensorGestures as part of the QtSensors team at Nokia, sensors have only gotten cheaper and more prolific.

I used Ubuntu Server, snappy, a raspberry pi 3, and the senseHAT sensor board to create a senseHAT sensors snap. Of course, this currently only runs in devmode on raspberry pi3 (and pi2 as well).

To future proof this, I wanted to get sensor data all the way up to QtSensors, for future QML access.

I now work at Canonical. Snappy is new and still in heavy development, so I did run into a few issues. First up: QFactoryLoader, which finds and loads plugins, was not looking in the correct spot. For some reason, it uses $SNAP/usr/bin as its QT_PLUGIN_PATH. I got around this for now by using a wrapper script and setting QT_PLUGIN_PATH to $SNAP/usr/lib/arm-linux-gnueabihf/qt5/plugins

The second issue was that QSensorManager could not see its configuration file in /etc/xdg/QtProject, which is not accessible to a snap. So I used the wrapper script to set up XDG_CONFIG_DIRS as $SNAP/etc/xdg

[NOTE] I just discovered there is a part named "qt5conf" that can be used to set up Qt's env vars by using the included command qt5-launch to run your snap's commands.

Since there is no libhybris in Ubuntu Core, I had to decide which QtSensors backend to use. I could have used sensorfw, or maybe iio-sensor-proxy, but RTIMULib already worked for senseHAT. It was easier to write a QtSensors plugin that used RTIMULib than to add RTIMULib support to sensorfw. iio-sensor-proxy is more for laptop-like machines and lacks many sensors.
RTIMULib uses a configuration file that needs to be in a writable area, to hold additional device-specific calibration data. Luckily, one of its functions takes a directory path to look in. Since I was creating the plugin, I made it use a new variable, SENSEHAT_CONFIG_DIR, which I could then set in the wrapper script.
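
Putting the pieces together, a minimal sketch of such a wrapper script; the binary name and the choice of $SNAP_USER_DATA are my assumptions, not necessarily what the published snap ships:

#!/bin/sh
# point QFactoryLoader at the real Qt plugin directory
export QT_PLUGIN_PATH=$SNAP/usr/lib/arm-linux-gnueabihf/qt5/plugins
# let QSensorManager find its configuration under the snap
export XDG_CONFIG_DIRS=$SNAP/etc/xdg
# writable spot for RTIMULib's device calibration data
export SENSEHAT_CONFIG_DIR=$SNAP_USER_DATA
exec "$SNAP/usr/bin/sensehat-app" "$@"   # hypothetical binary name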

This also runs in confinement without devmode, but involves a simple sensors snapd interface.
One of the issues I can already see is that there are a myriad of ways to access sensors. Different kernel interfaces - iio, sysfs, evdev - and different middleware - android SensorManager/hybris, libhardware/hybris, sensorfw and others I either cannot speak of or do not know about.

Once the snap goes through a review, it will live here: https://code.launchpad.net/~snappy-hwe-team/snappy-hwe-snaps/+git/sensehat, but for now the working code is at my sensehat repo.

Next up to snapify, the Matrix Creator sensor array! Perhaps I can use my sensorfw snap or iio-sensor-proxy snap for that.
          oh ya!   
Andrew Morton (kernel hacker), during his keynote address at linux.conf.au 2005, when asked what desktop he uses, said: "I won't use a desktop that developers are so stupid that they try to write gui code in C"
          responding via blog!   
wooT gotta love blog conversations!

A lovely response to Lorn Potter of Trolltech

I just love that people hold Trolltech to some higher standard. For some, nothing TT does is ever good enough; or Trolltech has a special, evil GPL which somehow isn't truly open source, since TT doesn't have public source development repositories.

TT releases the Greenphone with some closed source and closed kernel drivers and gets a backlash; the Neo is released with closed kernel drivers and a license that allows closed source, and is touted as the first open source linux phone.

umm excuse me! LGPL is not about free and open source software! hello! "Dont lend your hand to raise no flag atop no ship of fools."

One of the differences is that Trolltech is not a hardware company; FIC is. TT contracted the hardware from a vendor, and what you see is what Trolltech was given. The Neo was designed by FIC.
          LGPL and the Neo 1973   
The Neo 1973 phone that runs Openmoko is being touted as the first open source phone, and that they are dedicated to open source...

For one, Trolltech's Greenphone was the first open source phone. Granted, the kernel is less than completely open, as are the phone libraries, but Qtopia is as open source as any.

For two... if they are so dedicated to open source, why choose a gui library that is LGPL'd? Even RMS says do not use the LGPL!
Why you shouldn't use the Lesser GPL for your libraries

As you all know, the LGPL is not about open source; it's about letting commercial businesses keep proprietary code while still using open source, i.e. not give back to the community, i.e. rip you off.

OpenMoko: if you are so dedicated to open source and keeping it that way, why choose GTK? Qt and Qtopia would be a much better option; Qtopia is a mature and stable phone interface, and it is GPL. All you would have had to do is write the phone libraries, instead of everything.

But, looking at your choice of gui toolkits, you like doing things the hard way.
          Windows Vista: Kernel Changes - BitLocker, Code Integrity   

Originally posted on: http://geekswithblogs.net/sdorman/archive/2006/06/18/82252.aspx

BitLockerTM Drive Encryption

BitLocker allows the entire OS volume to be encrypted as well as any other volumes. In order to do this, a 1.5 GB unencrypted system volume is required.

BitLocker requires Trusted Platform Module (TPM) v1.2 or a USB device and USB-capable BIOS and is implemented as a file filter driver that sits just above the volume manager drivers.

There are several supported modes for storing the decryption key:

  • TPM locked with signature of boot files
  • TPM locked with user-specified PIN
  • external USB flash device

Code Integrity Verification

The operating system loader and the kernel now perform code signature checks. On 64-bit x64 platforms, all kernel mode code must be signed and the identity of all kernel mode binaries is verified. The system also audits events for integrity check failures.

On 32-bit platforms, the administrator is prompted to install unsigned code. Load-time checks are done on all kernel mode binaries, but if unsigned code is allowed to load you won't be able to play protected high-definition multimedia content.


          Windows Vista: Kernel Changes - Shadows of Reliability, Performance and Scalability   

Originally posted on: http://geekswithblogs.net/sdorman/archive/2006/06/18/82251.aspx

Performance and Scalability

Vista makes fewer and larger disk reads for page faults and system cache read-ahead and has removed the 64KB limit. Fewer, faster, and larger disk writes for the system page file and mapped file I/O reduce the page file fragmentation and allow a larger cluster size.

The CPU usage has also been improved by providing improvements in the concurrency management within the kernel.

Windows Error Reporting (WER)

Vista is a more robust and resilient operating system that provides a read-only system cached view of the registry which protects it from being overwritten by drivers and helps reduce data loss in page crashes.

Prior to Vista, unhandled exceptions were handled in the context of the thread incurring the exception. This relied on the thread stack being valid and could result in the “silent death” of applications when the stack was corrupted.

In Vista, unhandled exceptions are sent to the Windows Error Reporting service, which launches Werfault.exe. This replaces Dwwin.exe (Doctor Watson), and permits WER to be invoked for threads that are too corrupted to invoke their unhandled exception handling.

Volume Shadow Copy

Windows Vista now uses Volume Shadow Copy for System Restore and Previous Versions. This creates a point-in-time copy-on-write snapshot of live volumes and solves the problem of open files not being backed up.

The Previous Versions tab was introduced with the Windows Server 2003 "Shadow Copies for Shared Folders" feature.

Volume shadow copy now uses the kernel transaction manager for consistent cross-volume snapshots. Snapshots are taken once per day and when system restore points are taken.

Other Reliability Features

The kernel now supports the concept of a “flight data recorder” with the introduction of the circular kernel context logger.

There are new system events for virtual memory exhaustion, which can be used to help capture and report user-mode memory leaks.

The Restart Manager enables most applications and services to be shutdown and restarted to unblock access to DLLs needing to be replaced. This feature may finally allow seamless replacement of in-use DLLs, reducing the number of times a reboot is necessary at the end of an install.

For the developers, there are new debugger APIs that allow for “wait chain traversal” to help find and report deadlocks.


          Windows Vista: Kernel Changes - Kernel Transactions   

Originally posted on: http://geekswithblogs.net/sdorman/archive/2006/06/18/82249.aspx

Kernel Transaction Manager (KTM)

Before Vista, applications had to do a lot of hard work to recover from errors during the modification of files and registry keys. Windows Vista implements a generalized transaction manager called the Kernel Transaction Manager (KTM) which provides “all or nothing” transaction semantics. This means that changes are committed only when the associated transaction is completed and commits.

The KTM is extensible through third-party resource managers and coordinates between the transaction clients (the applications) and the resource managers.

The registry and NTFS have been enhanced to provide transaction semantics across all operations; this is used by the Windows Update service and the System Protection services.

Vista also picks up the Common Log File System (Clfs.sys) introduced in Windows Server 2003 R2, which provides efficient transaction logging facilities.

Transaction APIs

Transactions can span modification across one or many registry keys, files, and volumes. By using the Distributed Transaction Coordinator (DTC) transactions can coordinate changes across files, registry, databases, and MSMQ.

Transactions are relatively easy to use in Vista with the introduction of the new transaction command, which allows scripts to participate in the transaction process.

The Windows API also has a new set of API functions:

  • CreateTransaction
  • SetCurrentTransaction
  • CommitTransaction
  • RollbackTransaction

The kernel has IoCreateFile, which now takes an ExtraCreateParameters argument that specifies the transaction handle.


          Windows Vista: Kernel Changes - Wakeup, wakeup, wakeup!   

Originally posted on: http://geekswithblogs.net/sdorman/archive/2006/06/18/82247.aspx

Up until Vista, an application or a driver could prevent the system from entering a sleep mode (standby or hibernate), often because of a bug or an overly aggressive power management policy. The problem with this was that the user might not know the system hadn't entered the appropriate sleep state and could eventually lose data.

Vista no longer queries processes when entering sleep states like previous versions of Windows did, and has reduced the timeout for user-mode notifications to 2 seconds (down from 20 seconds). In addition, drivers cannot veto the transition into a sleep state.

Hopefully, these changes will make going to sleep a lot more peaceful.


          Comment on Linux 4.12 receives second release candidate by Linus Torvalds releases last Linux kernel 4.12 RC - Open Source For You   
[…] original schedule of the Linux 4.12 development, we can expect the final release next week. Meanwhile, the final RC update has emerged with just a handful of minor […]
          ANDROID   
Android phones are becoming ever more popular worldwide and are serious competition for established handset vendors such as Nokia, Blackberry and iPhone.
But ask the average Indonesian "What is Android?" and most people will have no idea; those who do know tend to be geeks who keep up with technology.
This is because Indonesians generally only know three phone brands: Blackberry, Nokia, and "everything else" :)

A few things have kept Android from being (fully) accepted by the Indonesian market, among them:

  • Most Android phones use touchscreen input, which is not very popular in Indonesia,
  • Android needs a very fast internet connection to be used to its full potential, while internet from Indonesian mobile operators is not all that reliable,
  • And finally the perception that Android is harder to operate than other phones like Nokia or Blackberry.

What is Android

Android is an operating system used on smartphones and tablet PCs. It fills the same role as the Symbian OS on Nokia, iOS on Apple devices and BlackBerry OS.
Android is not tied to a single phone brand; well-known vendors already shipping Android include Samsung, Sony Ericsson, HTC, Nexus, Motorolla and others.
Android was first developed by a company called Android Inc., which was acquired in 2005 by the Internet giant Google. Android is built on a modified Linux kernel, and each release is codenamed after a dessert.
Android's main advantages are that it is free and open source, which lets Android smartphones sell for less than a Blackberry or iPhone even though the hardware features Android offers are better.
Android's main features include WiFi hotspot, multi-touch, multitasking, GPS, accelerometers, Java support, support for many network types (GSM/EDGE, IDEN, CDMA, EV-DO, UMTS, Bluetooth, Wi-Fi, LTE & WiMAX), plus the usual basic phone capabilities.

Android versions currently in circulation

Eclair (2.0 / 2.1)

The first Android version adopted by many smartphones; Eclair's headline features were a complete overhaul of the user interface structure and look, and it was the first Android version to support the HTML5 format.

Froyo / Frozen Yogurt (2.2)

Android 2.2 was released with 20 new features, including speed improvements, Wi-Fi hotspot tethering and support for Adobe Flash.

Gingerbread (2.3)

The main changes in version 2.3 include a UI update, improvements to the soft keyboard and copy/paste, better power management, and Near Field Communication support.

Honeycomb (3.0, 3.1 dan 3.2)

An Android version aimed at gadgets/devices with large screens such as tablet PCs; Honeycomb's new features were support for multicore processors and hardware-accelerated graphics.
The first tablet running Honeycomb was the Motorola Xoom, released in February 2011.
Google decided to temporarily close access to the Honeycomb source code, to stop handset makers from installing Honeycomb on smartphones.
With earlier Android versions, many companies had shoehorned Android into tablet PCs, giving users a poor experience and hurting Android's image.

Ice Cream Sandwich (4.0)

Android 4.0 Ice Cream Sandwich was announced on 10 May 2011 at the Google I/O Developer Conference (San Francisco) and officially released on 19 October 2011 in Hong Kong. "Android Ice Cream Sandwich" can be used on both smartphones and tablets. The main features added in Android 4.0 are Face Unlock, Android Beam, a major User Interface overhaul, and a native screen resolution of 720p (high definition).

Android Market Share

In 2012 around 630 million smartphones will be sold worldwide, of which an estimated 49.2% will run the Android OS.
Google's own figures currently show 500,000 Android handsets being activated every day around the world, a number that keeps growing by 4.4% per week.
Platform                      API Level   Distribution
Android 3.x (Honeycomb)       11          0.9%
Android 2.3.x (Gingerbread)   9-10        18.6%
Android 2.2 (Froyo)           8           59.4%
Android 2.1 (Eclair)          5-7         17.5%
Android 1.6 (Donut)           4           2.2%
Android 1.5 (Cupcake)         3           1.4%
Distribution of Android versions in use worldwide, as of June 2011

Android applications

Android has a large developer base for application development, which makes Android's functionality broader and more varied. The Android Market, managed by Google, is where Android applications, free or paid, are downloaded.
Although not recommended, Android's performance and features can be pushed further by rooting the device. Features such as wireless tethering, wired tethering, uninstalling crapware, overclocking the processor, and installing custom flash ROMs become available on a rooted Android.

          KDE and Plasma: Release of Plasma 5.10.3, New ISO for Slackware Live PLASMA5, and KDE for FreeBSD   
  • Plasma 5.10.3

    Tuesday, 27 June 2017. Today KDE releases a Bugfix update to KDE Plasma 5, versioned 5.10.3. Plasma 5.10 was released in May with many feature refinements and new modules to complete the desktop experience.

  • KDE Plasma 5.10.3 Fixes Longstanding NVIDIA VT/Suspend Issue

    KDE Plasma 5.10.3 has been released as the newest bug-fix update to Plasma 5. For NVIDIA Linux users in particular this upgrade should be worthwhile.

  • New ISO for Slackware Live PLASMA5, with Stack Clash proof kernel and Plasma 5.10.2
  • GSoC - Week 3-4

    Today, I will talk about my work on Krita during weeks 3-4 of the coding period.

  • Let there be color!

    I've contributed to KDevelop in the past, on the Python plugin, and so far working on the Rust plugin, my impressions from back then were pretty much spot-on. KDevelop has one of the most well thought-out codebases I've seen. Specifically, KDevPlatform abstracts over different programming languages incredibly well and makes writing a new language plugin a very pleasant experience.

  • Daemons and friendly Ninjas

    There’s quite a lot of software that uses CMake as a (meta-)buildsystem. A quick count in the FreeBSD ports tree shows me 1110 ports (over a thousand) that use it. CMake generates buildsystem files which then direct the actual build — it doesn’t do building itself.

    There are multiple buildsystem-backends available: in regular usage, CMake generates Makefiles (and does a reasonable job of producing Makefiles that work for GNU Make and for BSD Make). But it can generate Ninja, or Visual Studio, and other buildsystem files. It’s quite flexible in this regard.

    Recently, the KDE-FreeBSD team has been working on Qt WebEngine, which is horrible. It contains a complete Chromium and who knows what else. Rebuilding it takes forever.


          Linux Kernel NFSv4 Server /fs/nfsd/nfs4proc.c nfsd4_layout_verify UDP Packet denial of service   

A vulnerability, which was classified as problematic, has been found in Linux Kernel (the affected version is unknown). This issue affects the function nfsd4_layout_verify of the file /fs/nfsd/nfs4proc.c of the component NFSv4 Server. The manipulation as part of a UDP Packet leads to a denial of service vulnerability (crash). Using CWE to declare the problem leads to CWE-404. Impacted is availability.

The weakness was shared 05/05/2017 by Jani Tuovila as confirmed git commit (GIT Repository). The advisory is shared for download at git.kernel.org. The identification of this vulnerability is CVE-2017-8797. The attack may be initiated remotely. A single authentication is needed for exploitation. Technical details of the vulnerability are known, but there is no available exploit. The pricing for an exploit might be around USD $0-$5k at the moment (estimation calculated on 06/28/2017). The following code is the reason for this vulnerability:

if (!(exp->ex_layout_types & (1 << layout_type))) {

Applying a patch is able to eliminate this problem. The bugfix is ready for download at git.kernel.org. A possible mitigation has been published immediately after the disclosure of the vulnerability. The vulnerability will be addressed with the following lines of code:

if (layout_type >= LAYOUT_TYPE_MAX ||
   !(exp->ex_layout_types & (1 << layout_type))) {

The vulnerability is also documented in the vulnerability database at SecurityTracker (ID 1038790).

CVSSv3

VulDB Base Score: 4.3
VulDB Temp Score: 4.1
VulDB Vector: CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:L/E:X/RL:O/RC:C
VulDB Reliability: High

CVSSv2

VulDB Base Score: 3.5 (CVSS2#AV:N/AC:M/Au:S/C:N/I:N/A:P)
VulDB Temp Score: 3.0 (CVSS2#E:ND/RL:OF/RC:C)
VulDB Reliability: High

CPE

Exploiting

Class: Denial of service / Crash (CWE-404)
Local: No
Remote: Yes

Availability: No

Price Prediction: steady
Current Price Estimation: $0-$5k (0-day) / $0-$5k (Today)

Countermeasures

Recommended: Patch
Status: Official fix
Reaction Time: 0 days since reported
0-Day Time: 0 days since found
Exposure Time: 0 days since known

Patch: git.kernel.org

Timeline

05/05/2017 Advisory disclosed
05/05/2017 Countermeasure disclosed
06/27/2017 SecurityTracker entry created
06/28/2017 VulDB entry created
06/28/2017 VulDB last update

Sources

Advisory: git.kernel.org
Researcher: Jani Tuovila
Status: Confirmed

CVE: CVE-2017-8797 (mitre.org) (nvd.nist.org) (cvedetails.com)

SecurityTracker: 1038790 - Linux Kernel NFSv4 Server Input Validation Flaw in pNFS LAYOUTGET Command Lets Remote Users Cause the Target Service to Crash

Entry

Created: 06/28/2017
Entry: 77.6% complete

          Linux Kernel up to 4.11.7 Message Queue msnd_pinnacle.c snd_msnd_interrupt buffer overflow   

A vulnerability, which was classified as critical, was found in Linux Kernel up to 4.11.7. Affected is the function snd_msnd_interrupt of the file sound/isa/msnd/msnd_pinnacle.c of the component Message Queue. The manipulation with an unknown input leads to a buffer overflow vulnerability. CWE is classifying the issue as CWE-119. This is going to have an impact on confidentiality, integrity, and availability.

The weakness was disclosed 06/28/2017. This vulnerability is traded as CVE-2017-9984 since 06/27/2017. Local access is required to approach this attack. A single authentication is required for exploitation. Technical details are known, but there is no available exploit. The structure of the vulnerability defines a possible price range of USD $5k-$25k at the moment (estimation calculated on 06/28/2017).

There is no information about possible countermeasures known. It may be suggested to replace the affected object with an alternative product.

The entries 102883 and 102884 are pretty similar.

CVSSv3

VulDB Base Score: 5.3
VulDB Temp Score: 5.3
VulDB Vector: CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:L/E:X/RL:X/RC:X
VulDB Reliability: High

CVSSv2

VulDB Base Score: 4.1 (CVSS2#AV:L/AC:M/Au:S/C:P/I:P/A:P)
VulDB Temp Score: 4.1 (CVSS2#E:ND/RL:ND/RC:ND)
VulDB Reliability: High

CPE

Exploiting

Class: Buffer overflow (CWE-119)
Local: Yes
Remote: No

Availability: No

Price Prediction: steady
Current Price Estimation: $0-$5k (0-day) / $0-$5k (Today)

Countermeasures

Recommended: no mitigation known
0-Day Time: 0 days since found

Timeline

06/27/2017 CVE assigned
06/28/2017 Advisory disclosed
06/28/2017 VulDB entry created
06/28/2017 VulDB last update

Sources


CVE: CVE-2017-9984 (mitre.org) (nvd.nist.org) (cvedetails.com)
See also: 102883, 102884

Entry

Created: 06/28/2017
Entry: 72% complete

          Linux Kernel up to 4.11.7 Message Queue msnd_pinnacle.c intr buffer overflow   

A vulnerability was found in Linux Kernel up to 4.11.7 and classified as critical. Affected by this issue is the function intr of the file sound/oss/msnd_pinnacle.c of the component Message Queue. The manipulation with an unknown input leads to a buffer overflow vulnerability. Using CWE to declare the problem leads to CWE-119. Impacted is confidentiality, integrity, and availability.

The weakness was shared 06/28/2017. This vulnerability is handled as CVE-2017-9986 since 06/27/2017. The attack needs to be approached locally. A single authentication is needed for exploitation. There are known technical details, but no exploit is available. The current price for an exploit might be approx. USD $5k-$25k (estimation calculated on 06/28/2017).

There is no information about possible countermeasures known. It may be suggested to replace the affected object with an alternative product.

Similar entries are available at 102882 and 102883.

CVSSv3

VulDB Base Score: 5.3
VulDB Temp Score: 5.3
VulDB Vector: CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:L/E:X/RL:X/RC:X
VulDB Reliability: High

CVSSv2

VulDB Base Score: 4.1 (CVSS2#AV:L/AC:M/Au:S/C:P/I:P/A:P)
VulDB Temp Score: 4.1 (CVSS2#E:ND/RL:ND/RC:ND)
VulDB Reliability: High

CPE

Exploiting

Class: Buffer overflow (CWE-119)
Local: Yes
Remote: No

Availability: No

Price Prediction: steady
Current Price Estimation: $0-$5k (0-day) / $0-$5k (Today)

Countermeasures

Recommended: no mitigation known
0-Day Time: 0 days since found

Timeline

06/27/2017 CVE assigned
06/28/2017 Advisory disclosed
06/28/2017 VulDB entry created
06/28/2017 VulDB last update

Sources


CVE: CVE-2017-9986 (mitre.org) (nvd.nist.org) (cvedetails.com)
See also: 102882, 102883

Entry

Created: 06/28/2017
Entry: 72% complete

          Linux Kernel up to 4.11.7 Message Queue msnd_midi.c snd_msndmidi_input_read buffer overflow   

A vulnerability has been found in Linux Kernel up to 4.11.7 and classified as critical. Affected by this vulnerability is the function snd_msndmidi_input_read of the file sound/isa/msnd/msnd_midi.c of the component Message Queue. The manipulation with an unknown input leads to a buffer overflow vulnerability. The CWE definition for the vulnerability is CWE-119. As an impact it is known to affect confidentiality, integrity, and availability.

The weakness was presented 06/28/2017. This vulnerability is known as CVE-2017-9985 since 06/27/2017. Attacking locally is a requirement. A single authentication is necessary for exploitation. Technical details of the vulnerability are known, but there is no available exploit. The pricing for an exploit might be around USD $5k-$25k at the moment (estimation calculated on 06/28/2017).

There is no information about possible countermeasures known. It may be suggested to replace the affected object with an alternative product.

See 102882 and 102884 for similar entries.

CVSSv3

VulDB Base Score: 5.3
VulDB Temp Score: 5.3
VulDB Vector: CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:L/E:X/RL:X/RC:X
VulDB Reliability: High

CVSSv2

VulDB Base Score: 4.1 (CVSS2#AV:L/AC:M/Au:S/C:P/I:P/A:P)
VulDB Temp Score: 4.1 (CVSS2#E:ND/RL:ND/RC:ND)
VulDB Reliability: High

CPE

Exploiting

Class: Buffer overflow (CWE-119)
Local: Yes
Remote: No

Availability: No

Price Prediction: steady
Current Price Estimation: $0-$5k (0-day) / $0-$5k (Today)

Countermeasures

Recommended: no mitigation known
0-Day Time: 0 days since found

Timeline

06/27/2017 CVE assigned
06/28/2017 Advisory disclosed
06/28/2017 VulDB entry created
06/28/2017 VulDB last update

Sources


CVE: CVE-2017-9985 (mitre.org) (nvd.nist.org) (cvedetails.com)
See also: 102882, 102884

Entry

Created: 06/28/2017
Entry: 72% complete

          Microsoft Will Embed EMET Into Windows 10 Starting This Fall   
After previously stating it was dropping support for EMET in July 2018, Microsoft announced yesterday plans to embed its vaunted EMET security toolkit in the Windows 10 kernel during the operating system's major update, scheduled for October-November 2017. [...]
          MGTOW Crossing The Red Pill Sea   

There is an article over on Breitbart London concerning men dumping society in general and women in particular that has attracted quite a bit of attention. As of this morning it has some 14,000 comments and counting. I wasn't going to do a post on it because I thought someone else would pick up on it, and the stupid page kept crashing from all the ads loading anyway, but since I don't see anyone else flagging it I'll put it up here.

I did manage to grab the article in between crashes, so I'm going to put the thing up without my interruptions, simply because it is attracting enough attention that the Feminazis in their zeal for equality may attempt to have it removed.

Some speech is more equal than others.

The Sexodus, Part 1: The Men Giving Up On Women And Checking Out Of Society

"My generation of boys is f**ked," says Rupert, a young German video game enthusiast I've been getting to know over the past few months. "Marriage is dead. Divorce means you're screwed for life. Women have given up on monogamy, which makes them uninteresting to us for any serious relationship or raising a family. That's just the way it is. Even if we take the risk, chances are the kids won't be ours. In France, we even have to pay for the kids a wife has through adulterous affairs. 


"In school, boys are screwed over time and again. Schools are engineered for women. In the US, they force-feed boys Ritalin like Skittles to shut them up. And while girls are favoured to fulfil quotas, men are slipping into distant second place.

"Nobody in my generation believes they're going to get a meaningful retirement. We have a third or a quarter of the wealth previous generations had, and everyone's fleeing to higher education to stave off unemployment and poverty because there are no jobs.
"All that wouldn't be so bad if we could at least dull the pain with girls. But we're treated like paedophiles and potential rapists just for showing interest. My generation are the beautiful ones," he sighs, referring to a 1960s experiment on mice that supposedly predicted a grim future for the human race.

After overpopulation ran out of control, the female mice in John Calhoun's "mouse universe" experiment stopped breeding, and the male mice withdrew from the company of others entirely, eating, sleeping, feeding and grooming themselves but doing little else. They had shiny coats, but empty lives.

"The parallels are astounding," says Rupert.

Never before in history have relations between the sexes been so fraught with anxiety, animosity and misunderstanding. To radical feminists, who have been the driving force behind many tectonic societal shifts in recent decades, that's a sign of success: they want to tear down the institutions and power structures that underpin society, never mind the fall-out. Nihilistic destruction is part of their road map.

But, for the rest of us, the sight of society breaking down, and ordinary men and women being driven into separate but equal misery, thanks to a small but highly organised group of agitators, is distressing. Particularly because, as increasing numbers of social observers are noticing, an entire generation of young people—mostly men—are being left behind in the wreckage of this social engineering project.

Social commentators, journalists, academics, scientists and young men themselves have all spotted the trend: among men of about 15 to 30 years old, ever-increasing numbers are checking out of society altogether, giving up on women, sex and relationships and retreating into pornography, sexual fetishes, chemical addictions, video games and, in some cases, boorish lad culture, all of which insulate them from a hostile, debilitating social environment created, some argue, by the modern feminist movement.

You can hardly blame them. Cruelly derided as man-children and crybabies for objecting to absurdly unfair conditions in college, bars, clubs and beyond, men are damned if they do and damned if they don't: ridiculed as basement-dwellers for avoiding aggressive, demanding women with unrealistic expectations, or called rapists and misogynists merely for expressing sexual interest.

Jack Rivlin is editor-in-chief of student tabloid media start-up The Tab, a runaway success whose current strap-line reads: "We'll stop writing it when you stop reading it." As the guiding intelligence behind over 30 student newspapers, Rivlin is perhaps the best-placed person in the country to observe this trend in action. And he agrees that the current generation of young men find it particularly difficult to engage with women.

"Teenage boys always have been useless with girls, but there's definitely a fear that now being well-intentioned isn't enough, and you can get into trouble just for being clumsy," he says. "For example, leaning in for a kiss might see you branded a creep, rather than just inept."

The new rules men are expected to live by are never clearly explained, says Rivlin, leaving boys clueless and neurotic about interacting with girls. "That might sound like a good thing because it encourages men to take the unromantic but practical approach of asking women how they should behave, but it causes a lot of them to just opt out of the game and retreat to the sanctuary of their groups of lads, where being rude to women gets you approval, and you can pretty much entirely avoid one-on-one socialising with the opposite sex."

"There are also a lot of blokes who ignore women because they are scared and don't know how to act. It goes without saying that boys who never spend any time alone with women are not very good at relationships."

Rivlin has noticed the increased dependence on substances, normally alcohol, that boys are using to calm their nerves. "I've heard a lot of male students boast about never having experienced sober sex," he says. "They're obviously scared, which is natural, but they would be a lot less scared and dysfunctional if they understood 'the rules.'"

The result? "A lot of nice but awkward young men are opting out of approaching women because there is no opportunity for them to make mistakes without suffering worse embarrassment than ever."
Most troublingly, this effect is felt more acutely among poorer and less well educated communities, where the package of support resources available to young men is slight. At my alma mater, the University of Cambridge, the phenomenon barely registers on the radar, according to Union society president Tim Squirrell.

"I don't think I've really noticed a change recently," he says. "This year has seen the introduction of mandatory consent workshops for freshers, which I believe is probably a good thing, and there's been a big effort by the Women's Campaign in particular to try and combat lad culture on campus.
The atmosphere here is the same as it was a year ago - mostly nerdy guys who are too afraid to approach anyone in the first place, and then a smaller percentage who are confident enough to make a move. Obviously women have agency too, and they approach men in about the same numbers as they do elsewhere. There certainly haven't been any stories in [campus newspaper] The Tab about a sex drought on campus."

"I think that people are probably having as much sex as ever," he adds. At Cambridge, of course, that may not mean much, and for a variety of socioeconomic and class-based reasons the tribes at Oxford and Cambridge are somewhat insulated from the male drop-out effect.

But even at such a prestigious university with a largely middle- and upper-class population, those patronising, mandatory "consent" classes are still being implemented. Squirrell, who admits to being a feminist with left-of-centre politics, thinks they're a good idea. But academics such as Camille Paglia have been warning for years that "rape drives" on campus put women at greater risk, if anything.

Women today are schooled in victimhood, taught to be aggressively vulnerable and convinced that the slightest of perceived infractions, approaches or clumsy misunderstandings represents "assault," "abuse" or "harassment." That may work in the safe confines of campus, where men can have their academic careers destroyed on the mere say-so of a female student.

But, according to Paglia, when that woman goes out into the real world without the safety net of college rape committees, she is left totally unprepared for the sometimes violent reality of male sexuality. And the panics and fear-mongering are serving men even more poorly. All in all, education is becoming a miserable experience for boys.

In schools today across Britain and America, boys are relentlessly pathologised, as academics were warning as long ago as 2001. Boyishness and boisterousness have come to be seen as "problematic," with girls' behaviour a gold standard against which these defective boys are measured. When they are found wanting, the solution is often drugs.

One in seven American boys will be diagnosed with Attention Deficit Hyperactivity Disorder (ADHD) at some point in their school career. Millions will be prescribed a powerful stimulant, such as Ritalin, for the crime of being born male. The side effects of these drugs can be hideous and include sudden death.

Meanwhile, boys are falling behind girls academically, perhaps because relentless and well-funded focus has been placed on girls' achievement in the past few decades and little to none on the boys who are now achieving lower grades, fewer honors, fewer degrees and less marketable information economy skills. Boys' literacy, in particular, is in crisis throughout the West. We've been obsessing so much over girls, we haven't noticed that boys have slipped into serious academic trouble.
So what happened to those boys who, in 2001, were falling behind girls at school, were less likely to go to college, were being given drugs they did not need and whose self-esteem and confidence issues haven't just been ignored, but have been actively ridiculed by the feminist Establishment that has such a stranglehold on teaching unions and Left-leaning political parties?

In short: they grew up, dysfunctional, under-served by society, deeply miserable and, in many cases, entirely unable to relate to the opposite sex. It is the boys who were being betrayed by the education system and by culture at large in such vast numbers between 1990 and 2010 who represent the first generation of what I call the sexodus, a large-scale exit from mainstream society by males who have decided they simply can't face, or be bothered with, forming healthy relationships and participating fully in their local communities, national democracies and other real-world social structures.

A second sexodus generation is gestating today, potentially with even greater damage being done to them by the onset of absurd, unworkable, prudish and downright misandrist laws such as California's "Yes Means Yes" legislation—and by third-wave feminism, which dominates newspapers like the Guardian and new media companies like Vox and Gawker, but which is currently enjoying a hysterical last gasp before women themselves reject it by an even greater margin than the present 4 out of 5 women who say they want nothing to do with the dreaded f-word.

The sexodus didn't arrive out of nowhere, and the same pressures that have forced so many millennials out of society exert pressure on their parents' generation, too. One professional researcher in his late thirties, with whom I have been conversing on this topic for some months, puts it spicily: "For the past, at least, 25 years, I've been told to do more and more to keep a woman. But nobody's told me what they're doing to keep me.

"I can tell you as a heterosexual married male in management, who didn’t drop out of society, the message from the chicks is: 'It's not just preferable that you should fuck off, but imperative. You must pay for everything and make everything work; but you yourself and your preferences and needs can fuck off and die.'"

Women have been sending men mixed messages for the last few decades, leaving boys utterly confused about what they are supposed to represent to women, which perhaps explains the strong language some of them use when describing their situation. As the role of breadwinner has been taken away from them by women who earn more and do better in school, men are left to intuit what to do, trying to find a virtuous mean between what women say they want and what they actually pursue, which can be very different things.

Men say the gap between what women say and what they do has never been wider. Men are constantly told they should be delicate, sensitive fellow travellers on the feminist path. But the same women who say they want a nice, unthreatening boyfriend go home and swoon over simple-minded, giant-chested, testosterone-saturated hunks in Game of Thrones. Men know this, and, for some, this giant inconsistency makes the whole game look too much like hard work. Why bother trying to work out what a woman wants, when you can play sports, masturbate or just play video games from the comfort of your bedroom?

Jack Donovan, a writer based in Portland who has written several books on men and masculinity, each of which has become a cult hit, says the phenomenon is already endemic among the adult population. "I do see a lot of young men who would otherwise be dating and marrying giving up on women," he explains, "Or giving up on the idea of having a wife and family. This includes both the kind of men who would traditionally be a little awkward with women, and the kind of men who aren't awkward with women at all.

"They've done a cost-benefit analysis and realised it is a bad deal. They know that if they invest in a marriage and children, a woman can take all of that away from them on a whim. So they use apps like Tinder and OK Cupid to find women to have protected sex with and resign themselves to being 'players,' or when they get tired of that, 'boyfriends.'"

He goes on: "Almost all young men have attended mandatory sexual harassment and anti-rape seminars, and they know that they can be fired, expelled or arrested based more or less on the word of any woman. They know they are basically guilty until proven innocent in most situations."
Donovan lays much of the blame for the way men feel at the door of the modern feminist movement and what he sees as its disingenuousness. "The young men who are struggling the most are conflicted because they are operating under the assumption that feminists are arguing in good faith," he says, "When in fact they are engaged in a zero-sum struggle for sexual, social, political and economic status—and they're winning.

"The media now allows radical feminists to frame all debates, in part because sensationalism attracts more clicks than any sort of fair or balanced discourse. Women can basically say anything about men, no matter how denigrating, to a mix of cheers and jeers."

That has certainly been the experience of several loose coalitions of men in the media recently, whether scientists outraged by feminist denunciations of Dr Matt Taylor, or video gamers campaigning under the banner of press ethics who saw their movement smeared as a misogynistic hate group by mendacious, warring feminists and so-called "social justice warriors".

Donovan has views on why it has been so easy for feminists to triumph in media battles. "Because men instinctively want to protect women and play the hero, if a man writes even a tentative criticism of women or feminism, he's denounced by men and women alike as some kind of extremist scoundrel. The majority of "men's studies" and "men's rights" books and blogs that aren't explicitly pro-feminist are littered with apologies to women. 

"Books like The Myth of Male Power and sites like A Voice for Men are favourite boogeymen of feminists, but only because they call out feminists' one-sided hypocrisy when it comes to pursing 'equality.'"

Unlike modern feminists, who are driving a wedge between the sexes, Men's Rights Activists "actually seem to want sexual equality," he says. But men's studies authors and male academics are constantly tip-toeing around and making sure they don't appear too radical. Their feminine counterparts have no such forbearance, of course, with what he calls "hipster feminists," such as the Guardian's Jessica Valenti, parading around in t-shirts that read: "I BATHE IN MALE TEARS." "I'm a critic of feminism," says Donovan. "But I would never walk around wearing a shirt that says, 'I MAKE WOMEN CRY.' I'd just look like a jerk and a bully."

It's the contention of academics, sociologists and writers like Jack Donovan that an atmosphere of relentless, jeering hostility to men from entitled middle-class media figures, plus a few confused male collaborators in the feminist project, has been at least partly responsible for a generation of boys who simply don't want to know.

In Part 2, we'll meet some of the men who have "checked out," given up on sex and relationships and sunk into solitary pursuits or alcohol-fuelled lad culture. And we'll discover that the real victims of modern feminism are, of course, women themselves, who have been left lonelier and less satisfied than they have ever been.

A little bit of shaming going on here but overall it's true. Men do get confused by all the conflicting messages, and I think that is intentional. It is designed that way so men can be taken advantage of in their confusion, or Blue Pill state, but sooner or later every man does his own cost-benefit analysis, swallows that Red Pill and figures out that it isn't worth all the trouble.

"I can tell you as a heterosexual married male in management, who didn’t drop out of society, the message from the chicks is: 'It's not just preferable that you should fuck off, but imperative. You must pay for everything and make everything work; but you yourself and your preferences and needs can fuck off and die.'"

That dude is right: that is exactly their attitude, and exactly the attitude I was running into. As a result of this prevailing attitude, the exodus by men is what follows.

All that shaming and manipulation to keep men on the plantation does work at first but inadvertently results in them leaving it. It gets to be too much, too soon and too obvious. Again, I think that is intentional but on a level higher than societal where women and society are just the useful idiots in the process to keep men and women at war with one another.

My war is over though and that is the brilliant part of MGTOW. By simply leaving the battlefield I win and so does each man that does the same. That doesn't mean the parasites won't come after us, they most certainly will but they will have to catch us first and we already have a head start on them.

MGTOW... OMW To The Promised Land.

UPDATE:

Paul Joseph Watson of InfoWars picks up on the story.



Yes, he does use the term White Knight and gets that it doesn't work for men. I suspect he has been researching the Man-O-Sphere and will likely cover it at some point. He certainly isn't a Feminist Mangina, and he understands how Feminism, with all its insanity, is the tip of the spear in the Divide and Conquer strategy of those determined to destroy humanity.

          gpivtrig   

This package contains a kernel module to trigger a CCD camera and a (double-cavity Nd:YAG) laser for so-called (Digital) Particle Image Velocimetry (PIV) and other image-analysing techniques for fluid flows.

The software sends TTL trigger pulses to the camera, connected to the first pin of the parallel port, and to the lasers, connected to the second and third pins, with a prescribed delay. As the application runs under RealTimeLinux and RTAI, the timing of the trigger pulses is well defined (i.e. it is hardly affected by the CPU load of the system).
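
For illustration, the heart of such a trigger sequence is just writing bit patterns to the parallel port data register with a well-defined delay in between. A minimal sketch in plain kernel C (not the actual gpivtrig source; the port address, bit assignments and pulse width below are assumptions, and under RTAI the waits would be real-time periodic waits rather than busy-wait udelay() calls):

#include <linux/delay.h>   /* udelay() */
#include <asm/io.h>        /* outb() */

#define LPT_DATA   0x378   /* classic parallel port data register (assumed) */
#define CAM_BIT    0x01    /* camera trigger on data bit 0 (assumed) */
#define LASER_BITS 0x06    /* laser cavities on data bits 1 and 2 (assumed) */

/* Fire one trigger cycle: camera first, lasers after the prescribed delay. */
static void fire_trigger(unsigned int cam_to_laser_us)
{
        outb(CAM_BIT, LPT_DATA);              /* raise the camera trigger line */
        udelay(cam_to_laser_us);              /* prescribed camera-to-laser delay */
        outb(CAM_BIT | LASER_BITS, LPT_DATA); /* fire both laser cavities */
        udelay(10);                           /* keep the lines up briefly */
        outb(0x00, LPT_DATA);                 /* drop all trigger lines */
}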

The build system is very preliminary, as I don't have any idea how to do it properly. Any suggestions are welcome.


          Easy File & Folder Protector - Update   
Protect files and folders on local media under Windows 95/98/ME/NT/2000/XP with Easy File & Folder Protector, at the Windows kernel level. You can deny access to certain files and folders, or hide them securely from viewing and searching.
          Porting the Linux 3.x kernel, stuck at Calibrating delay loop...   
I'm trying to port kernel 3.16.44 to an imx.287 development board, using the in-kernel mxs_defconfig and imx28-evk.dts. I changed the serial port mapping in the dts, enabled the kernel's low-level debug, built the kernel, and downloaded it to the board with mfgtool. The boot output stops at Calibrating delay loop... Is there a problem with my dts, or is my kernel configuration wrong? Advice from anyone who has debugged this would be appreciated ...
          Debian 9.0 "Stretch" Released   

Debian Linux 9.0 has been released, updating its software stack to current versions. The kernel is Linux 4.9, the latest longterm release, which came out at the end of 2016. There are many smaller software changes as well, for example:

  • Switched to MariaDB as the default instead of MySQL
  • Switched back to Firefox/Thunderbird
  • 94% of the source code can now be compiled reproducibly
  • The X display no longer has to run as root
  • Major software version bumps, e.g. Apache 2.4.25, Chromium 59.0, PHP 7.0, Ruby 2.3, Python 3.5.3, Golang 1.7

The mips64el CPU architecture is newly supported, while PowerPC has been dropped. The support period is 5 years from the release date.

Source: Debian

          Android to move to a newer Linux kernel version, with kernel 4.4 in the works   

One long-standing problem with Android is that it uses a very old Linux kernel (Android is on 3.18, released in 2014, while the latest kernel version is 4.11).

Dave Burke, head of Android engineering, addressed this in an interview with Ars Technica: kernel 3.18 is a long-term support (LTS) kernel, and its support already ended in January 2017.

The root of the problem is that the Linux and Android LTS support windows don't line up: Linux supports an LTS kernel for 2 years, while Android provides security support for 3 years. The fix being pursued is for the Android team to negotiate a longer support period with the Linux kernel team.

The Android team is currently using kernel 4.4 internally. The long-term goal is to move to newer kernels, but that also depends on chipset vendors (such as Qualcomm) providing support on their side.

Source: Ars Technica; image from Android



          Re: [PATCH 8/9] RISC-V: User-facing API   
James Hogan writes: (Summary) Hi Palmer,

On Wed, Jun 28, 2017 at 11:55:37AM -0700, Palmer Dabbelt wrote: ...
+ unsigned long, unsigned long);
You suggested in the cover letter this wasn't muxed any longer; maybe you should have a prototype for each of the cmpxchg syscalls instead? Should be possible to test using madvise(MADV_DONTNEED) (which I think makes pages use the zero page with copy-on-write).
Also, if this is going to be included on SMP kernels (where I gather proper atomics are available), does it need an SMP-safe version too which uses proper atomics?
+ unsigned int *ptr;
should that be unsigned long __user *?
+ preempt_enable();
Likewise to other comments above.
This doesn't look much different to sysriscv_cmpxchg32 on 32-bit.
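
For background, the pattern under discussion (emulating a compare-and-exchange syscall on a uniprocessor kernel by disabling preemption around a user-memory load/compare/store) looks roughly like this. This is an illustrative sketch with hypothetical names, not the actual RISC-V patch; real code would also need to handle retrying across page faults:

#include <linux/types.h>
#include <linux/errno.h>
#include <linux/preempt.h>
#include <linux/uaccess.h>

/* UP-only cmpxchg emulation: with preemption disabled, no other task can
 * touch the word between the load and the store, so plain accesses are
 * atomic enough. Returns the old value, or -EFAULT on a bad user pointer. */
static long up_cmpxchg32(u32 __user *ptr, u32 old, u32 new)
{
        u32 cur;
        long ret = -EFAULT;

        preempt_disable();
        if (get_user(cur, ptr))
                goto out;                /* fault: address not readable */
        ret = cur;
        if (cur == old && put_user(new, ptr))
                ret = -EFAULT;           /* fault on the write-back */
out:
        preempt_enable();
        return ret;
}
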
          Re: [PATCH v2] um: Avoid longjmp/setjmp symbol clashes with libpth ...   
Florian Fainelli writes: On 06/05/2017 12:34 PM, Richard Weinberger wrote:
> It will be part of the next pull request.
Humm okay, did you apply the patch in one of your kernel trees on git.kernel.org or somewhere else?

          Re: [PATCH] timekeeping: Use proper timekeeper for debug code   
Stafford Horne writes: (Summary) There was one issue where the per_cpu internal timers (used as clocksource) were not in sync; this pointed it out.
There is another issue right now when switching from jiffies to the openrisc clocksource, which is maybe ok because each has a different starting point.
[    0.160000] clocksource: Switched to clocksource openrisc_timer
[    0.220000] INFO: timekeeping: Cycle offset (4294173293) is larger than the 'openrisc_timer' clock's 50% safety margin (2147483647)
[    0.220000] timekeeping: Your kernel is still fine, but is feeling a bit nervous
Let me know if you want me to add this to the commit message in a v2.
          Re: [PATCH 5/5] dt-bindings: Document the Rockchip RGA bindings   
Rob Herring writes: (Summary) On Mon, Jun 26, 2017 at 10:53:22PM +0800, Jacob Chen wrote:
> .../devicetree/bindings/media/rockchip-rga.txt | 36 ++++++++++++++++++++++
Should be under bindings/gpu/
> + rga: rga@ff680000 {
gpu@...
> + status = "disabled";
Don't show status in examples.
          Re: [PATCH 17/20] dt-bindings: serial: stm32: add dma using note   
Rob Herring writes: On Mon, Jun 26, 2017 at 12:49:16PM +0000, Bich HEMON wrote:
> From: Bich Hemon <bich.hemon@st.com>
> Signed-off-by: Gerald Baeza <gerald.baeza@st.com>
> ---
> .../devicetree/bindings/serial/st,stm32-usart.txt | 22 ++++++++++++++++++++++
> 1 file changed, 22 insertions(+)
Sounds like broken DMA to me. Acked-by: Rob Herring <robh@kernel.org>
          Re: [PATCH v3] cifs: Do not modify mid entry after submitting I/O ...   
Pavel Shilovsky writes: 2017-06-28 15:02 GMT-07:00 Long Li <longli@exchange.microsoft.com>:
Thanks! Please fix the comment style to
/*
 * ....
 */
Other than that - Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com>
--
Best regards,
Pavel Shilovsky
          Re: [alsa-devel] [PATCH v2 12/12] ASoC: Fix use-after-free at card ...   
Robert Jarzmik writes: Mark Brown <broonie@kernel.org> writes:

Hi Mark,

The first patch is only moving around a .h file. I was expecting a review before reposting a v3.

Cheers.

          Re: [RESEND PATCH] critical patch to fix pci-tree build bot failure   
Stephen Rothwell writes: Hi Bjorn,

On Wed, 28 Jun 2017 15:43:17 -0500 Bjorn Helgaas <helgaas@kernel.org> wrote:
> need to add the patch yourself, Stephen.
Excellent, thanks.

          Re: linux-next: build failure after merge of the block tree   
Stephen Rothwell writes: (Summary) Hi Jens,

On Wed, 28 Jun 2017 09:11:32 -0600 Jens Axboe <axboe@kernel.dk> wrote:
> both a u64 put and get user.
Yes, put_user is fine (it does two 4-byte moves). The asm is there to do the 8-byte get_user, but the surrounding C code uses an unsigned long for the destination in all cases (some other arches do the same). I don't remember why it is like that.
> we copy things in and out for that set of fcntls.
OK, thanks.

          Re: [PATCH ALT4 V2 2/2] audit: filter PATH records keyed on filesy ...   
Richard Guy Briggs writes: (Summary) On 2017-06-28 15:08, Paul Moore wrote:
> think that removes my concerns.
That's fixed in my cleanup that I'm waiting to push. In fact it is "filesystem" in the userspace patch.

- RGB

--
Richard Guy Briggs <rgb@redhat.com>
Sr. S/W Engineer, Kernel Security, Base Operating Systems
Remote, Ottawa, Red Hat Canada
IRC: rgb, SunRaycer
Voice: +1.647.777.2635, Internal: (81) 32635

          Re: [PATCH v3 3/5] i2c: pca-platform: add devicetree awareness   
Chris Packham writes: (Summary) On 28/06/17 21:20, Andy Shevchenko wrote:
> -gpios", GPIOD_OUT_LOW);
Nothing particular. Happy to change this.
So what's the current best practice with this? I gather the intent is to keep the kernel size down by only including the of_match tables on platforms that actually use a device tree. Have we now reached a point where there are more dt-aware platforms than unaware ones?

          Re: [PATCH] workqueue: Ensure that cpumask set for pools created a ...   
Michael Bringmann writes: I will try that patch tomorrow. My only concern about that is the use of WARN_ON(). As I may have mentioned in my note of 6/27, I saw about 600 instances of the warning message just during boot of the PowerPC kernel. I doubt that we want to see that on an ongoing basis.

Michael

On 06/13/2017 03:10 PM, Tejun Heo wrote:
Thanks.
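
As an aside, the stock way to keep a sanity check like this from flooding the log hundreds of times is to warn only once, or to rate-limit the message. A generic sketch using standard kernel helpers (not the actual workqueue patch; the condition and function are made up):

#include <linux/bug.h>
#include <linux/printk.h>
#include <linux/types.h>

static void check_pool_cpumask(bool cpumask_ok)
{
        /* Fires at most once per boot instead of ~600 times. */
        WARN_ON_ONCE(!cpumask_ok);

        /* Alternatively, if each occurrence matters, bound the rate. */
        if (!cpumask_ok)
                printk_ratelimited(KERN_WARNING
                                   "workqueue: pool cpumask not updated\n");
}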

          How do I lock a hard drive with hdparm? :: Kernel & Hardware   



          ecryptfs export over NFS :: Kernel & Hardware   



          Microsoft releases 20K lines of source code to Linux kernel   

In an effort to bolster performance under Linux in its hypervisor software that competes with VMware — Windows Server 2008 Hyper-V and Windows Server 2008 R2 Hyper-V — Microsoft today released 20,000 lines of source […]

The post Microsoft releases 20K lines of source code to Linux kernel appeared first on Geek.com.


          Ksplice gives Linux users 88% of kernel updates without rebooting   

Have you ever wondered why some updates or installs require a reboot, and others don’t? The main reason relates to kernel-level (core) services running in memory which either have been altered by the […]

The post Ksplice gives Linux users 88% of kernel updates without rebooting appeared first on Geek.com.


          Mid-Level C++/Linux Software Engineer job in Herndon, VA   
Seeking a mid-level C++ Embedded Engineer for a permanent job in Herndon, VA.

Job Description:
This is an entry to mid-level software engineer position, developing embedded software for our hub linecard product using C/C++ on a Linux platform. The position provides an excellent opportunity to work for a rapidly growing company in the technology hub of Northern Virginia, and to gain experience in full life cycle software development, learn embedded system development skills, and to advance your career with the company's growth. The ideal candidate must have a BS degree in CS or EE, plus a couple of years of experience. MS degree preferred.

Required Experience:
• 3+ years of related software development experience
• hands-on C/C++ development experience, STL
• OO knowledge and programming experience in C++
• hands-on software development experience on Linux/Unix
• TCP/IP programming experience
• flexibility and ability to learn and use new technologies
• ability to work well in a team environment as well as independently and get things done

Extremely beneficial:
• experience in writing Unix shell or Python scripts
• cross-platform Linux/Unix programming experience
• knowledge of Linux kernel and drivers

Beneficial:
• experience with GDB
• experience with Git
• network programming

Education:
• Bachelor's degree in Computer Science or similar is required
• Master's degree in Computer Science or equivalent is preferred

Interested in this C++ Embedded Engineer job in Herndon, VA? Apply here!
          Booting to a raid volume using jfs   
While upgrading an older Debian server, I had to make some modifications to get the kernel booting. This shouldn't normally be necessary, but for some reason the hardware was not being auto-detected. I'm almost 100% sure I messed up some other step that made this necessary. The system's root drive is raid1 and uses jfs. The following changes were made:

Added: /etc/initramfs-tools/modules:
jfs
raid1
md

Created:
/etc/initramfs-tools/scripts/local-top/probes
#!/bin/sh
# Load the modules needed to see the root device
modprobe raid1
modprobe md
modprobe jfs

# Create the md device nodes (block major 9) and assemble the arrays
mknod /dev/md2 b 9 2
mknod /dev/md0 b 9 0
mknod /dev/md1 b 9 1
mdadm --assemble --scan

The above script must be made executable (chmod +x). I also copied it to /etc/initramfs-tools/scripts/local-top/probe/local-premount/ just to make sure it ran.

Modified /usr/sbin/mkinitramfs in the #modutils section:
copy_exec /sbin/mdadm /sbin
copy_exec /sbin/fsck.jfs /sbin
mkdir -p "${DESTDIR}/etc/mdadm"
cp -a /etc/mdadm/mdadm.conf "${DESTDIR}/etc/mdadm/"

The above changes were done by booting into the rescue mode of a debian installer and mounting.
          Linux Next Graphing   

A while back Rusty posted about graphing the size of the daily linux-next patches.

Since we are heading towards the merge window for 2.6.33 and hence sfr has been getting home later and later, I thought I'd take another look at it.

The dodgy script I've been using to create this is out here. This also creates the raw data file which is here.

You can see some periods where there was no linux-next release, like around the 2.6.27 release. You can also see that linux-next is never zero size. Either Linus doesn't take everything in linux-next, or new stuff for the following release is coming in before the last release is done with. sfr mentioned that there is some stuff in linux-next that's been in there for ages and hasn't been merged up to Linus.

There is a difference between how Rusty got his data and how I did. Rusty used the size of the bz2 patch out on kernel.org. These patches are against the release and release candidates (ie. against 2.6.30, 2.6.30-rc1, 2.6.30-rc2, etc). I'm using the linux-next git tree to determine how big linux-next is for that day. Since sfr bases linux-next off Linus' git origin each day, I take the difference between Linus' git origin and the linux-next release to determine the size. Since Linus' origin is at least as new as the RCs, my size is never larger than Rusty's. This is especially noticeable in the merge window (the ~2 weeks between the release and rc1). In the merge window, Rusty's size continues to grow until rc1 is released, but mine starts to go down almost immediately after the main release as Linus starts merging trees into his git origin and making life easier for sfr. Also, Rusty is using patch size (bz2 compressed) and I'm using the number of lines changed (insertions + deletions).
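
For the curious, the measurement boils down to something like the following (a rough sketch; the refs are placeholders for Linus' git origin and that day's linux-next tag, and "git diff --shortstat" can omit fields, which a robust version would handle):

#include <stdio.h>

/* Sum insertions + deletions between two git refs via `git diff --shortstat`,
 * whose output looks like:
 * " 123 files changed, 456 insertions(+), 78 deletions(-)" */
int main(void)
{
        unsigned long files = 0, ins = 0, del = 0;
        FILE *p = popen("git diff --shortstat origin/master next-20091123", "r");

        if (!p)
                return 1;
        fscanf(p, " %lu files changed, %lu insertions(+), %lu deletions(-)",
               &files, &ins, &del);
        printf("linux-next size: %lu lines changed\n", ins + del);
        return pclose(p) == -1;
}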

It seems that maintainers are working/merging new code constantly throughout the cycle. Ideally (yeah, coz I'm the authority on this!), we wouldn't see a lot of new code hit linux-next just before the merge window opens, as new code should hopefully be being tested at that point. If the rate of new code were slowing down before the merge window, we'd see the line flatten out to horizontal before the release. I guess we're hacking until the last minute, who would have thought!?!? ;-)

The peaks of linux-next seem to be a reasonable predictor of the relative size of the following kernel release. ie. if linux-next is bigger, so is the following release, although it's not perfect (ie. 2.6.29 vs 32)
Release        Actual line changes   linux-next changes          linux-next/Actual %
2.6.29         1879345               1222635 (peak at 2.6.28)    65%
2.6.30         1547035               1168031 (peak at 2.6.29)    76%
2.6.31         1419059               1118892 (peak at 2.6.30)    79%
2.6.32 (-rc8)  1618369               1247456 (peak at 2.6.31)    77%

These last two ideas are interesting to combine. When a release is delayed, it results in more code for the following release, since code is being developed right up until the merge window opens. So delaying a release is a double-edged sword: it improves the current release (more testing/debugging), but makes the following release bigger. If we were developing earlier in the cycle and then just testing as the merge window approached, we wouldn't have this phenomenon. I suspect this is already known, but hopefully this backs it up a bit.

I haven't attempted to confirm what Rusty noticed about hackers working more on weekends but if someone wants to analyse the raw data....

Since I've got this scripted up, so I'll endeavour to keep this graph updated out here.


          Why use ccontrol..   
ccontrol is great! Here are some reasons why (in dot points, 'coz everything is better in dot points!):
  • It's very easy to setup. It even does auto probing of your network to find distcc hosts.
  • You can use it to compile any package without the need to screw around with Makefiles etc. Just type make everywhere you want to compile
  • It's been tuned so that interactive performance on your local machine is not compromised during compiles.
    Examples:
  • I used crosstool to install the 64-bit PowerPC GCC cross compiler on my x86 laptop. I didn't need to do anything special to get it to work under ccontrol, just downloaded crosstool and built as usual.
  • I can now build a 64-bit POWER Linux kernel on my laptop without playing with Makefiles or even having to specify any parameters to make (even ARCH=). This also uses distcc as you'd want. ccontrol works out what you want based on the directory name.
          [Free] 2017(June) Ensurepass Pass4sure VMware 2V0-731 Actual Tests 1-10   
    Ensurepass 2017 June VMware Official New Released 2V0-731 Q&As: 100% Free Download! 100% Pass Guaranteed!
    http://www.ensurepass.com/2V0-731.html
    VMware Certified Professional 7 – Cloud Management and Automation

    QUESTION 1
    Which two components are delivered by a load-balanced vRealize Automation virtual appliance? (Choose two.)
    A. Proxy Agent
    B. DEM Worker
    C. Identity Manager
    D. vRealize Orchestrator
    Correct Answer: CD

    QUESTION 2
    What is the correct method for adjusting an existing approval policy that is assigned?
    A. Edit the existing policy and update the policy version number.
    B. Copy the policy, edit the copy, and assign the new copy of the policy.
    C. Create a new policy, delete the old policy, then assign the new policy.
    D. Export the policy, edit in an XML editor, and import the policy.
    Correct Answer: B

    QUESTION 3
    Which vCloud Air authorization level is required to add vCloud Air as an Endpoint in vRealize Automation?
    A. VPC User
    B. End User Role
    C. Read-only Administrator
    D. Virtual Infrastructure Administrator
    Correct Answer: D

    QUESTION 4
    Which two IIS authentication settings must be enabled on the vRealize Automation IaaS web server? (Choose two.)
    A. Negotiate Provider
    B. Windows Authentication Kernel Mode
    C. Windows Authentication Extended Protection
    D. Anonymous Authentication
    Correct Answer: AB

    QUESTION 5
    What are two prerequisites when integrating vRealize Automation with NSX? (Choose two.)
    A. A vCloud Air endpoint has been created.
    B. A vRealize Orchestrator endpoint has been created.
    C. A vSphere endpoint with network and security has been created.
    D. An NSX endpoint has been created.
    Correct Answer: CD

    QUESTION 6
    Entitlements are specific to which vRealize Automation containers?
    A. Fabric Groups
    B. Business Groups
    C. Endpoints
    D. Tenants
    Correct Answer: B

    QUESTION 7
    What is one reason snapshots would NOT be available in the Clone from snapshot window when using the Linked Clone build Action?
    A. The source virtual machine should be a virtual machine template.
    B. The source virtual machine has multiple snapshots.
    C. Data collection must be run for the vSphere Endpoint to retrieve the snapshot information.
    D. Data collection must be run for the Compute Resource to retrieve the snapshot information.
    Correct Answer: B

    QUESTION 8
    Where are vRealize Automation property dictionary items defined?
    A. The Custom Property tab
    B. The Design tab
    C. The Infrastructure tab
    D. The Administration tab
    Correct Answer: C

    QUESTION 9
    A user receives a submission page with a red exclamation when attempting to submit a request. Which option best explains this behavior?
    A. The approver has NOT approved the request.
    B. The user exceeded the allowable resources for that request.
    C. Actions have NOT been assigned to that item.
    D. The user is NOT entitled to that resource.
    Correct Answer: B

    QUESTION 10
    Which option would explain why an error message is displayed when the vRealize Automation Software architect attempts to add the Load Init Data software component? ... Read More
              Module#prepend   

    Module#include

    When class A includes module M, the method lookup order becomes A → M → ....

    In other words, module M is inserted on the parent-class side of A.

    module M
      def hello
        puts "M hello"
      end
    end
    
    class A
      include M
    
      def hello
        super
        puts "A hello"
      end
    end
    
    A.new.hello
    # => M hello
    # => A hello
    
    p A.ancestors  # returns an array of self and its ancestors
    # => [A, M, Object, Kernel, BasicObject]
    

    Module#prepend

    Module#prepend inserts the module on the child side of the class.

    Therefore, a module's method can wrap an instance method.

    module M
      def hello
        puts "before"
        super
        puts "after"
      end
    end
    
    class A
      prepend M
    
      def hello
        puts "hello"
      end
    end
    
    A.new.hello
    # => before
    # => hello
    # => after
    
    p A.ancestors  # returns an array of self and its ancestors
    # => [M, A, Object, Kernel, BasicObject]
    

    The insertion order is intuitive.

    Module#include appends modules right behind the class (on the parent side).

    Module#prepend adds them at the front (on the child side).

    module M1; end
    module M2; end
    
    class A
      include M1  # A -> M1
      include M2  # A -> M2 -> M1 (appended right behind A)
    end
    
    class B
      prepend M1  # M1 -> B
      prepend M2  # M2 -> M1 -> B (added at the front)
    end
    
    p A.ancestors  # returns an array of self and its ancestors
    # => [A, M2, M1, Object, Kernel, BasicObject]
    
    p B.ancestors  # returns an array of self and its ancestors
    # => [M2, M1, B, Object, Kernel, BasicObject]
    

    With Module#include you couldn't execute something before a method runs, but with Module#prepend you can.

    Being able to add processing before and after a method (quite naturally, too) is handy!

    Reference links

    Ruby 2.0 : Module#prepend

    Module#prepend - The day alias_method_chain dies - I am Cruby!


              Oral History of Avie Tevanian   
    Interviewed by David Brock, Hansen Hsu and John Markoff on 2017-02-21 in Mountain View CA, X8111.2017 © Computer History Museum. Born of Armenian parents in 1961, into a working-class, entrepreneurial family, Avadis "Avie" Tevanian grew up in New England, the oldest of three boys. His dad was a machinist, and from a young age Avie and his brothers were into building things, but Avie alone showed a particular aptitude for mathematics. Having been introduced to a PDP-8 in high school, Avie enrolled at the University of Rochester after discovering they had a lab of Xerox Altos, on which he wrote several games and contributed to research. Avie continued on to graduate school at Carnegie Mellon University. Working under Professor Rick Rashid, another Rochester graduate, Avie started the Mach microkernel project, which eventually grew to a group of 12 people. Based on concepts from Rashid's Accent operating system, Mach was to be an improvement on Accent by targeting parallel processors, being highly portable, and being able to run BSD Unix programs. Engineers at Steve Jobs' NeXT Computer decided they wanted to use Mach for NeXT's operating system after they saw the work presented at a UNIX conference in 1986. Avie later attended a dinner in Palo Alto where Steve first relayed that interest. After finishing his d...
    Original video: https://m.youtube.com/watch?v=vwCdKU9uYnE
              Intel(R) 100 series chipset family CSME for WIndows 7   
    1. You need to install this driver manually; this method was tested and worked on an HP 15 Notebook.
    2. Kernel-Mode Driver Framework 1.11 (KB 2685811) must first be installed if you are using Windows 7*.
    3. To install this driver download the driver package from Intel website and unzip the driver file. We will need it later.

    Download Link:
    https://downloadcenter.intel.com/download/25395/Intel-ME-11-Management-Engine-Driver-for-Intel-NUC

    File Name : ME_Consumer_Win7_8.1_10_11.0.4.1186.zip

    4. Then open Device Manager and look for the Intel(R) 100 series chipset family CSME device (normally listed under Other devices).


    5. Right click and select update driver, then select browse my computer for driver software.
    6. Click browse then browse to the driver folder we unzipped earlier. Make sure to select include subfolder.
    7. Continue with on screen installation until finish and reboot.
              How to Find Out Which Version of Linux You Are Running   
    This story has moved to NerdBoys.com. Please read this story at its new location.

              Displaying Masqueraded Connections   
    This story has moved to NerdBoys.com. Please read this story at its new location.

              Followup to Testing with Mocks   
    Thanks to everyone who came to the Testing with Mocks TLC session at TechEd.

    Here are some links of things we touched on during the class. Please leave a comment if I missed something :)

    Test Smells:
    TDD Anti Patterns -- Be sure to read the comments, there are some valuable smells there too!

    Books:
    The Art of Unit Testing
    Working Effectively with Legacy Code
    TDD by Example (Kent Beck's book)

    Rhino Mocks:
    Ayende (creator of rhino mocks)
    Google Group
    Reference Guide

    Other .NET Testing Frameworks:
    nMock
    TypeMock
    Moq

    Test Runners:
    nUnit
    mbUnit

    Tools:
    Resharper
    TeamCity
    CruiseControl.NET
    CruiseControl.rb

    IoC Containers:
    Castle MicroKernel/Windsor
    Spring.Net
    StructureMap

    Community:
    Alt.Net
              Comment on Next Leap 42.3 Snapshot Equates to Release Candidate by Roman   
    A lot of patches and features have been backported by the kernel devs from 4.9, 4.10 and 4.11 to the 4.4.x LTS kernel in Leap 42.3
              Slackware 13.37 – /proc/sys/kernel/dmesg_restrict   
    Since kernel version 2.6.37 there has been a CONFIG_SECURITY_DMESG_RESTRICT mechanism ("Restrict unprivileged access to the kernel syslog"), which makes it possible to control whether ordinary system users should have access to the dmesg command used to display information from the kernel ring buffer. As Kees Cook of the Ubuntu Security Team noted, the kernel syslog (the aforementioned dmesg[8]) has remained one of the last places in […]
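
    In practice the knob is just a sysctl that root can flip at runtime (the equivalent of sysctl kernel.dmesg_restrict=1). A small sketch of checking and setting it from C:

    #include <stdio.h>

    int main(void)
    {
            const char *path = "/proc/sys/kernel/dmesg_restrict";
            int cur = '?';
            FILE *f = fopen(path, "r");

            if (f) {
                    cur = fgetc(f);   /* '0' = anyone may dmesg, '1' = CAP_SYSLOG only */
                    fclose(f);
            }
            printf("dmesg_restrict is currently: %c\n", cur);

            f = fopen(path, "w");
            if (!f)
                    return 1;         /* not root, or the option is not compiled in */
            fputs("1\n", f);          /* restrict dmesg to privileged users */
            fclose(f);
            return 0;
    }
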
              Candy Crap Corn   
    Posted By Fly Boy


    Costume parties, horror-movie marathons, haunted houses, daddy's beatings; these are all components of the greatest holiday known to man, Halloween. There is one item, synonymous with Halloween, that plagues the holiday vibe more than the greedy little douchebag who takes all the candy when the sign clearly states "Please take one." That item is none other than the infamous Candy Corn.

    Everything about this corn syrup and sugar combination screams DISAPPOINTING, from its unrepresentative name to its displeasing array of colors. Candy Corn is the treat kidnapping rapists give their captives for being "a good girl." It's the candy your grandmother gives you every year, because she's had an industrial-size bag since 1987. It's the corn that every mythical creature would find in their stool if they were to exist. Candy Corn is what you eat if you hate yourself.

    I read somewhere that one company in Texas produces enough Candy Corn each year to circle the earth 4.25 times if the kernels were laid end to end. What?...Why?...What population are they tending to? I am currently purchasing a plane ticket to Dallas to burn this factory to the ground. I will then bask in the fumes of charred sugar and corn syrup while I make ash angels in its remains. Meanwhile, everyone else should do their part and stop purchasing Candy Corn. If you are a fan of Candy Corn, and are amongst the population questioned above, then I graciously ask of you two favors: 1. cut yourself, and 2. get your fix during another holiday, like Kwanzaa. Halloween doesn't need Candy Corn in its arsenal of awesomeness.

              Still happy with the ASUS EeePC 701   
    Recently Eric asked on the LUG Vorarlberg mailing list about netbook experience. I wrote a lengthy reply summarizing my experiences with the ASUS EeePC 701. And I thought this is something I probably should share with more people than only one LUG:

    I ordered an ASUS EeePC 701 (4G) with US keyboard layout at digitec in Spring 2008, got it approximately one month later and posted a first resumé after one month in my blog.

    I’m still very happy with the EeePC 701, despite two commonly mentioned drawbacks (the small screen resolution and the small SSD – which I both don’t see as real problems) and some other minor issues.

    What matters

    • Very robust and compact case. And thanks to a small fan being the only moving part inside, the EeePC 701 is also very robust against mobile use.
    • Very pleasing always-in-my-daypack size (despite the 7" screen it’s the typical 9" netbook size) and easily held with one hand.
    • Black. No glossy display. Neither clear varnish nor piano lacquer. In short: no bathroom tile. Textured surface, so small scratches don't stick out and don't matter.
    • Debian (previously Lenny, now Sid) runs fine on it, even the webcam works out-of-the-box.
    • Despite all those neat features, it was fscking cheap at that time. And it was available without Windows.

    Nice to have

    • There’s power on the USB sockets even if the EeePC is turned off but the power supply is plugged in.
    • The speakers are impressively good and loud for their size. (But my demands with regard to audio are probably not too high, so audiophiles shouldn't run to ebay because of this. ;-)
    • It has three external USB sockets.

    What doesn’t matter

    • The small 7" 800×480 screen: I like small fonts and do most things inside a terminal anyway. And even with 800×480, those terminals are still much bigger than 80×25 characters. Only some applications and webpages have no heart for small screens.
    • The small disk size: Quite a lot of programs fit in 4 GB of disk space. Additionally I use tmpfs a lot. And music and video files are either on an external 500 GB Western Digital 2.5" "My Passport" disk (which I need quite seldom) or, much more often, come via sshfs and IPv6 from my home server anyway. :-)
    • The small keyboard: I just don’t have any problems with the size or layout (right shift right of the cursor up key, etc.) of the keyboard. Well, maybe except that any standard sized keyboard feels extremely large after having used the EeePC exclusively for some weeks. ;-)
    • The 900 MHz Intel Celeron underclocked to 630 MHz: It's enough for most of the things I do with the EeePC. The original 512 MB of RAM is also somewhat ok, but for using tmpfs with no swap space at all, 1 GB or 2 GB is surely the better choice.
    • A battery runtime of 2.5h to 3h is fine for me.

    What’s not so nice

    • The “n” key needs to be pressed slightly harder than the other keys, otherwise no “n” appears. So if one of my texts on average misses more “n”s than other letters, I typed it on the EeePC. ;-)
    • Home, End, Page-Up, and Page-Down need the Fn key. This means that these keys can only be used with two hands (or one very big hand and I have quite small hands). This is usually no problem and you get used to it. It’s just annoying if you hold the EeePC with one hand and try to type with the other.
    • What looks like a single mouse button is a seesaw and therefore two mouse buttons below one bar. This makes it quite hard to press both at the same time, e.g. for emulating a middle mouse button press. It worked in only about half of the cases I tried. My solution was to bind a key combination to emulate a middle mouse button in my window manager, ratpoison:
      bind y ratclick 2
      And that mouse button bar already fell off two times.
    • The battery reports only in 10% steps, and reporting in percentage instead of mAh is an ACPI standard violation, because reporting in percentage is only allowed for non-rechargeable batteries. It also doesn't report any charging and discharging rates. But in the meantime nearly all battery meters can cope with these hardware bugs. This was quite a problem in the early days.
    • Now, after approximately 1.5 years, the battery is slowly fritzing out: when charging, there are often only seconds between 10% and 40%. Rigorously using up all the power in the battery helped a little bit. It looks like some kind of memory effect, although the battery is labelled Li-Ion and not Ni-MH, and Li-Ion batteries are said to have no memory effect.
    • The SD card reader only works fine if you once completed the setup of the original firmware or set the corresponding BIOS switch appropriately. No idea why.

    Similar models

    Technically, most of this also applies to the EeePC 900SD (not the 901), which only differs in screen, resolution and disk size as well as CPU, but not in the case. So same size, same robustness, same battery, same mainboard, bigger screen, resolution and disk, and a faster CPU. (The 901 has a different CPU, a different battery, and a different, glossy and partially chromed case.) See Wikipedia for the technical specifications of all EeePC models.

    ASUS’ only big FAILure

    No longer selling most EeePCs with Linux and cowardly teaming up with Microsoft after having shown great courage in coming out with a Linux-only netbook. Well, you probably already know, but it's better without Windows

    So basically you can't get these really neat netbooks from ASUS anymore, and you get nearly no netbooks with Linux from ASUS in the stores anymore. It's a shame.

    Would I buy it again?

    Sure.

    Well, maybe I would also buy the 900SD, 900AX (replacing the harddisk with an SSD) or 702 (8G) instead of the 701, but basically they’re very similar. See Wikipedia for the differences between these EeePC models. And of course I still prefer the versions without Windows.

    But despite the low price, the EeePC 701 is surprisingly robust and still works as on the first day (ok, except the battery, the mouse button bar and the “n” key ;-), so I recently bought a second power supply (only white ones were available *grrrr*) and ordered a bigger third-party battery plus an adapter to charge the battery directly from the (second) power supply without the EeePC in between.

    What desktop do I use on the EeePC?

    None.

    I use ratpoison as window manager, uxterm, urxvt, and yeahconsole as terminal emulators (running zsh with grml based .zshrc even as root’s login shell :-), wicd-curses as network manager and xmobar (previously dzen2) with i3status as text-only panel. Installed editors are GNU Emacs 23, GNU Zile and nvi. (No vim. :-)

    And of course a netbook wouldn’t be a netbook if it wouldn’t have a lot of network applications installed. For me the most important ones are: ssh, scp, autossh, sshfs, miredo, conkeror, git, hg, and rsync.

              Can't resist this meme   
    Just stumbled over this meme at Adrian's (the meme seems to have been started involuntarily by madduck), and since I've been fascinated by how people choose hostnames since my early years at university, I can't resist adding my two cents to this meme.

    To be exact, I have two schemes, one for servers out there somewhere (Hetzner, xencon, etc.) and they’re all wordplays on their domain name noone.org, e.g. symlink.to.noone.org (short name “sym” :-), gateway.to.noone.org (usually an alias for one of the machines below), virtually.noone.org (always a virtual machine, initially UML, soon a Xen DomU), etc. So nothing for a quiz here.

    My other scheme is for all my machines at home and my mobile machines. I’ll start this list with the not so obvious hostnames, so the earlier you guess the scheme, the better you are (or the better you know me ;-). One more hint in advance: “(*)” means this attribute or fact made me choose the name for the machine and therefore can be used as hint for the scheme. :-)

    azam
    My first PC at all, a 386 with 25 MHz and MS-DOS. (Got named retroactively(*). Hadn’t hostnames at that time.)
    ak (pronounced as letters)
    Got it from my brother after he didn’t need it anymore. It initially was identical to azam, but once was upgraded to a 486. Still have the 386 board, though.
    azka
    My first self-bought computer, a pure SCSI system with an AMD K5-PR133 and 32 MB RAM. Initially it had SuSE 4.4 and Windows 95 on it. Still my last machine which had Windows installed! :-)
    m35
    Same case and same speed as azka. Used it for experimenting(*) with Sid years ago.
    azu
    Initially also an AMD K5-PR133, later replaced by a Pentium 90 and used as DSL router.
    azl
    An HP Vectra 386/25N book size mini desktop I saved from the scrapyard at Y_Plentyn before his (first) move to Munich. The cutest(*) 386 I ever saw.
    ayce
    A 386 with 387 co-processor(*) and solded 8 MB of RAM.
    ayca
    A 1992 Toshiba T6400C 486 laptop bought at VCFe 5.0.
    bijou
    My 1996 ThinkPad 760ED, which is still working and running Debian GNU/Linux 5.0 Lenny (I started with Debian 3.0 Woody on it and always dist-upgraded it! :-)
    gsa (pronounced as letters)
    My long-time desktop after azka. A Pentium II with 400 MHz and 578 MB of RAM at the end. Bought used at LinuxTag 2003, it worked until end of last year when it started to suddenly switch off more and more often and now refuses to boot at all. Hasn’t been replaced yet though. I mostly use my laptops at home since then.
    gsx (pronounced as letters)
    An AMD K6 with 500 MHz I got from maol and which was used as Symlink test server more than once. (It was the machine initially named symlink.to.noone.org because of that.)
    hy
    My 32 bit Sparc, a Hamilton Hamstation.
    hz (pronounced as letters)
    My 64 bit Sparc, an UltraSparc 5.
    tub
    An HP Apollo 9000 Series 400, model 400t from 1990.
    tpv (pronounced as letters, too ;-)
    My Zaurus SL-5500G.
    tryane
    A Unisys Acquanta CP mini desktop with a passively cooled(*) 200 MHz Pemtium MMX. Used as DSL router for while, but the power supply fan was too noisy.
    lna (pronounced as letters)
    A 233 MHz Alpha
    loadrunner
    An IBM ThinkPad A31 running Sid. I use it as beside terminal.
    pony
    A Compaq LTE5100 laptop with a Pentium 90 running Sid.
    dagonet
    A Sony Vaio laptop which ran Debian GNU/kFreeBSD until it broke.

    Those who know me quite well should already have guessed the scheme, even if they can't assign all the names. For all others, here's one name which doesn't exactly fit into the scheme but is still related in some way; you need knowledge of the theme's subject to know the relation:

    colani
    A big tower from the early 90s designed by Colani.

    Ok, and now the more obvious hostnames:

    rosalie
    A very compact Toshiba T1000LE 8086 laptop running ELKS and FreeDOS.
    amisuper
    Also an old Symlink test server from maol. He named it “dual”. 2x(*) Pentium I with 166 MHz. Unfortunately doesn’t boot anymore.
    visa
    An IBM NetVista workstation running Debian GNU/kFreeBSD. My current IRC host.
    nemo
    My ASUS EeePC running Debian 5.0 Lenny.
    pluriel
    My current WLAN router running FreeWRT.
    c1
    My MicroClient JrSX, an embedded 486SX compatible machine with 300 Mhz for VESA mountings.
    c2
    My MicroClient Jr, an embedded Pentium MMX compatible machine with 200 Mhz for VESA mountings.
    c-crosser
    My Lenovo ThinkPad T61 running Debian 5.0 Lenny.
    c-cactus and c-metisse
    The KVM based virtual(*) machines on c-crosser running Sid and Debian GNU/kFreeBSD.
    jumper
    My NAS(*) at home, currently a TheCus N4100. Soon to be replaced by some Mini-ITX box.

    Any one who hasn’t guessed the scheme yet? For those understanding German it’s explained at the end of my old hardware page. For all others I suggest either to look at the domain name in my e-mail address (no, it’s usually not noone.org).

    Still not clear? Well, feel free to ask me for all the gory details or mark the following white box to see the scheme as well as the explanations for nearly all hostnames hidden in there:

    All the machines are named after Citroëns. Old machines after old Citroëns, current hardware after current Citroën models or prototypes.

    Those names starting with “A” are 2CV derivatives since the 2CV was Citroëns “A” model. “AZ” was the 2CV, AZU and AK were 2CV vans and everything starting with AY (e.g. AYA, AYA2, AYB – but those don’t sound that nice ;-) is Dyane based, but I currently only use Méhara names (AYCA is the normal Méhari, AYCE the 4x4 version). Interestingly not everything starting with AYC is a Méhari: AYCD was the Acadiane, the Dyane van.

    HY and HZ are variants of Citroëns “H van” (HX, HW and H1600 as well, but they don’t sound that nice), TUB was the pre-WWII “H van” prototype and later the nickname of the “H van” in France.

    TPV was the name of the pre-WWII 2CV prototype and an abbreviation for Toute Petite Voiture (French for “Very Small Car”), hence the Zaurus, my smallest Linux box, got that name. Rosalie was the nickname of a rear-wheel drive pre-WWII Citroën.

    M35 was a Wankel-engined prototype of the Ami 8, and the Ami Super was the 4-cylinder version of the Ami 8. The Bijou was a 2CV-based coupé built by Citroën UK in the late 50s and early 60s.

    The Visa and LNA were intended 2CV successors which were available with 2CV engines, but were discontinued before the 2CV itself.

    C1, C2, (C3) Pluriel, C-Crosser, Jumper and Nemo are current Citroën models and C-Cactus and C-Métisse are recent Citroën prototypes and show cars.

    The 2CV Dagonet was an aerodynamically optimised 2CV by Jean Dagonet in the 50s. The Tryane is an aerodynamic and fuel-efficient three-wheeled car by Friend Wood, based on the 2CV and with a body of wood. And Colani once dressed up a 2CV so that it broke several efficiency world records.

    The Namco Pony was a 2CV based light utility truck (similar to the Méhari, but with steel body) built in Greece under license in many variants.

    And Loadrunner is the name of some CX six-wheeler conversions.

    Some links about the naming items:

    Hope you had fun. I had. ;-)

    Now playing: Willi Astor — Gwand Anham Ära

          Writing Linux Device Drivers   
    Preface: Linux is a variant of the Unix operating system. The principles and ideas behind writing drivers under Linux are entirely similar to those of other Unix systems, but they differ greatly from drivers in the DOS or Windows environment. Designing drivers in the Linux environment is conceptually clean, convenient to work with, and very powerful, but few support functions are available, so you can only rely on the functions in the kernel …
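
    The classic starting point is the module skeleton below; every Linux driver, however complex, hooks into the kernel through this same init/exit pair (a minimal sketch for 2.6+ kernels, not from the article itself):

    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/kernel.h>

    /* Minimal skeleton: the entry and exit points every driver builds on. */
    static int __init hello_init(void)
    {
            printk(KERN_INFO "hello: module loaded\n");
            return 0;   /* a non-zero return would abort loading */
    }

    static void __exit hello_exit(void)
    {
            printk(KERN_INFO "hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Hello-world driver skeleton");
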
              Video demo: Live Linux Kernel Patching with kGraft   

    In the 2 weeks since we announced the existence of kGraft, there have been many questions about how this solution for live-patching the Linux kernel works. And because (moving) pictures often speak louder than words, here is a video of kGraft in action on the official SUSE YouTube channel.

    The post Video demo: Live Linux Kernel Patching with kGraft appeared first on SUSE Blog. Bryan Lunduke


              kGraft: Live Kernel Patching   

    It has many names – hot fixing, live patching, runtime patching, rebootless updates, concurrent updates. It’s a holy grail of uptime.


    The post kGraft: Live Kernel Patching appeared first on SUSE Blog. Vojtěch Pavlík


              Exploring temporal information in neonatal seizures using a dynamic time warping based SVM kernel   
    Ahmed, Rehan; Temko, Andriy; Marnane, William P.; Boylan, Geraldine B.; Lightbody, Gordon

    Seizure events in newborns change in frequency, morphology, and propagation. This contextual information is explored at the classifier level in the proposed patient-independent neonatal seizure detection system. The system is based on the combination of a static and a sequential SVM classifier. A Gaussian dynamic time warping based kernel is used in the sequential classifier. The system is validated on a large dataset of EEG recordings from 17 neonates. The obtained results show an increase in the detection rate at very low false detections per hour, particularly achieving a 12% improvement in the detection of short seizure events over the static RBF kernel based system.
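
    To make the kernel's key ingredient concrete: dynamic time warping aligns two sequences of possibly different lengths by dynamic programming, and the resulting distance can then be plugged into a Gaussian kernel such as exp(-DTW(x,y)/(2*sigma^2)). A small illustrative sketch, not the authors' code; the sequences, sigma and the absolute-difference local cost are placeholders:

    #include <math.h>
    #include <stdio.h>

    /* DTW distance between 1-D sequences a[0..n-1] and b[0..m-1] in O(n*m),
     * keeping a single rolling row of the DP table. */
    static double dtw(const double *a, int n, const double *b, int m)
    {
            double row[m + 1];
            for (int j = 0; j <= m; j++)
                    row[j] = HUGE_VAL;
            row[0] = 0.0;

            for (int i = 1; i <= n; i++) {
                    double prev = row[0];   /* DP(i-1, j-1) */
                    row[0] = HUGE_VAL;      /* DP(i, 0) is unreachable for i >= 1 */
                    for (int j = 1; j <= m; j++) {
                            double up = row[j], left = row[j - 1];
                            double best = fmin(prev, fmin(up, left));
                            prev = up;      /* becomes DP(i-1, j-1) for the next j */
                            row[j] = fabs(a[i - 1] - b[j - 1]) + best;
                    }
            }
            return row[m];
    }

    int main(void)
    {
            double x[] = { 0, 1, 2, 1, 0 }, y[] = { 0, 0, 1, 2, 1, 0 };
            double d = dtw(x, 5, y, 6), sigma = 1.0;   /* sigma: tuning parameter */
            printf("DTW = %g, kernel value = %g\n", d, exp(-d / (2 * sigma * sigma)));
            return 0;
    }
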
              blender pancakes   

    A friend mentioned blender pancakes to me a while ago and I finally gave them a try for dinner last night. I will never make another pancake recipe (unless it's pumpkin pancakes or apple cinnamon pancakes!), and this is why:

    Taste. The blender pancake has a very nutty, natural taste while still being light and fluffy. Michael repeatedly commented on how good the pancakes were and the kids ate several each. Gemma, alone, had three!

    Ease. It's even easier to make than pancake mix: you just toss everything in the blender, then pour it onto the griddle.

    Nutrition. While the recipe calls for a small amount of sweetener (sugar or honey or agave syrup) and oil, there are no preservatives or unpronounceable ingredients and the freshly ground whole wheat is so good for you.


    blender pancakes
    1 cup milk
    1 cup wheat kernels, whole & uncooked (I used hard red wheat)
    2 eggs
    2 tsp baking powder
    1 tsp salt
    2 tablespoons oil
    2 tablespoons honey or sugar
    1 teaspoon vanilla
    1/2 teaspoon lemon juice

    Put milk and wheat kernels in blender. Blend on highest speed for 4 or 5 minutes or until batter is smooth. Add eggs, oil, baking powder, salt, honey or sugar, vanilla and lemon juice to the batter. Blend on low. Pour batter straight from the blender (less to wash later!) onto a hot griddle. Cook, flipping pancakes when bubbles pop and create holes.


    recipe lightly adapted from everyday food storage.net
          Free Software USB Secure 1.6.6 plus Crack


    It is true that apart from the internet, flash discs/USB drives are among the fastest and most effective ways for viruses to spread, and they are very vulnerable - especially if they fall into the hands of irresponsible people who casually rummage through the contents of our flash disc when it has no password protection. That is why we have to be extra careful and never just plug a USB drive into a computer without adequate protection. This time I want to share a piece of software that protects our USB drives from virus threats and adds password protection: USB Secure 1.6.6. Hopefully, with this protection in place, all the data stored on our flash discs will be safe from those annoying virus attacks. OK, let's have a look at the features:


    Features and Benefits:
    - Password Protection: USB Secure is a powerful tool to password protect USB drive and all other external portable media. No matter what type of external storage device you use, USB Secure password protects it within seconds.
    - No Administrator Rights Required: USB Secure doesn’t install any kernel or filter drivers, and therefore doesn’t require any administrator rights to password protect USB drive and other portable media.
    - Compatible Everywhere: The program works on all flavors of Windows i.e. Windows 2000/ Windows XP / Windows Vista / Windows 7.0. USB Secure works perfectly well on all external portable media like USB flash drives, Thumb Drives, Memory Sticks, Memory Cards, Pen Drives and Jump Drives.
    - Autoplay Feature: Full plug and play is supported that lets you automatically protect USB drive and all such external storage devices currently plugged into your PC.
    - Complete USB Security: Whatever information, files, folders and documents you put in your USB drive, USB Secure keeps them completely secured.
    - Reliable and Independent: USB Secure lets you protect USB drive's data by using several layers of patent pending protection methods. This makes its protection PC and hardware independent.
    - Peace of Mind: Total peace of mind from security leaks and privacy breaches. Never again fear what is happening to your device while it is lost.
    - User Friendly Interface: USB Secure is easy to install, run and use. It doesn't complicate its users with technical jargon common in other encryption programs.
    - Ease of Use: A very easy to use program with user-friendly interface.
    - Affordable Software: USB Secure is a new addition to our robust collection of affordable and reliable security applications. You need not shed hundreds of dollars to protect your USB drive!

    What's New in This Release:
    This version update of USB Secure resolves compatibility issue with external drives on non-administrative users. A recommended update.

     DOWNLOAD Free Software USB Secure 1.6.6 plus Crack
          Temperature sensors

    lm-sensors is a project with utilities for monitoring the machine's hardware; among the things we can monitor is the temperature. Configuration is very simple since we can use a wizard. First we install the package:

    $ sudo apt-get install lm-sensors
    

    Once installed, we configure it:

    $ sudo sensors-detect
    # sensors-detect revision 5666 (2009-02-26 17:15:04 +0100)
    # System: VIA Technologies, Inc. VT82C694X
    # Board: Legend QDI Advance-10T
    
    This program will help you determine which kernel modules you need
    to load to use lm_sensors most effectively. It is generally safe
    and recommended to accept the default answers to all questions,
    unless you know what you're doing.
    
    Some south bridges, CPUs or memory controllers contain embedded sensors.
    Do you want to scan for them? This is totally safe. (YES/no):
    Silicon Integrated Systems SIS5595...                       No
    VIA VT82C686 Integrated Sensors...                          Success!
        (driver `via686a')
    VIA VT8231 Integrated Sensors...                            No
    AMD K8 thermal sensors...                                   No
    AMD K10 thermal sensors...                                  No
    Intel Core family thermal sensor...                         No
    Intel AMB FB-DIMM thermal sensor...                         No
    VIA C7 thermal and voltage sensors...                       No
    
    Some Super I/O chips contain embedded sensors. We have to write to
    standard I/O ports to probe them. This is usually safe.
    Do you want to scan for Super I/O sensors? (YES/no):
    Probing for Super-I/O at 0x2e/0x2f
    Trying family `National Semiconductor'...                   No
    Trying family `SMSC'...                                     No
    Trying family `VIA/Winbond/Fintek'...                       No
    Trying family `ITE'...                                      No
    Probing for Super-I/O at 0x4e/0x4f
    Trying family `National Semiconductor'...                   No
    Trying family `SMSC'...                                     No
    Trying family `VIA/Winbond/Fintek'...                       No
    Trying family `ITE'...                                      No
    
    Some systems (mainly servers) implement IPMI, a set of common interfaces
    through which system health data may be retrieved, amongst other things.
    We have to read from arbitrary I/O ports to probe for such interfaces.
    This is normally safe. Do you want to scan for IPMI interfaces?
    (YES/no):
    Probing for `IPMI BMC KCS' at 0xca0...                      No
    Probing for `IPMI BMC SMIC' at 0xca8...                     No
    
    Some hardware monitoring chips are accessible through the ISA I/O ports.
    We have to write to arbitrary I/O ports to probe them. This is usually
    safe though. Yes, you do have ISA I/O ports even if you do not have any
    ISA slots! Do you want to scan the ISA I/O ports? (YES/no):
    Probing for `National Semiconductor LM78' at 0x290...       No
    Probing for `National Semiconductor LM79' at 0x290...       No
    Probing for `Winbond W83781D' at 0x290...                   No
    Probing for `Winbond W83782D' at 0x290...                   No
    
    Lastly, we can probe the I2C/SMBus adapters for connected hardware
    monitoring devices. This is the most risky part, and while it works
    reasonably well on most systems, it has been reported to cause trouble
    on some systems.
    Do you want to probe the I2C/SMBus adapters now? (YES/no):
    Using driver `i2c-viapro' for device 0000:00:07.4: VIA Technologies VT82C686 Apollo ACPI
    WARNING: All config files need .conf: /etc/modprobe.d/display_class, it will be ignored in a future release.
    WARNING: All config files need .conf: /etc/modprobe.d/blacklist, it will be ignored in a future release.
    WARNING: All config files need .conf: /etc/modprobe.d/pnp-hotplug, it will be ignored in a future release.
    Module i2c-dev loaded successfully.
    
    Next adapter: SMBus Via Pro adapter at 5000 (i2c-0)
    Do you want to scan it? (YES/no/selectively):
    Client found at address 0x2d
    Probing for `Myson MTP008'...                               No
    Probing for `National Semiconductor LM78'...                No
    Probing for `National Semiconductor LM79'...                No
    Probing for `National Semiconductor LM80'...                Success!
        (confidence 1, driver `lm80')
    Probing for `National Semiconductor LM85'...                No
    Probing for `National Semiconductor LM96000 or PC8374L'...  No
    Probing for `Analog Devices ADM1027'...                     No
    Probing for `Analog Devices ADT7460 or ADT7463'...          No
    Probing for `SMSC EMC6D100 or EMC6D101'...                  No
    Probing for `SMSC EMC6D102'...                              No
    Probing for `SMSC EMC6D103'...                              No
    Probing for `Winbond WPCD377I'...                           No
    Probing for `Analog Devices ADT7476'...                     No
    Probing for `Andigilog aSC7611'...                          No
    Probing for `Andigilog aSC7621'...                          No
    Probing for `National Semiconductor LM87'...                No
    Probing for `Analog Devices ADM1024'...                     No
    Probing for `National Semiconductor LM93'...                No
    Probing for `Winbond W83781D'...                            No
    Probing for `Winbond W83782D'...                            No
    Probing for `Winbond W83783S'...                            No
    Probing for `Winbond W83791D'...                            No
    Probing for `Winbond W83792D'...                            No
    Probing for `Winbond W83793R/G'...                          No
    Probing for `Winbond W83627HF'...                           No
    Probing for `Winbond W83627EHF'...                          No
    Probing for `Winbond W83627DHG'...                          No
    Probing for `Asus AS99127F (rev.1)'...                      No
    Probing for `Asus AS99127F (rev.2)'...                      No
    Probing for `Asus ASB100 Bach'...                           No
    Probing for `Winbond W83L784R/AR/G'...                      No
    Probing for `Winbond W83L785R/G'...                         No
    Probing for `Genesys Logic GL518SM'...                      No
    Probing for `Genesys Logic GL520SM'...                      No
    Probing for `Genesys Logic GL525SM'...                      No
    Probing for `Analog Devices ADM9240'...                     No
    Probing for `Dallas Semiconductor DS1780'...                No
    Probing for `National Semiconductor LM81'...                No
    Probing for `Analog Devices ADM1026'...                     No
    Probing for `Analog Devices ADM1025'...                     No
    Probing for `Philips NE1619'...                             No
    Probing for `Analog Devices ADM1029'...                     No
    Probing for `Analog Devices ADM1030'...                     No
    Probing for `Analog Devices ADM1031'...                     No
    Probing for `Analog Devices ADM1022'...                     No
    Probing for `Texas Instruments THMC50'...                   No
    Probing for `VIA VT1211 (I2C)'...                           No
    Probing for `ITE IT8712F'...                                No
    Probing for `ALi M5879'...                                  No
    Probing for `SMSC LPC47M15x/192/292/997'...                 No
    Probing for `SMSC DME1737'...                               No
    Probing for `SMSC SCH5027D-NW'...                           No
    Probing for `Fintek F75373S/SG'...                          No
    Probing for `Fintek F75375S/SP'...                          No
    Probing for `Fintek F75387SG/RG'...                         No
    Probing for `Winbond W83791SD'...                           No
    Client found at address 0x50
    Probing for `Analog Devices ADM1033'...                     No
    Probing for `Analog Devices ADM1034'...                     No
    Probing for `SPD EEPROM'...                                 Yes
        (confidence 8, not a hardware monitoring chip)
    Probing for `EDID EEPROM'...                                No
    Client found at address 0x51
    Probing for `Analog Devices ADM1033'...                     No
    Probing for `Analog Devices ADM1034'...                     No
    Probing for `SPD EEPROM'...                                 Yes
        (confidence 8, not a hardware monitoring chip)
    Client found at address 0x52
    Probing for `Analog Devices ADM1033'...                     No
    Probing for `Analog Devices ADM1034'...                     No
    Probing for `SPD EEPROM'...                                 Yes
        (confidence 8, not a hardware monitoring chip)
    
    Now follows a summary of the probes I have just done.
    Just press ENTER to continue:
    
    Driver `lm80':
      * Bus `SMBus Via Pro adapter at 5000'
        Busdriver `i2c-viapro', I2C address 0x2d
        Chip `National Semiconductor LM80' (confidence: 1)
    
    Driver `via686a':
      * Chip `VIA VT82C686 Integrated Sensors' (confidence: 9)
    
    To load everything that is needed, add this to /etc/modules:
    #----cut here----
    # Chip drivers
    lm80
    via686a
    #----cut here----
    If you have some drivers built into your kernel, the list above will
    contain too many modules. Skip the appropriate ones!
    
    Do you want to add these lines automatically to /etc/modules? (yes/NO)
    
    Unloading i2c-dev... OK
    

    Once configured, we can view the temperatures:

    $ sensors
    via686a-isa-6000
    Adapter: ISA adapter
    CPU core:    +1.51 V  (min =  +0.06 V, max =  +3.10 V)
    +2.5V:       +2.55 V  (min =  +2.36 V, max =  +2.61 V)
    I/O:         +3.44 V  (min =  +3.12 V, max =  +3.45 V)
    +5V:         +4.98 V  (min =  +4.73 V, max =  +5.20 V)
    +12V:       +12.12 V  (min = +11.35 V, max = +12.48 V)
    CPU Fan:    1493 RPM  (min = 42187 RPM, div = 8)
    P/S Fan:       0 RPM  (min = 1048 RPM, div = 8)
    SYS Temp:    +36.0°C  (high = +146.2°C, hyst = -70.9°C)
    CPU Temp:    +34.0°C  (high = +146.2°C, hyst = -70.9°C)
    SBr Temp:    +21.3°C  (high = +34.8°C, hyst = -31.6°C)
    
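
    If you want to read these values from your own programs instead of running sensors, the chip drivers also export them through sysfs. Here is a minimal sketch; the hwmon path varies from machine to machine, so the one below is only an assumption to adapt after looking in /sys/class/hwmon/:

    #include <stdio.h>

    int main(void)
    {
            /* Hypothetical path: pick the right hwmon entry for your chip. */
            const char *path = "/sys/class/hwmon/hwmon0/temp1_input";
            FILE *f = fopen(path, "r");
            long millideg;

            if (!f) {
                    perror("fopen");
                    return 1;
            }
            if (fscanf(f, "%ld", &millideg) != 1) {
                    fclose(f);
                    fprintf(stderr, "unexpected sensor format\n");
                    return 1;
            }
            fclose(f);
            /* sysfs reports temperatures in millidegrees Celsius */
            printf("temp1: %.1f degrees C\n", millideg / 1000.0);
            return 0;
    }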

          TASK_KILLABLE: a new state for processes

    I read at IBM that kernel version 2.6.25 has a new process state called TASK_KILLABLE, an addition to the existing ones (TASK_INTERRUPTIBLE and TASK_UNINTERRUPTIBLE). Basically this new state replaces the existing TASK_UNINTERRUPTIBLE, but additionally lets those processes receive fatal signals. This solves the old problem of processes that got stuck and could only be killed by rebooting; now we just have to wait for code to start using the new facility.
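
    As an illustration (mine, not from the original post), a driver that today sleeps uninterruptibly could instead use the wait_event_killable() helper added along with the new state; the wait queue and the done flag below are hypothetical:

    #include <linux/wait.h>
    #include <linux/errno.h>

    static DECLARE_WAIT_QUEUE_HEAD(wq); /* hypothetical wait queue */
    static int done;                    /* set elsewhere, e.g. by an IRQ handler */

    static int wait_for_data(void)
    {
            /* Sleeps like TASK_UNINTERRUPTIBLE, but a fatal signal
             * (e.g. SIGKILL) wakes the task and the macro returns
             * nonzero, so the process can no longer get stuck forever. */
            if (wait_event_killable(wq, done))
                    return -ERESTARTSYS;
            return 0;
    }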


              VServer   

    VServer lets us run several virtual GNU/Linux instances on our machine, all under the same kernel. In our case it is useful for jailing services, one per virtual instance. For example, we want to run Apache (web server) and bind (DNS server), but instead of running them on a single non-virtualized GNU/Linux server, we can run each one in its own VServer virtual server. The good thing about this system is that if for whatever reason one of our services is attacked and falls, being jailed, only that particular service is affected. The bad thing is that all the servers sit on the same hardware, so if there is a hardware problem they all go down together. I see no point in running the same service in several virtual servers, but there is no problem doing so if you want.



    First, let's proceed with the installation:

    # apt-get install util-vserver vserver-debiantools debootstrap linux-image-vserver-686
    

    In our case the architecture is Pentium Pro/Celeron/Pentium II-IV class; now we have to boot into the new kernel.

    vserver MiVserver build -n MiVserver --hostname MiVserver.mydominio.es \
    --interface eth0:192.168.123.123/24 -m debootstrap -- -d sid
    

    At this point it will start downloading the files needed for the new virtual server.

    We have two commands: one to start our virtual machine:

    vserver MiVserver start
    

    And the other to enter it:

    vserver MiVserver enter
    

    Inside our virtual machine we can install services such as ssh, etc.

    To have our virtual server start automatically we can run:

    # echo "default" > /etc/vservers/MiVserver/apps/init/mark
    

              Thesis Defense: Learning Kernel-based Approximate Isometries   
    Announcing the Final Examination of Mahlagha Sedghi for the degree of Master of Science

    The increasing availability of public datasets offers an unprecedented opportunity to conduct data-driven studies. Metric Multi-Dimensional Scaling aims to find a low-dimensional embedding of the data, preserving the pairwise dissimilarities amongst the data points in the original space. Along with the visualizability, this dimensionality reduction plays a pivotal role in analyzing and discovering the hidden structures in the data. This work introduces Sparse Kernel-based Least Squares Multi-Dimensional Scaling, a least-squares multi-dimensional scaling approach for exploratory data analysis and, when desirable, data visualization. Via the use of sparsity-promoting regularizers, the technique is capable of embedding data on a, typically, lower-dimensional manifold by naturally inferring the embedding dimension from the data itself. In the process, key training samples are identified, whose participation in the embedding map's kernel expansion is most influential. As we will show, such influence may be given interesting interpretations in the context of the data at hand. Furthermore, given appropriate positive-definite kernel functions, its kernel-based nature allows for such embeddings of data even with non-numerical features. The resulting multi-kernel learning, non-convex framework can be effectively trained via a block coordinate descent approach, which alternates between an accelerated proximal average method-based iterative majorization for learning the kernel expansion coefficients and a simple quadratic program, which deduces the multiple-kernel learning coefficients. Experimental results showcase potential uses of the proposed framework on artificial data as well as real-world datasets that underline the merits of our embedding framework. Our method discovers genuine hidden structures in the data that, in the case of network data, match the results of the well-known Multi-level Modularity Optimization community structure detection algorithm.

    Committee in Charge: Michael Georgiopoulos (Chair), Georgios Anagnostopoulos (Co-Chair), George Atia, Fei Liu

              Dissertation Defense: Improved Multi-Task Learning Based on Local Rademacher Complexity Analysis   
    Announcing the Final Examination of Niloofar Yousefi for the degree of Doctor of Philosophy

    When faced with learning a set of inter-related tasks from a limited amount of data, learning each task independently may lead to poor generalization performance. Multi-Task Learning (MTL) exploits the latent relations between tasks and overcomes data scarcity limitations by co-learning all these tasks simultaneously to offer improved performance. Although MTL has been actively investigated by the machine learning community, there are only a few studies examining the theoretical justification of this learning framework. These studies provide learning guarantees in the form of generalization error bounds, which are considered an important problem in machine learning and statistical learning theory. This importance is twofold: (1) generalization bounds provide an upper-tail confidence interval for the true risk of a learning algorithm, which cannot be precisely calculated due to its dependency on some unknown distribution P from which the data are drawn; (2) this type of bound can also be employed as a model selection tool, which leads to identifying more accurate learning models.

    The generalization error bounds are typically expressed in terms of the empirical risk of the learning hypothesis along with a complexity measure of that hypothesis. Although different complexity measures can be used in deriving error bounds, Rademacher complexity has received considerable attention in recent years, as these complexity measures can potentially lead to tighter error bounds compared to the ones obtained by other complexity measures. However, one shortcoming of the general notion of Rademacher complexity is that it provides a global complexity estimate of the learning hypothesis space, which does not take into consideration the fact that learning algorithms, by design, pick functions belonging to a more favorable subset of this space, and they therefore yield better performing models than the worst case. To overcome the limitation of global Rademacher complexity, a more efficient notion of Rademacher complexity, the so-called local Rademacher complexity, has been considered, which leads to sharper learning bounds, and as such, compared to its global counterpart, guarantees a faster rate of convergence. Also, considering the fact that local bounds are expected to be tighter than the global ones, they can motivate better (more accurate) model selection algorithms.

    While the previous MTL studies provide generalization bounds based on some other complexity measures, in this dissertation, we derive generalization error bounds for some popular kernel-based MTL hypothesis spaces based on the Local Rademacher Complexity (LRC) of those hypotheses. We show that these local bounds have faster convergence rates compared to the previous Global Rademacher Complexity (GRC)-based bounds. We then use our LRC-based MTL bounds to design a new kernel-based MTL model which benefits from strong learning guarantees. An optimization algorithm will be proposed to solve our new MTL problem. Finally, we run simulations on experimental data that compare our MTL model to some classical Multi-Task Multiple Kernel Learning (MT-MKL) models designed based on the GRCs. Since the local Rademacher complexities are expected to be tighter than the global ones, our new model is also expected to show better performance compared to the GRC-based models.

    Committee in Charge: Mansooreh Mollaghasemi (Chair), Michael Georgiopoulos, Luis Rabelo, Qipeng Phil Zheng, Georgios Anagnostopoulos, Petros Xanthopoulos

              Dissertation Defense: Data Representation in Machine Learning Methods with its Application to Compilation Optimization and Epitope Prediction   
    Announcing the Final Examination of Yevgeniy Sher for the degree of Doctor of Philosophy

    In this dissertation we explore the application of machine learning algorithms to compilation phase order optimization, and epitope prediction. The common thread running through these two disparate domains is the type of data being dealt with. In both problem domains we are dealing with discrete/categorical data, with its representation playing a significant role in the performance of classification algorithms.

    We first present a neuroevolutionary approach which orders optimization phases to generate compiled programs with performance superior to those compiled using LLVM's -O3 optimization level. Performance improvements, calculated as the speed of the compiled program's execution, ranged from 27% for the ccbench program to 40.8% for bzip2.

    This dissertation then explores the problem domain of epitope prediction. This problem domain deals with text data, where protein sequences are presented as sequences of amino acids. The DRREP system is presented, which demonstrates how an ensemble of extreme learning machines can be used with string kernels to produce state-of-the-art epitope prediction results. DRREP was tested on the SARS subsequence, the HIV, Pellequer and AntiJen datasets, and the standard SEQ194 test dataset. AUC improvements achieved over the state of the art ranged from 3% to 8%.

    We then present the SEEP epitope classifier, an SVM ensemble based classifier which uses conjoint triad feature representation and produces state-of-the-art classification results. SEEP leverages the domain-specific, knowledge-based protein sequence encoding developed within the protein-protein interaction research domain. Using an ensemble of SVMs and a sliding window based pre- and post-processing pipeline, SEEP achieves an AUC of 91.2 on the standard SEQ194 test dataset, a 24% improvement over the state of the art.

    Finally, this dissertation concludes by formulating a new approach for distributed representation of 3D biological data through the process of embedding. Analogously to word embedding, we develop a system that uses atomic and residue coordinates to generate distributed representation of residues. Preliminary results are presented where the Residue Surface Vectors, distributed representations of residues, are used to predict conformational epitopes and protein-protein interactions, with promising proficiency. The generation of such 3D BioVectors, and the proposed methodology, opens the door for substantial future improvements, and application domains.

    Committee in Charge: Shaojie Zhang (Chair), Damian Dechev (Co-Chair), Gary Leavens, Avelino Gonzalez, Degui Zhi

              Dissertation Defense: Weakly Labeled Action Recognition and Detection   
    Announcing the Final Examination of Waqas Sultani for the degree of Doctor of Philosophy

    Research in human action recognition strives to develop increasingly generalized methods that are robust to intra-class variability and inter-class ambiguity. Training such models needs thousands of precise spatio-temporal manual annotations, which require many human annotators, hundreds of hours, and are subject to human biases.

    In the first part of this dissertation, we explore the reasons for poor classifier performance when tested on novel datasets, and quantify the effect of scene background on action representations and recognition. We propose a new method to obtain a measure of confidence in each pixel of the video being a foreground region using motion, appearance, and saliency together in a 3D-Markov Random Field (MRF) based framework. We also propose multiple ways to exploit the foreground confidence: to improve bag-of-words vocabulary, histogram representation of a video, and a novel histogram decomposition based representation and kernel.

    The above-mentioned method does not provide the precise spatio-temporal location of the actor and needs manual spatio-temporal annotations to train an action detector. Therefore, in the second part of this dissertation, we propose a weakly labeled approach to automatically obtain spatio-temporal annotations of actors in action videos. We first obtain a large number of action proposals in each video. To capture a few most representative action proposals in each video, we rank them using motion and saliency cues and select a few proposals using a MAP-based proposal subset selection method. Our next challenge is to iteratively select one proposal from each video so that all proposals are globally consistent. We formulate this as a Generalized Maximum Clique Graph problem. The output of our method is the most action-representative proposals from each video, which are used to train the action detector.

    The above-mentioned annotation method uses multiple videos of the same action. Therefore, in the third part of this dissertation, we tackle the problem of spatio-temporal action localization in a video, without assuming the availability of multiple videos or any prior annotations. We present a novel approach for action localization in videos using web images. Given a video, first we generate multiple spatio-temporal action proposals. To obtain the most action-representative proposals, we reconstruct action proposals in the video by leveraging the action proposals in images. We solve this optimization problem using a variant of the two-metric projection algorithm.

    Finally, we propose a framework to generate a few better action proposals that are ranked properly. We first divide each action proposal into sub-proposals and then use a Dynamic Programming based graph optimization scheme to select the optimal combination of sub-proposals from different proposals and assign each new proposal a score. We propose a new unsupervised image-based actionness detector that leverages web images and employ it as one of the node scores in our graph formulation. We demonstrate that properly ranked proposals produce significantly better action detection as compared to state-of-the-art proposal based methods.

    Our extensive experimental results on different challenging and realistic action datasets, comparisons with several competitive baselines and detailed analysis of each step of proposed methods validate the proposed ideas and frameworks.

    Committee in Charge: Mubarak Shah (Chair), Ulas Bagci, Guo-Jun Qi, Hae-Bum Yun

              Microsoft == x86 && Sometimes !x64   

    Whenever I use Visual Studio and try to compile something under 64 bit I run into problems. It seems that most MS devs for Visual Studio and the relevant tool chain are still mainly writing 32-bit applications.

    Here are some of the latest issues I ran into.

     

    COM applications targeting x64 still use 32-bit as the target platform for the MIDL compiler by default

    Resolution: You have to set MIDL – Target Environment to X64 yourself in the UI. Alternatively you can edit the vcxproj file directly and set the flag there:

      <Midl>
        <TargetEnvironment>X64</TargetEnvironment>
      </Midl>

     

    The other issue was that on my dev machine Visual Studio 2010 SP1 froze quite often. The dump showed that it hung while loading the assembly Microsoft.VSDesigner. I have filed a Connect issue for this but I have not gotten any helpful feedback yet. In the meantime I found out by myself why VS was hanging. Once the hang occurred it kept freezing quite often, which is good for a repro but very annoying if there is nothing you can do about it. I sent my error reports to MS every time, hoping that they resolve the issue before VS2012 is released. The Connect support person requested another dump from me, this time taken with VS directly. I have to remember for my own troubleshooting that VS can now also take dumps, which really helps. The interesting thing was that the VS call stack was more helpful since it seemed to resolve the symbols better. The deadlock had frozen the UI thread while some stack frames from MSI were on it.

     

        ntdll.dll!_ZwWaitForSingleObject@12() + 0x15 bytes   
        ntdll.dll!_ZwWaitForSingleObject@12() + 0x15 bytes   
        kernel32.dll!_WaitForSingleObjectExImplementation@12() + 0x43 bytes   
        msenv.dll!_VsCoCreateAggregatedManagedObject() + 0xe2 bytes   
        msenv.dll!_VsLoaderCoCreateInstanceUnknown() + 0x8e bytes   
        msenv.dll!CVsLocalRegistry4::CreateInstance() + 0x4a bytes   
        msenv.dll!CXMLMemberIndexService::GetCulture() + 0x17f5 bytes   
        msenv.dll!CXMLMemberIndexService::LocateAndOpenXMLFile() + 0x14 bytes   
        msenv.dll!CXMLMemberIndexService::CreateXMLMemberIndex() + 0x78 bytes   
        cslangsvc.dll!CMetaDataLoader::EnqueueMemberIndexRequest() + 0xe80cc bytes   
        cslangsvc.dll!CMetaDataTypeData::GetDocumentationComment() + 0x1b bytes   
        cslangsvc.dll!CSymbolDescription::CPartCollector::ConditionallyAddDocCommentParts<CTypeData>() + 0x32 bytes   
        cslangsvc.dll!CSymbolDescription::CTypeProviderHelper::TryExecute() + 0x278 bytes   
        cslangsvc.dll!CSymbolDescription::CNameProviderVisitor::Visit() + 0x56 bytes   
        cslangsvc.dll!CTypeProvider::Accept() + 0x13 bytes   
        cslangsvc.dll!CSymbolDescription::CNameProviderVisitor::TryExecute() + 0x2a bytes   
        cslangsvc.dll!CSymbolDescription::TryGetDescription() + 0x37 bytes   
        cslangsvc.dll!CSymbolDescription::TryAppendDescription() + 0x61 bytes   
        cslangsvc.dll!CSpanBinder::TryExecuteAndGetDescription() + 0x88 bytes   
        cslangsvc.dll!CQuickInfo::TryGetRawIntelliSenseQuickInfo() + 0x81 bytes   
        cslangsvc.dll!CQuickInfo::TryGetFullIntelliSenseQuickInfo() + 0x4d bytes   
        cslangsvc.dll!CQuickInfo::TryExecute() + 0x3e bytes   
        cslangsvc.dll!CEditFilter::GetDataTipText() + 0xdd bytes   
        cslangsvc.dll!CVsEditFilter::GetDataTipText() + 0x52 bytes   
        user32.dll!_InternalCallWinProc@20() + 0x23 bytes   
        user32.dll!_UserCallWinProcCheckWow@32() + 0xb7 bytes   
        user32.dll!_DispatchMessageWorker@8() + 0xed bytes   
        user32.dll!_DispatchMessageW@4() + 0xf bytes   
        msi.dll!MsiUIMessageContext::RunInstall() + 0x21231 bytes   
        msi.dll!RunEngine() + 0xb3 bytes   
        msi.dll!ConfigureOrReinstallFeatureOrProduct() + 0xfa bytes   
        msi.dll!_MsiReinstallFeatureW@12() + 0x66 bytes   
        msi.dll!ProvideComponent() + 0x10957 bytes   
        msi.dll!ProvideComponentFromDescriptor() + 0x154 bytes   
        msi.dll!_MsiProvideAssemblyW@24() + 0x437 bytes   
        msenv.dll!_VsCoCreateAggregatedManagedObject() + 0xe2 bytes
       
        msenv.dll!_VsLoaderCoCreateInstanceUnknown() + 0x8e bytes   
        msenv.dll!CVsLocalRegistry4::CreateInstance() + 0x4a bytes   
        msenv.dll!CXMLMemberIndexService::GetCulture() + 0x17f5 bytes   
        msenv.dll!CXMLMemberIndexService::LocateAndOpenXMLFile() + 0x14 bytes   
        msenv.dll!CXMLMemberIndexService::CreateXMLMemberIndex() + 0x78 bytes   
        cslangsvc.dll!CMetaDataLoader::EnqueueMemberIndexRequest() + 0xe80cc bytes   
        cslangsvc.dll!CMDMemberData::GetDocumentationComment() + 0x51 bytes   
        cslangsvc.dll!CSymbolDescription::CPartCollector::ConditionallyAddDocCommentParts<CMemberData>() + 0x2f bytes   
        cslangsvc.dll!CSymbolDescription::CMemberProviderHelper::TryExecute() + 0xdc bytes   
        cslangsvc.dll!CSymbolDescription::CNameProviderVisitor::Visit() + 0x53 bytes   
        cslangsvc.dll!CAbstractNameProviderBoolDefaultVisitor::Visit() + 0x2b bytes   
        cslangsvc.dll!CAggregateMemberProvider::Accept() + 0x16 bytes   
        cslangsvc.dll!CSymbolDescription::CNameProviderVisitor::TryExecute() + 0x2a bytes   
        cslangsvc.dll!CSymbolDescription::TryGetDescription() + 0x37 bytes   
        cslangsvc.dll!CSymbolDescription::TryAppendDescription() + 0x61 bytes   
        cslangsvc.dll!CSpanBinder::TryExecuteAndGetDescription() + 0x88 bytes   
        cslangsvc.dll!CQuickInfo::TryGetRawIntelliSenseQuickInfo() + 0x81 bytes   
        cslangsvc.dll!CQuickInfo::TryGetFullIntelliSenseQuickInfo() + 0x4d bytes   
        cslangsvc.dll!CQuickInfo::TryExecute() + 0x3e bytes   
        cslangsvc.dll!CEditFilter::GetDataTipText() + 0xdd bytes   
        cslangsvc.dll!CVsEditFilter::GetDataTipText() + 0x52 bytes   

    The language service tries to get some tool tip text (GetDataTipText) and tries to load an assembly. For reasons unknown to me, MSI is used to get the assembly and install it if it is not already present. Since the requested assembly was not found, an installation was performed. While the installation is running, MSI pumps window messages again, which lets VS call GetDataTipText once more since my mouse was still hovering over some code element in the editor. MSI would be called again to resolve the same assembly, but since the installation is already running, VS blocks. There is another helper thread running at cslangsvc.dll!CBackgroundQueue::ExecuteRequests() which seems to wait for the MSI installation on the UI thread to finish, but since MSI cannot report any progress or failure back via the window message loop, we have a classic hang. You might ask how I know about the other thread holding the same lock.

    Starting with Windows Vista, WCT (Wait Chain Traversal) was added to the kernel, which is accessible via some Windbg extensions or, much easier, via the Windows Resource Monitor (at the command line enter: perfmon.exe /res). Hung processes are marked red and are displayed first in the process list (very nice). From there it is a simple right click on the hung process to analyze the locks. There you can see directly which thread is waiting for which other thread(s). No more kernel debugging!

    The question that remains is why the MSI installation never completed, which would have limited it to a single VS hang. To see what is going on on the MSI side you need to enable MSI logging. With a little practice you can quickly find the interesting lines:

    === Verbose logging started: 05.12.2011 15:57:51 Build type: SHIP UNICODE 5.00.7600.00 Calling process: C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe ===
    Command Line: REINSTALL=SysClrTypFeature REINSTALLMODE=pocmus CURRENTDIRECTORY=C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE CLIENTUILEVEL=3 CLIENTPROCESSID=11280
    MSI (s) (5C:38) [06:29:54:573]: SOURCEMGMT: Failed to resolve source
    MSI (s) (5C:38) [06:29:54:573]: Product: Microsoft SQL Server System CLR Types (x64) -- Error 1706. An installation package for the product Microsoft SQL Server System CLR Types (x64) cannot be found. Try the installation again using a valid copy of the installation package 'SQLSysClrTypes_amd64_enu.msi'.

    The MSI being patched was Microsoft SQL Server System CLR Types (x64). For some reason the original MSI was no longer cached in the \Windows\Installer directory, which caused the installation to fail silently. From there it is easy to resolve the issue: install the original Microsoft SQL Server System CLR Types (x64) MSI and you should see no more failed patching while VS is running.

    This issue was really annoying, but since then VS 2010 has been running happily on my machine without any hangs. I am really happy that I was able to resolve this problem so easily. The only issue I still have is that VS2010 is a memory hog, and I pity the poor people with slow hard discs. On those machines VS 2010 is no fun to use.


              Carboxymethyl Tamarind   
    Carboxymethyl Tamarind Kernel Powder (CMT or CMTKP) is an anionic water-soluble polymer derived from Tamarind Kernel Powder (TKP), which is made cold-water soluble by a chemical reaction (carboxymethylation). Cold-water solubility is achieved by introducing carboxymethyl groups (-CH2-COOH) along the polysaccharide chain, which makes hydration of the molecule possible in cold water. CMT is a bio-compatible, bio-degradable, non-toxic, modified natural polymer.

    CAS NO.: 68647-15-4
    EICS NO.: 271-943-5

    APPLICATIONS
    - CMT is widely used as a textile printing thickener.
    - CMT is also used for sizing jute yarn and cotton warp.
    - CMT is used extensively in the paper and explosives industries as a viscosity builder.
    - CMT is used as a core binder in foundries and in mosquito coils.
    - CMT is used as a sizing material in the paper industry.
    - CMT is used very effectively these days by oil drilling companies as a soil stabilizer.
    - CMT has numerous other applications, and its low cost gives it an edge over other thickening agents.
              How does popcorn work?   
    Popcorn is a ubiquitous snack, but there's nothing commonplace about its creation. How does a kernel of corn become a puffed white treat? Find out in this podcast from HowStuffWorks.com.
              How does making bread work?   
    Bread is a technology for turning hard kernels into a soft foodstuff. Learn more about bread and yeast in this HowStuffWorks podcast.
          Shellcode: first code – launching the Calculator
    In my recent notes I managed to extract the base address of the kernel32.dll module and to develop a function (code) for iterating over and finding the required function addresses in a module loaded in memory. Now that I have these essential ingredients of every typical Windows shellcode, it is finally time to write a more meaningful piece of code. As an example I chose launching the standard … Read more: Shellcode: first code – launching the Calculator
          Shellcode: the EAT and the GetProcAddress function
    Once I have the base address of the kernel32.dll module in hand (located, for example, using the method described in my previous post), the next step is to find the address of any function in that module. In many situations it is enough to get hold of just GetProcAddress and LoadLibrary, which makes it easy to use any other function from the Windows API or any other library. In … Read more: Shellcode: the EAT and the GetProcAddress function
          Shellcode: the PEB and the base address of the kernel32.dll module
    When writing shellcode or other nastiness of that kind, you run into the problem of interacting with the system and its API. To do anything meaningful you need access to a few key functions in kernel32.dll, which are in a sense the key to the system's world. Such functions are of course LoadLibrary/GetModuleHandle, GetProcAddress, etc. With access to these functions we can do practically anything … Read more: Shellcode: the PEB and the base address of the kernel32.dll module
          Comment on Auch Rosneft by dersimi
    Thanks. But Linux alone probably won't protect you. Updating, and replacing the hardware now and then, has to be part of it too: "According to the experts at Trend Micro, ancient technology is mainly to blame for the mess: the hoster was running the almost ten-year-old Linux kernel 2.6.24.2 and eleven-year-old versions of Apache (1.3.36) and PHP (version 5.1.4) – which are already notorious for their countless security holes." I always install a new kernel right away whenever one comes out :)
              OKL4 now supports the ARMv6 architecture   

    Open Kernel Labs, a global provider of embedded systems software and virtualization technology, and part of the ARM Connected Community, announces that its flagship microkernel OKL4 now supports the ARMv6 architecture. ARM technology is at the heart of the world's leading-edge mobile communications devices.

    read more


              The Power of Love - Discovering the Love That Lies Within Us All   

    To experience love for a person, and their love in return, is the most wonderful experience of our lives. When we experience love we feel joy and fulfilment, but when it is absent we quickly become unhappy and disillusioned. The search for love defines our lives and plays a critical role in the quality of our relationships. Love really does make our world go round!

    Unfortunately we are rarely given any instruction about love, and yet with a little understanding and awareness it can transform our lives. Love can solve problems and heal emotional pain, but only when we allow ourselves to experience its power - we must invite love into our lives.

    Consider for a moment the times when you have fallen in love or felt the love of a parent, child or friend. It is almost impossible to describe those cheering feelings of connection and well-being. Notice how anxieties and problems fall away, to be replaced by solutions, ease and confidence. There is a timeless quality about love that buoys you up and protects you in even the most desperate of times. You are experiencing the power of love to heal and to bring joy and success into your life.

    Love is the central truth of life. We are born to love and be loved. It is our natural state. Some people prefer to see this love as a feature of our world, while others prefer to see it as the manifestation of a divine or spiritual source of love. Whatever our personal belief, the power of love is experienced when we connect open-heartedly with others and embrace our natural connections.

    Our emotional and relationship problems result from our denial of love - our separation from the love that bonds all people, and separation from our higher or spiritual source of love. This denial usually begins when we are very young and has a damaging impact on our lives. Perversely, we invent all manner of negative thoughts, feelings and behaviours to distract us from the love that we already possess. In romantic relationships we then search for love from another person, or seek to gain fulfilment from material possessions, to replace the love that we believe is lacking within. This is a terrible mistake, because until we have rediscovered self-love we cannot give or receive love fully from someone else.

    Luckily, the best place to work on finding self-love is within a supportive relationship. All relationships have their challenges, and it is by working with our partner through the difficult times that a partnership is strengthened. Relationships fail because of our inability to get to the core emotional issues that create the separation. These will be our fears, our insecurities and our lack of self-belief. In our attempt to hide away any sense of low self-worth we make ourselves unavailable to our partner. It is like building a fortress around ourselves - we believe it protects us, but in reality it damages or even destroys our relationships.

    The way to embrace our loving essence and our natural connections with others is to be willing to feel all our emotions and to communicate about them maturely to our partners. We can also ask them about their feelings and commit to working with them to heal any fear. Normally we will find that they have just the same fears and insecurities as we do, but may play them out in different styles. Getting to these core issues is the key to healing the pain and fear in a relationship and to becoming more bonded.

    As we achieve such healing within our relationships we will automatically discover more success in our lives, we will feel more fulfilled, and this will make us happier. We can all do this if we can find the courage to feel our emotions and reveal them within our relationships. As our hearts open we will feel all the love that has been hidden behind our defences, and our relationships will go from strength to strength.


              Device Drivers, Part 10: Kernel-Space Debuggers in Linux   
    Debug that!

    This article, which is part of the series on Linux device drivers, talks about kernel-space debugging in Linux. Shweta, back from hospital, was relaxing ...

    The post Device Drivers, Part 10: Kernel-Space Debuggers in Linux appeared first on Open Source For You.


              Excellent News   
    This is excellent news. Let's hope it's implemented in the Linux kernel. I also hope the Linux implementation will be compatible with the Tru64 implementation so that we don't have another XFS-like on-disk incompatibility.
              I wonder if this move is to allow...   
    HP to compete with Sun's ZFS file system. If HP can get their file system into the kernel before Sun can GPL ZFS then HP will have some bragging rights.
              RE[2]: Excellent News   
    I've downloaded this stuff and AdvFS v2 is actually a port to HP-UX, not Linux. There aren't any makefiles, and though the userland part may compile quite easily under Linux (not tested though) the kernel part is a tougher one...
              An undeniable 85-song sampler of the year in hip-hop   
    The best hip-hop tracks of 2014 from the Kernel, the Daily Dot's digital Sunday magazine.
              Int.-Sr. Kernel Developer (CARD1009) - Fortinet - British Columbia   
    Significant experience implementing or modifying networking internals code including the IP stack, Routing, Sockets API, network security, link load balancing,...
    From Fortinet - Tue, 09 May 2017 21:52:51 GMT - View all British Columbia jobs
              Moving On   
    Reg Braithwaite was writing not long ago about how we can be the biggest obstacle to our own growth. It made me realize how I've dropped things that I was once a staunch supporter of.

    I was once a Borland Pascal programmer, and I believed that it was better than C or even C++. I believed that the flexibility of runtime typing would win over the static typing of C++ templates as computers got faster. I believed that RPCs were a great idea, and even worked on an RPC system that would work over dial-up connections (because that's what I had back then). I put in a lot of time working on object persistence and databases. I thought that exceptions were fundamentally bad. I believed that threads were bad, and that event-driven was the way to go.

    Now, I believe in message-passing and in letting the OS kernel manage concurrency (but I don't necessarily believe in threads, it's just what I happen to need in order to get efficient message-passing inside a concurrent application that lets the kernel do its work). I wonder when that will become wrong? And what is going to become right?

    I like to think I had some vision, occasionally. For example, I once worked on an email processing system for FidoNet (thanks to Tom Jennings, a beacon of awesome!), and my friends called me a nutjob when I told them that I was designing the thing so that it was possible to send messages larger than two gigabytes. What I believed was that we'd get fantastic bandwidth someday where messages this large were feasible (we did! but that was an easy call), and that you'd be able to subscribe to television shows for some small sum, where they would send them to you by email and you'd watch them at your convenience. That's never gonna happen, they said! Ha! HTTP (which I think is used in the iTunes Store) uses the very same chunked encoding that I put in my design back then...

    Note that in some cases, I was partly right, but the world changed, and what was right became wrong. For example, the 32-bit variant of Borland Pascal, Delphi, is actually a pretty nice language (ask apenwarr!), and while it isn't going to beat C++ in system programming, like I believed it could, it's giving it a really hard time in Windows application programming, and that level of success despite being an almost entirely proprietary platform is quite amazing. Even Microsoft is buckling under the reality that openness is good for language platforms, trying to get as many people from the outside contributing to .NET (another thing to note: C# was mainly designed by some of the Delphi designers). Imagine what could happen if Borland came to its senses and spat out a Delphi GCC front-end (and used it in their products, making it "the real one", not some afterthought)?

    I doubt that's going to happen, though. For application development, I think it's more likely that "scripting languages" like Ruby, Python and JavaScript are going to reach up and take this away from insanely annoying compiled languages like C++ (and maybe even Java).

    But hey, what do I know? I once thought RPC was going to be the future!
              Following Up On The End Of The World   
    Being the end of the world and all, I figure I should go into a bit more details, especially as [info]omnifarious went as far as commenting on this life-altering situation.

    He's unfortunately correct about a shared-everything concurrency model being too hard for most people, mainly because the average programmer has a lizard's brain. There's not much I can do about that, unfortunately. We might be having an issue of operating systems here, rather than languages, for that aspect. We can fake it in our Erlang and Newsqueak runtimes, but really, we can only pile so many schedulers up on each others and convince ourselves that we still make sense. That theme comes back later in this post...

    [info]omnifarious's other complaint about threads is that they introduce latency, but I think he's got it backward. Communication introduces latency. Threads let the operating system reduce the overall latency by letting others run whenever possible, instead of being stuck. But if you want to avoid the latency of a specific request, then you have to avoid communication, not threads. Now, the thing with a shared-everything model is that it's kind of promiscuous, and not only is it tempting to poke around in memory that you shouldn't, but sometimes you even do it by accident, when multiple threads touch things that are on the same cache line (better allocators help with that, but you have to be careful still). More points in the "too hard for most people" column.

    His analogy of memcached with NUMA is also to the point. While memcached is at the cluster end of the spectrum, at the other end, there is a similar phenomenon with SMP systems that aren't all that symmetrical, multi-cores add another layer, and hyper-threading yet another. All of this should emphasize how complicated writing a scheduler that will do a good job of using this properly is, and that I'm not particularly thrilled at the idea of having to do it myself, when there's a number of rather clever people trying to do it in the kernel.

    What really won me over to threading is the implicit I/O. I got screwed over by paging, so I fought back (wasn't going to let myself be pushed around like that!), summoning the evil powers of mlockall(). That's where it struck me that I was forfeiting virtual memory, at this point, and figured that there had to be some way that sucked less. To use multiple cores, I was already going to have to use threads (assuming workloads that need a higher level of integration than processes), so I was already exposed to sharing and synchronization, and as I was working things out, it got clearer that this was one of those things where the worst is getting from one thread to more than one. I was already in it, why not go all the way?

    One of the things that didn't appeal to me in threads was getting preempted. It turns out that when you're not too greedy, you get rewarded! A single-threaded, event-driven program is very busy, because it always finds something interesting to do, and when it's really busy, it tends to exhaust its time slice. With a blocking I/O, thread-per-request design, most servers do not overrun their time slice before running into another blocking point. So in practice, the state machine that I tried so hard to implement in user-space works itself out, if I don't eat all the virtual memory space with huge stacks. With futexes, synchronization is really only expensive in case of contention, so that on a single-processor machine, it's actually just fine too! Seems ironic, but none of it would be useful without futexes and a good scheduler, both of which we only recently got.

    There's still the case of CPU-intensive work, which could introduce thrashing between threads and reduced throughput. I haven't figured out the best way to do this yet, but it could be kept under control with something like a semaphore, perhaps? Have it set to the maximum number of CPU-intensive tasks you want going, have them wait on it before doing work, post it when they're done (or when there's a good moment to yield)...
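
    For what it's worth, here is a rough sketch of that semaphore idea, assuming POSIX threads and semaphores; MAX_CPU_TASKS, cpu_slots and do_cpu_work() are all made-up names:

    #include <pthread.h>
    #include <semaphore.h>

    #define MAX_CPU_TASKS 4          /* e.g. one slot per core */

    static sem_t cpu_slots;          /* counts free CPU-intensive slots */

    static void do_cpu_work(void)
    {
            /* stand-in for the real number crunching */
    }

    static void *worker(void *arg)
    {
            (void)arg;
            sem_wait(&cpu_slots);    /* block until a CPU slot is free */
            do_cpu_work();
            sem_post(&cpu_slots);    /* hand the slot back: a natural yield point */
            return NULL;
    }

    int main(void)
    {
            pthread_t t;

            sem_init(&cpu_slots, 0, MAX_CPU_TASKS);
            pthread_create(&t, NULL, worker, NULL);
            pthread_join(t, NULL);
            return 0;
    }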

    [info]omnifarious is right about being careful about learning from what others have done. Clever use of shared_ptr and immutable data can be used as a form of RCU, and immutable data in general tends to make good friends with being replicated (safely) in many places.
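
    A loose Python rendering of that idea, for illustration only (the Config name is mine; in CPython the GIL plays the part of shared_ptr's atomic reference count):

    import threading

    class Config(object):
        """RCU-flavored holder: readers grab a reference to an immutable
        snapshot; a writer publishes a fresh one instead of mutating."""
        def __init__(self, snapshot):
            self._snapshot = snapshot          # treat as immutable
            self._write_lock = threading.Lock()

        def read(self):
            # a single attribute load is atomic; the reader keeps using
            # the snapshot it grabbed even if a newer one is published
            return self._snapshot

        def update(self, **changes):
            with self._write_lock:             # serialize writers only
                new = dict(self._snapshot)
                new.update(changes)
                self._snapshot = new           # publish; readers unaffected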

    One of the great ironies of this, in my opinion, is that Java got NIO almost just in time for it to be obsolete, while we were doing this in C and C++ since, well, almost forever. Sun has this trick for being right, yet doing it wrong; it's amazing!
               Estimation and inference in regression discontinuity designs with asymmetric kernels    
    Fé, Eduardo (2014) Estimation and inference in regression discontinuity designs with asymmetric kernels. Journal of Applied Statistics, 41 (11), pp. 2406-2417. ISSN 0266-4763
              GARUDA OS 2011 - INDONESIAN OPERATING SYSTEM   

    If up to now we have only known the Windows, Linux, and Mac OS operating systems, which are all foreign products, there is now another open-source operating system modified by Indonesians: a Linux distro called Garuda OS, just released this past May 20.
    Judging from its trailer, I honestly want to try it. Even though it is open source, it looks slick, resembles Windows 7, and has a fairly complete feature set. Want to try it? Let's check it out.
    Curious what the preview looks like?
    Better check out these screenshots:


    3D desktop with transparency





    The Garuda office suite


     
    The application start menu


    Want to download it? Hold on a moment... read the system requirements and features first.

    GARUDA FEATURES:
    · Operating system kernel: 2.6.38.7
    · Desktop: KDE 4.6.3
    · VGA driver support (Nvidia, ATI, Intel, etc.)
    · Wireless support for a wide range of network devices
    · Support for local and networked printers
    · Support for many popular multimedia formats (flv, mp4, avi, mov, mpg, mp3, wma, wav, ogg, etc. …)
    · Support for Indonesian and English plus more than 60 other world languages (Japanese, Arabic, Korean, Indian, Chinese, etc. …)
    · Support for installing many Windows-based applications and (online) games
    · Support for documents from popular Windows-based programs (such as Photoshop, CorelDraw, MS Office, AutoCAD, etc.)
    · NEW: Support for Indonesian script fonts.
    · NEW: Support for hundreds of Google Web Fonts.
    HARDWARE REQUIREMENTS:
    · Processor: Intel Atom; Intel or AMD Pentium IV class or better
    · Memory: minimum 512 MB RAM, 1 GB recommended
    · Hard disk: minimum 8 GB, 20 GB or more recommended if you want to install other programs
    · Video card: nVidia, ATI, Intel, SiS, Matrox, VIA
    · Sound card: Sound Blaster, AC97 or HDA cards
    APPLICATIONS:
    Office:
    · LibreOffice 3.3 – comes with thousands of clipart images, compatible with MS Office and supporting the SNI (Indonesian National Standard) document format
    · Scribus – desktop publishing (replaces Adobe InDesign, Page Maker)
    · Dia – diagrams / flowcharts (replaces MS Visio)
    · Planner – project management (replaces MS Project)
    · GnuCash, KMyMoney – finance programs (replace MYOB, MS Money, Quicken)
    · Kontact – Personal Information Manager / PIM
    · Okular, FBReader – universal document viewers
    · and so on …
    Internet:
    · Mozilla Firefox 4.0.1, Chromium, Opera – web browsers (replace Internet Explorer)
    · Mozilla Thunderbird – email program (replaces MS Outlook)
    · FileZilla – upload / download / FTP
    · kTorrent – BitTorrent program
    · DropBox – online storage program (2 GB free)
    · Choqok, Qwit, Twitux, Pino – microblogging applications
    · Google Earth – world explorer
    · Skype – video conferencing / VoIP
    · Gyachi, Pidgin – Internet messengers
    · xChat – chat / IRC program
    · Kompozer, Bluefish – web / HTML editors (replace Dreamweaver)
    · Miro – Internet TV
    · and so on …
    Multimedia:
    · GIMP – bitmap image editor (replaces Adobe Photoshop)
    · Inkscape – vector image editor (replaces CorelDraw)
    · Blender – 3D animation
    · Synfig, Pencil – 2D animation
    · XBMC – multimedia studio
    · kSnapshot – screen capture
    · Digikam – digital photo manager
    · Gwenview – photo viewing client
    · Amarok – audio player + Internet radio
    · Kaffeine – video / movie player
    · TVtime – television viewer
    · Audacity – audio editor
    · Cinelerra, Avidemux – video editors
    · and so on …
    Education:
    · Mathematics – algebra, geometry, plotters, fractions
    · Languages – English, Japanese, language games
    · Geography – world atlas, planetarium, quizzes
    · Chemistry – periodic table
    · Programming logic
    System administration:
    · DrakConf – computer control center
    · Synaptic – software package manager
    · Samba – Windows file sharing
    · Team Viewer – remote desktop & online meetings
    · Bleachbit – system cleaner
    · Back in Time – system backup / restore
    · and so on …
    Utilities:
    · Ark – file compression (replaces WinZip, WinRar)
    · K3b – CD/DVD burner (replaces Nero)
    · Dolphin – file manager
    · Cairo Dock – Mac OS-style menu dock
    · Compiz Fusion + Emerald
    · DOS + Windows emulators
    · and so on …
    Games:
    · 3D Game Maker
    · Mahjong, Tetris, Rubik, Billiard, Pinball, BlockOut, Sudoku, Reversi
    · Solitaire, Heart, Domino, Poker, Backgammon, Chess, Scrabble
    · Frozen Bubble, Flight Simulator, Tron, Karaoke
    · City Simulation, Fighter, Doom, Racing, Tremulous FPS
    · DJL, Play on Linux, Autodownloader – game managers / downloaders
    · and so on …

    Beyond the preinstalled programs above, there are more than 10,000 additional programs in all kinds of categories available from the Synaptic repository (program library).

    Want to try it? You can download it directly (3.6 GB).
    Click the link below to download:


    For more information about Garuda OS,
    click here.


    Special Thanks to DYTOSHARE

              Episode 053 - Listener Feedback   

    In this episode: LR fliers at LUGRadio Live; the Ohio Linux Fest; a Listener Tip on the Slax Live CD; a ton of audio and written listener comments and feedback, including one on the book "Linux Kernel in a Nutshell".


              Episode 005 - Version Numbering   

    In this episode: over 100 pins on the LR Frappr map; international Linux adoption; listener feedback; my two favorite beers; version numbering as it applies to the Linux kernel and Linux distributions; how the movie Toy Story is relevant to the Debian GNU/Linux distribution; Ubuntu naming and numbering conventions.


          An Age-Old Vulnerability in the Windows Kernel   
    When I read the first reports I thought it was a hoax, so I paid no attention at first. It turns out, however, to be true. It is a vulnerability made public last Tuesday on seclists.org and signed … Continue reading
          Freezing   
    I am asking for advice from knowledgeable people.

    Over the past few days the laptop has suddenly frozen completely three times. Only the cursor moved, but nothing could be clicked. Ctrl-Alt-Del did not work, and a reboot was impossible. There was no noise. Only a full power-off fixed it. Windows 7 Pro. The antivirus is silent; I additionally checked with Malwarebytes, and everything is clean.

    What could this be, and how can I prevent these antics in the future?

    UPD. Again. After waiting a while, a blue screen appeared with the message KERNEL_DATA_INPAGE_ERROR

    UPD2.


              Does Jim Zemlin harm the Linux Foundation?   
    Jim Zemlin recently asked in an InfoWorld article: Is Sun Solaris on its deathbed?

    In this article, Zemlin gives only Linux and Microsoft a chance for the
    future. He then continues with the well-known stereotypes we have
    already read many times before from people who believe the best way to
    support Linux is to belittle other OpenSource projects.

    If DTrace were a minor feature, as Zemlin claims, would FreeBSD, Apple
    and IBM adopt it? If ZFS were a minor feature, would FreeBSD and Apple
    adopt it? DTrace and ZFS have been adopted by others because the people
    behind FreeBSD, Apple and IBM believe that they are important
    innovations and because the license is free enough to allow them
    to use DTrace and ZFS with their OS.

    At the same time, some people from the Linux camp still try to hide
    their missing will to integrate behind a so-called "license
    incompatibility". A license like the CDDL, which allows combining code
    under the CDDL with code under any other license, is supposedly
    incompatible with Linux? Do some people from the Linux camp really
    believe that the GPL is a non-free license? Well, the GPL is a free
    license and thus cannot require other projects to change their license
    if they are just delivered together with GPL code.

    There is no license incompatibility but a VFS incompatibility between
    ZFS and the Linux kernel. A code incompatibility can be resolved if there
    is a will.

    Some non-open-minded people cannot make a free license like the GPL
    non-free. POSIX-compliant operating systems (like Solaris) and systems
    that are similar to POSIX (like Linux) should not be enemies. People
    who develop OpenSource software should cooperate against non-POSIX
    systems like Microsoft's OS. People like Zemlin, who like to drive a
    wedge between different OpenSource projects, have no place in
    our world. They should resign to allow other open-minded people to
    take their place.

    Our OpenSource world does not need Zemlin, but visionary people who
    support OpenSource.
              SchilliX is real now   
    SchilliX is an OpenSolaris-based live CD and distribution that
    is intended to help people discover OpenSolaris. When installed
    on a hard drive, it also allows developers to develop and compile
    code in a pure OpenSolaris environment.

    After 4 months of hard work, the first OpenSolaris based
    UNIX distribution is ready for download at schillix.berlios.de.

    Well, I should mention that the project started in December 2003
    with the first discussions with Sun about a Solaris Live CD.
    Then in September 2004, there was an OpenSolaris summit in
    Santa Clara and the OpenSolaris Pilot started, with a growing
    number of people (in the end ~150) talking about the background.
    We needed to find a license, and Sun did a great job
    checking more than 9 million lines of code for encumbrances.

    Let me describe what OpenSolaris is and how SchilliX differs
    from it. OpenSolaris is currently the Sun O/N source
    tree for Solaris. This source tree is much more than a kernel,
    but a few things are missing in order to allow booting to
    multi-user mode. The following pieces of code are missing:

    Libm
    The source is part of the Sun compiler suite, but Sun
    did open-source a 1993 version for BSD-4.4Lite.
    The effort to port a recent FreeBSD version was 5 days.

    bzip2/gzip
    These programs are free software and needed for Solaris, so
    they need to be added

    The Netscape LDAP libs
    They are needed for PAM and must be compiled from sources...

    LibXml2
    This lib is a major prerequisite for SMF and needs to be
    compiled from sources.

    Some of the SMF tools
    are part of the Suninstall sources and needed to be replaced.

    Some small programs
    had to be developed to make booting from CD with little RAM possible.

    libz
    is of course also needed

    The NIC drivers from Masayuki Murayama
    are nice to have and have been added

    Unzip
    is nice to have and has been added

    Wget
    is nice to have and has been added

    /opt/schily/bin/*
    is nice to have and even needed for some of the
    Sun replacements. As /usr/ccs/bin/make is part
    of the Sun compiler sources, it had to be replaced
    by my 'smake', which is _the_ OpenSource "make"
    implementation that is closest to Sun make.

    The main goal was to implement as much source/binary
    compatibility with Sun Solaris as possible. Something
    that was not simple, given the missing libm.

    Download SchilliX from BerliOS and enjoy
    it. If you like it and would like to help
    us as a volunteer, please send me a mail...
              First pure OpenSolaris based boot CD   
    Today, I managed to get a first shell prompt from a pure
    OpenSolaris (x86) based boot CD.

    Solaris x86 now boots using GRUB and a multiboot-compliant
    kernel loader. Previous Solaris x86 versions booted using
    a closed-source 16-bit boot loader that roughly implemented
    an OpenFirmware interface to the kernel. For every boot device,
    there was a need to write and maintain a 16-bit driver.

    The boot CD I built has been set up completely from
    scratch, using only the compilation results. If you would like
    to help us work on SchilliX - the first OpenSolaris-based
    UNIX distribution - check schillix.berlios.de
    and write me a mail.
              Cultural wars (Dreams of a Linux Bigot)   
    Tom Adelstein recently wrote an article on LXer Linux News with the title

    Linux Threat Posed by Microsoft and Sun: In Your Dreams


    He claims that Linux keeps building momentum and that companies like
    Sun spread disinformation about Linux. As he is well informed, I would
    tend to believe him if his article contained less disinformation.

    Let us discuss the main disinformation he tries to spread. Note that he
    presents his disinformation as questions, so that he can later tell you
    that it was you who gave the answers. Asking questions in a suggestive
    way, however, is just a clever way to hide the fact that one is
    spreading disinformation.

  • "What percent of the Opensolaris.org project is actually made up of members of
    the Solaris team? And, does that constitute a community of developers or has Sun
    simply populated their so called community with Sun paid employees so that it
    looks like the broader open-source developers have embraced the project? "

    I am a member of the OpenSolaris Pilot and I know the people who are in
    the Pilot. There are a lot of highly skilled people from all over the world.
    We have people from USA: 70, India: 10, UK: 8, Germany: 7, France: 7, China: 5
    Australia: 5, Canada: 3, Poland: 2, Israel: 1, Belgium: 1, New Zealand: 1.

  • "What percent of Sun's infrastructure actually runs Linux internally?"

    From what I've seen, it seems to be a negligible amount (much less than 1%).

  • Did Sun roll out JDS Linux internally as described or did Sun only offer
    it to Laptop users? Which version does Sun use?

    The Java Desktop System is not a Linux distribution but a GUI with
    better multimedia support. JDS is part of Solaris 10 and may be
    selected as the default Solaris 10 desktop.

  • What do you use on your desktop and laptop, Jonathan Schwartz?

    From the "cultural" experiences I got from looking inside Sun, I would
    expect him to run Solaris 10 on a Ferrari amd64 notebook.

    Sun does not run a major risk when competing with Linux; going back to
    Solaris brings Sun back to its roots, back to the ideas of a company
    that has been very successful with operating system design,
    implementation and support.

    Linux is currently suffering from a lack of competition in the
    OpenSource OS market. There are other OpenSource operating systems,
    but they do not have good marketing. When OpenSolaris is ready for
    everyone, this will change dramatically, and it seems that the Linux
    bigots are in fear of that date.

    More and more people who work on the Linux kernel are getting tired of
    the way development is managed. Even people like Alan Cox now warn that
    there is a need for a change.

    People don't like Linux being a "Kingdom" where a monarch or a small
    number of courtiers govern the future. People with hacking skills would
    rather make sure decisions are technology driven. Everybody who has the
    needed skill/knowledge for a specific subject should get the chance to
    be listened to.

    After OpenSolaris becomes available to everyone in Q2 2005, Solaris
    will no longer be governed by Sun but by the CAB, a group of 5 people,
    3 of them not from Sun. The election period ends today and the names
    will be announced soon...

    My impression is that the real fear of the Linux bigots is that Sun
    does not want to dominate Solaris the way Linus Torvalds dominates
    Linux.
          SPJ backs student newspaper’s plan to appeal open records decision   

    Journalists say a judge’s ruling supporting the University of Kentucky in a lawsuit against its independent student newspaper, The Kentucky Kernel, raises concerns about transparency and open government. Continue reading


              is there a way to check for updates from cli for scripting ex conky?   
    Is there a way to check for updates from the CLI for scripting (for example conky)? I would like to know about the newest kernel and security updates right when they are released (by using conky), but "apt upgrade -s | grep linux-image" will always return "WARNING: apt does not have a stable CLI interface....
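
    A sketch of one possible workaround (not a tested recipe): apt-get, unlike apt, keeps a script-stable interface, so a hypothetical helper like the following could feed conky without the warning:

    import subprocess

    def pending_kernel_updates():
        # -s simulates the upgrade without touching the system; apt-get's
        # output, unlike apt's, is considered stable for scripting
        out = subprocess.check_output(['apt-get', '-s', 'upgrade'])
        return [line for line in out.decode().splitlines()
                if line.startswith('Inst') and 'linux-image' in line]

    if __name__ == '__main__':
        print('\n'.join(pending_kernel_updates()) or 'no kernel updates')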
          Principal Software Engineer (Kernel Networking/Driver Development)   

              More on the Grammys and Spotify    
    I think one of the biggest things that keeps us divided as humans is our unwillingness to be wrong, and the inability of people to see different sides of an argument. It's a sign of emotional intelligence to be able to hear other opinions and to leave room for your own opinion to evolve.

    I posted this rant yesterday. A friend came at me with proverbial fists raised, telling me this attitude was a horribly entitled one, and that I'm an artist, and essentially I should know better. I respect her opinion wholly, so I was open to what she was saying. We chatted for a bit about it and it turns out, I was wrong about what was said at the Grammys, and my response was to what I thought was said. When I read that the president of the Recording Academy said "Isn't music worth more than a penny?", I took it to be an admonition for all Spotify users, even paid ones. But as my friend pointed out, that's not what they were saying. In fact, they encourage paid subscriptions to music. It's the free streaming (which Spotify offers if you're willing to listen to ads) that they don't like.

    I know music is available to me for free if I want it for free, but I don't want it for free, so I pay for Spotify and will gladly buy any music that is not available there. I do not feel entitled to free music. At all. Apparently, even the paid Spotify subscription isn't all that fair to artists. And what my friend suggested was that Common and Neil Portnow were preparing us, the consumers, for an inevitable price hike in services like Spotify, because the current model isn't sustainable. And it's not that it's unsustainable for people like Common. It's unsustainable for lesser-known musicians, who need the money to make a living, not to buy a 4000 square foot house in Beverly Hills. And yes, to me, making music is as important a job as any, and it should be a way for talented musicians to make a living.

    It's hard for me to see a room full of millionaires applauding the idea that musicians don't make enough money. But I also understand that there is no better place to say it, because the reach is wide. I forget that people still steal media (what is this, the early aughts?!), but TONS of people do it. And again, I am against this. Music, films, television - these are all products, and you have to pay for products.

    For the record (MUSIC PUN), I'd happily pay more money for Spotify. And I will always buy music I want that is unavailable on Spotify, and I will even buy stuff on vinyl if I like it enough. I watch stuff on YouTube that I can't find anywhere else - mostly live performances.

    Not that I have a huge readership to deal with here, but I apologize if my post was perceived differently than I intended. Are we cool now?
              Grammys and Spotify   
    Preface: when you commit to writing 30 posts in 30 days, no topic is off limits. So now, I present to you a blog post on a topic I most likely never would have written about otherwise: the Grammys, and more specifically, streaming music services. 

    I didn't watch the Grammys, because the last time I watched them, I felt like I was 100 years old. It was like lights out in the owlery. "Who is that? Now who is THAT? Oh come on, who is that person?!" Yeah, yeah, Hamilton. I'll just watch it on YouTube. I pay to stream the CD and I'll buy a ticket for it when it's in Los Angeles. That's all you get from me. 

    Apparently, someone I don't know (I just looked it up - it was Neil Portnow, the president of the Recording Academy) made some remark on the Grammys about how music is so great and aren't our singles worth more than a penny? Subtweeting at Spotify, it seems. I am pretty anti-piracy. I pay for movies, I pay for music. I want artists to make money off of their stuff. I always see these things online about how you can get cable for free through a so-and-so device, but I don't want free cable, because someone made that cable, and it's worth more than nothing. Well, some of it is, anyway.

    I understand wanting to make money off of your music. I get that. But I'm sorry if I don't feel bad that you don't think it's enough money. Make better music. How about that? Also, HOW MUCH MONEY DO YOU NEED TO MAKE? You all seem to be doing fine, arriving at your awards ceremony in limos wearing beautiful clothes, sitting courtside at basketball games, vacationing wherever and whenever you want, talking on the red carpet about what you're wearing when so many of us DGAF. Maybe your music isn't as highly valued as it once was. Times change. Maybe we don't want to pay $15 for a CD that has 3 good songs on it. Get on the bus and stop your bitching, or quit making music. It's not up to us to make sure you still live the life you want to live. If you don't like Spotify, stay off of it. Isn't it important to get as many ears on your music as possible? It's not like we're talking poverty wages here, folks. These artists are doing just fine. How much money did the person who made your iPhone make? Or your sneakers? I'm about to pay $500 for two Beyonce concert tickets, so I think you'll all manage just fine.

    I'm not someone who bitches that celebrities make too much money and teachers don't make enough. I mean, that is OBVIOUSLY true. But celebrities are part of a for-profit machine, and if they're making a lot of money, it's because they are worth that much money to the people putting out their art. If Julia Roberts is making $20 million a movie, then it's because her movies tend to make a lot of money in the theatre. No one is getting rich while leaving the makers of this art poor. 

    Yes, you work hard. Yes, you are reaching a wide audience. Yes, you deserve your financial success. But when you start complaining that you're not making ENOUGH, that your art has somehow devalued, and you're implying that it's on me to help keep your "dying" art form alive, I check out. Nope. I'll be over here, in a dual-income two-bedroom RENTAL HOUSE, with my kids in full-time daycare, and my travel budget maxed out on trips to Ohio. 

    It's hard out here for a pimp. The future is different than imagined - it's the American dream. 

    **Edited to add this article, which explains artists' issues with Spotify, and why the model of streaming music is broken.

    ***Also, edited to add the link to my follow-up post on this subject.


              Dustin Kirkland: Still have questions about Bash and Ubuntu on Windows?   
    Still have questions about Ubuntu on Windows?
    Watch this Channel 9 session, recorded live at Build this week, hosted by Scott Hanselman, with questions answered by Windows kernel developers Russ Alexander, Ben Hillis, and myself representing Canonical and Ubuntu!

    For fun, watch the crowd develop in the background over the 30 minute session!

    And here's another recorded session with a demo by Rich Turner and Russ Alexander.  The real light bulb goes off at about 8:01.


    Cheers,
    :-Dustin
              Manuel de la Pena: ReadDirectoryChangesW and Twisted   

    Last week was probably one of the best coding sprints I have had since I started working at Canonical, I'm serious! I had the luck to pair program with alecu on the FilesystemMonitor that we use in Ubuntu One on Windows. The implementation has improved so much that I wanted to blog about it and show it as an example of how to hook the ReadDirectoryChangesW call from the Win32 API into twisted, so that you can process the events using twisted, which is bloody cool.

    We have reduced the implementation of Watch and WatchManager to match our needs and trimmed the API provided, since we do not use all of the API provided by pyinotify. The Watch implementation is as follows:

    class Watch(object):
        """Implement the same functions as pyinotify.Watch."""
     
        def __init__(self, watch_descriptor, path, mask, auto_add, processor,
            buf_size=8192):
            super(Watch, self).__init__()
            self.log = logging.getLogger('ubuntuone.SyncDaemon.platform.windows.' +
                'filesystem_notifications.Watch')
            self.log.setLevel(TRACE)
            self._processor = processor
            self._buf_size = buf_size
            self._wait_stop = CreateEvent(None, 0, 0, None)
            self._overlapped = OVERLAPPED()
            self._overlapped.hEvent = CreateEvent(None, 0, 0, None)
            self._watching = False
            self._descriptor = watch_descriptor
            self._auto_add = auto_add
            self._ignore_paths = []
            self._cookie = None
            self._source_pathname = None
            self._process_thread = None
            # remember the subdirs we have so that when we have a delete we can
            # check if it was a remove
            self._subdirs = []
            # ensure that we work with an abspath and that we can deal with
            # long paths over 260 chars.
            if not path.endswith(os.path.sep):
                path += os.path.sep
            self._path = os.path.abspath(path)
            self._mask = mask
            # this deferred is fired when the watch has started monitoring
            # a directory from a thread
            self._watch_started_deferred = defer.Deferred()
     
        @is_valid_windows_path(path_indexes=[1])
        def _path_is_dir(self, path):
            """Check if the path is a dir and update the local subdir list."""
            self.log.debug('Testing if path %r is a dir', path)
            is_dir = False
            if os.path.exists(path):
                is_dir = os.path.isdir(path)
            else:
                self.log.debug('Path "%s" was deleted subdirs are %s.',
                    path, self._subdirs)
                # we removed the path, we look in the internal list
                if path in self._subdirs:
                    is_dir = True
                    self._subdirs.remove(path)
            if is_dir:
                self.log.debug('Adding %s to subdirs %s', path, self._subdirs)
                self._subdirs.append(path)
            return is_dir
     
        def _process_events(self, events):
            """Process the events form the queue."""
            # do not do it if we stop watching and the events are empty
            if not self._watching:
                return
     
            # we transform the events to be the same as the one in pyinotify
            # and then use the proc_fun
            for action, file_name in events:
                if any([file_name.startswith(path)
                            for path in self._ignore_paths]):
                    continue
                # map the windows events to the pyinotify ones; this is dirty but
                # makes the multiplatform story better, linux was first :P
                syncdaemon_path = get_syncdaemon_valid_path(
                                            os.path.join(self._path, file_name))
                is_dir = self._path_is_dir(os.path.join(self._path, file_name))
                if is_dir:
                    self._subdirs.append(file_name)
                mask = WINDOWS_ACTIONS[action]
                head, tail = os.path.split(file_name)
                if is_dir:
                    mask |= IN_ISDIR
                event_raw_data = {
                    'wd': self._descriptor,
                    'dir': is_dir,
                    'mask': mask,
                    'name': tail,
                    'path': '.'}
                # by the way in which the win api fires the events we know for
                # sure that no move events will be added in the wrong order, this
                # is kind of hacky, I don't like it too much
                if WINDOWS_ACTIONS[action] == IN_MOVED_FROM:
                    self._cookie = str(uuid4())
                    self._source_pathname = tail
                    event_raw_data['cookie'] = self._cookie
                if WINDOWS_ACTIONS[action] == IN_MOVED_TO:
                    event_raw_data['src_pathname'] = self._source_pathname
                    event_raw_data['cookie'] = self._cookie
                event = Event(event_raw_data)
                # FIXME: event deduces the pathname wrong and we need to manually
                # set it
                event.pathname = syncdaemon_path
                # add the event only if we do not have an exclude filter or
                # the exclude filter returns False, that is, the event will not
                # be excluded
                self.log.debug('Event is %s.', event)
                self._processor(event)
     
        def _call_deferred(self, f, *args):
            """Executes the defeered call avoiding possible race conditions."""
            if not self._watch_started_deferred.called:
                f(args)
     
        def _watch(self):
            """Watch a path that is a directory."""
            # we are going to be using ReadDirectoryChangesW, which requires
            # a directory handle and the mask to be used.
            handle = CreateFile(
                self._path,
                FILE_LIST_DIRECTORY,
                FILE_SHARE_READ | FILE_SHARE_WRITE,
                None,
                OPEN_EXISTING,
                FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OVERLAPPED,
                None)
            self.log.debug('Watching path %s.', self._path)
            while True:
                # important information to know about the parameters:
                # param 1: the handle to the dir
                # param 2: the size to be used in the kernel to store events
                # that might be lost while the call is being performed. This
                # is complicated to fine tune, since if you create lots of watchers
                # you might use too much memory and make your OS BSOD
                buf = AllocateReadBuffer(self._buf_size)
                try:
                    ReadDirectoryChangesW(
                        handle,
                        buf,
                        self._auto_add,
                        self._mask,
                        self._overlapped,
                    )
                    reactor.callFromThread(self._call_deferred,
                        self._watch_started_deferred.callback, True)
                except error:
                    # the handle is invalid, this may occur if we decided to
                    # stop watching before we go in the loop, lets get out of it
                    reactor.callFromThread(self._call_deferred,
                        self._watch_started_deferred.errback, error)
                    break
                # wait for an event and ensure that we either stop or read the
                # data
                rc = WaitForMultipleObjects((self._wait_stop,
                                             self._overlapped.hEvent),
                                             0, INFINITE)
                if rc == WAIT_OBJECT_0:
                    # Stop event
                    break
                # if we continue, it means that we got some data, lets read it
                data = GetOverlappedResult(handle, self._overlapped, True)
                # let's read the data and store it in the results
                events = FILE_NOTIFY_INFORMATION(buf, data)
                self.log.debug('Events from ReadDirectoryChangesW are %s', events)
                reactor.callFromThread(self._process_events, events)
     
            CloseHandle(handle)
     
        @is_valid_windows_path(path_indexes=[1])
        def ignore_path(self, path):
            """Add the path of the events to ignore."""
            if not path.endswith(os.path.sep):
                path += os.path.sep
            if path.startswith(self._path):
                path = path[len(self._path):]
                self._ignore_paths.append(path)
     
        @is_valid_windows_path(path_indexes=[1])
        def remove_ignored_path(self, path):
            """Reaccept path."""
            if not path.endswith(os.path.sep):
                path += os.path.sep
            if path.startswith(self._path):
                path = path[len(self._path):]
                if path in self._ignore_paths:
                    self._ignore_paths.remove(path)
     
        def start_watching(self):
            """Tell the watch to start processing events."""
            for current_child in os.listdir(self._path):
                full_child_path = os.path.join(self._path, current_child)
                if os.path.isdir(full_child_path):
                    self._subdirs.append(full_child_path)
            # start a different thread to watch the path; the events are
            # processed in the reactor thread.
            self.log.debug('Start watching path.')
            self._watching = True
            reactor.callInThread(self._watch)
            return self._watch_started_deferred
     
        def stop_watching(self):
            """Tell the watch to stop processing events."""
            self.log.info('Stop watching %s', self._path)
            SetEvent(self._wait_stop)
            self._watching = False
            self._subdirs = []
     
        def update(self, mask, auto_add=False):
            """Update the info used by the watcher."""
            self.log.debug('update(%s, %s)', mask, auto_add)
            self._mask = mask
            self._auto_add = auto_add
     
        @property
        def path(self):
            """Return the patch watched."""
            return self._path
     
        @property
        def auto_add(self):
            return self._auto_add

    The important details of this implementation are the following:

    Use a deferred to notify that the watch started.

    During our tests we noticed that the start watch function was slow, which meant that between the point when we start watching the directory and the point when the thread actually started we would be losing events. The function now returns a deferred that will be fired when ReadDirectoryChangesW has been called, which ensures that no events will be lost. The interesting parts are the following:

    First, define the deferred:

           # this deferred is fired when the watch has started monitoring
            # a directory from a thread
            self._watch_started_deferred = defer.Deferred()

    Call the deferred either when we successfully started watching:

                buf = AllocateReadBuffer(self._buf_size)
                try:
                    ReadDirectoryChangesW(
                        handle,
                        buf,
                        self._auto_add,
                        self._mask,
                        self._overlapped,
                    )
                    reactor.callFromThread(self._call_deferred,
                        self._watch_started_deferred.callback, True)

    Call it when we do have an error:

                except error:
                    # the handle is invalid, this may occur if we decided to
                    # stop watching before we go in the loop, lets get out of it
                    reactor.callFromThread(self._call_deferred,
                        self._watch_started_deferred.errback, error)
                    break

    Threading and firing the reactor.

    There is an interesting detail to take care of in this code. We have to ensure that the deferred is not fired more than once; to do that, you have to callFromThread a function that will fire it only when it has not already been fired, like this:

        def _call_deferred(self, f, *args):
            """Executes the defeered call avoiding possible race conditions."""
            if not self._watch_started_deferred.called:
                f(args)

    If you do not do the above but instead use the code below, you will have a race condition in which the deferred is called more than once.

                buf = AllocateReadBuffer(self._buf_size)
                try:
                    ReadDirectoryChangesW(
                        handle,
                        buf,
                        self._auto_add,
                        self._mask,
                        self._overlapped,
                    )
                    if not self._watch_started_deferred.called:
                        reactor.callFromThread(self._watch_started_deferred.callback, True)
                except error:
                    # the handle is invalid, this may occur if we decided to
                    # stop watching before we go in the loop, lets get out of it
                    if not self._watch_started_deferred.called:
                        reactor.callFromThread(self._watch_started_deferred.errback, error)
                    break

    Execute the processing of events in the reactor main thread.

    Alecu has bloody great ideas way too often, and this is one of his. The processing of the events is queued to be executed in the twisted reactor main thread, which reduces the number of threads we use and ensures that the events are processed in the correct order.

                # if we continue, it means that we got some data, lets read it
                data = GetOverlappedResult(handle, self._overlapped, True)
                # let's read the data and store it in the results
                events = FILE_NOTIFY_INFORMATION(buf, data)
                self.log.debug('Events from ReadDirectoryChangesW are %s', events)
                reactor.callFromThread(self._process_events, events)

    Just for this, the flight to Buenos Aires was well worth it!!! If anyone wants to see the full code, feel free to look at ubuntuone.platform.windows in ubuntuone.


              Manuel de la Pena: Exasperated by the Windows filesystem   

    At the moment some of the tests (and I cannot point out which ones) of ubuntuone-client fail when they are run on Windows. The reason for this is the way in which we get the notifications out of the file system and the way the tests are written. Before I blame the OS or the tests, let me explain a number of facts about the Windows filesystem and the possible ways to interact with it.

    To be able to get file system changes from the OS the Win32 API provides the following:

    SHChangeNotifyRegister

    This function was broken until Vista, when it was fixed. Unfortunately, AFAIK we also support Windows XP, which means that we cannot trust this function. On top of that, taking this path means that we can have a performance issue. Because the function is built on top of Windows messages, if too many changes occur the sync daemon would start receiving roll-up messages that just state that something changed, and it would be up to the sync daemon to decide what really happened. Therefore we can all agree that this is a no-no, right?

    FindFirstChangeNotification

    This is a really easy function to use which is based on ReadDirectoryChangesW (I think it is a simple wrapper around it) that lets you know that something changed but gives no information about what changed. Because it is based on ReadDirectoryChangesW, it suffers from the same issues.

    ReadDirectoryChangesW

    This is by far the most common way to get change notifications from the system. Now, in theory there are two possible cases that can go wrong and affect the events raised by this function:

    1. There are too many events, the buffer gets overloaded, and we start losing events. A simple way to solve this issue is to process the events in a different thread ASAP so that we can keep reading the changes (see the sketch right after this list).
    2. We use the sync version of the function, which means that we could have the following issues:
      • Blue screen of death, because we used too much memory from the kernel space.
      • We cannot close the handles used to watch the changes in the directories. This leaves the threads blocked.
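
    As a rough sketch of that first mitigation (illustrative only; read_changes and handle_event are hypothetical stand-ins for the real ReadDirectoryChangesW loop and the daemon's event processor):

    import threading
    try:
        import Queue as queue  # Python 2
    except ImportError:
        import queue

    events = queue.Queue()

    def watcher(read_changes):
        # drain the OS as fast as possible so the kernel-side buffer
        # never overflows; the heavy work happens in the other thread
        while True:
            for event in read_changes():
                events.put(event)

    def processor(handle_event):
        while True:
            handle_event(events.get())  # slow processing happens here

    # wiring: threading.Thread(target=watcher, args=(read_changes,)), etc.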

    As I mentioned, this is the theory, and it therefore makes perfect sense to choose this option as the way to get notified of changes until… you hit a great little feature of Windows called write-behind caching. The idea of write-behind caching is the following:

    When you attempt to write a new file on your HD, Windows does not directly modify the HD. Instead it makes a note of the fact that your intention is to write on disk and saves your changes in memory. Isn't that smart?

    Well, that lovely feature comes enabled by default, AFAIK, from XP onwards. Any smart person would wonder how that interacts with FindFirstChangeNotification/ReadDirectoryChangesW; well, after some work, here is what I have managed to find out:

    The IO Manager (internal to the kernel) is queueing up disk-write requests in an internal buffer, and the actual changes are not physically committed until some condition is met, which I believe is part of the write-behind caching feature. The problem appears to be that the user-space callback via FileSystemWatcher/ReadDirectoryChanges does not occur when disk-write requests are inserted into the queue, but rather when they are leaving the queue and being physically committed to disk. From what I have been able to gather through observation, the lifetime of the queue is based on:

    1. Whether more writes are being inserted in the queue.
    2. Whether another app requests a read of an item in the queue.

    This means that when using FileSystemWatcher/ReadDirectoryChanges, the events are fired only when the changes are actually committed, and for a user-space program this follows a non-deterministic process (insert Spanish swearing here). A way to work around this issue is to use the FlushFileBuffers function on the volume, which does need admin rights, yeah!
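
    Forcing the queue to drain looks roughly like this with pywin32 (a sketch under the assumption of an elevated process; flush_volume and the drive letter are mine):

    from win32file import (CreateFile, FlushFileBuffers, CloseHandle,
                           GENERIC_WRITE, FILE_SHARE_READ, FILE_SHARE_WRITE,
                           OPEN_EXISTING)

    def flush_volume(letter='C'):
        # opening \\.\C: (the raw volume) is what requires admin rights
        handle = CreateFile(r'\\.\%s:' % letter,
                            GENERIC_WRITE,
                            FILE_SHARE_READ | FILE_SHARE_WRITE,
                            None, OPEN_EXISTING, 0, None)
        try:
            # force every pending write-behind buffer for the volume to
            # disk, so ReadDirectoryChangesW finally delivers the events
            FlushFileBuffers(handle)
        finally:
            CloseHandle(handle)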

    Change Journal records

    Well, this allows tracking the changes that have been committed on an NTFS volume (which means that we do not have support for FAT). This technique keeps track of the changes using an update sequence number, in an interesting manner. At first look, although parsing the data is hard, this solution seems to be very similar to the one used by pyinotify, and therefore someone will say: hey, let's just tell twisted to do a select on that file and read the changes. Well, no, it is not that easy: files do not provide the functionality used for select, just sockets (http://msdn.microsoft.com/en-us/library/aa363803%28VS.85%29.aspx) /me jumps with happiness

    File system filter

    Well, this is an easy one to summarize: you have to write a driver-like piece of code. That means C, COM, and being able to crash the entire system with a nice blue screen (although I could change the color to aubergine before we crash).

    Conclusion

    At this point I hope I have convinced a few of you that ReadDirectoryChangesW is the best option to take, but you might be wondering why I mentioned the write-behind caching feature. Well, here comes my complaint about the tests. We do use the real file system notifications for testing, and the trial test cases do have a timeout! Those two facts, plus the lovely write-behind caching feature, mean that the tests on Windows fail just because the bloody events are not raised until they leave the queue of the IO manager.


              Manuel de la Pena: A look alike pyinotify for Windows   

    Before I introduce the code, let me say that this is not a 100% exact implementation of the interfaces that can be found in pyinotify, but an implementation of a subset that matches my needs. The main idea of this post is to give an example of how to implement such a library for Windows while reusing the code that can be found in pyinotify.

    Now that I have excused myself, let's get into the code. First of all, there are a number of classes from pyinotify that we can use in our code. That subset of classes is the code below, which I grabbed from pyinotify git:

    #!/usr/bin/env python
     
    # pyinotify.py - python interface to inotify
    # Copyright (c) 2010 Sebastien Martini <seb@dbzteam.org>
    #
    # Permission is hereby granted, free of charge, to any person obtaining a copy
    # of this software and associated documentation files (the "Software"), to deal
    # in the Software without restriction, including without limitation the rights
    # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
    # copies of the Software, and to permit persons to whom the Software is
    # furnished to do so, subject to the following conditions:
    #
    # The above copyright notice and this permission notice shall be included in
    # all copies or substantial portions of the Software.
    #
    # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
    # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
    # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
    # THE SOFTWARE.
    """Platform agnostic code grabed from pyinotify."""
    import logging
    import os
     
    COMPATIBILITY_MODE = False
     
     
    class RawOutputFormat:
        """
        Format string representations.
        """
        def __init__(self, format=None):
            self.format = format or {}
     
        def simple(self, s, attribute):
            if not isinstance(s, str):
                s = str(s)
            return (self.format.get(attribute, '') + s +
                    self.format.get('normal', ''))
     
        def punctuation(self, s):
            """Punctuation color."""
            return self.simple(s, 'normal')
     
        def field_value(self, s):
            """Field value color."""
            return self.simple(s, 'purple')
     
        def field_name(self, s):
            """Field name color."""
            return self.simple(s, 'blue')
     
        def class_name(self, s):
            """Class name color."""
            return self.format.get('red', '') + self.simple(s, 'bold')
     
    output_format = RawOutputFormat()
     
     
    class EventsCodes:
        """
        Set of codes corresponding to each kind of events.
        Some of these flags are used to communicate with inotify, whereas
        the others are sent to userspace by inotify notifying some events.
     
        @cvar IN_ACCESS: File was accessed.
        @type IN_ACCESS: int
        @cvar IN_MODIFY: File was modified.
        @type IN_MODIFY: int
        @cvar IN_ATTRIB: Metadata changed.
        @type IN_ATTRIB: int
        @cvar IN_CLOSE_WRITE: Writtable file was closed.
        @type IN_CLOSE_WRITE: int
        @cvar IN_CLOSE_NOWRITE: Unwrittable file closed.
        @type IN_CLOSE_NOWRITE: int
        @cvar IN_OPEN: File was opened.
        @type IN_OPEN: int
        @cvar IN_MOVED_FROM: File was moved from X.
        @type IN_MOVED_FROM: int
        @cvar IN_MOVED_TO: File was moved to Y.
        @type IN_MOVED_TO: int
        @cvar IN_CREATE: Subfile was created.
        @type IN_CREATE: int
        @cvar IN_DELETE: Subfile was deleted.
        @type IN_DELETE: int
        @cvar IN_DELETE_SELF: Self (watched item itself) was deleted.
        @type IN_DELETE_SELF: int
        @cvar IN_MOVE_SELF: Self (watched item itself) was moved.
        @type IN_MOVE_SELF: int
        @cvar IN_UNMOUNT: Backing fs was unmounted.
        @type IN_UNMOUNT: int
        @cvar IN_Q_OVERFLOW: Event queued overflowed.
        @type IN_Q_OVERFLOW: int
        @cvar IN_IGNORED: File was ignored.
        @type IN_IGNORED: int
        @cvar IN_ONLYDIR: only watch the path if it is a directory (new
                          in kernel 2.6.15).
        @type IN_ONLYDIR: int
        @cvar IN_DONT_FOLLOW: don't follow a symlink (new in kernel 2.6.15).
                              IN_ONLYDIR we can make sure that we don't watch
                              the target of symlinks.
        @type IN_DONT_FOLLOW: int
        @cvar IN_MASK_ADD: add to the mask of an already existing watch (new
                           in kernel 2.6.14).
        @type IN_MASK_ADD: int
        @cvar IN_ISDIR: Event occurred against dir.
        @type IN_ISDIR: int
        @cvar IN_ONESHOT: Only send event once.
        @type IN_ONESHOT: int
        @cvar ALL_EVENTS: Alias for considering all of the events.
        @type ALL_EVENTS: int
        """
     
        # The idea here is 'configuration-as-code' - this way, we get
        # our nice class constants, but we also get nice human-friendly text
        # mappings to do lookups against as well, for free:
        FLAG_COLLECTIONS = {'OP_FLAGS': {
            'IN_ACCESS'        : 0x00000001,  # File was accessed
            'IN_MODIFY'        : 0x00000002,  # File was modified
            'IN_ATTRIB'        : 0x00000004,  # Metadata changed
            'IN_CLOSE_WRITE'   : 0x00000008,  # Writable file was closed
            'IN_CLOSE_NOWRITE' : 0x00000010,  # Unwritable file closed
            'IN_OPEN'          : 0x00000020,  # File was opened
            'IN_MOVED_FROM'    : 0x00000040,  # File was moved from X
            'IN_MOVED_TO'      : 0x00000080,  # File was moved to Y
            'IN_CREATE'        : 0x00000100,  # Subfile was created
            'IN_DELETE'        : 0x00000200,  # Subfile was deleted
            'IN_DELETE_SELF'   : 0x00000400,  # Self (watched item itself)
                                              # was deleted
            'IN_MOVE_SELF'     : 0x00000800,  # Self(watched item itself) was moved
            },
                            'EVENT_FLAGS': {
            'IN_UNMOUNT'       : 0x00002000,  # Backing fs was unmounted
            'IN_Q_OVERFLOW'    : 0x00004000,  # Event queued overflowed
            'IN_IGNORED'       : 0x00008000,  # File was ignored
            },
                            'SPECIAL_FLAGS': {
            'IN_ONLYDIR'       : 0x01000000,  # only watch the path if it is a
                                              # directory
            'IN_DONT_FOLLOW'   : 0x02000000,  # don't follow a symlink
            'IN_MASK_ADD'      : 0x20000000,  # add to the mask of an already
                                              # existing watch
            'IN_ISDIR'         : 0x40000000,  # event occurred against dir
            'IN_ONESHOT'       : 0x80000000,  # only send event once
            },
                            }
     
        def maskname(mask):
            """
            Returns the event name associated to mask. IN_ISDIR is appended to
            the result when appropriate. Note: only one event is returned, because
            only one event can be raised at a given time.
     
            @param mask: mask.
            @type mask: int
            @return: event name.
            @rtype: str
            """
            ms = mask
            name = '%s'
            if mask & IN_ISDIR:
                ms = mask - IN_ISDIR
                name = '%s|IN_ISDIR'
            return name % EventsCodes.ALL_VALUES[ms]
     
        maskname = staticmethod(maskname)
     
     
    # So let's now turn the configuration into code
    EventsCodes.ALL_FLAGS = {}
    EventsCodes.ALL_VALUES = {}
    for flagc, valc in EventsCodes.FLAG_COLLECTIONS.items():
        # Make the collections' members directly accessible through the
        # class dictionary
        setattr(EventsCodes, flagc, valc)
     
        # Collect all the flags under a common umbrella
        EventsCodes.ALL_FLAGS.update(valc)
     
        # Make the individual masks accessible as 'constants' at globals() scope
        # and masknames accessible by values.
        for name, val in valc.items():
            globals()[name] = val
            EventsCodes.ALL_VALUES[val] = name
     
     
    # all 'normal' events
    ALL_EVENTS = reduce(lambda x, y: x | y, EventsCodes.OP_FLAGS.values())
    EventsCodes.ALL_FLAGS['ALL_EVENTS'] = ALL_EVENTS
    EventsCodes.ALL_VALUES[ALL_EVENTS] = 'ALL_EVENTS'
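     
    # A quick illustration (added note, not part of pyinotify): after the loop
    # above the masks are usable in both directions, e.g.
    #   EventsCodes.maskname(IN_CREATE | IN_ISDIR)  ->  'IN_CREATE|IN_ISDIR'
    #   EventsCodes.ALL_VALUES[IN_DELETE]           ->  'IN_DELETE'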
     
     
    class _Event:
        """
        Event structure, represents events raised by the system. This
        is the base class and should be subclassed.
     
        """
        def __init__(self, dict_):
            """
            Attach attributes (contained in dict_) to self.
     
            @param dict_: Set of attributes.
            @type dict_: dictionary
            """
            for tpl in dict_.items():
                setattr(self, *tpl)
     
        def __repr__(self):
            """
            @return: Generic event string representation.
            @rtype: str
            """
            s = ''
            for attr, value in sorted(self.__dict__.items(), key=lambda x: x[0]):
                if attr.startswith('_'):
                    continue
                if attr == 'mask':
                    value = hex(getattr(self, attr))
                elif isinstance(value, basestring) and not value:
                    value = "''"
                s += ' %s%s%s' % (output_format.field_name(attr),
                                  output_format.punctuation('='),
                                  output_format.field_value(value))
     
            s = '%s%s%s %s' % (output_format.punctuation('<'),
                               output_format.class_name(self.__class__.__name__),
                               s,
                               output_format.punctuation('>'))
            return s
     
        def __str__(self):
            return repr(self)
     
     
    class _RawEvent(_Event):
        """
        Raw event; it contains only the information provided by the system.
        It doesn't infer anything.
        """
        def __init__(self, wd, mask, cookie, name):
            """
            @param wd: Watch Descriptor.
            @type wd: int
            @param mask: Bitmask of events.
            @type mask: int
            @param cookie: Cookie.
            @type cookie: int
            @param name: Basename of the file or directory against which the
                         event was raised in case where the watched directory
                         is the parent directory. None if the event was raised
                         on the watched item itself.
            @type name: string or None
            """
            # Use this variable to cache the result of str(self); this object
            # is immutable.
            self._str = None
            # name: remove trailing '\0'
            d = {'wd': wd,
                 'mask': mask,
                 'cookie': cookie,
                 'name': name.rstrip('\0')}
            _Event.__init__(self, d)
            logging.debug(str(self))
     
        def __str__(self):
            if self._str is None:
                self._str = _Event.__str__(self)
            return self._str
     
     
    class Event(_Event):
        """
        This class contains all the useful information about the observed
        event. However, the presence of each field is not guaranteed and
        depends on the type of event. In effect, some fields are irrelevant
        for some kinds of events (for example 'cookie' is meaningless for
        IN_CREATE whereas it is mandatory for IN_MOVED_TO).
     
        The possible fields are:
          - wd (int): Watch Descriptor.
          - mask (int): Mask.
          - maskname (str): Readable event name.
          - path (str): path of the file or directory being watched.
          - name (str): Basename of the file or directory against which the
                  event was raised in case where the watched directory
                  is the parent directory. None if the event was raised
                  on the watched item itself. This field is always provided
                  even if the string is ''.
          - pathname (str): Concatenation of 'path' and 'name'.
          - src_pathname (str): Only present for IN_MOVED_TO events and only in
                  the case where IN_MOVED_FROM events are watched too. Holds the
                  source pathname from where pathname was moved from.
          - cookie (int): Cookie.
          - dir (bool): True if the event was raised against a directory.
     
        """
        def __init__(self, raw):
            """
            Concretely, this is the raw event plus inferred information.
            """
            _Event.__init__(self, raw)
            self.maskname = EventsCodes.maskname(self.mask)
            if COMPATIBILITY_MODE:
                self.event_name = self.maskname
            try:
                if self.name:
                    self.pathname = os.path.abspath(os.path.join(self.path,
                                                                 self.name))
                else:
                    self.pathname = os.path.abspath(self.path)
            except AttributeError, err:
                # Usually this is not an error: some events are perfectly valid
                # despite the lack of these attributes.
                logging.debug(err)
     
     
    class _ProcessEvent:
        """
        Abstract processing event class.
        """
        def __call__(self, event):
            """
            To behave like a functor the object must be callable.
            This method is a dispatch method. Its lookup order is:
              1. process_MASKNAME method
              2. process_FAMILY_NAME method
              3. otherwise calls process_default
     
            @param event: Event to be processed.
            @type event: Event object
            @return: By convention when used from the ProcessEvent class:
                     - Returning False or None (default value) means keep on
                     executing next chained functors (see chain.py example).
                     - Returning True instead means do not execute next
                       processing functions.
            @rtype: bool
            @raise ProcessEventError: Event object undispatchable,
                                      unknown event.
            """
            stripped_mask = event.mask - (event.mask & IN_ISDIR)
            maskname = EventsCodes.ALL_VALUES.get(stripped_mask)
            if maskname is None:
                raise ProcessEventError("Unknown mask 0x%08x" % stripped_mask)
     
            # 1- look for process_MASKNAME
            meth = getattr(self, 'process_' + maskname, None)
            if meth is not None:
                return meth(event)
            # 2- look for process_FAMILY_NAME
            meth = getattr(self, 'process_IN_' + maskname.split('_')[1], None)
            if meth is not None:
                return meth(event)
            # 3- default call method process_default
            return self.process_default(event)
     
        def __repr__(self):
            return '<%s>' % self.__class__.__name__
     
     
    class ProcessEvent(_ProcessEvent):
        """
        Process event objects; can be specialized via subclassing, thus its
        behavior can be overridden:
     
        Note: you should not override __init__ in your subclass; instead,
        define a my_init() method. This method will be called automatically
        from the constructor of this class with its optional parameters.
     
          1. Provide specialized individual methods, e.g. process_IN_DELETE for
             processing a precise type of event (e.g. IN_DELETE in this case).
          2. Or/and provide methods for processing events by 'family', e.g.
             process_IN_CLOSE method will process both IN_CLOSE_WRITE and
             IN_CLOSE_NOWRITE events (if process_IN_CLOSE_WRITE and
             process_IN_CLOSE_NOWRITE aren't defined though).
          3. Or/and override process_default for catching and processing all
             the remaining types of events.
        """
        pevent = None
     
        def __init__(self, pevent=None, **kargs):
            """
            Enable chaining of ProcessEvent instances.
     
            @param pevent: Optional callable object, will be called on event
                           processing (before self).
            @type pevent: callable
            @param kargs: This constructor is implemented as a template method
                          delegating its optional keyword arguments to the
                          method my_init().
            @type kargs: dict
            """
            self.pevent = pevent
            self.my_init(**kargs)
     
        def my_init(self, **kargs):
            """
            This method is called from ProcessEvent.__init__(). This method is
            empty here and must be redefined to be useful. In effect, if you
            need to specifically initialize your subclass' instance then you
            just have to override this method in your subclass. Then all the
            keyworded arguments passed to ProcessEvent.__init__() will be
            transmitted as parameters to this method. Beware: you MUST pass
            them as keyword arguments.
     
            @param kargs: optional delegated arguments from __init__().
            @type kargs: dict
            """
            pass
     
        def __call__(self, event):
            stop_chaining = False
            if self.pevent is not None:
                # By default methods return None so we set as a guideline
                # that methods asking to stop the chaining must explicitly
                # return non-None or non-False values, otherwise the default
                # behavior will be to accept the chained call to the
                # corresponding local method.
                stop_chaining = self.pevent(event)
            if not stop_chaining:
                return _ProcessEvent.__call__(self, event)
     
        def nested_pevent(self):
            return self.pevent
     
        def process_IN_Q_OVERFLOW(self, event):
            """
            By default this method only reports warning messages, you can
            override it by subclassing ProcessEvent and implementing your own
            process_IN_Q_OVERFLOW method. The actions you can take on receiving
            this event are either to update the variable max_queued_events in
            order to handle more simultaneous events, or to modify your code to
            filter better and diminish the number of raised events.
            Because this method is defined, IN_Q_OVERFLOW will never get
            transmitted as arguments to process_default calls.
     
            @param event: IN_Q_OVERFLOW event.
            @type event: dict
            """
            log.warning('Event queue overflowed.')
     
        def process_default(self, event):
            """
            Default processing event method. By default does nothing. Subclass
            ProcessEvent and redefine this method in order to modify its behavior.
     
            @param event: Event to be processed. Can be of any type of events but
                          IN_Q_OVERFLOW events (see method process_IN_Q_OVERFLOW).
            @type event: Event instance
            """
            pass
     
     
    class PrintAllEvents(ProcessEvent):
        """
        Dummy class used to print events' string representations. For instance
        this class is used from the command line to print all received events
        to stdout.
        """
        def my_init(self, out=None):
            """
            @param out: Where events will be written.
            @type out: Object providing a valid file object interface.
            """
            if out is None:
                out = sys.stdout
            self._out = out
     
        def process_default(self, event):
            """
            Writes event string representation to file object provided to
            my_init().
     
            @param event: Event to be processed. Can be of any type of events but
                          IN_Q_OVERFLOW events (see method process_IN_Q_OVERFLOW).
            @type event: Event instance
            """
            self._out.write(str(event))
            self._out.write('\n')
            self._out.flush()
     
     
    class WatchManagerError(Exception):
        """
        WatchManager Exception. Raised on error encountered on watches
        operations.
     
        """
        def __init__(self, msg, wmd):
            """
            @param msg: Exception string's description.
            @type msg: string
            @param wmd: This dictionary contains the wd assigned to paths of the
                        same call for which watches were successfully added.
            @type wmd: dict
            """
            self.wmd = wmd
            Exception.__init__(self, msg)
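     
    Before moving on, here is a quick illustration of the dispatch order implemented by _ProcessEvent.__call__ above (a minimal sketch of my own, not part of the module):
     
    class MyProcessor(ProcessEvent):
        def process_IN_CREATE(self, event):
            # 1- exact match: called only for IN_CREATE events
            print 'created:', event.pathname
     
        def process_IN_CLOSE(self, event):
            # 2- family match: handles both IN_CLOSE_WRITE and
            # IN_CLOSE_NOWRITE, since no exact methods are defined for them
            print 'closed:', event.pathname
     
        def process_default(self, event):
            # 3- fallback for every other event type
            print 'other:', event.maskname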

    Unfortunately, we need to implement the code that talks to the Win32 API to be able to retrieve the events from the file system. In my design this is done by the Watch class, which looks like this:

    # Author: Manuel de la Pena <manuel@canonical.com>
    #
    # Copyright 2011 Canonical Ltd.
    #
    # This program is free software: you can redistribute it and/or modify it
    # under the terms of the GNU General Public License version 3, as published
    # by the Free Software Foundation.
    #
    # This program is distributed in the hope that it will be useful, but
    # WITHOUT ANY WARRANTY; without even the implied warranties of
    # MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
    # PURPOSE.  See the GNU General Public License for more details.
    #
    # You should have received a copy of the GNU General Public License along
    # with this program.  If not, see <http://www.gnu.org/licenses/>.
    """File notifications on windows."""
     
    import logging
    import os
    import re
     
    import winerror
     
    from Queue import Queue, Empty
    from threading import Thread
    from uuid import uuid4
    from twisted.internet import task, reactor
    from win32con import (
        FILE_SHARE_READ,
        FILE_SHARE_WRITE,
        FILE_FLAG_BACKUP_SEMANTICS,
        FILE_NOTIFY_CHANGE_FILE_NAME,
        FILE_NOTIFY_CHANGE_DIR_NAME,
        FILE_NOTIFY_CHANGE_ATTRIBUTES,
        FILE_NOTIFY_CHANGE_SIZE,
        FILE_NOTIFY_CHANGE_LAST_WRITE,
        FILE_NOTIFY_CHANGE_SECURITY,
        OPEN_EXISTING
    )
    from win32file import CreateFile, ReadDirectoryChangesW
    from ubuntuone.platform.windows.pyinotify import (
        Event,
        WatchManagerError,
        ProcessEvent,
        PrintAllEvents,
        IN_OPEN,
        IN_CLOSE_NOWRITE,
        IN_CLOSE_WRITE,
        IN_CREATE,
        IN_ISDIR,
        IN_DELETE,
        IN_MOVED_FROM,
        IN_MOVED_TO,
        IN_MODIFY,
        IN_IGNORED
    )
    from ubuntuone.syncdaemon.filesystem_notifications import (
        GeneralINotifyProcessor
    )
    from ubuntuone.platform.windows.os_helper import (
        LONG_PATH_PREFIX,
        abspath,
        listdir
    )
     
    # constant found in the msdn documentation:
    # http://msdn.microsoft.com/en-us/library/ff538834(v=vs.85).aspx
    FILE_LIST_DIRECTORY = 0x0001
    FILE_NOTIFY_CHANGE_LAST_ACCESS = 0x00000020
    FILE_NOTIFY_CHANGE_CREATION = 0x00000040
     
    # a map between the few events that we have on windows and those
    # found in pyinotify
    WINDOWS_ACTIONS = {
      1: IN_CREATE,
      2: IN_DELETE,
      3: IN_MODIFY,
      4: IN_MOVED_FROM,
      5: IN_MOVED_TO
    }
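     
    # (for reference: the numeric keys above are the win32 FILE_ACTION_*
    # constants returned by ReadDirectoryChangesW: FILE_ACTION_ADDED=1,
    # FILE_ACTION_REMOVED=2, FILE_ACTION_MODIFIED=3,
    # FILE_ACTION_RENAMED_OLD_NAME=4, FILE_ACTION_RENAMED_NEW_NAME=5)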
     
    # quickly translates the event and its is_dir state to our standard events
    NAME_TRANSLATIONS = {
        IN_OPEN: 'FS_FILE_OPEN',
        IN_CLOSE_NOWRITE: 'FS_FILE_CLOSE_NOWRITE',
        IN_CLOSE_WRITE: 'FS_FILE_CLOSE_WRITE',
        IN_CREATE: 'FS_FILE_CREATE',
        IN_CREATE | IN_ISDIR: 'FS_DIR_CREATE',
        IN_DELETE: 'FS_FILE_DELETE',
        IN_DELETE | IN_ISDIR: 'FS_DIR_DELETE',
        IN_MOVED_FROM: 'FS_FILE_DELETE',
        IN_MOVED_FROM | IN_ISDIR: 'FS_DIR_DELETE',
        IN_MOVED_TO: 'FS_FILE_CREATE',
        IN_MOVED_TO | IN_ISDIR: 'FS_DIR_CREATE',
    }
     
    # the default mask to be used in the watches added by the FilesystemMonitor
    # class
    FILESYSTEM_MONITOR_MASK = FILE_NOTIFY_CHANGE_FILE_NAME | \
        FILE_NOTIFY_CHANGE_DIR_NAME | \
        FILE_NOTIFY_CHANGE_ATTRIBUTES | \
        FILE_NOTIFY_CHANGE_SIZE | \
        FILE_NOTIFY_CHANGE_LAST_WRITE | \
        FILE_NOTIFY_CHANGE_SECURITY | \
        FILE_NOTIFY_CHANGE_LAST_ACCESS
     
     
    # The implementation of the code that is provided as the pyinotify
    # substitute
    class Watch(object):
        """Implement the same functions as pyinotify.Watch."""
     
        def __init__(self, watch_descriptor, path, mask, auto_add,
            events_queue=None, exclude_filter=None, proc_fun=None):
            super(Watch, self).__init__()
            self.log = logging.getLogger('ubuntuone.platform.windows.' +
                'filesystem_notifications.Watch')
            self._watching = False
            self._descriptor = watch_descriptor
            self._auto_add = auto_add
            self.exclude_filter = exclude_filter
            self._proc_fun = proc_fun
            self._cookie = None
            self._source_pathname = None
            # remember the subdirs we have so that when we have a delete we can
            # check if it was a remove
            self._subdirs = []
            # ensure that we work with an abspath and that we can deal with
            # long paths over 260 chars.
            self._path = os.path.abspath(path)
            if not self._path.startswith(LONG_PATH_PREFIX):
                self._path = LONG_PATH_PREFIX + self._path
            self._mask = mask
            # let's make the queue as big as possible
            self._raw_events_queue = Queue()
            if not events_queue:
                events_queue = Queue()
            self.events_queue = events_queue
     
        def _path_is_dir(self, path):
            """Check if the path is a dir and update the local subdir list."""
            self.log.debug('Testing if path "%s" is a dir', path)
            is_dir = False
            if os.path.exists(path):
                is_dir = os.path.isdir(path)
            else:
                self.log.debug('Path "%s" was deleted; subdirs are %s.',
                    path, self._subdirs)
                # we removed the path, we look in the internal list
                if path in self._subdirs:
                    is_dir = True
                    self._subdirs.remove(path)
            if is_dir:
                self.log.debug('Adding %s to subdirs %s', path, self._subdirs)
                self._subdirs.append(path)
            return is_dir
     
        def _process_events(self):
            """Process the events from the queue."""
            # we transform the events to be the same as the one in pyinotify
            # and then use the proc_fun
            while self._watching or not self._raw_events_queue.empty():
                file_name, action = self._raw_events_queue.get()
                # map the windows events to the pyinotify ones; this is dirty
                # but makes the multiplatform story better, linux was first :P
                # (_path_is_dir already updates the internal subdirs list, so
                # there is no need to repeat that bookkeeping here)
                is_dir = self._path_is_dir(file_name)
                mask = WINDOWS_ACTIONS[action]
                head, tail = os.path.split(file_name)
                if is_dir:
                    mask |= IN_ISDIR
                event_raw_data = {
                    'wd': self._descriptor,
                    'dir': is_dir,
                    'mask': mask,
                    'name': tail,
                    'path': head.replace(self.path, '.')
                }
                # by the way in which the win api fires the events we know for
                # sure that no move events will be added in the wrong order, this
                # is kind of hacky, I don't like it too much
                if WINDOWS_ACTIONS[action] == IN_MOVED_FROM:
                    self._cookie = str(uuid4())
                    self._source_pathname = tail
                    event_raw_data['cookie'] = self._cookie
                if WINDOWS_ACTIONS[action] == IN_MOVED_TO:
                    event_raw_data['src_pathname'] = self._source_pathname
                    event_raw_data['cookie'] = self._cookie
                event = Event(event_raw_data)
                # FIXME: event deduces the pathname wrong and we need to set
                # it manually
                event.pathname = file_name
                # add the event only if we do not have an exclude filter or
                # the exclude filter returns False, that is, the event will not
                # be excluded
                if not self.exclude_filter or not self.exclude_filter(event):
                    self.log.debug('Adding event %s to queue.', event)
                    self.events_queue.put(event)
     
        def _watch(self):
            """Watch a path that is a directory."""
            # we are going to be using ReadDirectoryChangesW, which requires
            # a directory handle and the mask to be used.
            handle = CreateFile(
                self._path,
                FILE_LIST_DIRECTORY,
                FILE_SHARE_READ | FILE_SHARE_WRITE,
                None,
                OPEN_EXISTING,
                FILE_FLAG_BACKUP_SEMANTICS,
                None
            )
            self.log.debug('Watching path %s.', self._path)
            while self._watching:
                # important information to know about the parameters:
                # param 1: the handle to the dir
                # param 2: the size to be used in the kernel to store events
                # that might be lost while the call is being performed. This
                # is complicated to fine tune: if you create lots of watchers
                # you might use too much memory and make your OS BSOD
                results = ReadDirectoryChangesW(
                    handle,
                    1024,
                    self._auto_add,
                    self._mask,
                    None,
                    None
                )
                # add the new events to the queue so that they can be processed
                # no matter the speed.
                for action, file in results:
                    full_filename = os.path.join(self._path, file)
                    self._raw_events_queue.put((full_filename, action))
                    self.log.debug('Added %s to raw events queue.',
                        (full_filename, action))
     
        def start_watching(self):
            """Tell the watch to start processing events."""
            # get the child dirs in the path
            for current_child in listdir(self._path):
                full_child_path = os.path.join(self._path, current_child)
                if os.path.isdir(full_child_path):
                    self._subdirs.append(full_child_path)
            # start two threads: one to watch the path, the other to
            # process the events.
            self.log.debug('Start watching path.')
            self._watching = True
            watch_thread = Thread(target=self._watch,
                name='Watch(%s)' % self._path)
            process_thread = Thread(target=self._process_events,
                name='Process(%s)' % self._path)
            process_thread.start()
            watch_thread.start()
     
        def stop_watching(self):
            """Tell the watch to stop processing events."""
            self._watching = False
            self._subdirs = []
     
        def update(self, mask, proc_fun=None, auto_add=False):
            """Update the info used by the watcher."""
            self.log.debug('update(%s, %s, %s)', mask, proc_fun, auto_add)
            self._mask = mask
            self._proc_fun = proc_fun
            self._auto_add = auto_add
     
        @property
        def path(self):
            """Return the path watched."""
            return self._path
     
        @property
        def auto_add(self):
            return self._auto_add
     
        @property
        def proc_fun(self):
            return self._proc_fun
     
     
    class WatchManager(object):
        """Implement the same functions as pyinotify.WatchManager."""
     
        def __init__(self, exclude_filter=lambda path: False):
            """Init the manager to keep track of the different watches."""
            super(WatchManager, self).__init__()
            self.log = logging.getLogger('ubuntuone.platform.windows.'
                + 'filesystem_notifications.WatchManager')
            self._wdm = {}
            self._wd_count = 0
            self._exclude_filter = exclude_filter
            self._events_queue = Queue()
            self._ignored_paths = []
     
        def stop(self):
            """Close the manager and stop all watches."""
            self.log.debug('Stopping watches.')
            for current_wd in self._wdm:
                self._wdm[current_wd].stop_watching()
                self.log.debug('Watch for %s stopped.', self._wdm[current_wd].path)
     
        def get_watch(self, wd):
            """Return the watch with the given descriptor."""
            return self._wdm[wd]
     
        def del_watch(self, wd):
            """Delete the watch with the given descriptor."""
            try:
                watch = self._wdm[wd]
                watch.stop_watching()
                del self._wdm[wd]
                self.log.debug('Watch %s removed.', wd)
            except KeyError, e:
                logging.error(str(e))
     
        def _add_single_watch(self, path, mask, proc_fun=None, auto_add=False,
            quiet=True, exclude_filter=None):
            self.log.debug('add_single_watch(%s, %s, %s, %s, %s, %s)', path, mask,
                proc_fun, auto_add, quiet, exclude_filter)
            self._wdm[self._wd_count] = Watch(self._wd_count, path, mask,
                auto_add, events_queue=self._events_queue,
                exclude_filter=exclude_filter, proc_fun=proc_fun)
            self._wdm[self._wd_count].start_watching()
            self._wd_count += 1
            self.log.debug('Watch count increased to %s', self._wd_count)
     
        def add_watch(self, path, mask, proc_fun=None, auto_add=False,
            quiet=True, exclude_filter=None):
            if hasattr(path, '__iter__'):
                self.log.debug('Added collection of watches.')
                # we are dealing with a collection of paths
                for current_path in path:
                    if not self.get_wd(current_path):
                        self._add_single_watch(current_path, mask, proc_fun,
                            auto_add, quiet, exclude_filter)
            elif not self.get_wd(path):
                self.log.debug('Adding single watch.')
                self._add_single_watch(path, mask, proc_fun, auto_add,
                    quiet, exclude_filter)
     
        def update_watch(self, wd, mask=None, proc_fun=None, rec=False,
                         auto_add=False, quiet=True):
            try:
                watch = self._wdm[wd]
                watch.stop_watching()
                self.log.debug('Stopped watch on %s for update.', watch.path)
                # update the data and restart watching
                auto_add = auto_add or rec
                watch.update(mask, proc_fun=proc_fun, auto_add=auto_add)
                # only start the watcher again if the mask was given, otherwise
                # we are not watching and therefore do not care
                if mask:
                    watch.start_watching()
            except KeyError, e:
                self.log.error(str(e))
                if not quiet:
                    raise WatchManagerError('Watch %s was not found' % wd, {})
     
        def get_wd(self, path):
            """Return the watcher that is used to watch the given path."""
            for current_wd in self._wdm:
                if self._wdm[current_wd].path in path and \
                    self._wdm[current_wd].auto_add:
                    return current_wd
     
        def get_path(self, wd):
            """Return the path watched by the watch with the given wd."""
            watch = self._wdm.get(wd)
            if watch:
                return watch.path
     
        def rm_watch(self, wd, rec=False, quiet=True):
            """Remove the watch with the given wd."""
            try:
                watch = self._wdm[wd]
                watch.stop_watching()
                del self._wdm[wd]
            except KeyError, err:
                self.log.error(str(err))
                if not quiet:
                    raise WatchManagerError('Watch %s was not found' % wd, {})
     
        def rm_path(self, path):
            """Remove the watch for the given path."""
            # it would be very tricky to remove a subpath from a watcher that is
            # looking at changes in their children. To make it simpler and less
            # error prone (and even more performant, since we use fewer threads)
            # we will add a filter to the events in the watcher so that the
            # events from that child are not received :)
            def ignore_path(event):
                """Ignore an event if its path is in the ignored list."""
                for ignored_path in self._ignored_paths:
                    if ignored_path in event.pathname:
                        return True
                return False
     
            wd = self.get_wd(path)
            if wd:
                if self._wdm[wd].path == path:
                    self.log.debug('Removing watch for path "%s"', path)
                    self.rm_watch(wd)
                else:
                    self.log.debug('Adding exclude filter for "%s"', path)
                    # we have a watch that contains the path as a child path
                    if not path in self._ignored_paths:
                        self._ignored_paths.append(path)
                    # FIXME: This assumes that we do not have another filter
                    # function, which in our use case is correct, but what if we
                    # move this to other projects eventually?!? Maybe using the
                    # manager's exclude_filter is better
                    if not self._wdm[wd].exclude_filter:
                        self._wdm[wd].exclude_filter = ignore_path
     
        @property
        def watches(self):
            """Return a reference to the dictionary that contains the watches."""
            return self._wdm
     
        @property
        def events_queue(self):
            """Return the queue with the events that the manager contains."""
            return self._events_queue
     
     
    class Notifier(object):
        """
        Read notifications, process events. Inspired by the pyinotify.Notifier
        """
     
        def __init__(self, watch_manager, default_proc_fun=None, read_freq=0,
                     threshold=10, timeout=-1):
            """Init to process events according to the given timeout & threshold."""
            super(Notifier, self).__init__()
            self.log = logging.getLogger('ubuntuone.platform.windows.'
                + 'filesystem_notifications.Notifier')
            # Watch Manager instance
            self._watch_manager = watch_manager
            # Default processing method
            self._default_proc_fun = default_proc_fun
            if default_proc_fun is None:
                self._default_proc_fun = PrintAllEvents()
            # Loop parameters
            self._read_freq = read_freq
            self._threshold = threshold
            self._timeout = timeout
     
        def proc_fun(self):
            return self._default_proc_fun
     
        def process_events(self):
            """
            Process the event given the threshold and the timeout.
            """
            self.log.debug('Processing events with threshold: %s and timeout: %s',
                self._threshold, self._timeout)
            # we will process an amount of events equal to the threshold of
            # the notifier and will block for the amount given by the timeout
            processed_events = 0
            while processed_events < self._threshold:
                try:
                    raw_event = None
                    if not self._timeout or self._timeout < 0:
                        raw_event = self._watch_manager.events_queue.get(
                            block=False)
                    else:
                        raw_event = self._watch_manager.events_queue.get(
                            timeout=self._timeout)
                    watch = self._watch_manager.get_watch(raw_event.wd)
                    if watch is None:
                        # Not really sure how we ended up here, nor how we should
                        # handle these types of events and if it is appropriate to
                        # completely skip them (like we are doing here).
                        self.log.warning('Unable to retrieve Watch object '
                            + 'associated to %s', raw_event)
                        processed_events += 1
                        continue
                    if watch and watch.proc_fun:
                        self.log.debug('Executing proc_fun from watch.')
                        watch.proc_fun(raw_event)
                    else:
                        self._default_proc_fun(raw_event)
                    processed_events += 1
                except Empty:
                    break
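     
    To see how these pieces fit together, here is a minimal usage sketch (my own illustration, not from the original post; the watched path is hypothetical):
     
    # build a manager, watch a directory tree and print every event received
    wm = WatchManager()
    wm.add_watch(u'C:\\some\\watched\\dir', FILESYSTEM_MONITOR_MASK,
        auto_add=True)
    notifier = Notifier(wm, default_proc_fun=PrintAllEvents())
    while True:
        notifier.process_events()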
          Linux 3.10 improves SSD caching, adds Radeon graphics support   
    Linus Torvalds has just released Linux kernel 3.10 with a number of upgrades over version 3.9, which debuted back in April. Linux 3.10 is equipped with a new SSD caching feature along with several improvements to the drivers for Radeon graphics chips.
              A One-Pass Sequential Monte Carlo Method for Bayesian Analysis of Massive Datasets   
    For Bayesian analysis of massive data, Markov chain Monte Carlo (MCMC) techniques often prove infeasible due to computational resource constraints. Standard MCMC methods generally require a complete scan of the dataset for each iteration. Ridgeway and Madigan (2002) and Chopin (2002b) recently presented importance sampling algorithms that combined simulations from a posterior distribution conditioned on a small portion of the dataset with a reweighting of those simulations to condition on the remainder of the dataset. While these algorithms drastically reduce the number of data accesses as compared to traditional MCMC, they still require substantially more than a single pass over the dataset. In this paper, we present "1PFS," an efficient, one-pass algorithm. The algorithm employs a simple modification of the Ridgeway and Madigan (2002) particle filtering algorithm that replaces the MCMC based "rejuvenation" step with a more efficient "shrinkage" kernel smoothing based step. To show proof-of-concept and to enable a direct comparison, we demonstrate 1PFS on the same examples presented in Ridgeway and Madigan (2002), namely a mixture model for Markov chains and Bayesian logistic regression. Our results indicate the proposed scheme delivers accurate parameter estimates while employing only a single pass through the data.
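     
    To make the shrinkage-based rejuvenation step concrete, here is a minimal one-pass particle-filter sketch in Python (my own illustration, not the paper's code; it assumes a user-supplied log-likelihood function loglik(theta, y) and uses a shrinkage factor a = sqrt(1 - h^2) so that the kernel smoothing step roughly preserves the particle mean and variance):
     
    import numpy as np
     
    def one_pass_filter(prior_draws, data, loglik, h=0.1, ess_frac=0.5):
        """One-pass SMC for a static parameter: reweight on each datum and
        rejuvenate by shrinkage kernel smoothing when the ESS drops."""
        theta = np.asarray(prior_draws, dtype=float)    # (N, d) particles
        n, d = theta.shape
        logw = np.zeros(n)
        a = np.sqrt(1.0 - h ** 2)                       # shrinkage factor
        for y in data:                                  # single pass over data
            logw += loglik(theta, y)                    # reweight, no MCMC
            w = np.exp(logw - logw.max())
            w /= w.sum()
            if 1.0 / np.sum(w ** 2) < ess_frac * n:     # ESS too low
                mean = np.average(theta, axis=0, weights=w)
                cov = np.atleast_2d(np.cov(theta.T, aweights=w))
                idx = np.random.choice(n, size=n, p=w)  # resample
                centers = a * theta[idx] + (1.0 - a) * mean
                noise = np.random.multivariate_normal(
                    np.zeros(d), h ** 2 * cov, size=n)
                theta = centers + noise                 # rejuvenated particles
                logw = np.zeros(n)
        w = np.exp(logw - logw.max())
        return theta, w / w.sum()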
              ESRT @jonoberheide - Kernel memory disclosure in grsecurity ...   
    ESRT @jonoberheide - Kernel memory disclosure in grsecurity http://www.secuobs.com/twitter/news/251773.shtml
              ESRT @mikkohypponen - A short update on the kernel.org hack outage, from LKML mailing list - Thanks @jsqf ...   
    ESRT @mikkohypponen - A short update on the kernel.org hack outage, from LKML mailing list - Thanks @jsqf http://www.secuobs.com/twitter/news/251110.shtml
              ESRT @jonoberheide - ksymhunter and kstructhunter tools for linux kernel exploit dev ...   
    ESRT @jonoberheide - ksymhunter and kstructhunter tools for linux kernel exploit dev http://www.secuobs.com/twitter/news/246687.shtml
              ESRT @_argp @jonoberheide - List of all Linux kernel structs and sizes ...   
    ESRT @_argp @jonoberheide - List of all Linux kernel structs and sizes http://www.secuobs.com/twitter/news/245522.shtml
              ESRT @xanda @jedisct1 - Linux Kernel less than 2.6.36.2 Econet Privilege Escalation Exploit ...   
    ESRT @xanda @jedisct1 - Linux Kernel less than 2.6.36.2 Econet Privilege Escalation Exploit http://www.secuobs.com/twitter/news/245303.shtml
              ESRT @xanda @slashdot - Kernel.org Attackers Didn't Know What They Had ...   
    ESRT @xanda @slashdot - Kernel.org Attackers Didn't Know What They Had http://www.secuobs.com/twitter/news/244864.shtml
              FIRST AID BEAUTY RETINOL SERUM & RESURFACING LIQUID REVIEW & GIVEAWAY   


    Today I have a review of the First Aid Beauty Fab Skin Lab Collection which consists of the First Aid Retinol Serum (0.25 % pure concentrate) and the First Aid Resurfacing Liquid (10 % AHA). Let's find out how I liked them!


    First Aid Beauty is of course a cruelty free brand!




    First Aid Beauty Retinol Serum $58



    You know that Retinol is the "gold-standard" in anti-aging, right? It helps improve the appearance of aging skin, wrinkles and fine lines. The First Aid Retinol Serum is great- even for those with sensitive skin or "beginners", as Retinol can be somewhat irritating for first time users- hey: no pain no gain. ;-)

    But the First Aid Retinol Serum is gentle enough not to cause any irritation- yet it will still give you the right amount of effectiveness to get rid of fine lines. It also contains Peptides that help skin appear more youthful. You should apply Retinol only at night and don't forget to wear sunscreen during the day, as Retinol will make your skin more sun sensitive (which can lead to an uneven skin tone).


    It is a creamy yellow lotion that is very easy to apply (some Retinol serums can be a hassle to apply evenly- not so with this one) and even feels moisturizing. No detectable scent.

    Ingredients: Water, Polysorbate 80, Caprylic/Capric Triglyceride, Polysorbate 20, Butylene Glycol, Persea Gratissima (Avocado) Oil, Glycerin, Dimethicone, Retinol, Magnesium Ascorbyl Phosphate, Colloidal Oatmeal, Tocopherol, Hydrolyzed Hyaluronic Acid, Ceramide NP, Acetyl Hexapeptide-8, Linoleic Acid, Allantoin, Phytosteryl Canola Glycerides, Palmitic Acid, Oleic Acid, Glycine Soja (Soybean) Oil, Hydroxypropyl Cyclodextrin, Aloe Barbadensis Leaf Juice, Triolein, Caprylyl Glycol,Acrylates/C10-30 Alkyl Acrylate Crosspolymer, Xanthan Gum, Lecithin, Stearic Acid, Maltodextrin, BHT, BHA, Phenoxyethanol, Sodium Hydroxide, Potassium Sorbate.



    First Aid Resurfacing Liquid $55



    The First Aid Resurfacing Liquid contains a 10% AHA blend of Glycolic, Lactic, Tartaric and Malic Acids that work together to exfoliate and reveal younger-looking skin. I swear absolutely by Glycolic Acid. It also contains licorice root and mulberry root as well as lemon peel extracts to brighten your face even more (get rid of brown spots).

    Oh yeah and if that still wasn't enough good stuff: there is more! It also contains Hyaluronic Acid (you know, to plump up your tired skin cells), and Vitamins C and E, Aloe, Oatmeal, Allantoin and Ceramides! Holy anti-aging goodness! Is there anything missing in this fabulous Resurfacing Liquid? I don't think so!

    It is a thin liquid and you only need a thin layer on your face. It takes a couple of minutes to absorb- so in case you need to apply a moisturizer on top, wait until it has dried so that your skin doesn't feel sticky.



    I would use this only at night because this also can make your skin sensitive to the sun- but I would not use the Retinol and the Resurfacing Liquid together- so I would recommend switching it up every other day: one night Retinol, the next Resurfacing Liquid and so on. ;-)

    Ingredients: Water, Polysorbate 80, Glycolic Acid, Butylene Glycol, Caprylic/Capric Triglyceride, Sodium Hydroxide, Malic Acid, Tartaric Acid, Lactic Acid, Citrus Limon (Lemon) Peel Extract, Colloidal Oatmeal, Magnesium Ascorbyl Phosphate, Hydrolyzed Elastin, Papain, Hydrolyzed Hyaluronic Acid, Soluble Collagen, Avena Sativa (Oat) Kernel Extract, Triolein, Ceramide NP, Cucumis Sativus (Cucumber) Fruit Extract, Glycyrrhiza Glabra (Licorice) Root Extract, Morus Alba Bark Extract, Phytosteryl Canola Glycerides, Allantoin, Lecithin, Lysolecithin, Oleic Acid, Palmitic Acid, Maltodextrin, Aloe Barbadensis Leaf Juice, Glycerin, Tocopherol, Glycine Soja (Soybean) Oil, Caprylyl Glycol, Linoleic Acid, Sclerotium Gum, Xanthan Gum, Pullulan, Mica, Stearic Acid, Leuconostoc/Radish Root Ferment Filtrate, Silica, Phenoxyethanol, Chlorphenesin, Disodium EDTA.


    My verdict:

    Overall the First Aid Beauty Fab Skin Lab Collection provides gentle formulas with powerful anti-aging results! Available at Sephora, Ulta or FirstAidBeauty.com



    GIVEAWAY!

    Enter to win the First Aid Beauty Fab Skin Lab Collection. This is a $113 value! 

    Please follow @firstaidbeauty on Instagram to enter!

    Open to U.S. residents only! Good luck!

    a Rafflecopter giveaway

    Disclaimer: I received the products mentioned above for free. Regardless, I only recommend products or services I use personally and believe will be good for my readers. Contains Affiliate links. Read my full disclosure.

              I9100XWKJ2 Android 2.3.5 Firmware Update for Samsung Galaxy S2 released   
    Here comes another Android 2.3.5 Gingerbread firmware update for Samsung Galaxy S II I9100. This has just been reported as an official Kies server firmware release in France. The version build number is I9100XWKJ2, which comes with the default CSC file code XEF (France) = I9100XEFKJ1 and a build date of October 21, 2011.
    I9100XWKJ2 Android 2.3.5 Gingerbread Firmware for Galaxy S2
    The SBL (Samsung Boot Loader) is not included in the file. So when flashing this in Odin, you may need to flash the old bootloader via PDA (or Bootloader); then you can reset via USB Jig again.

    I9100XWKJ2 Android 2.3.5 Firmware
    PDA: I9100XWKJ2
    Phone: I9100XXKI4
    CSC: I9100OJPKJ1
    Version: Android 2.3.5 Gingerbread
    Build Date: October 21, 2011

    Build info:
    # autogenerated by buildinfo.sh                                                         
    ro.build.id=GINGERBREAD
    ro.build.display.id=GINGERBREAD.XWKJ2
    ro.build.version.incremental=XWKJ2
    ro.build.version.sdk=10
    ro.build.version.codename=REL
    ro.build.version.release=2.3.5
    ro.build.date=Fri Oct 21 00:52:44 KST 2011
    ro.build.date.utc=1319125964
    ro.build.type=user
    ro.build.user=root
    ro.build.host=DELL144
    ro.build.tags=release-keys
    ro.product.model=GT-I9100
    ro.product.brand=samsung
    ro.product.name=GT-I9100
    ro.product.device=GT-I9100
    ro.product.board=GT-I9100
    ro.product.cpu.abi=armeabi-v7a
    # Samsung Specific Properties
    ro.build.PDA=I9100XWKJ2
    ro.build.hidden_ver=I9100XWKJ2
    ro.build.changelist=676699
    ro.product.cpu.abi2=armeabi
    ro.product.manufacturer=samsung
    ro.product.locale.language=en
    ro.product.locale.region=GB
    ro.wifi.channels=
    ro.board.platform=s5pc210
    # ro.build.product is obsolete; use ro.product.device
    ro.build.product=GT-I9100
    # Do not try to parse ro.build.description or .fingerprint
    ro.build.description=GT-I9100-user 2.3.5 GINGERBREAD XWKJ2 release-keys
    ro.build.fingerprint=samsung/GT-I9100/GT-I9100:2.3.5/GINGERBREAD/XWKJ2:user/release-keys
    # Samsung Specific Properties
    ro.build.PDA=I9100XWKJ2
    ro.build.hidden_ver=I9100XWKJ2
    ro.build.changelist=676699
    ro.tether.denied=false
    ro.flash.resolution=1080
    # end build properties

    What's new on I9100XWKJ2?
    Here are some added features compared to previous releases of Android 2.3.5 like:
    I9100XXKI3
    I9100XWKI8
    I9100XXKI4
    I9100XWKJ1

    Gary Crutcher posted the CWM-flashable I9100XWKJ2 ROM with an expanded power menu included, and here's what comes with the I9100XWKJ2.

    - added CRT effect
    - added 14-button quickpanel - right/left scrollable
    - added AppWidgetPicker-1.2.3.apk to help manage widgets
    - added Titanium Backup v4.5.2.2 (free version)
    - added QuickPanelSettings.apk to modify quick panels - allows you to set which quick panel buttons show, the order of the quick panel buttons. Holding a quick panel button (long press) takes you to the buttons menu.
    - replaced Samsung bootanimation with Nexus Prime bootanimation
    - does NOT include sbl (Samsung Boot Loader)
    - no power menu title (except when using QuickPanel buttons)
    - no power off confirm dialog (except when using QuickPanel buttons)
    - reboot, recovery & download options added
    - system/app & system/framework files deodexed & zipaligned
    - removed leading zero on lockscreen clock if in 12-hour mode
    - Siyah Kernel v2.1.1 - includes root and CWM 5.0.2.7 (includes option to backup to external SD card - under backup menu)
    - updated busybox to v1.19.3
    - updated Superuser to v3.0.6
    - updated su to v3.0.3
    - no apps have been removed - left it stock
    - updated Market app to v3.3.11

    You can grab the CWM package and the detailed guide on how to install it; just see his thread here at Android Modaco.
    You can get the stock ROM and some more details here at XDA-Developers.
                 
    The open source release of Frontier didn't shake the world, or boil the ocean, but it is steadily climbing the Daypop Top 40, indicating there is some interest.
                 
    It's great to see collaboration among people I really admire, working to make Frontier a better HTTP server. Dave Luebbert was a developer at Microsoft, for years, working on Word. He's roughly my age. Being a good developer is more than knowing how the computer works, it's also about knowing how to get the best work out of yourself and others on the team.
              Judges and abortion: the judicial question becomes political    

    In the United States, as the perceptive de Tocqueville remarked more than 150 years ago, ''There is hardly a political question which does not sooner or later turn into a judicial one.''

    Although judges, as a group, do not attract much of the public's affection or esteem, Americans exhibit an astonishing eagerness to submit their destiny to them.

    Our society expects judges, like old-time country squires, to adjust every dispute and to achieve every necessary compromise and consensus.

    The public regards judges as what the historian Henry Steele Commager called ''the aristocracy of the robe.'' Better we should realize that trial judges are simply uniformed - and frequently uninformed - public employees, hired to perform certain limited tasks.

    Force a judge into different work, and she or he becomes a bungling amateur. When society asks judges to solve political and social difficulties that are really soluble only by other branches of government, the difficulties multiply.

    Some years back, the Massachusetts legislature addressed the question of the under-18-year-old who wishes an abortion but cannot or will not obtain parental consent. (In discussing the resultant statute, which illustrates with particular poignance our strange reliance on judges, I am expressing no view on the underlying issue of abortion - for minors, or anyone else. Process alone, not subject, is what I ask you to consider.)

    The statutory arrangement is simple. If an under-18-year-old has not, for whatever reason, obtained the consent of parent or guardian, she may petition the Superior Court for permission. An attorney will be provided for her at state expense; her petition will be sealed to ensure perpetual anonymity; and her application must receive priority, confidential, judicial attention.

    The judge meets the applicant in the presence of only her lawyer, a court clerk, and the court reporter (i.e., stenographer). Just one question is before the court: Should the judge authorize a physician to abort the pregnancy?

    To answer, the judge must follow certain statutory requirements. First, he must determine whether the applicant is ''mature and capable of giving informed consent to the proposed abortion.'' If the conclusion is ''Yes,'' the inquiry ends, and the authorization issues.

    If, however, the answer is ''No,'' the judge must go on to decide whether the abortion ''would be in her best interests.'' If his decision is affirmative, abortion, once again, is mandatory. Now one's views on the appropriateness of affording abortions to or withholding abortions from teen-agers (as they usually are) are completely irrelevant to an evaluation of the statute.

    Here, I am inviting attention not to the morality or amorality of pregnancy termination, but to the social wisdom or folly of leaving the terminating to judges.

    The statute appears to assume that with the robe, a judge acquires omnipotence, omniscience, and omnicompetence in a combination rarely (if ever) found on Earth. Indeed, if one regards abortion as the deliberate taking of life , it requires the judge to determine who shall live and who shall die.

    The legislation, however, permits a considerably less Olympian view of the process. A judge, it says, need only determine whether the applicant is sufficiently mature to be able to understand the abortion procedure and its possible medical consequences. He does not have to conclude that she does in fact understand.

    If the judge is convinced that she is intelligent enough to deal with the facts when a doctor gives them to her, the judicial inquiry ceases.

    Should the judge, however, conclude that the applicant lacks the capacity to understand, then he must decide whether the abortion will be in her best interests.

    If my experience is any sample, one reaches this stage very rarely. When it happens, however, the judge's role becomes entirely nonjudicial, and therefore impossible.

    Deciding whether or not it is in the best interests of an unmarried teen-ager to carry a baby to term against her wishes calls for skills and insights no judge possesses.

    At least in child-custody matters, a judge has the benefit of presentations by opposing sides, and frequently an independent one on behalf of the child. In abortion hearings, nobody makes any effort to put the facts before the judge.

    The teen-ager, speaking not under oath, says whatever she thinks the judge ought to hear. Her attorney, whose professional duty it is to help the client achieve her litigational goal, gives the judge only such information as the lawyer believes will persuade the judge to grant permission. Any inquiry by the judge which departs from the narrow language of the statute is impermissible.

    It is no coincidence that in the four-plus years of the statute's existence, judges have authorized an estimated 1,000 abortions and have denied them on only a handful of occasions, each time meeting prompt appellate reversal.

    Put another way, any teen-ager seeking an abortion is virtually certain to get it. The legislature might just as well have authorized abortion on demand. Yet, reading the statute suggests that a youngster can obtain an abortion only if she meets certain strict requirements, and first convinces a judge.

    Thus we come to the kernel of the matter: The issue of under-18 abortions is not judicial at all; it is, as de Tocqueville would have put it, entirely ''political.'' That is, the question ought to be decided not by judges, but by the Commonwealth, speaking through legislators applying whatever standards of morality, ethics, and judgment normally go into resolving any difficult public problem.

    Our statute, however, avoids the issue. It says to abortion opponents: ''The tests, fairly applied, would limit abortion; if they do not, blame those careless judges.'' Simultaneously, it says to abortion proponents: ''The tests are so simple that anyone can qualify; true, there will be some inconvenience and embarrassment, but the process will be cost-free, and the abortion certain.''

    The statute has not solved the political riddle. It has merely sent the puzzle to court, still looking for the answer that the State House will not give and that the courthouse cannot fashion.

                 

    found one more thing for the reported crash in /var/log/messages:

    Aug  8 11:11:22 s01en22 kernel: [  725.560792] VirtualBox[7506]: segfault at 2a ip b16e3960 sp b17dcd40 error 4 in VBoxSharedCrOpenGL.so[b169f000+b4000]
    

    i tried again, and had to emergency reboot the host (!) through sysrq keys. what i did:

    i booted the machine. the VM windows (dual-monitor) were maximized on my two monitors. the vm contents however were only 800x600 (or 1024x768? whatever; it was lower than the window's size) (since windows defaults to that with the new hw..?). i activated aero, and it indeed started correctly this time.

    i tried to minimize and restore the appearance setting window to test the aero effects, which caused a strange artifact on the second monitor (a black rectangle approx. the size of the window i just minimized on the primary monitor).

    after that i wanted to change the resolution to fit the maximized vm window. i have auto-resize on, so i just wanted to restore and re-maximize the window on the host to make it auto-adapt to the resolution. however, after i double clicked the window on the host, it snapped to a very small resolution (below 640x480, i guess), and the host immediately froze completely.

    i tried to get back to a VT by putting the keyboard into raw mode (alt sysrq r, ctrl alt f1), but that didn't work, nothing reacted.

    the syslog only shows my attempts to sysrq-reboot the machine, nothing obviously wrong.

    the Xorg log shows one of my favorite X crashes in conjunction with virtualbox (never saw the crash with anything else):

    [  2193.191] (WW) NVIDIA(0): WAIT (0, 6, 0x8000, 0x00006050, 0x00006050)
    [  2201.208] (WW) NVIDIA(0): WAIT (2, 6, 0x8000, 0x00006870, 0x0000e310)
    [  2201.743] [mi] EQ overflowing. The server is probably stuck in an infinite loop.
    [  2201.743]
    Backtrace:
    [  2201.743] 0: /usr/bin/X (xorg_backtrace+0x3c) [0x80ea48c]
    [  2201.743] 1: /usr/bin/X (mieqEnqueue+0x1a0) [0x80e9da0]
    [  2201.743] 2: /usr/bin/X (xf86PostMotionEventM+0xbd) [0x80c541d]
    [  2201.743] 3: /usr/bin/X (xf86PostMotionEventP+0x59) [0x80c5559]
    [  2201.743] 4: /usr/lib/xorg/modules/input/evdev_drv.so (0xb4cbe000+0x45ce) [0xb4cc25ce]
    [  2201.743] 5: /usr/lib/xorg/modules/input/evdev_drv.so (0xb4cbe000+0x4868) [0xb4cc2868]
    [  2201.743] 6: /usr/bin/X (0x8048000+0x6b510) [0x80b3510]
    [  2201.743] 7: /usr/bin/X (0x8048000+0x12588a) [0x816d88a]
    [  2201.743] 8: (vdso) (__kernel_sigreturn+0x0) [0xb773c400]
    [  2208.208] (WW) NVIDIA(0): WAIT (1, 6, 0x8000, 0x00006870, 0x0000e310)
    [  2216.271] (WW) NVIDIA(0): WAIT (2, 6, 0x8000, 0x00006870, 0x0000e320)
    [  2223.271] (WW) NVIDIA(0): WAIT (1, 6, 0x8000, 0x00006870, 0x0000e320)
    [  2231.339] (WW) NVIDIA(0): WAIT (2, 6, 0x8000, 0x00006870, 0x00003e80)
    [  2238.339] (WW) NVIDIA(0): WAIT (1, 6, 0x8000, 0x00006870, 0x00003e80)
    [  2243.406] [mi] EQ overflowing. The server is probably stuck in an infinite loop.
    [  2243.406]
    Backtrace:
    [  2243.407] 0: /usr/bin/X (xorg_backtrace+0x3c) [0x80ea48c]
    [  2243.407] 1: /usr/bin/X (mieqEnqueue+0x1a0) [0x80e9da0]
    [  2243.407] 2: /usr/bin/X (xf86PostMotionEventM+0xbd) [0x80c541d]
    [  2243.407] 3: /usr/bin/X (xf86PostMotionEventP+0x59) [0x80c5559]
    [  2243.407] 4: /usr/lib/xorg/modules/input/evdev_drv.so (0xb4cbe000+0x45ce) [0xb4cc25ce]
    [  2243.407] 5: /usr/lib/xorg/modules/input/evdev_drv.so (0xb4cbe000+0x4868) [0xb4cc2868]
    [  2243.407] 6: /usr/bin/X (0x8048000+0x6b510) [0x80b3510]
    [  2243.407] 7: /usr/bin/X (0x8048000+0x12588a) [0x816d88a]
    [  2243.407] 8: (vdso) (__kernel_sigreturn+0x0) [0xb773c400]
    

    I'll attach the log for the session that crashed the host, too.


                 

    Now for the third try:

    This time the VM aborted immediately when I tried to apply the Aero theme after bootup.

    The log again shows the line already seen previously:

    00:01:01.736 OpenGL Error: Assertion failed: conn->pHostBuffer && !conn->cbHostBuffer, file /home/vbox/tinderbox/lnx32-rel/src/VBox/GuestHost/OpenGL/util/vboxhgcm.c, line 568
    

    And the syslog shows another crash (in a different location now, though):

    Aug  8 11:53:55 s01en22 kernel: [  962.929532] VirtualBox[7631]: segfault at 938 ip b50b6b58 sp b1854e60 error 4 in libnvidia-glcore.so.270.41.19[b422e000+16b4000]
    
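    Since both backtraces point at the shared OpenGL stack, one possible stopgap until this is fixed could be to disable 3D acceleration for the guest (the VM name below is a placeholder):

    # turn off 3D acceleration for the affected guest ("Win7" is a placeholder name);
    # run with the VM powered off; this avoids VBoxSharedCrOpenGL at the cost of Aero
    VBoxManage modifyvm "Win7" --accelerate3d off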

              A Christmas LED Special   
    Do Christmas LEDs Pay for Themselves?

    Our updated light display
    now at my office
    Each year LED Christmas lights become more widely available and better priced. But they're still on the expensive side. Old-style incandescent lights go for under $3 for a 100-light string, while LEDs start at about 4 times as much at $12 per 100.

    Of course there are numerous reasons to buy LEDs:
    • in theory they should last forever
    • they can have color-changing and other novel features
    • you can string many 100s together on a single circuit
    • they are more durable and shock-resistant without filaments or glass 
    • they generate less heat, so less chance to start a fire (though MythBusters busted that myth)
    • it's just the right thing to do since they use so much less energy
    But do they pay for themselves? More precisely, how long does it take, and how much electricity-cost do they save?

    The old 50 lights burning 21.6 Watts
    Case Study
    We have some really old "Jesus" lights we put together right after we got married. We wanted a simple reminder of the kernel of Christmas in our apartment window, so I got some poster board and a string of 50 lights and spelled out the name Jesus, one letter in each pane of the balcony window. That string of lights is still working after 25 years(!)

    But the lights are sure to burn out someday, and they consume 21.6 Watts of electricity. What about replacing them with LEDs? Even by doubling the lights to a 100-light LED string, the new string uses only 3.9 Watts of power.
    New 100 LEDs burning 3.9 Watts

    Note that the claims of 80% and 90% energy-savings on LED packaging are real, not just hype. 4 Watts of LEDs can replace 44 Watts of incandescents--that's just 9% of the power!

    But what is the cost savings? Assume the lights burn for about 6 weeks (Thanksgiving through Three Kings Day, January 6) for 6 hours a day. Using my rule of thumb (that 1 Watt costs $1 per year), the cost of each string is:

    50-light 22 Watt string:
    21.6W x 42days/365 x 6hours/24 = $0.62

    100-light 4 Watt string:
    3.9W x 42days/365 x 6hours/24 = $0.11
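
    To check the arithmetic, here is the same rule-of-thumb calculation as shell one-liners (figures taken straight from the two strings above):

    # annual cost = watts x (days lit / 365) x (hours per day / 24), at ~$1 per watt-year
    awk 'BEGIN { printf "50-light:  $%.2f\n", 21.6 * 42/365 * 6/24 }'
    awk 'BEGIN { printf "100-LED:   $%.2f\n",  3.9 * 42/365 * 6/24 }'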

    So the new string saves half a dollar a year. It will pay for itself in just 24 years! We can celebrate the savings at our 50th wedding anniversary. Well...the change has to be justified on principle and other ancillary grounds, such as that 100 lights look better than the skimpy 50-light array.

    Of course, if replacing 100 lights with 100 lights, the savings would more than double to $1.13 per year (= 0.62 x 2 - 0.11), so the payback period would be cut back to just 11 years.

    Other Factors
    Plus there are other factors that affect the payback calculation:

    New strings
    For a new string, the payback time is based on the cost difference between incandescent and LED strings. For a 100 light comparison, the price difference is $9, while the energy savings is about $1.13 per year, so the payback is about 8 years.

    Discounts for recycling
    Home Depot has an annual recycling program in early November each year at which they give away $3 to $5 coupons per string. This brings the payback time for replacement strings down to the same 8-year level as for new strings.

    Reduction in future prices
    Next year the price for LEDs is likely to continue to drop, further reducing the payback time. If a 100-light string drops to $9, the price difference will be $6, and the payback time will be 5.3 years. So plan ahead and keep an eye on the prices each year.

    Rise in future energy costs
    On the other hand, future energy prices are likely to rise, which also shortens the payback. By the end of the 24 year case study above, I expect energy prices to more than double. If they rise 6% a year, they'll quadruple! Based on 6% increases, a 24-year payback decreases to 14 years before accounting for inflation. If overall inflation runs at 3%, the term extends back out to 17 years. But that case study was skewed--because it doubled the number of lights.

    On a light-for-light comparison with 6% increases in energy cost, the total replacement cost (without a coupon) is recovered in about 8 years, while the extra cost for a new string is recovered in about 6 years. (In each case, the 3% inflation adjustment is only +/-1/2 year.)
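
    For readers who want to plug in their own prices, here is a rough sketch of the payback arithmetic (the inputs are the $9 price difference and $1.13 first-year savings from above; depending on when within the year you count the savings, the output lands within a year or so of the figures in the text):

    # years until cumulative savings cover the extra cost, with savings
    # growing by `growth` per year (1.00 = flat energy prices)
    payback() {
      awk -v cost="$1" -v saving="$2" -v growth="$3" 'BEGIN {
        total = 0
        for (y = 1; total < cost; y++) { total += saving; saving *= growth }
        print y - 1, "years"
      }'
    }
    payback 9 1.13 1.00   # flat prices: about 8 years
    payback 9 1.13 1.06   # with 6% annual energy-price rises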

    Bottom line
    If you are going to buy new strings, definitely go for LEDs. Payback isn't immediate, but it's measurable. Know it's the right thing to do, and it will pay you back too.

    If you are thinking of upgrading strings of lights, next year just might be the year! Plan ahead. Take advantage of coupons and discounts.

    Christmas-Light Rules of Thumb
    Just to have some ballpark numbers I can remember, let me summarize some rules of thumb.

    100-light string    Each Christmas
    incandescents       cost over $1
    LEDs                cost about 10¢
    LEDs                save about $1

    So 20 incandescent strings show up measurably on your January bill at $20+, while 20 LED strings are imperceptible.

    More Considerations
    When choosing LED Christmas lights:
    1. Look for the Energy Star label. Energy Star compliant strings have been independently tested not only for energy use, but also for weatherization (if rated for outdoor use), longevity, protection against overvoltage, and they include a 3-year warranty.
    2. Look for "always on" technology--so the string stays lit even if one bulb is broken, burns out, or is removed. If a string doesn't advertise this feature, assume it doesn't have it.
    3. Stores do not stock many colors, but more colors are available online. Online suppliers also stock white wire and brown wire options along with green.
    4. Watch the whites: do you want blue-white, pure-white, or warm-white? Even among the same designation from the same maker, there may be variations, so be aware as you look.
    Always Recycle!
    Finally, don't just throw old strings into the dump. Even if you don't need the coupon, take your strings to Home Depot next year. Or search for other local or mail-in recycling alternatives. Here are some web alternatives I found that also claim to give discounts: Christmas Light Source, Environmental LED, HolidayLEDs.


    Arise, shine, for your light has come,
       and the glory of the LORD shines over you.
    For look, darkness covers the earth,
       and total darkness the peoples;
    but the LORD will shine over you,
       and His glory will appear over you.
    Nations will come to your light,
       and kings to the brightness of your radiance. 
    (Isaiah 60.1-3)

    Our new light display at home

          BlackBerry as a Modem on Kubuntu 9.10 using EntelPCS Unlimited Internet   
    The situation is simple: in Chaimávida there is no way to contract a decent Internet service, apart from mobile Internet plans that browse at 3G speed with a limited download cap. And truth be told, none of those services appeals to me enough to be paying for another connection on top of the one I'm already paying. Currently my BlackBerry 8300 has (when I pay the phone bill) Unlimited Internet contracted with Entel for CL$5,990 a month.

    RIM's lack of support for the Linux platform is well known, but as always happens, there are people willing to do something about it, and that something is a little program called Barry. Among its functions, it lets us charge the BlackBerry's battery (older kernel versions wouldn't let it charge), sync contacts and, most importantly for this post, help configure the phone as a modem to connect the computer to the Internet.


    I'll try to go step by step, in the hope that this post helps someone; it's the post I would have loved to find when I was out searching for information on the web. So, without further ado, the steps are as follows:


    1.- Install Barry: in older versions of Kubuntu (I remember 8.10 and 9.04) barry was in the repositories, so with a simple console command, "sudo apt-get install barry", the whole program got installed. However, that's not the case in the just-released version of Kubuntu (Kubuntu 9.10 Karmic Koala), so there are two ways to install it, which I detail below:

    1.1.- Downloading the deb packages: The day I installed barry was the very day the new version of Kubuntu came out, so I found no repositories for the new distribution, and since I didn't want to break the fresh installation I decided to install the program and its dependencies from its page on SourceForge.

    The list of files to download is as follows:

    libbarry-dev_0.16-0_ubuntu904_i386.deb
    barry-util_0.16-0_ubuntu904_i386.deb
    libbarry0_0.16-0_ubuntu904_i386.deb
    opensync-plugin-barry-dbg_0.16-0_ubuntu904_i386.deb
    opensync-plugin-barry_0.16-0_ubuntu904_i386.deb
    barrybackup-gui-dbg_0.16-0_ubuntu904_i386.deb
    barrybackup-gui_0.16-0_ubuntu904_i386.deb
    barry-util-dbg_0.16-0_ubuntu904_i386.deb
    libbarry0-dbg_0.16-0_ubuntu904_i386.deb


    The files are for the i386 platform, and although they are built for Kubuntu 9.04, they work perfectly on 9.10.

    The idea is to download and install them in the same order, as sketched below.
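
    A minimal sketch of that install, assuming all nine .deb files sit in the current directory; handing them to dpkg in a single call lets it sort out the inter-package dependencies:

    # install all the downloaded Barry packages in one call
    sudo dpkg -i *.deb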

    If you don't want to do it this way, do it the following way:

    1.2.- From repositories: weeks after my installation I found that repositories for Karmic already existed, so if you want to do it that way, the repository can be found here.


    2.- Configure the connection scripts (info taken mostly from this post):

    The two scripts that need to be configured live in /etc/ppp/peers/ and /etc/chatscripts respectively. Barry ships with scripts preconfigured for American and European phone carriers, so we'll take one of those files and modify it with Entel's connection information.

    2.1.- Modify the barry-tmobileus file found in /etc/ppp/peers and paste in the following code:


    Code:

    #
    # This file contains options for T-Mobile US Blackberries
    #
    # It is based on a file reported to work, but edited for Barry.
    #

    connect "/usr/sbin/chat -f /etc/chatscripts/barry-entelpcs.chat"

    # You may not need to auth. If you do, use your user/pass from www.t-mobile.com.
    #noauth
    user "entelpcs"
    password "entelpcs"

    defaultroute
    usepeerdns

    noipdefault
    nodetach
    novj
    noaccomp
    nocrtscts
    nopcomp
    nomagic

    #nomultilink
    ipcp-restart 7
    ipcp-accept-local
    ipcp-accept-remote

    # added so not to disconnect after a few minutes
    lcp-echo-interval 0
    lcp-echo-failure 999

    mtu 1492
    debug
    debug debug debug

    pty "/usr/sbin/pppob -l /etc/ppp/peers/error -v"

    # 921600 Works For Me (TM) but won't "speed up" your connection.
    # 115200 also works.
    115200
    local

    Save as barry-entelpcs and exit.

    2.2.- Create the file barry-entelpcs.chat in /etc/chatscripts/ with the following code:


    Code:

    ABORT BUSY ABORT 'NO CARRIER' ABORT VOICE ABORT 'NO DIALTONE' ABORT 'NO DIAL TONE' ABORT 'NO ANSWER' ABORT DELAYED ABORT ERROR
    SAY "Initializing\n"
    '' ATZ
    OK AT+CGDCONT=1,"IP","imovil.entelpcs.cl"
    OK-AT-OK ATDT*99#
    CONNECT \d\c

    Pay attention here: if your Internet configuration uses bam.entelpcs.cl, you need to make the corresponding change in the file.

    Again, save as barry-entelpcs.chat and exit.

    3.- Connect the device

    Now connect the device; when it asks whether you want it to be used as a mass-storage unit, say no. Open the console and type: "sudo pppd call barry-entelpcs" (without quotes), enter the root password, wait for the script to connect and voilà!! You can now browse using the BlackBerry as a modem.


    Observations:

    For me, the device disconnects automatically when someone calls me, though not when emails or messages arrive. To fix this, stop the script in the console with Ctrl+Z, then reset the berry's connection with the "breset" command, and then run "sudo pppd call barry-entelpcs" again. With this, the connection is re-established.

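    Those three steps can be strung together; a small sketch, assuming barry-util's breset is on the PATH:

    # tear down the stuck pppd session, reset the BlackBerry's USB link, and dial again
    sudo pkill pppd
    breset
    sudo pppd call barry-entelpcs
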
    Konqueror in web-browser mode works wonderfully. I've been scrobbling to Last.fm with Amarok, chatting through Kopete and browsing lightweight pages. Forget about loading videos or downloading anything heavy; remember that the phone browses over the EDGE network, which is quite good for the phone but isn't meant to be broadband. Still, it does the job 100% in a pinch. Besides, there's something romantic about waiting a bit for pages to load... like going back to that mid-nineties browsing.

    If you're going to use Firefox, remember to uncheck the "Work Offline" option in the File menu.

    P.S.: This post was written in Chaimávida, the place where the exquisite Kurüko craft beer is made, so while you're at it, visit:

    http://www.kuruko.cl


    Published on Blogger





              Hitler and "Positive Christianity"   
    For postings today I'm providing a copy of the first chapter of my e-book Hitler's Christianity, which is nearing its third birthday.
    **
    Chapter 1 -- Positive Christianity: Doctrines and Background
    The fundamental core of our case is that Hitler and Nazi leaders adhered to a cult system called “Positive Christianity.” By defining Positive Christianity as a cult, we are arguing that its beliefs lie outside the mainstream of orthodox Christianity, to the extent that it would be incorrect to define Hitler as a Christian, or to place the blame for Nazi atrocities on the Christian faith as a religion and as a philosophy.
    Cults and Heresies
    The first step in this process is to ask: What is a cult? The word “cult” today holds sinister connotations of dark-robed figures slitting lizards’ throats in the moonlight, or of murderous, charismatic leaders brainwashing followers into self-immolation. “Cult” brings to mind pictures of the Branch Davidians resisting to the death the forces of the United States Government; or of the followers of the Hale-Bopp comet cult lying dead under purple sheets after ingesting poisoned rice pudding, or of the followers of Jim Jones consuming cyanide-laced punch in the jungles of Guyana. But, from a strictly theological perspective, “cult” can refer to any religious group that is a deviation, or offshoot, from some other major religious group, and which holds to a new or unusual belief or practice that either rejects, or openly contradicts, the beliefs of the parent group. To that extent, it is no longer truly part of the parent group, and becomes properly defined as a new group in its own right.
    A related word in this context, which we will also apply to Positive Christianity, is heresy. Broadly speaking, a heresy is any doctrine that is at odds with what is accepted by the mainstream of a parent religious body. Thus, formally, “cult” refers to the group which deviates from the norm, while “heresy” defines the doctrines that cause that group to be deviants from the norm.
    Arguably, then, the defining of a group as a cult, or of a belief as a heresy, is a matter of the degree of deviation from a mainstream view: The more radically a splinter group departs from the beliefs and practices of its parent group, the more appropriate it becomes to define the splinter group as a cult, or their beliefs as heresies.
    With that, we may now ask: In what way did Positive Christianity deviate from the mainstream of doctrine and enter into heresy? What of its deviations make it sufficient to classify it as a cult?
    There are three areas in which Positive Christianity differed significantly from orthodox Christian viewpoints.
    Deviation #1: A Bowdlerized Bible
    The Bible, consisting of the Old and New Testaments, is widely recognized as the "handbook" of the Christian faith. There are, of course, a range of opinions about its exact role in the Christian life. Some regard it as the inerrant word of God, inspired by God Himself. Some regard it as a human record, but still authoritative in terms of being the key source for Christian doctrine. Some say that its canon is lacking and could stand to add a few books; a few others say there is a book or two that is not as qualified as the others, or could stand to be removed.
    Despite these variations in opinion, however, it is generally recognized that there is a certain extent to which one can go that ends up outside the pale of what is historically and theologically called "Christian." A Buddhist who rejects the authority of the Bible, and sees in it nothing more than perhaps a mishmash of history, moral teachings, and the words of sometimes-mistaken men, is certainly not qualified to be called a Christian on those terms. Muslims who regard the Bible as authoritative but corrupted, and in need of the corrections and clarifications offered by the Quran, also cannot be regarded as Christians.
    Moving closer to the center of the circle, definitions get harder to apply. Groups like the Mormons, Jehovah's Witnesses, or David Koresh's Branch Davidians, are overwhelmingly denied the title of “Christian” in good measure because they declare that the Biblical record has been either corrupted or badly misunderstood, so that they believe it necessary for there to be supplemental revelation, provided either by another inspired book, or some prophetic revelation. These groups may also boldly declare that they are indeed Christians, and may become quite offended when told that this is not the case. Alternatively, a group may declare that it is they who are the true Christians, and it is others in the mainstream church who are not! [xy]
    ([xy] I am particularly familiar with this issue where it concerns Mormonism, and attempts by Mormon apologists to claim the title "Christian" for themselves. See on this point, for example, Daniel Peterson and Stephen Ricks, Offenders for a Word (Foundation for Ancient Research and Mormon Studies, 1998).
    The irony in this is particularly strong, since Joseph Smith himself reported that during his “First Vision” of God and Jesus, he asked them which church he ought to join, and was told that he “must join none of them, for they were all wrong; and the Personage who addressed me said that all their creeds were an abomination in his sight; that those professors were all corrupt; that: ‘they draw near to me with their lips, but their hearts are far from me, they teach for doctrines the commandments of men, having a form of godliness, but they deny the power thereof’ ”. (History of Joseph Smith, 1:19.) Even more strongly, third Mormon President John Taylor (1808-1887) said, “We talk about Christianity, but it is a perfect pack of nonsense...the devil could not invent a better engine to spread his work than the Christianity of the nineteenth century.” (Journal of Discourses 6:167). To that extent, the modern Mormon quest to be called "Christian" departs considerably from the original teachings of Mormonism's founders.
    The diversion of Mormonism here noted is of some relevance, since, as we will see, adherents to Positive Christianity also declared themselves to be restoring a more original and authentic form of the Christian faith. Thus, one of the most important reasons they can be denied the title of “Christian” and deemed a cult, is that they denied that title to everyone else, thereby indicating that they were a separate group.)
    It is beyond our present scope to discuss these other groups listed, but it is clear that with respect to the Bible and its utility, there is a line of demarcation beyond which one cannot pass and still be acceptably termed "Christian." All that said, where does “Positive Christianity" fit on this spectrum?
    The Positive Christian “Canon”
    The two divisions of the Bible, the Old Testament and the New Testament, are regarded as closed collections, or canons, to which nothing can be justifiably added, or appropriately taken away. It is considered a standard hallmark of a Christian cult to in some way change or redefine the contours of these canons, either by claiming that some new revelation has been provided which further defines, or else updates, the prior canon, or else by subtracting from that canon.
    Within this understanding, it is not necessary, as a critic might suppose, to debate whether or not the canon of the Bible was the result of divine intervention. Even if the canon had been assembled by completely natural means, it remains the defining “constitution” of the Christian faith. Thus, by definition, any group that performs surgery on the canon is defining itself as “outside” the Christian faith.
    Positive Christianity defined itself in terms of a particularly radical form of canonical surgery, one that amounted to removing no less than three quarters of the Bible from the Christian canon, and as much as ninety percent of it, depending on individual variations. The minimum surgery consisted of the complete excision of the Old Testament from the Bible, as a document that was “too Jewish” for their tastes, and at a maximum, disposing of the letters of Paul, who was frequently named as a Jewish “corrupter” of the authentic Christian faith.
    In this respect, Positive Christianity imitates a movement widely recognized as heretical by the Christian mainstream. The removal of the Old Testament, as well as select New Testament material, mirrors the actions of the second-century Marcionite heresy, which rejected Jewish influences on the Christian faith. Like the Positive Christians, Marcion rejected the entirety of the Old Testament from his canon. Unlike the Positive Christians, however, his trimming of the New Testament involved keeping Paul rather than rejecting him (all except for the Pastoral letters: 1 and 2 Timothy, and Titus), and rejecting every Gospel except an abridged version of Luke.
    At the same time, the New Testament itself rejects such forced distinctions between itself and the message of the Old Testament. Jesus and the authors of the New Testament clearly quoted and alluded to the Old Testament as historically authoritative, and with great appreciation. They also clearly saw in Jesus an imitation of Old Testament themes and prophecy.
    There can therefore be little doubt that on this accounting alone, that of a radically bowdlerized canon, Positive Christianity must be counted as a pseudo-Christian cult.
    Deviation #2: A De-Judaized Jesus
    If the Bible is Christianity's handbook, then Jesus is Christianity's central figure. No one would say a Muslim qualified as a Christian, not only because of their rejection of the Christian canon as God’s complete revelation, but also in good measure because of their quite different take on who Jesus was, and what he did (or rather, did not do). Any person whose portrait of Jesus departs in some substantial way from the Christian view, also cannot be regarded as a Christian.
    Since the Positive Christians rejected the Old Testament for being a Jewish document, and rejected Paul as a Jewish corrupter of Christianity, it will not be surprising to learn that they also made an effort to redefine Jesus. In mainstream Christianity, Jesus is a Jew – a member of a specific ethnic group, born into that group in accordance with promises related in the Old Testament covenant, concerning a coming Messiah. In order to make Jesus acceptable for their anti-Semitic viewpoints, the Positive Christians redefined the ethnicity of Jesus, turning him into an Aryan (a member of the Nazi “master race”) or a Nordic.
    Other religions and groups have claimed Jesus and reinvented him into a person that all would presumably agree is not "Christian." One of my favorite examples of this is a book titled The Elvis-Jesus Mystery, by Cinda Godfrey. This amazing book declares that Elvis Presley was the "same soul" as Jesus (and as Adam, for good measure!). I cannot imagine even the most insensate critic arguing that this represents a genuinely "Christian" point of view.
    The Jesus of Positive Christianity was perhaps not as radical as Godfrey's. It was, however, a logical extension of their views on the Bible. As Positive Christianity divorced Christianity from the Bible's Jewish elements, it also divorced Jesus from his Jewish heritage.
    Physical Parameters
    The question that may arise now is, "Is Positive Christianity's Jesus truly radical enough to disqualify it as a bona fide Christian sect?" For arguably, one can believe, for example, that Jesus had red or brown hair, or was 6 feet tall rather than 5 feet tall, and not endanger being classified as a Christian.
    Hair color and height, however, are not essential to Jesus' identity as broker of the Christian covenant. On the other hand, Jesus' status as the divine Son of Man, and as incarnate hypostatic Wisdom (that is, as a member of the Trinity) have been widely recognized as being essential to his identity. Groups that deny such doctrines, such as the Jehovah's Witnesses, have been discounted as not being within the Christian fold since the Council of Nicaea condemned the heresies of the Arians (i.e., the Jehovah's Witnesses of that day).
    Is Jesus' Jewishness no more important than his hair color? Given Jesus’ professions to be intrinsically linked to the messianic promises of the Old Testament, and similar sentiments by the authors of the other New Testament books, it is clear that to turn Jesus into an Aryan, and deny his Jewishness, is to deny a fundamental fact of Christianity. It is also contrary to clear New Testament professions giving Jesus Jewish or Davidic ancestry (Matthew 1:1-17; Luke 2:11, 3:23-28; Romans 1:3; 2 Timothy 2:8; Hebrews 7:14; Rev. 5:5, 22:16). Jewishness was intrinsic to Jesus' self-identity, and denial of Jesus' Jewishness, as part of the package of "Positive Christianity," puts it outside the pale of historic and theological Christianity.
    The nature of this deviance should be properly understood. Critics may charge that many depictions of Jesus in our churches make him out to be a white Anglo-Saxon, sometimes with perfectly “Nordic” blue eyes and blond hair. But this is not done in order to de-Judaize Jesus. Rather, it is because many modern Christians are not aware that Jews of the first century had dark complexions and dark hair. Pictures of Jesus as a typical “white guy” are designed based on the assumption that the Jews of the first century were also “white guys”, and not in order to deny that they were Jews.
    Deviation #3: Indifference to Doctrine
    The final deviation of Positive Christianity concerns a focus on orthopraxy (right practice) at the expense of orthodoxy (right doctrine or belief). To be a Christian (or a member of any religious group) requires correct adherence to a certain prescribed set of beliefs. Orthodoxy is used to describe one who holds the correct set of beliefs for their spiritual tradition.
    In contrast, orthopraxy is used to refer to the rules of conduct that one must adhere to in order to live as a member of a group. Within Islam, for example, there are five pillars, or obligations, each faithful Muslim must perform to be considered faithful to Islam: belief (meaning, orthodoxy), worship, charitable giving, periodic fasting, and a pilgrimage to Mecca at least once in one's lifetime. Any Muslim who deviates from this set of duties, without valid justification (e.g., not having the resources to make a trip to Mecca), is regarded as less than faithful to their beliefs.
    Within Christianity, New Testament moral admonitions (particularly the Sermon on the Mount) are regarded as guidelines for Christian behavior. Those who deviate from these guidelines are regarded as either failing to represent orthodoxy (their beliefs), or may, in some cases, be regarded as displaying evidence of not holding right beliefs at all. Of course, this is reckoned as a matter of degree, not as a binary equation; a momentary lapse in orthopraxy is not immediately regarded as a sign of failure to be a member of the group. Positive Christianity strongly emphasized works and action. However, in terms of doctrine, it might be well to say that Positive Christianity not only failed to encourage the formulation of doctrine but it ignored doctrine to the point of annihilation. One searches in vain for any comment by leading Nazis on key doctrinal issues like the atonement, the Trinity, or original sin. Steigmann-Gall elaborates on this point, noting of Nazi commentators, [HR86] "[r]arely did they elaborate on doctrinal questions. Seldom did these party members discuss their thinking on original sin, the resurrection of Christ, or the communion of the saints." Though they believed they were following Christian ethics, and though they could accept Christian dogmas and gain inspiration from the Gospels and from Jesus, "In general...most of them were less concerned with the doctrine of Christianity than with its political ideology." He further states:
    Positive Christianity was not an attempt to make a complete religious system with a dogma or ritual of its own: It was never formalized into a faith to which anyone could convert. Rather, this was primarily a social and political worldview meant to emphasize those qualities of Christianity that could end sectarianism. [HR84] Beyond this, Nazi commentators "said little or nothing about the Augsburg Confession or other signifiers of theological orthodoxy", and were "generally unconcerned with dogma." Steigmann-Gall goes on to say that in spite of this, they "adhered to basic precepts of Christian doctrine, most importantly the divinity of Christ as the son of God." [HR49-50] Nevertheless, this seeming "saving grace" is insufficient to detract from the lack of focus on orthodoxy in Positive Christian writings, especially given the reason for this lack of focus on doctrine.
    Positive Christianity: In the Background

    Just prior to the Nazi era, and even outside of Germany, the phrase "positive Christianity" was used to define a form of Christianity in which the believer was encouraged to act upon their beliefs, instead of merely being content to believe intellectually. An 1897 British journal, The Cambrian, in an article titled "Prof. Richard T. Ely on Christianity", says:
    Positive Christianity having eyes and ears perceives wretched social conditions all about us. It knows what vile tenements signify and is aware of the enormous extent of the housing problem. Positive Christianity sees degraded childhood and lost opportunities on every side. Positive Christianity remembers that blindness is sin, that neglect is sin. “Inasmuch as ye did it not,” is the condemnation of negative Christianity.
    Ely’s concern was that “professed Christianity” become “real Christianity” by action. Similar sentiments can be found in other sources of the same period, using the phrase, “positive Christianity.” [xp]
    ([xp] For example, Charles Abram Ellwood, The Reconstruction of Religion: A Sociological View (MacMillan: 1922) and Peter Taylor Forsyth, Positive Preaching and Modern Mind (A. C. Armstrong and Son, 1907). No doubt unaware of the Nazi connotations of the phrase “positive Christianity”, some modern writers have revived it to refer to the practice of Christianity with a “positive attitude.” For example, Zig Ziglar, Confessions of a Happy Christian (Pelican: 1978).)
    There is certainly nothing innately wrong with encouraging orthopraxy. Calling believers to action is part of any healthy system of faith. However, the Positive Christians of the Nazi movement took this a step further, to where orthopraxy was emphasized to the point that orthodoxy was deemed irrelevant. Cults and heresies, under normal circumstances, are termed as such in part because of incorrect doctrine. How much more so should a group be classified as a cult for dispensing with doctrine altogether?
    Why Ignore It?
    Ely’s expression of “positive Christianity” had as its purpose a call to action on the part of those who professed Christian belief. Certainly, the Nazi adherents to their form of Positive Christianity would argue that such was their purpose as well. However, there was much more to it, and much that was designed to aid the Nazis in achieving a Germany unified under their banner. [HR51] "[O]ne of the very purposes of positive Christianity...was to bridge the religious divide by making no specific references to a particular confessional bias." Germany of this era was characterized by a significant population divide between Catholics (who were approximately one third of the population) and Protestants, who were themselves divided into over two dozen denominations. Positive Christianity, a Christianity of action that had no use for dogma, was intended to appeal to "the commonalities that joined Protestants and Catholics," stop sectarianism, and unify the nation under the Nazi banner.
    For this reason, it is not surprising that little or no effort was made to lay out any detailed theology or dogma under Positive Christianity. [HR52] A "generalized and rather diffuse notion of simple Christianity" was best suited for achieving unity, by minimizing potential differences of opinion. Those who advocated Positive Christianity "were particularly unsuccessful in laying out any idea of what the new faith would actually look like; what its dogmas, creeds, or institutions might be, aside from a de facto appropriation of aspects of Protestantism." There was also no evidence that, "they made any particular effort to do so."
    Pre-Nazi Positive Christianity
    The roots of the German incarnation of Positive Christianity go back into history much farther than the Third Reich, and indeed, into the time even before Hitler’s birth. Part of the genesis of Positive Christianity was a hypertrophic German nationalism (a subject we will discuss further in Chapter 5), and its reaction to the practice of ultra-Montanism – a Catholic orientation which placed a strong emphasis on the powers of the Pope.
    An early opponent of ultra-Montanism in Germany was Ignaz von Döllinger (1799-1890), a [CRN20-1] “famed Munich theologian” who viewed ultra-Montanism as “both anti-German and almost pathologically destructive.” Von Döllinger claimed that “God had given Germans in particular the world historical task of reinterpreting Catholic theology for the dawning modern age, and he called on German Catholics to shed the yoke of ultra-Montanism and to assume their predestined role as ‘teachers of all the nations.’ ” In this view, he took for granted the superiority of German “national spirit,” and his views typified a nationalist reaction to ultra-Montanism.
    Von Döllinger was not a “positive Christian” in the Nazi sense. However, he was a reactionary against ultra-Montanism, and [CRN33] other Catholic opponents of ultra-Montanism in Munich found their solution in Positive Christianity. At the time, the phrase was “so commonplace in prewar Reform Catholic circles as to require little explication.” It is not difficult to find examples of its use twenty years and more before the Nazis used it in their platform. In these earlier contexts, it was associated with German nationalism, anti-Semitism, and a strong emphasis on moral purity.
    There are also indications of the three distinctives we have listed, at this early stage. [CRN37] For example, the cover of the April 1902 issue of the journal Renaissance, featured a “blending of Nordic-Aryan imagery and explicitly Catholic visual references” (including the figure of a muscular titan) and also “visually reinforced the primacy of the New Testament, which is illuminated specifically by the torch of the titan, over the (Jewish) Old Testament, which is pushed far to the margins of the image.” A tablet of the Ten Commandments is also featured toppling off into the void.
    Positive Christianity’s “John the Baptist”?
    As we move into the time of the Nazi Party itself, there is a leading figure in the Positive Christian movement who can be found to have definitive ties to the Party. [CRN1] In 1918, a Bavarian Catholic, Franz Schrönghamer Heimdal, authored a book titled The Coming Reich, which laid out plans for "the ecumenical yet distinctly Catholic-oriented spiritual rebuilding of Germany." The spirit of German hyper-nationalism infected Heimdal's work (e.g., he was unashamedly anti-Semitic, contrasting the purity of Christ with the "materialist spirit" of the Jews). He also [CRN53] claimed that "Catholic revelation and Nordic legend were in perfect God-ordained harmony," and elsewhere, [CRN71] in a 1919 Christmas devotional written for the newspaper that would become the Nazi Party's unofficial publication, declared that only in Christ could the Germanic spirit "find its fullest expression." Heimdal's radical ideas extended even into the physical realm, anticipating another aspect of the future Nazi program: He [CRN2] foresaw Christians bonded in a racial community that was to be maintained via eugenics.
    The three distinctives of Positive Christianity are plainly evident in Heimdal's work. The promise of a bowdlerized Bible is clear in that he saw the heroism of Jesus foreshadowed in the ancient Nordic saga Edda, which he supposed might even be divinely inspired, at least to the extent that the [CRN54] "inferior" Old Testament was inspired. A de-Judaized Jesus is already present in his writings: He [CRN56] claimed that Jesus was a "Galilean Aryan from Nazareth whose racial stock stood in stark contrast to the racially inferior Jews of Jerusalem." Finally, the emphasis on orthopraxy [CRN72] is made clear in that the central theme offered in his 1919 Christmas devotional was "common good before individual interest," a sentiment reflected nearly word for word in Point 24 of the Nazi Party program.
    The similarities between Heimdal's views and Nazi "Positive Christianity" were so obvious that, fifteen years later, in 1933, Heimdal had the courage to openly claim that his book had played a role in the founding of the Nazi movement. In this [CRN74], Heimdal's estimate of his influence is certainly "overblown," since the same ideas he promulgated were already widespread in Munich at the time. His claim of direct influence, however, does have a "kernel of truth" to it, to the extent that he was to some degree involved in Nazi affairs, and had the attention of people in the Nazi Party. In 1920 [CRN3] he was the leading writer for the Völkischer Beobachter newspaper, when it was the "unofficial organ" of the Nazi movement. He also had two other books that were widely discussed among the early Nazis (in 1918, and 1919), and The Coming Reich earned the praise of Dietrich Eckart, an influential "mover and shaker" in the early Nazi movement, whom we will discuss further in Chapter 3. Perhaps Heimdal's influence is best summed up by Hastings: He [CRN80] offered the "first programmatic religious statement from a Nazi member following the articulation of Positive Christianity."
    The earliest history of Positive Christianity as a Nazi phenomenon closes with a peculiar note. After his failed 1923 beer hall putsch, Adolf Hitler was compelled to serve time in prison. Prior to his sentence, Positive Christianity among the Nazis was associated with persons who, like Hitler himself, maintained a spiritually tenuous connection to Roman Catholicism. After Hitler's release from prison, in 1925, there was a re-founding of the Nazi movement, and from then on, as a reaction to growing anti-Catholic sentiment, the [CRN144] Catholic orientation of Positive Christianity was replaced with a Protestant orientation. So it was that after February 1925 [CRN157], aside from occasional references to Positive Christianity and the heroism of Christ, "Hitler was no longer portrayed either as a believing Catholic or as an energetic advocate of Christianity." From then on, Positive Christianity would become more greatly associated with Protestantism, and the denominational gauntlet would be taken up by a group called the German Christians, whose story will be further told in Chapter 6. For now, we will turn to discussion of the individual religious beliefs of leading Nazi figures, which will first require a diversion to deflate an all too common myth – that of Hitler and other leading Nazis as practitioners of the occult.
              Strange Wi-Fi and overheating issues with Linux kernel 4.x (And how to fix it the easy way)   
    A while ago I wrote a post on fixing monitor resolutions for my new laptop when booting into Linux. As part of the troubleshooting I upgraded my Linux kernel from 3.x to 4.0.x. While this did nothing to fix the issue, I left it that way because downgrading kernel versions without any reason seems silly at best. Within a day of upgrading it, I noticed occasional issues connecting to the Wi-Fi after the laptop was woken up from sleep or rebooted. This was happening randomly and, to make matters worse, my laptop would become unresponsive, forcing me to reboot it, sometimes in the middle of important tasks. The issue wasn't happening too frequently, which is why I took so long to finally look into fixing it.

    I started by searching for bugs filed against my wireless card (Intel 7260) and its Wi-Fi driver. This post should give some more details about the issue. I found two commands, and when faced with the problem, I simply created/removed one of these files and re-enabled networking to fix it. The commands were found on Ubuntu forums, and look something like:

    echo "options rtl8723be fwlps=N ips=N" | sudo tee /etc/modprobe.d/rtl8723be.conf
    sudo sh -c 'echo "options iwlwifi 11n_disable=1" >> /etc/modprobe.d/iwlwifi.conf'

    While the above commands did help reduce the frequency of the issue, the issue still persisted. Hence, out of options, I decided to upgrade my kernel to the latest version (4.8.13 at the time). And voila! That fixed my Wi-Fi woes. However, my laptop started to overheat frequently and hang, so I had to reboot it, sometimes several times a day. That seemed far from ideal. I had fixed an issue only to face a slightly bigger and more pervasive issue.

    Dec 22 00:06:47 payal-ThinkPad-X1-Carbon-3rd kernel: [20763.298776] CPU2: Core temperature above threshold, cpu clock throttled (total events = 1)
    Dec 22 00:06:47 payal-ThinkPad-X1-Carbon-3rd kernel: [20763.298777] CPU3: Core temperature above threshold, cpu clock throttled (total events = 1)
    Dec 22 00:06:47 payal-ThinkPad-X1-Carbon-3rd kernel: [20763.298779] CPU1: Package temperature above threshold, cpu clock throttled (total events = 1)
    Dec 22 00:06:47 payal-ThinkPad-X1-Carbon-3rd kernel: [20763.298780] CPU0: Package temperature above threshold, cpu clock throttled (total events = 1)
    Dec 22 00:06:47 payal-ThinkPad-X1-Carbon-3rd kernel: [20763.298782] CPU3: Package temperature above threshold, cpu clock throttled (total events = 1)
    Dec 22 00:06:47 payal-ThinkPad-X1-Carbon-3rd kernel: [20763.298785] mce: [Hardware Error]: Machine check events logged
    Dec 22 00:06:47 payal-ThinkPad-X1-Carbon-3rd kernel: [20763.298787] CPU2: Package temperature above threshold, cpu clock throttled (total events = 1)
    Dec 22 00:06:47 payal-ThinkPad-X1-Carbon-3rd kernel: [20763.298789] mce: [Hardware Error]: Machine check events logged

    I first thought of manually adjusting core frequencies to control the temperature when it crosses a certain threshold, but when I went looking for the list of available frequencies, I did not find the file /sys/devices/system/cpu/cpu(x)/cpufreq/scaling_available_frequencies (where x = 0..n).

    Unable to figure out a way to get this list, I reached out to my colleague for help. He suggested that my driver (intel_pstate) might be what is hiding (I believe abstracting is the better term, but we did away with political correctness in 2016, so oh well) the available frequencies, to prevent the user from doing the exact sort of thing I was trying to do.
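
    A quick way to confirm which cpufreq driver is actually in charge (a standard sysfs path, shown here as a sanity check):

    # prints "intel_pstate" on machines where that driver is active
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver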

    This made me look closer into the intel_pstate driver. I learned that this driver is the default on most newer Intel machines, and it does in fact abstract said information. And when I read a little further than just the introduction, I found the turbo boost option for this driver. Going by the information given here, the turbo option for the driver is more sensitive than it needs to be. Once I turned this option on, that is, told the driver not to use turbo, my overheating issue also went away. So now my Carbon is free of heating and Wi-Fi issues while running the most up-to-date kernel version. Victory declared!
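
    For anyone wanting to try the same fix, this is the knob the intel_pstate sysfs interface exposes for it (the setting lasts only until reboot, so persist it in an init script if it helps):

    # 1 = do not use turbo frequencies; 0 = default behavior
    echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo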

    More information about CPU throttling can be found here.


              Resolutions   
    Got a new laptop 2 days ago - a Lenovo Thinkpad Carbon X1 3rd gen. Looks good, feels good. But there's a problem - it's a bit too good. I'm not used to it, and neither is my dear Linux.

    The laptop came with Windows 10 installed. I got 16GB of RAM so I could set up a Linux Mint VM with Virtualbox with Win 10 as a host. I ran into issues. I'm listing them out in case someone else is desperately searching online for solutions, and also so that the readers can have a good laugh at my expense:

    1. I started by installing VirtualBox 5.x and using a Linux Mint 17 ISO file to create the VM. I noticed that under the 'OS Type' option, all I could see were 32-bit options. I shrugged and thought maybe it was a VirtualBox 5.x thing and I'd be fine choosing Ubuntu 32-bit to run my Mint 64-bit VM. Yes, I'm stupid, but to be fair, I really didn't see any 64-bit options.

    2. As one would expect, starting the VM didn't throw any explicit errors but the VM screen just went blank after flashing the Virtualbox icon. Didn't take long to realize that the 32 vs 64 bit issue was causing this.

    3. Turns out Windows 10 comes with virtualization options disabled by default. To turn them on, I went into the BIOS by restarting the machine and hitting Enter. Once in, I navigated to Security>Virtualization and enabled the two virtualization options (Intel VT-x and Virtualization Extensions). I'll add more details later.

    4. Rebooted, and the OS dropdown in VirtualBox listed all the 64-bit OS versions this time. Yay! Chose one and restarted the VM. It came up fine, but with just one issue - one I couldn't overlook - the resolution was way too high. Windows 10 is optimized for this high-res screen, but unfortunately Linux Mint isn't. "Well, let's try Ubuntu 14.04" I thought, and created another VM. Same issue. Resolution way too high.

    5. At this point I played around with resolutions a bit - chose different options in VirtualBox Guest, but every other option only made the guest OS screen smaller. I didn't want that, no one would.

    6. So now I decided I'd try dual-booting Linux. The first thing to do for that is to turn UEFI Secure Boot off by going into the boot options at restart and finding this option under the Security tab.

    7. Once disabled, I burnt the Mint disc image onto a 2GB USB with Universal USB Installer. Connected it to the laptop and restarted, hit F12 and entered the boot option page. Chose the USB option but it would just keep loading back the boot option screen.

    8. Thought that maybe, just maybe, it was a problem with the pendrive and burnt the image again to my external HDD. But again, the laptop refused to boot from it.

    9. Unfortunately, this being an ultrabook, it doesn't have a CD/DVD drive. Requested my colleague to bring his USB DVD drive the next day, hence bringing Day 1's struggles to a halt.

    10. This morning, got the USB DVD reader and continued with my dual-boot trial. The good news was that it booted the Linux Mint live session just fine. But there was no wireless detection. None. Nada. Nil.

    11. Looked around online to see if others had this issue, and they did. In fact, there's a video on YouTube that shows how to enable a supposedly disabled driver for wireless in Linux Mint.

    12. Optimistically, I did exactly what the video said - searched for Driver Manager, clicked on it and waited till it appeared just as shown in the video. Only on my screen, the window that appeared was blank and there was a pop-up saying that I should install Mint first before making any changes. For those of you who have installed Mint in the past, having an internet connection is recommended. I don't know why it's recommended, but I didn't want to take a chance, especially since I had no Windows backup. Oh wait, I should explain why I didn't make a Windows backup first.

    13. According to my colleague, Lenovo ThinkPads have a "Rescue and Recovery utility" that puts the OS image on a DVD. Only in this Carbon X1 3rd Gen, there is no such utility. The only recovery option there is, is to allow Windows to back up everything, for which it asked for 43GB of space. Now there's no way a DVD disc would suffice. And from my experience the day before, a burnt image on a USB HDD was not being recognized by the laptop at all. So yeah, I was so frustrated by this time that I threw caution to the wind and proceeded with dual boot anyway.

    14. Coming back to dual boot, I needed an internet connection. After some searching, found an Ethernet cable and a port that worked. Then proceeded with the install. It asked me whether it should install Mint on the whole disk, and I chose 'Do something else'. Once I chose that, it brought me to a page where I was expected to choose the disks manually. This didn't scare me too much, until...

    15. I noticed that the amount of free space was only 1 MB. You read it right... 1 MB. Turns out all the space was being occupied by the Windows C: drive. Rebooted into Windows 10 and chose the 'Disk Management' setting. Once in, right-clicking the Windows C: showed an option 'Shrink Volume'. I clicked it and, for once, something was done for me automatically - Windows determined that it could shrink the C drive to exactly half of what it was currently - 512GB to 250GB. Woot!

    16. After making 250GB available, I rebooted to Mint and, on the partition page, saw that 'Free Space' now had exactly that - 250GB. Happily, I followed this awesome guide to create /, /home and swap partitions. Once done, I briefly looked into the 'Device for bootloader installation' option and made sure I didn't choose something that'd overwrite the Windows loader. After some Googling, I was certain that the default internal SSD option /dev/sda was OK to proceed with. With this, my dual-boot woes ended. But this wasn't the end of all my problems.

    17. Once I rebooted to Linux Mint, I noticed 3 things:

    • The resolution was abysmally high
    • Still no wireless detection
    • the keys atop the touchpad were just scrolling up and down a line when pressed, not actually clicking anything.
    18. My Linux kernel version was 3.13, and according to Linux Mint and Ubuntu forums, the Intel Wireless 7265 card wasn't supported. So to make it work, I would have to upgrade the kernel. Followed this tutorial to upgrade the kernel to 3.14. Unbelievably, the steps all worked on the first try.

    19. After rebooting post-upgrade, I went online, downloaded the Wireless 7265 driver for the kernel from this page and copied the *.ucode file to /lib/firmware with sudo. And voila! Wireless started working.
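
    Roughly what that amounted to (the exact .ucode filename depends on the firmware version you download, so treat this as a sketch):

    # copy the downloaded firmware where iwlwifi looks for it,
    # then reload the driver so it picks the file up
    sudo cp iwlwifi-7265-*.ucode /lib/firmware/
    sudo modprobe -r iwlwifi && sudo modprobe iwlwifi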

    20. Still ignoring the resolution issue, which was the cause of all this to begin with, my stupid brain decided to resolve the mouse/touchpad-button issue first. After again reading through forums, it appeared that this bug was reported sometime in April 2015, and even though there was a temporary workaround ( echo "options psmouse proto=imps" > /etc/modprobe.d/psmouse.conf ) that required Synaptics touch to be disabled (no finger scrolling, zooming etc.), the buttons started working but the touchpad was essentially jumpy and useless.

    21. After I undid the change by removing the  options psmouse proto=imps  line from the config file, I rebooted again and decided it was already time for yet another kernel upgrade to 4.0. Post 4.0 upgrade, the touchpad and mouse issues were fixed and I only had to reinstall the Wireless Card 7265 driver. 

    22. Finally, all my issues except the resolution were resolved. Even with Linux as the base OS, changing the resolution seemed to only show a smaller screen. At this point, I emailed the salesman I dealt with for the purchase of my laptop, stating "Both the laptop and Windows 10 are too new to be supported by open source software critical to my work and study". As a last resort, and because I knew I was exhausted and not thinking well, I asked my colleague if this was the normal behavior upon changing screen resolutions. He suggested I try a particular option: 1920 x 1080. And lo and behold... everything was MUCH BETTER. Still not perfect, but very usable. I asked how he knew that particular resolution would work, and he said it was the standard resolution for most screens. At that point it dawned on me how many times I'd seen this particular resolution everywhere. One would think I'd know to choose this option when looking at resolutions, but I just didn't realize.
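
    If anyone hits the same wall, here is the equivalent from a terminal (the output name eDP-1 is an assumption; run xrandr -q to find yours):

    # list outputs and supported modes, then force the common 1920x1080 mode
    xrandr -q
    xrandr --output eDP-1 --mode 1920x1080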

    And so, here I am typing out this blog post on my shiny new Thinkpad ultrabook with Linux Mint 17. Go ahead and laugh. I'm laughing too. :D

          The Evolution of the Shell   
    The Shell in every GNU/Linux distribution is a command interpreter that lets us interact with our operating system from the command line. The Shell is a "shell" (hence the name) that sits on top of our operating system's kernel and acts as the interface between the kernel and the user. Continue reading "La evolución de las Shell"
              【Ubuntu】"VboxClient : the VirtualBox kernel service is not running. "   
    I tried upgrading VirtualBox from 5.0 to 5.1. Download page. My host machine is Ubuntu 16.04.1, so I up…
          kernel: CPU1: Running in modulated clock mode (temporary workaround)   
    kernel: CPU1: Running in modulated clock mode
    kernel: CPU1: Temperature above threshold
    The server's CPU is running too hot. After looking into it, I learned that the corresponding threshold in the 2.6 kernel is set too low, which causes this condition. vi /etc/syslog.conf, comment out... [Read the full article]
          Samsung Omnia II. New life thanks to Android 2.1   
    Several times over these last few years we have seen how, thanks to Android and its Linux kernel, many phones, even dated ones, come back to enjoy a renewed 'vitality'....
              Wacom tablet issues   
    I'm running F25, 64-bit, with kernel 4.11.6 and KDE. I have an old Wacom Bamboo Fun tablet (CTH-661) with a USB connection. The tablet had been working fine until the last week (4.11 & 4.10 kernels?). Now it seems to work when I first start the machine but slowly develops response problems and gradually (seemingly) drops out of memory altogether. If I re-open System Settings > Graphic Tablet, make a few changes, and save the "changes", the tablet will work for a few more minutes before fading again. dmesg shows the system knows about the tablet. systemctl list-units | grep -i manager shows gdm.service running (I was expecting to see kdm.service). Any suggestions on getting this to work again would be appreciated.
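
    Two more checks that might help narrow this down, assuming the xf86-input-wacom tools are installed:

    # enumerate the tablet's input devices as the driver sees them
    xsetwacom --list devices
    # watch kernel/driver messages live while the tablet fades
    journalctl -f | grep -i wacom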
              parallels tools   
    On Fedora 25 with kernel 4.11.6 I can't install Parallels Tools.
              Pennsylvania Dutch Chicken Corn Soup   
    A walk in the woods - Boiling Springs, PA

    Souper Susan & Marty

    We went to our souper family – Susan, Marty and Sarah - in Boiling Springs, PA for Mother’s Day. They had just moved into their mountain home after living the Navy life for many years, always on the move. They live on the South Mountain – the last mountain of the Blue Ridge. It was a great adventure walking along the creek on their property – we checked out special trees - the tulip tree - and brought back lots of rocks. After the hike, we got busy making this traditional Pennsylvania soup. Susan’s idea was to go on Google, and here is what Marty found, a classic favorite – Best Pennsylvania Dutch Chicken Corn Soup. We started on Saturday and finished on Sunday. The prep time is only 10 minutes, but the cooking time is 3 hours 30 minutes. This soup makes a very rich broth and the smell of nutmeg permeates the kitchen. It smelled soo good!! We thought the dumplings would be hard to make, but really they were quite easy. The dumplings are called rivels. This soup is a must if you like chicken and corn!

    Pennsylvania Dutch Chicken Corn Soup
    http://allrecipes.com/Recipe/Best-Pennsylvania-Dutch-Chicken-Corn-Soup/Detail.aspx

    Souper Susan & Marty/Souper Sarah

    "This dumpling soup is made entirely from scratch with fresh corn and stock from the whole checken, seasoned with nutmeg and flecked with hard-cooked egg."

    INGREDIENTS

    Serves 12


    2 (3 pound) whole chickens, cut into pieces
    3 quarts water
    3 onions, minced
    1 cup chopped celery
    2 1/2 tablespoons salt
    1 1/4 teaspoons ground nutmeg
    1/4 teaspoon ground black pepper
    10 ears fresh corn
    3 eggs
    1 cup sifted all-purpose flour
    1/2 cup milk


    Directions:


    1. In a large pot over medium heat, combine chicken, water, onions, celery, salt, nutmeg and pepper. Bring to a boil, then reduce heat and simmer 2 hours, adding water as needed, until chicken is very tender. Remove the chicken from the soup. Refrigerate chicken and soup.


    2. When fat solidifies on surface of soup, remove from refrigerator and remove fat. Remaining soup should equal about 2 1/2 quarts.


    3. Remove corn from cobs by splitting kernels lengthwise with a sharp knife and scraping corn from cob. Combine soup and corn in a large pot over medium heat and bring to a boil. Reduce heat and simmer until corn is tender, 10 to 15 minutes.


    4. Meanwhile, place two of the eggs in a small saucepan and cover with cold water. Bring to a boil and immediately remove from heat. Cover and let eggs stand in hot water for 10 to 12 minutes. Remove from hot water, cool, peel and chop. Set aside.


    5. Chop cooled chicken meat and add to soup.


    6. In a medium bowl, beat remaining egg until light in color. Beat in flour and milk until smooth. Drop batter by partial spoonfuls into hot broth to make small (1/4 -1/2 inch round) dumplings. Cook, stirring constantly, for 2 to 5 minutes, until dumplings hold their shape and float to the surface. Season to taste. Stir in reserved cooked egg.

    Photo captions: Mincing onions; Put whole chicken in large stock pot; Adding salt; Bring to a boil; Magnalite stock pan - simmer for 2 hours; Rich chicken broth; Debone and chop the chicken; Add chicken to soup; Making little dumplings (rivels); Simmering soup; Serve soup hot; Souper Granddaughter Sarah; Happy Birthday Sarah - this package comes with a bow!!!


    This soup has a distinct taste, with the chicken and sweet corn flavored by the nutmeg. Did you notice the special stock pot? Susan’s mother had given her the Magnalite stock pot and it was so perfect for making this soup. I had not heard about this type of pot, but here is what I found out from Magnalite Cookware. It was originally developed in 1934 and is best known for its distinctive finish, timeless design and commercial durability. It boasts an extremely thick base, which not only speeds up the cooking but also distributes the heat evenly up the sides. The vapor-tight pot and pan lids, on the other hand, lock in moisture to keep the food flavorful (which was true).

    Susan and Marty it was so much fun cooking with you in your new kitchen. You have a beautiful new home. What a wonderful way to spend Mother’s day – cooking and being with your family! Thank you Sarah, Susan and Marty for making my day so special!!!

    The thought for the week - "Kind words can be short and easy to speak, but their echoes are truly endless." - Mother Teresa



    Happy souping until we meet again!

    Souper Sarah


              Recipes of the week: Summer Salsas and Blackened Snapper on the Grill   

    This is the time when all the freshest ingredients are available locally. Visit your favorite farmer's market and find some ingredients to make a salsa, the perfect accompaniment to grilled meat and fish.

    These recipes, and many more, are available in my cookbook, Barbecue Secrets DELUXE!, available in bookstores and as an e-book from the Apple Store.


    Black Bean and Grilled Corn Salsa

    This salsa is great on grilled fish, but it also stands up on its own as a dip.

    1 14 oz | 398 mL can black beans, rinsed and drained
    3 whole fresh cobs of corn, shucked
    1 tsp | 5 mL minced fresh jalapeño pepper
    2 medium tomatoes, diced
    1 red bell pepper, diced
    1/3 cup | 80 mL chopped fresh cilantro
    1/4 cup | 60 mL red onion, diced
    1/4 cup | 60 mL fresh lime juice (about 2 limes, squeezed)
    1 tsp | 5 mL kosher salt
    1 avocado
    tortilla chips for dipping

    Prepare your grill for direct high heat. Grill the corn until the kernels turn a bright yellow and there’s some nice charring. Remove the cobs from the grill and let them cool long enough so you can handle them. Cut the corn from the cobs with a sharp chef’s knife or a mandoline.

    Combine all the ingredients, except the avocado and chips, in a bowl. Cover and chill the mixture for at least two hours. Dice the avocado and add it just before serving the salsa with the chips.


    Chimichurri

    Makes about three cups | 750 mL

    This is the classic Argentine condiment. It takes various forms, some finer, like a pesto, and some, like this one, chunkier, like a salsa. Chimichurri goes well with almost anything grilled, planked, or barbecued, but I like it best on lamb. Make it at least a day before you’re going to use it to let the flavors come alive.

    1 small bunch flat leaf-parsley, chopped (about 1/2 cup | 125 mL)
    1 medium red onion, finely chopped
    4 cloves garlic, finely minced
    1/2 red bell pepper, seeded and finely diced (optional)
    1 tomato, peeled, seeded and finely chopped (optional)
    2 Tbsp | 25 mL fresh chopped oregano (or 1 Tbsp | 15 ml dried oregano leaves)
    1 Tbsp | 15 mL paprika
    2 bay leaves
    1 Tbsp | 15 mL kosher salt
    1 tsp | 5 mL freshly ground black pepper
    2 tsp | 10 mL crushed dried red chile flakes
    1/2 cup | 125 mL extra virgin olive oil
    1/4 cup | 50 mL sherry vinegar
    1/4 cup | 50 mL water

                Combine all the ingredients except the oil, vinegar, and water in a large bowl and toss them well to make sure the salt is spread evenly throughout. Allow the sauce to rest for 30 minutes to allow the salt to dissolve and the flavors to blend.
                Add the oil, vinegar, and water and mix the sauce well. Make sure that the chimichurri looks nice and wet, like a very thick gazpacho. If not, add equal parts oil, water, and vinegar until the mixture is covered by at least a quarter inch of liquid.
                Transfer the sauce to a non-reactive storage container. Cover it and refrigerate it to allow the flavors to blend overnight. It’s even better after two or three days in the refrigerator.

    Peach and Blackberry Salsa

    Makes about 3 cups | 750 mL

    This salsa, invented by my wife, Kate, is something you should try only when these fruits are at their peak, which on the West Coast of Canada is in August. Paired with planked chicken, it’s a mind-blower.

    4 peaches, peeled and diced, not too small
    1 cup | 250 mL fresh blackberries, washed and picked over
    1/4 cup | 50 mL red onion, diced
    1/2 fresh green jalapeño or other hot pepper, seeded and minced
    4 tsp | 20 mL fresh lime juice
    salt and freshly ground black pepper

    Combine all the ingredients in a bowl. Let the salsa stand, covered, in the fridge for about an hour.


    Blackened Snapper on the Grill

    Makes 4 servings

    If you’ve ever tried to cook this delicious, spectacular dish indoors, you’ll know it’s a bit of a nightmare. It was invented by the great New Orleans chef, Paul Prudhomme, and was designed to be cooked in a restaurant kitchen where there is industrial-strength ventilation. The combination of butter and an extremely hot pan creates so much white smoke that you may not be able to see your fellow diners by the time the dish is ready to serve. I actually had to crawl from the kitchen into the dining room one time just so I could see where I was going. Cooking this dish takes a special technique that uses a gas grill to preheat cast iron pans to create the same effect as chef Paul’s restaurant kitchen. Don’t cook this dish if you’re worried about smoking out your neighbors!

    Note: You need two 9 inch | 23 cm heavy cast iron skillets to pull off this recipe.

    SAFETY WARNING: It’s extremely easy to severely burn your hand if you absent-mindedly grab the handle of the insanely hot pan when you take the fish off the grill. Please be careful!

    4 8-10 oz | 250-300 g snapper fillets
    ¾ lb | 12 oz. butter
    1 batch Cajun Rub (see recipe below)

    Warm four serving plates and four small ramekins in a low oven.
                Prepare your gas grill (sorry, charcoal grills just don’t generate enough heat for this recipe) for direct high heat. Place two cast iron skillets on the cooking grate with their handles pointed away from you. Let the pans heat up in the grill for at least 10 minutes, until they are extremely hot.
                While the pans are heating, melt the butter in a sauté pan until it is just melted. Turn off the heat but leave the pan on the stovetop to keep warm.
                Dip the snapper fillets in the melted butter and sprinkle them generously on both sides with the rub mixture. Drizzle a little of the remaining butter over the rubbed fillets.
                Open the grill and quickly place the fillets in the pans. This will cause a lot of white smoke and the butter may flame up, so be careful. Cover the grill and cook the fish for just a couple of minutes. Carefully and quickly turn the fillets over with a long spatula and cook them for another minute or two, until the outside of the fish is nicely blackened.
                Put on some oven mitts, just in case you grab a pan handle by mistake. With your spatula, remove the fillets from the pans and place them on the serving dishes. Transfer the remaining butter into the warmed ramekins. Serve the snapper immediately, with the ramekins of butter for dipping. 

    Cajun Rub

    Makes about a half cup of rub.

    I’ve featured this rub in the recipe for Blackened Snapper on the Grill (see page xxx), but it’s a great all-around grilling or blackening rub that showcases the classic flavors of Cajun cooking.

    2 Tbsp | 30 mL sweet paprika
    1 Tbsp | 15 mL kosher salt
    1 Tbsp | 15 mL granulated garlic
    1 Tbsp | 15 mL granulated onion
    1 Tbsp | 15 mL cayenne pepper
    1 Tbsp | 15 mL freshly ground black pepper
    1 Tbsp | 15 mL white pepper
    1 ½ tsp | 7.5 mL dried oregano leaves
    1 ½ tsp | 7.5 mL dried thyme leaves

    Mix the rub ingredients together.



              Barbecue Secrets #3: British BBQ legends and more...   

    Welcome to the third edition of the Barbecue Secrets podcast, a 29:15 minute show celebrating the many pleasures of outdoor cooking. In this edition:

    • 2:07 An interview with Jackie Weight of Mad Cows Barbecue
    • (22:49) Answers to listener questions about warm-up time for your grill, (24:48) BARBECUE SECRET OF THE WEEK: how to avoid food sticking to the grill and (26:09) when to use granulated garlic
    • (27:00) Competition Secret of the week: one word: plenitude!

    Photo courtesy Craig "Meathead" Goldwyn.

    Links: Jackie and Rick Weight's website, visit www.americanbbq.co.uk. Also, please drop in and post a message at www.bbqforum.co.uk.

    This week's recipe: Stuffed Tenderloin of Pork

    • Ingredients:
      • 1 whole pork tenderloin (weighing around 1 to 1 1/2 lbs)
      • 1 small red onion - finely chopped
      • 5 oz. mushrooms - finely chopped
      • 1 oz. butter or olive oil
      • Pinch of dried sage
      • Pinch of dried thyme
      • 4 oz fresh breadcrumbs
      • Grated rind of 1 lemon
      • 2 tablespoons lemon juice
      • 1oz toasted pine nut kernels
      • 4 tablespoons fresh chopped parsley
      • 6 cardamom pods (seeds only - finely ground)
      • 3 teaspoons of sweet chilli sauce (more if you like it hot)
      • 4 tablespoons fresh chopped coriander (cilantro)
      • 4 oz dried apricots - very finely chopped
      • Fresh Spinach
      • Black Pudding / Blood Sausage
      • Butter for brushing the meat
      • Bacon

    Fry the onion and mushrooms in olive oil or butter until tender, transfer to a bowl and add the sage, thyme, breadcrumbs, lemon rind, lemon juice, pine nut kernels, parsley, cardamom, coriander and chilli sauce; mix well, season to taste.

    Take the pork tenderloin and butterfly it (split lengthways). Place a piece of cling wrap underneath it and one on top and beat it out to a thin square.

    Remove the top piece of cling wrap, brush meat with butter and lay spinach leaves (remove any tough stalks from the spinach leaves) so that the whole meat surface area is covered. Take the filling mix and spread it over the spinach - use your fingers to get an even covering.

    Now take the black pudding / blood sausage, remove casing and cut in half lengthways, mould the finely chopped dried apricots to form it into a full sausage shape again and place along the length of the meat / stuffing area.

    Using the remaining piece of cling wrap to help you, roll the whole thing up (similar to a Swiss roll or roulade). Dispose of cling wrap.

    Once rolled, wrap the bacon around the whole piece of meat in a spiral so that you have completely covered the meat. Roll up with a fresh piece of cling wrap and refrigerate until ready to cook (best to leave this for at least 1 hour to allow the flavours to infuse).

    Cook in a roasting pan, over indirect heat on a barbecue, or in the oven at 350F for approximately 1 hour or until a meat thermometer inserted into the centre reads 170F. Deglaze the roasting pan with a little white wine and add 1 oz of butter to make a sauce if desired.

    Allow meat to rest for at least 15 minutes and serve cut into approx 3/4 inch slices.

    Rockin' Ronnie Shewchuk is the author of Barbecue Secrets: Unbeatable Recipes, Tips & Tricks from a Barbecue Champion, published by Whitecap Books. Find him, and more recipes, at www.ronshewchuk.com and e-mail questions, tips and suggestions to rockinronnie@ronshewchuk.com.


              Fat Folk   
    02-20-2017
    I would include myself in the category of "fat folk" with a BMI of 27 when 20 is normal for my height. And it seems that I have gained at least 5 lb since the beginning of the new year. That emaciated and shriveled up cancer patient image just isn't playing out.

    I mentioned in last week's posting the fat hair stylist who needed to push her gut into my side or back while attending to my hair. Only a day later, while lingering at the tanning salon only because they had free food on the last day of their sale, a blonde fat girl was parading around, and made sure I saw her, and then, when I averted my gaze, she paraded closer in front of me to get my attention yet again. I didn't see any particular reason for her parading around at all, as she could have sat down and enjoyed the food like anyone else, but hey, normal folk acting strange in my proximity is nothing new since all this shit rained down 04-2002. Add a fat girl on the vineyard crew and we are set for a new round, so to speak.

    Some back pain today in the afternoon, and no kidney pressure thankfully. The paw-paw was the cause of the latter, and I have been off it for two weeks now. It is just the MMS vs. the cancer. Or, should I say, it is my perception that it is metastasized prostate cancer, as I don't know what else it could be. Those aches, and some pains, shooting up my back from my hips, and sometimes down to my knee. The L side seems to be more afflicted, and even sometimes I get twinges into my shoulder and arms. I see the radiation oncologist in two days, so hopefully I will get a firm diagnosis, even if my condition is a moving target, so to speak. After two doctors fanned on giving me a diagnosis, I have to go through this protracted exercise of diagnostic delays yet again. And too, I get to visit the radio-therapy clinic and all that expensive gear that can "cure" (a much qualified term) and also give cancer too. Ask breast cancer survivors 20 years after treatment.

    02-21-2017
    Tuesday, and the first sun on the vineyard in three weeks, apart from 2 minutes' worth two weeks ago when there was a well-timed withdrawal of the sun. It came out when we exited the barn after break; we walked to the pruning site and then the sun was shaded by the large cliff, and when it came out the other side, why, cloud cover had moved in to block it. As mentioned many times in these postings in an attempt to identify perp-frequented events, the properties of the sun are of intense interest to the perps; be it falling on foliage, crops, skin or eyes, and of course all the color changes that go with it, from sky to all illuminated and shaded objects. If I were to make a "way-out-there" statement on this topic, it would be something like this: the perps have arranged the whole sun, Earth's axis tilt and rotation and the consequent seasons all to deliver sunlight in incrementally changing exposures to further their insane sunlight research objectives. (And too, variation in the increment from the tropics to the poles). How is that for a deep conspiratorial perspective?

    Woo-hoo - TI World Blogspot statistics; Pageviews all time history = 150,154. Assuming these numbers aren't rigged of course, me being the centerpiece of all that is rigged, orchestrated, pre-scripted and otherwise contrived. And now about 8,000 page-views in the last month, a rolling count I assume, as it changes every day. And about 250/day. The numbers don't seem to add up on a per-posting basis when I look at other page statistics, so maybe it's those static pages that seem to attract so many viewers. I haven't had many comments from the TI community, so I don't have a firm grasp on the veracity of the Google-supplied statistics.

    Vineyard pruning all day today; three different sites as we finished two and started another. That might have been the reason for the repeated helicopter coverage today; at least 14 fly-by visitations, mainly by a Bell 206L for whatever reason. I have no idea how they decide on the helicopter model, but they also put on a Bell 206 and an EC-135. They put on a very noisy Bell 412 overflight a few days ago. The vineyard I work at has a helicopter pad, so no doubt there will be summertime helicopter visits, touching down and bringing in wine tasting folks. Which is better than the stunt they pulled two months ago at this same vineyard when they had an AStar 380 come from over the ridge and fly 40' overhead of me. I was the only person in the vineyard and they had me lined up perfectly before they saw me. What is the etiquette when helicopters are that close and one can see the pilots through the windshield? Does one wave at them? And no, I don't do obscene gestures, even if my whole life is filled with provocations.

    The hip and back aches and pains were still present today, though muted. I still don't like walking fast much, and I declined yoga last night. It was the after effects of yoga of 5 weeks ago that indicated that something was very wrong and getting worse. I keep the MMS treatments up, and have added apricot kernels. The latter might seem like silly fluff, but in fact, they contain vitamin B17, aka laetrile, a subject of much past controversy and FDA suppression. After reading "World Without Cancer" by G. Edward Griffin, and the details of how Sloan Kettering Hospital found laetrile to be one of the most promising anti-cancer substances known, and then buried it after three attempts with a made-to-fail experiment result which received all the publicity, I have become all the more cynical about this world and how controlled and arranged it is. And too, cancer is mass slaughter; 600k go down every year in the US and Canada, and here we have the FDA (and lap dog Health Canada) repeatedly suppressing viable and effective treatments. The book is a scathing indictment of "consensus medicine" and those "catastrophic cures" that are presently deemed orthodox. From what I have learned, cancer can be remotely delivered if Tom Bearden and his scalar wave theorizations are correct (see Priore's healing treatment device and here). I have come to know of all manner of remote delivery incursions these past 14 years, and cancer is just another.

    02-22-2017
    A visit to Kelowna to the cancer clinic today. Per the oncologist's determination, I don't have metastatic prostate cancer. Which then begs the question as to what I do have that has caused such aches, and some pains, in my pelvic region, as well as up my spine and into my R arm for the past month. I feel like shit for two days after yoga, when normally I don't feel anything. And this condition has responded to paw-paw and now MMS to some extent. The timing of all this is straight out of the perp play book, under the "create FUD" section. Give the TI victim prostate cancer and then add on conditions that emulate its metastasization. Fucking hilarious, and now over 14 years of these insane incursions, and they add one more unrelated ailment on top of a seemingly real one. I get another bone scan out of the deal and a second opinion on my biopsy from last year. I am not sure whether to be happy or sad about the above diagnosis; radiation treatment possibly to follow in some months, though I'm hoping the MMS will nail the prostate cancer by then.

    Another fat girl stalk at the cancer clinic today; big and blonde she was. At the counter and then lingering again outside my exam room. I suppose if they want to ingratiate/inculcate me with fat girls, and if they need some starting leverage, then having them younger and blonde just might be the strategy. (Fat, or obese people are decidedly on the Unfavored demographic list.) But you can see where they are going next; bring on fat males to gangstalk me. On the other hand, today's medical assistant was tall and leggy and on the attractive side, but with a floppy top garment to hide her nice slim thighs. On and on, these selective revelations of attractive features, and too, at the exam room, switching to vignettes of the above mentioned fat blonde girl.

    No progress with the oncologist on my dopamine deficiency theory of all that ails me, including prostate cancer. Another wall still to climb. The perps have decidedly sabotaged my efforts to get useful treatment from urologists; the one in Seattle didn't help, and the present one in this town has been an absolutely unprofessional pill.

    As usual with all doctor visits, there is a protracted wait in the exam room, all to have me watch the masers and plasma beams flit about, the latter usually projecting from denser objects like metals and ceramic tile. I even got a stinging maser strike through my L eye during the consult, and the doctor seemed to flinch in seeing it. I am so fucking fed up with being put through the clinical mill; spoofings, obdurate doctors, fake-out conditions, obstruction of appropriate care even with supporting tests in hand etc. Or more to the point; I am totally fed up being kept as a nonconsensual experimentation victim/subject. And over the most inane things; exposure to fat folk, per above, and the rest of the Unfavoreds. Just leave me the fuck alone; 14.5 years of sustained and intense abuse and being kept at the limit of my tolerance all day long, every day, is just too fucking much.

    02-23-2017
    Pruning the vineyard all day, taking my MMS every hour or so. Taking supplements and the like is another big game for the perps. They seem to be studying the containers they are kept in, modifying my water supply (and spilling it) and the order of intake, as well as having me skip the odd hourly intake.

    Plenty of helicopter coverage today, though it was consistently 2 or so km away, with very few overflights. It is as if it were doing landing and take off practice at the airport. Really?

    All day to think about what to do about the whole dopamine/urology nexus, and how can I make any forward progress on this front when the whole thing is rigged to my disadvantage.

    Some more pelvic region and back pains this evening, even extending into my L shoulder and arm. Even to the point of walking gingerly when out at the supermarket tonight. So if it's not metastatic prostate cancer, just what is it then? Yesterday's doctor visit wasn't too helpful in elucidating a probable cause, so no doubt it is going to be a long-running mystery tour. The MMS might be quelling the problem, as was paw-paw before it, but the back pains and the like keep coming back.

    At the specialty grocery store on the way home from work today, a favorite time/circumstance and place to "get me" it seems. A mother and two children about 12 or so, block my egress in the aisles and so I take an alternate route to look at some more items, and then come back to the checkout (an interval of 3 minutes), and there they are still standing there at the blocking location. As I approach the checkout, why, they do too, and get ahead of me. And what is the whole point of all that? I have seen the step-ahead-of-me (queue augmenting) stunt so many times since all this shit began in 04-2002, but to have the obstructing party in obvious stand-there mode for 3 minutes in advance, takes the cake.

    02-24-2017
    Vineyard pruning all day, though the night was more eventful than usual. The assholes woke me twice to urinate, once more than "usual". The second time was memorable in that they put on some leg cramps, both in my thighs (unusual as the imposed cramps go), so I was in major pain while peeing (but not from urinating). Then later in the night the perps put on this pain in my abdomen, an intense and focused one that just wouldn't go away for 10 minutes of agony.

    Continuing spine and arm pains today, though a little more with a "healing" sensation. That would be akin to a pinched nerve, a long-time perp favorite harassment method, except it runs from my hips upward on my back to one arm or the other. I have never had a pinched nerve that ran for three weeks nor covered such a large region. Which still doesn't explain why I felt so crippled from yoga, twice, all those pelvic twistings and the like. Whether this is a passing perp imposition or another major health crisis, I don't know. In keeping with the latter scenario, they ran planted notions as to what dialog would unfold in speaking with emergency physicians. I don't like hospitals, and so they seemed to revel in playing this in my mind for much of the morning.

    And what is it about some of my vineyard co-workers that they need to walk back to the end of their row and start their next row which "happens" to be near me? Normally, one finishes a row and starts on the next adjacent row end; there is no need to walk back and proceed in the same direction, E-W in this case. But as the perps are nuts over all my direction changes, why wouldn't they put on similar acts of perversity for my co-workers? Said co-worker also did a senseless back-and-forth, some 80' worth, staring at text messages on his cell phone, and then returned to his row end. Fucking bizarre. The incidences of senseless cell phone stalking, with LED-lighted display as some kind of portable color reference, are endless. The same thing "happened" a few days earlier at the adjacent winery when I entered to take a forced piss. The winemaker crossed my path five seconds in advance while looking at his lighted cell phone as I entered the building, when there was no need for him to do so given the configuration of the building and phone access there.

    Post Saturday work, having put in 4 hours to get my weekly hours to 40; I do a tan afterward as it is on the way. The "usual" crush in the waiting area on my exit, but none there when I arrived 20 min. earlier. On the way home, still only 3C outside, why, they put a negro kid in his soccer uniform, shorts no less, on the corner where I made my last turn, he being somehow "just standing there", rooted to the spot. Presumably he crossed my path after I made the turn. As mentioned many times, negroes are rare as hen's teeth here, and given the propensity of the perps to plant said skin tones in my proximity, it was just another (managed) coincidence IMHO. And it should not go unmentioned, that there are a considerable number of negroes in the TI community.

    02-26-2017
    Sunday, and a full day off. The generalized aches and pains of my pelvic region seem to have migrated to my shoulders, and causing pinched nerve-like symptoms down my arms. I see the bags under my eyes they gave me in 09-2016 are slowly abating. Just what that is about I have no idea.

    Another checkout obstruction, this one online. After putting a half dozen items in a "shopping cart", why, a 504 gateway error suddenly erupts. And what is the point of that? To wait five minutes and then resume my activity? Disruption games are nothing new.

    My big outing of the day was to get tea towels at a certain department store, as I am down one; the assholes seemed to have relieved me of one of the set in the last laundry. (One of their favorite situations in which to steal or sabotage my clothes and linens). Anyhow, another very large woman to finish off my week; probably some 240lb, and about 5'6". That she was blonde didn't go unnoticed, and she was even friendly instead of the usual scared-shitless demeanor I get. And too, attentive, instead of looking the other way or otherwise being avoidant like my last experience at this same store.

    Still afternoon on Sunday, but I am going to post this for the week and not get jerked into last minute postings. Or else screwed into "forgetting" for a few days. Anything interesting out there in the TI Universe? I don't get into it much; I do know a few credible TI's I can speak with, but their email and phone numbers aren't available to me. Funny how they don't bother to keep up.


              Cool as a Cucumber: Foods That Help You Chill Out   

    Your outdoor patio, deck or backyard can be one of the most relaxing places for summertime lounging. With comfortable patio furniture tempting you to unwind after a long day, a colorful sunset calling your name and a perfect setting for open air family mealtime, the only thing missing is delicious, nutritious food and beverages to complement your summer oasis.

    Take a sample of the following four types of foods that can help you relax and unwind as you stay cool and enjoy the summer season.

    Chilling With Fruits
    An apple a day may keep the doctor away.  But melons, berries, grapes, cherries and citrus are some of the most common go-to summer fruits for hydrating and fueling your body.  Fruits that are high in fiber and water content can provide an excellent combo for keeping you cool. Made up of more than 90 percent water, watermelon can be a refreshing treat that soothes your thirst on a hot summer day while also satisfying your sweet tooth. Adding citrus like lemons to ice cold water will not only add a twist to a summer essential beverage, but it also can naturally add Vitamin C to your body.

    “I tell my clients to put fruit in their water, like citrus and berries, because it tastes good, is good for them and they deserve it,” said Marisa Carter, massage therapist at Elements Medford.

    Slicing and Dicing Veggies
    Nothing says summer like adding fresh garden vegetables to your daily menu. Whether it’s juicy tomatoes, crunchy lettuce and spinach, or crisp cucumbers and carrots, it’s the perfect time of year to take advantage of all the easy and delicious veggies that are ready for the picking from your garden or local farmers market.  Tossing up a quick dinner salad with all of your favorite veggies can be a quick and easy relaxing meal to unwind with after a busy day. Or, you can throw some sliced summer squash, potatoes, onions or corn on the cob on the barbecue to keep the heat out of the kitchen and pack your patio table with a nutrient-rich dinner or lunch side dish.   

    Grilling Meats
    Summer relaxing on the patio and firing up the grill go hand in hand. When picking out good meat for some summertime grilling, lean, free-range, grain-fed and wild breeds are some of the best choices. To spice up your grilling menu, try combining your favorite cut of beef, chicken or seafood with an assortment of grill-friendly vegetables like onions, red and yellow peppers or squash to create colorful, tasteful and nutritious summer kabobs. Another easy grill meal that requires very little preparation time is all-in-one foil meals. Combine your favorite source of protein with potatoes, carrots, onions and seasoning, wrap it all up in individual-serving-size pieces of foil and let dinner simmer on the grill while you relax and unwind as the sun sets on the horizon. 

    Snacking Fun
    When temperatures are on the rise and being outside is borderline unbearable, it's a good time to retreat inside to relax and stay cool in an air conditioned environment. Some of the best summertime retreat pastimes for all ages are movie watching or board game playing. But, these summer fun activities aren’t complete without some deliciously fun snacks.

    “It’s a good idea to have healthy snacks for throughout the day,” said Amy O’Connor, massage therapist at Elements Chandler/Ahwatukee. “Grapes, trail mix and fruit can be good, healthy snack options.”

    To make summer snacking healthy and easy, opt for a batch of homemade popcorn and trail mix. Anticipating the pop of corn kernels on the stove or over a campfire is not only fun for everyone in the family, but preparing this old-time treat by hand is a lot healthier than the processed, high-sodium-and-fat microwave or movie theater options.

    Making your own trail mix also can be a nutritious and fun snack. Satisfy your sweet and salty tooth by mixing your favorite nuts with an assortment of dried fruit and even a small handful of dark chocolate chips. And to keep you cool on a hot summer day, a frozen berry smoothie with low-fat Greek yogurt, milk and fresh fruit is always a popular treat.

    Chill out this summer with these fun and healthy food options that are easy to incorporate into your menu planning. Save time and energy by going fresh, keeping it simple and staying out of the heat of kitchen so you can enjoy the summer relaxing, unwinding and staying cool.


              Elive 2.7.1 beta released   

    The Elive Team is proud to announce the release of beta version 2.7.1. This new version includes: Audacity (audio wave editor) included by default; improved timezone detection; the system detector improved and updated to detect the latest installed Windows systems; the Linux kernel updated with a lot of new patches for new hardware, bugfixes and improvements ... Read More

    Check more in the Elive Linux website.


              Seared Pepper Crusted Salmon?   
    Okay, so we live in Alaska and we love to fish, but to be honest, we're not crazy about eating salmon. Halibut is an entirely different matter, but salmon....mmmm...not our favorite. But alas, I found a recipe that my husband LOVED. I even thoroughly enjoyed it...not a hint of salmon/fishy taste. I fixed Quinoa with Black Beans, which even Chris tried and said wasn't too bad. Then I also made Trio Salad, which originally came from the EMeals website and Rebekah loved it! I'm glad that this dinner was a success. Chris had a hard day at the office with several serious issues that came up and it made my heart lighter to know that I'm doing my job as his helpmate, that he's at least had a solid meal with which to digest all these changes. Literally. And no, he didn't eat the salad! ;-) I'll share the recipes for those friends of mine who are also foodies and might enjoy trying them.

    {MARINATED SALMON WITH A SEARED PEPPER CRUST} from www.food.com
    2 T. soy sauce
    1 garlic clove, pressed
    2 t. fresh lemon juice
    1 t. sugar
    3/4 lb. salmon fillets
    4 t. pepper
    2 T. olive oil
    ======
    1. In a sealable plastic bag combine the soy sauce, garlic, lemon juice and sugar with the salmon, coating it well and let it marinate in the refrigerator for at least 30 minutes. (I did mine for about two hours)
    2. Remove the salmon from the bag, discarding the marinade and pat dry. Press 2 t. pepper onto each piece of salmon, coating it thoroughly (I didn't measure, I just coated my salmon on all sides)
    3. In a heavy skillet heat the oil over moderately high heat until it is hot but not smoking and sauté the salmon for 2 minutes on each side or until it just flakes.
    4. Transfer the salmon to paper towels and let it drain for 30 minutes.
    {QUINOA WITH BLACK BEANS} from www.allrecipes.com
    1/2 t. vegetable oil
    1/2 chopped onion
    1-1/2 cloves peeled and chopped garlic
    1/4 c. + 2 T. uncooked quinoa
    3/4 c. vegetable broth
    1/2 t. ground cumin
    1/8 t. cayenne pepper
    salt & pepper to taste
    1/2 c. frozen corn kernels (I forgot, but will add it next time)
    15 oz. can black beans, drained and rinsed
    1/4 c. chopped fresh cilantro
    ======
    1. Heat the oil in a medium saucepan over medium heat. Stir in the onion and garlic, sauté until lightly browned.
    2. Mix quinoa into the saucepan and cover with vegetable broth. Season with cumin, cayenne, salt and pepper. Bring the mixture to a boil. Cover, reduce heat and simmer 20 minutes.
    3. Stir frozen corn into the saucepan and continue to simmer about 5 minutes until heated through. Mix in the black beans and cilantro.
    {TRIO SALAD} from the EMealz website & www.beckyhiggins.com
    2 cucumbers, peeled and chopped
    1 pint grape tomatoes (halved if you want)
    1 avocado, peeled and diced
    4 t. cider vinegar
    1 T. olive oil
    salt & pepper to taste
    1. Combine vinegar, oil, salt and pepper. Whisk until thoroughly combined.
    2. Place vegetables in a bowl; add the dressing and toss gently.

              Comments on "Cachez cette webcam" by Cyril   
    Yes, but as they say on their own site: "As with any security tool, direct or proactive attempts to specifically bypass OverSight's protections will likely succeed. Moreover, the current version of OverSight utilizes user-mode APIs in order to monitor for audio and video events. Thus any malware that has a kernel-mode or rootkit component may be able to access the webcam and mic in an undetected manner." I use both: gaffer tape and OverSight. My two cents.
              Malambo Grassroots (Rose Charities Canada Member Project): Update mid 2011   
    Zambian school children enjoy donated hoola hoops
    2011 update:

    An energetic last visit, from October 2010 to April 2011 is past. Working with donations from our salt-of-the-earth sympathizers, and with Stitchting Mwabuka Zambia, we focus on education, community development, and income generation. We …

    … started building two teachers' houses for our needy local school, Malambu Basic, without which the Ministry will not place much-needed teachers. We need to find additional funds to complete the second house.

    ...located and found funding to pay two temporary teachers to work in a school that was missing two teachers. The students had been coming to class even though there was no one to instruct.

    … donated exercise and text books, pencils, chart paper and other teachers' aids to two schools. Funded a computer for a school for child-headed households.

    … established the kernel of a library -- 56 new books and a bookshelf in the Malambo Women's Centre. Friends joined enthusiastically, clearing out their children's home shelves, and our local school children devoured them. Needed: a library building and more books, the demand being for science books especially.

    … financed a women's workshop on openly discussing issues that are difficult to talk about, or which people feel must be kept undercover, such as orphan abuse, AIDS, spousal relationships and employer/employee  relationships. (They called the workshop "Not talking the truth".)

    … expanded our chicken business project.

    … started a new chicken business project in Mujika village.

    … administered funding to support 44 students from Grades 8 to college.

    … supported 3 adults in the completion of their education, namely teacher training and tailoring.

    … built a toilet for a handicapped woman in Mujika.

    ...worked with our income generating groups to improve the design of their product lines.

    … supported a man to legally secure a land purchase that would ensure a future for his family.

    … supported the Malambo Women's Centre with building repairs and to manage large orders.

    … helped various people as requested with fertilizer, leaking roofs, transporting the sick (especially children) to hospital, boarding school supplies, the unexpected birth of twins, and a derelict old beggar.

    It was a brisk and spirited six months.

    New goals:

    Our immediate funding goals are to raise funding for...

    ...our library.

    ...teachers housing.  By 2015, the local government school is required to go up to grade 9.  In order to do this, the community, which is very poor, is required to build two houses to government standards, in order to get teacher placements.  This is beyond their ability.  If the school does not achieve this, the students in the school will be placed at the bottom of the list for available spaces for grades 8 and 9 in other schools.

    ... our scholarship program.

    ...funds to cover the cost of shipping a container of medical equipment donated to hospitals, and musical instruments donated to a school.

    ...a vehicle.  Our ancient bread delivery van, which we use to run our programs, is now held together by a lick and a prayer, and is in desperate need of replacement.  We thought we had resolved this when a Delica was given to us, but sadly, it seems the Delica is beyond repair.

    WHO WE ARE:
    Malambo Grassroots oversees a number of projects in southern Zambia, where the BaTonga people live. Our projects assist Zambians as they work toward making a better life for themselves and their families in a drought-stricken part of the country. We focus on income-generating projects, education, community programs, and emergency assistance.
    We are a member-project of Rose Charities Canada. Rose Charities Canada is a registered, non-profit organization with the Canada Revenue Agency, registration number: 859442303RR0001.


    Egg cosies made by the Lusumpuko group.
              Oracle 10g on SuSE 9.1   
    Finally got around to resolving all the issues I encountered while installing the Oracle 10g database on my Linux machine. I tried both Fedora Core 2 and SuSE Linux 9.1, and ended up sticking with a SuSE 9.1/GNOME combination for now.

    I mostly followed these instructions http://www.oracle-base.com/articles/10g/OracleDB10gInstallationOnFedora2.php

    They are for Fedora Core 2 so there are some modifications necessary.

    Here are the major points:

    Check to see that the following packages are installed:
    >rpm -q tcl openmotif compat
    tcl-8.4.6-26
    openmotif-2.2.2-519.4
    compat-2004.4.2-3

    Create the /etc/redhat-release file and edit the content to say:
    redhat-3

    Create or modify /etc/sysctl.conf (I did not have this file on my system) - add the following:
    kernel.shmall = 2097152
    kernel.shmmax = 2147483648
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    fs.file-max = 65536
    net.ipv4.ip_local_port_range = 1024 65000
    vm.disable_cap_mlock = 1

    Then run sysctl -p to activate these settings.
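
    A quick sanity check that the new values are live is to read one back from /proc (illustrative output, matching the shmmax value set above):

    >sysctl -p
    >cat /proc/sys/kernel/shmmax
    2147483648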

    I also enabled this file to be read during boot - did this via YAST - System - Runlevel Editor - (Expert Mode) - set boot.sysctl to run at the B runlevel

    I did not bother with adding the settings to the /etc/security/limits.conf since I only use this installation for light development duty.

    The rest of the install was done pretty much as indicated in the instructions linked to above.

    I did not perform the post install step where you add the "export DISABLE_HUGETLBFS=1" to an oracle script. The "vm.disable_cap_mlock = 1" in /etc/sysctl.conf solves the same problem.

    The last change was to comment out the entries for port 1830 in /etc/services - without doing this I had trouble getting the web based enterprise manager to install.


              Notes about ret2dir & PaX/Grsecurity   
    A paper "ret2dir: Rethinking Kernel Isolation" was released two years ago. It claimed that ret2dir can bypass modern mitigations including KERNEXEC/UDEREF/SMEP/SMAP/PXN. The author proposed a defensive solution is called eXclusive Page Frame Ownership (XPFO) in the paper. But it was not merge into the vanilla kernel back then. Some guys are trying to merge it again lately.

    ret2dir might be a dramatic exploit technique that is useful for bypassing mitigations, but it's not that "perfect" when it comes to PaX/Grsecurity. KERNEXEC does much more than SMEP/PXN, which simply disallow kernel execution of code in userspace. I'd like to share a few things( truths?):

    1, Even on kernels <= 3.9, a kernel patched with PaX/Grsecurity can prevent the ret2dir attack without enabling any features. ret2dir only works if a few highly situational conditions are satisfied. More detail? Plz ask those who did the tricks;-)

    2, The full ret2dir attack is based on PFN information. The paper reveals two approaches to get the information( a sketch of the 1st follows below):

    1) simply read the info from /proc
    2) physmap spraying
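
    To make approach 1) concrete, here is a minimal sketch of my own( not code from the paper) that derives the physical frame number (PFN) of a user page from /proc/self/pagemap, which is what the /proc-based exploits relied on before unprivileged access to pagemap PFNs was restricted:

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        long psz = sysconf(_SC_PAGESIZE);
        void *page;

        /* grab one page of user memory and fault it in so it is backed by a frame */
        if (posix_memalign(&page, psz, psz))
            return 1;
        *(volatile char *)page = 1;

        FILE *f = fopen("/proc/self/pagemap", "rb");
        if (!f)
            return 1;

        /* pagemap holds one 64-bit entry per virtual page of the process */
        uint64_t entry;
        fseek(f, ((uintptr_t)page / psz) * sizeof(entry), SEEK_SET);
        if (fread(&entry, sizeof(entry), 1, f) != 1)
            return 1;
        fclose(f);

        if (entry & (1ULL << 63))        /* bit 63: page present in RAM */
            printf("PFN of %p: 0x%llx\n", page,
                   (unsigned long long)(entry & ((1ULL << 55) - 1)));
        return 0;
    }

    With the PFN in hand, an attacker on a stock kernel only has to add PFN * page size to the known physmap base to get a kernel-resident alias of his own page.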

    Unfortunately, all the exploits we've found( public exploits & ones unpacked from malware) have used the 1st approach in the past 18 months. The evidence revealed that all other ret2dir exploits are copycats of these two ret2dir exploit examples( exploit writers don't work hard?):

    IMOHO, ROP is the only option left for the ret2dir attack. Otherwise, creating a ROP chain is not that easy on a PaX/Grsecurity kernel even without RAP, isn't it?
              How can we "hardened" an Android eco-system without Google?   
    .cn utilizes its shitty firewall to block every Google service, including Google Play and Nexus+OTA. Android phone vendors provide their own OTA inside .cn. From the security aspect, there are a few issues that are hard to solve.

    1) Qualcomm/Samsung/Huawei build their own BSPs based on AOSP source code. If the BSP ships without basic security mitigations, the cellphone vendors are unlikely to backport them. It will definitely be a problem for those who are concerned about security and privacy.

    2) Local small vendors upgrade very slowly and some may not even have OTA. Security patches are hard to deliver in time.

    3) .cn doesn't have Google Play, which means tons of Android apps have never been checked for malicious behavior before going online for the end-user.

    Anyway, end-users have been suffering from the philosophy of "A-bug-is-a-bug". I'm going to share two stories about hardening solutions. The 1st one is how I got here with the help of PaX/Grsecurity's previous work. The 2nd is from Baidu( I don't talk about reputation here, not today-_-). Millions of Android phones are endangered by wild malware containing kernel exploits, such as HummingBad, Godless and Hellfire( Chinese version).

    I'm pretty sure this is only the tip of the iceberg. Organizations like underground criminals and intelligence agencies might use these easy-to-implement exploits to compromise Android phones without basic mitigations. How can we still not have ret2usr protection in 2016? Okidoki, welcome to the desert of the real;-)

    I kept my eyes on which vulnerabilities and exploits exist within malware and rooting tools over the past 18 months. I figured out that some vulnerabilities are very popular with the offensive side: CVE-2014-3153( Futex vuln), CVE-2015-3636( Pingpong root), CVE-2015-0569( Prima wifi driver) and CVE-2015-1805( iovyroot).

    I was thinking: what if there were a solution to prevent those exploits without patching anything? So I tried to make a prototype on an old Nexus device with a hardened kernel. I did a few things on the Nexus 7 2013's kernel( repo based on Jan 2014) last year:

    1, Ported PaX to flo's kernel, which is based on 3.4. Note: what I used is a relatively weak version of PaX, without KERNEXEC/UDEREF/RAP and those strong Grsecurity features for x86.

    2, Ported PXN( armv7). A minimal memory mapping restriction might be the 1st step for ret2usr protection, and PXN should be the 2nd one.

    3, Backported a security fix for CVE-2014-3153, which was the only vulnerability that needed to be fixed in my kernel, because this version doesn't have UDEREF/PAN. Fortunately, Kees Cook has done a backport of software-based PAN for 4.1.

    4, Prevented infoleaks to make exploit writers' lives a bit harder.
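
    On point 4, a couple of stock sysctls already help even without PaX; a minimal sketch (both knobs have been in mainline since 2.6.37/2.6.38, so a 3.4 BSP kernel has them):

    # hide kernel pointers in /proc/kallsyms and friends from everyone
    echo 2 > /proc/sys/kernel/kptr_restrict
    # deny dmesg to unprivileged users, another classic source of leaked addresses
    echo 1 > /proc/sys/kernel/dmesg_restrict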

    I've been using rooting tools like TOWELROOT/KINGROOT/360ROOT to test this hardened version. None of their exploits could work until last month( maybe). I've also modified some need-to-be-hardcoded public exploits to test and I got the same result. Well, guess it seems not bad( yet).

    Baidu( Google's competitor in .cn) proposed a solution a couple of months ago at HITB AMS and then released more info in Chinese here and an English version at Black Hat USA.

    Unfortunately, they don't share how they get root( so obviously;-)) in the 1st place. The complete steps should look like this( let me know if I'm wrong):

    1) End-users install their apps

    2) Rooting it via those easy-to-implement exploits( or getting root by reversing-_-)

    3) Insert a rootkit( based on an inline hook framework) & Luapatch( policy engine) into the kernel. I'm very curious whether the Baidu guys co-operate with Huawei, cu'z Luapatch looks similar to Huawei's ktap.

    4) Then fixing bugs ...let me guess: if you have a rootkit in someone else's kernel... well, shit will happen, as always. Otherwise, the policy engine & rootkit themselves may also have vulnerabilities. It's possible that adversaries( criminals/intelligence) will act if this solution becomes popular.


    IMOHO, I prefer the 1st way to solve the problem. But it's hard to convince vendors to merge the hardening patches. The 2nd solution may pose a potential risk to privacy; no one wants someone else's "god mode" in their cellphone, do they?

    I've been analyzing different situational hardening solutions and exploit methods. On the defensive side, I hope more mitigations land in the AOSP kernel. Otherwise, KSPP is another way to improve Android security. Mitigation is the way to solve those issues once and for all.

    Update( Oct 31 2016):  The hardened PoC for Android needed a backported fix for CVE-2016-5195( a.k.a "DirtyCOW"), cu'z it's a dangerous threat to all Android devices. There are a dozen public PoCs, so it'd be much easier for attackers to forge weaponized exploits to target Android devices.
              Is Linux kernel secure?   
    I've read an article, "Net of insecurity: The kernel of the argument", from The Washington Post today. It's a fuc*ing good one. I've been tortured by the security status of the *stable* Linux kernel for a fuc*ing long time. I never saw an article that could talk about the truth like this one. Many commercial customers( especially from financial data centres) have found it painful to use commercial GNU/Linux products for years. Remember those 0ld good null-deref exploits and the Enlightenment framework back in the 2000s? What did Linus and these commercial GNU/Linux vendors say back then? They said "a bug is a bug" is one thing, while SELinux protecting your assets is another. Unfortunately, they are lying to you, as always.......

    I'm not going to talk about that shitty history right here. You can google it if you really want to know the truth. A little advice: you could start from here.

    Well, speaking of the history of mitigation, I highly recommend you go through thinkst's presentation at BH'10. Who the hell can explain the history in as much detail as he did;-)

    Black Hat USA 2010: Memory Corruption Attacks: The Almost Complete History

    http://thinkst.com/resources/slides/bh-2010-haroon-meer-keynote.pdf

    1/5: https://www.youtube.com/watch?v=stVz9rhTdQ8
    2/5: https://www.youtube.com/watch?v=HJwg5vdoWCY
    3/5: https://www.youtube.com/watch?v=5vDRCi6OQuw
    4/5: https://www.youtube.com/watch?v=9edv8FwmJzk
    5/5: https://www.youtube.com/watch?v=4XEe5I4Wsrc


    "As long as there is technology, there will be hackers. As long as there are hackers, there will be PHRACK magazine."( Quoted from Phrack Issue 63). As long as there are vulnerabilities, there will be exploits. As long as there are exploits, there will be mitigation.........

    Basically, the possible evolution of an exploitable bug should look like this:
    ---------------------------------------
    Bug –> exploitable bug(vulnerability) –> poc –> exploit –> reliable/weaponized exploit
    ---------------------------------------

    That's where the problem comes in. There are two types of philosophical ideas about how to deal with exploitable bugs.

    1, Linus Torvalds represents the philosophy of "a bug is a bug", which believes any exploitable bug should be taken care of like a normal bug. When one is found, just fix it. Any security mitigation is a complete waste of CPU usage. Developers should only focus on features and performance. He( and his followers) even believe that obscurity of bug info is the way to prevent attackers, and that "security through obscurity" is an effective approach for the Linux kernel upstream.

    2, PaX Team and spender are the most fascinating guys on the side of security mitigation. They( I) believe that numerous exploitable bugs cannot be solved once and for all by fixing them, but we can design specific security mitigations against specific types of vulnerabilities. That's the only way to solve this issue.

    Well, those two philosophical ideas are totally different. Why the hell does this happen? IMOHO, one of the main reasons is that the threat models are totally different. In my own adversary model, the attackers may have weaponized exploits developed by a digital armory( Vupen, HT?) or the underground, while there are only skiddies in Linus's threat model( it seems, at least;-)).

    Some commercial GNU/Linux vendors basically believe a public exploit is the most important factor influencing their risk assessment. Don't believe that? They admitted it themselves;-)

    A lot of my customers always say that one of the reasons they chose GNU/Linux as their alternative to UNIX is that GNU/Linux is secure. I've been wondering all the time and responding like "ARE U fuc*ing serious?". Now GNU/Linux is diving into the next age of the Internet, which some people like to call IoT( Internet of Things). But the question is: is the Linux kernel ready to face the tons of cybercriminals? You fuc*ing tell me........

    btw: Kernel/Compiler/Firmware are very important core infrastructures of the modern cyber world. A lot of good people are busy defending our world with their effort. The PaX/Grsecurity guys are my heroes. Reproducible builds( based on the theory of DDC, by David A. Wheeler) are definitely gonna piss the NSA off. CHIPSEC( for firmware) may be the starting point. I do believe only refined FOSS solutions can make this world a little more secure......
              HIGHRES TIMER can be your DoS nightmare   
    This is a real-life story about the HIGH RESOLUTION TIMER and how lame
    coders use it to build a self-DoS;-) You should be very cautious if
    your system was written by that type of coder.

    Incident happened:

    1, A dozen RHEL 6 GNU/Linux servers were extremely slow while
    running some *** applications. Kernel CPU usage was about
    40%-50%.

    2, the "free" item from vmstat was not seems OK. "free" was keep
    increasing but "buff" & "cache" were decreasing when a bunch of data
    went through. Then kernel gave you a *hint* about OOM( Out of Memory):

    "kernel panic - not syncing: Out of memory and no killable processes..."

    Then the kernel tried to kill processes one by one until shit happened:
    a kernel panic.

    I began this investigation with strace. The result was quite
    strange. Why would the application (malware?) invoke the syscall
    nanosleep() so often? Every 10000 ns (10 us)? Seriously? All I can tell
    is that the application doesn't need to do real-time work.

    --------------------------------------------------------------
    15:30:08.002047 nanosleep({0, 10000}, NULL) = 0 <0.000082>
    15:30:08.002175 nanosleep({0, 10000}, NULL) = 0 <0.000074>
    15:30:08.002297 nanosleep({0, 10000}, NULL) = 0 <0.000074>
    ...
    15:30:09.917557 nanosleep({0, 10000}, NULL) = 0 <0.000075>
    15:30:09.917661 nanosleep({0, 10000}, NULL) = 0 <0.000071>

    --------------------------------------------------------------

    The customer said it never happened on 0ld good GNU/Linux systems
    (like RHEL 5). My gut pointed me in one direction: the High Resolution
    Timer, a type of kernel timer that provides more accurate time
    measurement. I read the Linux manual pages and the very well explained
    kernel docs, and learned that the HIGHRES TIMER was added to the
    upstream code in 2.6.21. So I guess.. just guess.. some lazy & lame
    coder just wanted to make the program "sleep" for a very "short" time.
    Then he/she wrote this code very confidently:

    usleep(10);

    If you're running a Linux kernel before 2.6.21, this line of code will
    actually sleep between 1 ms and 2 ms. But.. the annoying *but* is
    coming.. if you're running a *modern* GNU/Linux distro with HIGHRES
    support, the same code will really sleep 10 us, which may cause a
    performance hit. The CentOS community had a similar issue before:



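    For illustration, here is a minimal sketch (hypothetical; not the
    customer's actual code) of the kind of polling loop that triggers
    this behaviour:

    #include <unistd.h>

    // Hypothetical polling loop: wait for work by sleeping "briefly".
    // On a pre-2.6.21 kernel, usleep(10) silently rounds up to 1-2 ms,
    // so this loop wakes up ~500-1000 times per second. With HIGHRES
    // timers, the same call really sleeps 10 us: ~100,000 wakeups per
    // second, most of it syscall and timer overhead in kernel mode.
    void wait_for_work(volatile bool &work_ready) {
        while (!work_ready) {
            usleep(10); // "sleep a very short time" -- the self-DoS
        }
    }

    A condition variable or a blocking read would avoid the wakeups
    entirely; failing that, sleeping for a realistic interval (say, a few
    milliseconds) keeps the syscall rate sane even with HIGHRES on.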
    From the evidence we have, there are two clues that might lead us to
    the crime scene, the High Resolution Timer:

    1, nanosleep() was invoked >=8k times every fuc*ing second.

    2, The victim kernel was not running with kdump, but we still had
    some kernel logs. According to the call trace, the fact that the
    kernel was playing in HIGHRES-related context should not be a
    coincidence:

    [] ? audit_syscall_exit+0x27e/0x290
    [] ? sysret_audit+0x16/0x20
    [] ? __hrtimer_start_range_ns+0x1a3/0x460
    [] ? sysret_audit+0x16/0x20
    [] ? sysret_audit+0x16/0x20
    [] ? audit_filter_rules+0x2d/0xa10
    [] ? audit_syscall_exit+0x27e/0x290
    [] ? sysret_audit+0x16/0x20
    schedule_timeout: wrong timeout value ffffffffffffb572


    Solution:

    I'm giving you two options:

    1, Modify the *sleep*-related functions in the source code (if you
    have it) and tell the fuc*ing coders they can go home and fuck
    themselves.

    2, Append "nohz=off highres=off" to the kernel command line in
    /etc/grub.conf to turn this fuc*ing feature off.


    Testing result:

    Unfortunately, we had to test this in a production system..but we did
    it.

    +------------+---------------+--------------+
    | Item       | HIGHRES ON    | HIGHRES OFF  |
    +------------+---------------+--------------+
    | nanosleep  | >8,000 times  | 345 times    |
    | buff/cache | Decreasing    | Increasing   |
    | %sys       | 50%           | 6%           |
    +------------+---------------+--------------+

    Well, I guess we arrested the *perpetrator* this time. Damn... not
    every business impact is caused by security issues;-)

              Happy New Year 2015   
    Time is running on and brings us to another new year. Does this fuc*ing mean another fight? I've been sitting on my butt and watching a lot of presentations from 31C3. Unfortunately, I couldn't be there physically. I'm fuc*ing jealous of you guys who were there;-)

    I've learned a lot from these videos. So, I'd like to write down what I thought about some great topics.

    31C3 Opening Event [31c3] with Erdgeist and Geraldine de Bastion

    The entire IT trade fair will move into new halls on the grounds, so exhibitors will also have to put their appearances on a new footing. At the centre of the new four-part concept is a kind of campus under the Expo roof („d!campus"), intended to unite digitalisation and culture. The previous conferences will be absorbed into a new talk-show format („d'talk") offering a platform to lateral thinkers, visionaries and experts. The core element („d!conomy") serves business matchmaking and the presentation of new technologies, while another element presents research and young companies („d!tec").

    Cebit 2018: No more separate conference tickets

    Opening hours will be pushed back by an hour; in future the fair will run from 10:00 to 19:00, though the grounds will stay open until 23:00 for the evening programme. There will no longer be separate tickets for the previous conference series. Frese: "There will be no day tickets, only full-event tickets." However, for Friday ("Digital Friday") there will be a public ticket aimed at a broad, tech-enthusiastic audience. The Monday of the fair is to be reserved for political events; the remaining three days will then be devoted entirely to business.

    This year more than 3,000 exhibitors from 70 countries attended; with the new concept, Frese expects at least the same turnout. Since 1986 Cebit had taken place every spring, four weeks before the larger Hannover Messe, from which it once emerged as an independent event.

    Originally an abbreviation for „Centrum der Büro- und Informationstechnik", Cebit will in future be written entirely in capital letters. It takes place from 11 to 15 June 2018 and is to be opened up more strongly to private visitors again. In recent years it had changed from a computer show into a pure trade fair for business processes; today it sees itself as the leading trade fair for digitalisation. dpa

    Related: "Forget everything you knew about Cebit": the computer fair becomes an event


              ACPI: Advanced Configuration and Power Interface   

    Author: Emma Jane Hogbin

    Language: English

    Published: 2004

    Outlines how to patch a kernel for ACPI support.


              Re: Could Linux/Mac Security Reputation be Damaged when its Popularity Rises?   

    I always laugh when I hear that argument.  Popularity has nothing to do with security, but usability has a great deal to do with it.  XP can be a secure and stable system, but only if it is locked down to the point that it is barely usable, hence the reason why nearly everyone runs XP with admin rights.  Vista's new model tries to address this, but it is a hacked-on kludge that has its own new problems.  Linux and Mac have graceful ways of allowing users elevated access without a popup asking for their password. 

    (And yes, Linux is just as easy to use as MS or Apple, it's just different.  It's not popular because the selection of available games and proprietary applications sucks.  Most people don't care about the OS, they just want their apps to work.)

    The clear separation of users and applications from the kernel allows *nix clones to control security in many different ways and to be as restrictive or permissive as the needs demand.  MS has only one security model, which seems more concerned with license activation than real security. 


              Mini FreeNAS Desktop System from Server Case UK   
    Server Case UK has been assembling server systems since 2006, and we've worked with lots of software and hardware solutions - mainly creating custom solutions. We've recently been working a lot with the FreeNAS software. It's a superb software package, free to download online, and has an excellent plugin library that offers pretty much everything you would need for a home or small office NAS solution. FreeNAS is a non-commercial product and is not supported for commercial use, so it is intended for home users. That being said, we know a lot of customers who use FreeNAS for business, and we would still recommend it for a non-critical environment.

    FreeNAS is based on FreeBSD, a Unix operating system that has been around for many years. Its FreeBSD base, however, is very cut-down, with all unnecessary components removed and the kernel refined for use as a NAS system. The FreeBSD operating system offers a very reliable platform for this type of product, and similar purpose-built operating systems are used on other popular NAS devices such as Synology, QNAP and Thecus. On top of the FreeBSD OS, iXsystems, the creators of FreeNAS, have created a web-based GUI, which has undergone years of development and is now one of the best interfaces around for easy web-based NAS management. This GUI ties together the various features of the OS which make up the NAS solution, such as the ZFS file system and file sharing daemons (such as Samba for Windows/CIFS shares). It works in all standard desktop browsers, but will also function in a mobile device's web browser. The interface is fast, intuitive and easy to use. A novice user can easily set up a NAS system with this software - its wizard setup for most areas makes it easy to use, while the advanced elements allow for significantly greater control than other products on the market such as QNAP, Synology etc.

    We can offer custom-built NAS solutions for home or business using FreeNAS, and have successfully built and deployed many server systems for our customers - from small desktop mini systems through to large 45-bay rackmount monsters in an RSYNC replication environment.

    FreeNAS is free to download from www.freenas.org - if you want to get started, you can install it on pretty much any modern PC with a 64-bit compatible CPU - so anything from a Core 2 Duo upwards, or various AMD CPUs. FreeNAS recommends 1GB of RAM per 1TB of storage you set up, although this is not critical and using less will not hinder performance for the majority of users. A dual core or higher CPU is also recommended. FreeNAS supports many NIC cards; we've had the best success with Intel NICs, so we would recommend an onboard or add-on Intel-based NIC. We've not come across a SATA controller which isn't supported, so pretty much any add-on or onboard SATA controller should work. The only important thing to remember is that FreeNAS must be installed on a USB key or a small SATADOM. The OS cannot share a drive with storage and will use the entire drive for its OS - so if you plan on using a 1TB HDD for FreeNAS then this will be a waste, as the OS only uses around 2GB of space. We have had good success with the Kingston range of USB drives.

    Desktop FreeNAS System - 4x 3.5" Hot-Swap Micro System

    We've put together an excellent mini FreeNAS desktop product, available to buy as a turnkey solution from us.
This product is based on our popular Logic Case ITX chassis, which is very quiet in operation and has 4x SATA 6Gbps hot-swap bays - compatible with both 2.5" and 3.5" HDDs without the need for any brackets. The case itself has a really nice look and feel, with an attractive design. The front door protects the installed drive bays and has a mesh design, so activity lights can be easily viewed. The front door is also lockable. Installed within this product is a 320W 80 Plus low-noise PSU. The whole system is based on a Supermicro X10SBA server motherboard, which has an embedded Intel quad-core Celeron CPU and 8GB RAM. The board also has dual Gigabit LAN onboard. Onboard USB 2.0 and USB 3.0 allow for easy expansion with backup USB drives, such as USB flash drives or offsite backup hard drives. Please have a look at our FreeNAS offering. This is a pre-assembled system, with FreeNAS fully installed. All you need to do is install your hard drives - these can be spare drives you have or new ones. We can also supply hard drives and have linked in Western Digital Red drives in the Compatible Products section. To view our product, please click on the link below; If you would like to discuss your FreeNAS requirements, please contact us and our team would be happy to work with you.
              iOS 9.3.4 Security Update Available   
    Apple has released an update for its mobile operating system, iOS 9.3.4, which fixes a memory corruption issue that could be exploited “to execute arbitrary code with kernel privileges.” The update blocks a jailbreak bug in IOMobileFrameBuffer. Please check your iOS device to ensure it is fully updated and patched.  This is done under […]
              By: Jeff Schroeder   
    You might also take a look at:
    www.kroah.com/lkn/ : Linux Kernel in a Nutshell
    Linux Device Drivers 3rd Ed
    http://www.phptr.com/promotions/promotion.asp?promo=1484&redir=1&rl=1 : Bruce Perens Open Source Series Page... 24 Open Publication licensed Books
              Leon leads Burlington to 5-2 win over Cedar Rapids   
    CEDAR RAPIDS, Iowa (AP) -- Julian Leon hit a three-run home run in the third inning, leading the Burlington Bees to a 5-2 win over the Cedar Rapids Kernels on Wednesday.
              Poultry will be attracted    
    This idea is touted by no less an authority than the National Audubon Society. The idea is to create a place in which one increases the food, water, shelter and nesting opportunities for wildlife while reducing water and chemical use. Generally, one landscapes so that lawn area is decreased while the mixture of native, non-invasive plants is expanded. Consider choosing plants that cover a range of food sources. Food sources include nuts, seeds, fruit and nectar, and each group attracts a different array of birds. Nectar can be provided by red tubular flowers: scarlet sage, columbine, lobelia, penstemon, azalea, fuchsia or Bee Balm. Hummingbirds and orioles are the species attracted. Woodpeckers, nuthatches, jays and even poultry will be attracted to oaks, hickory, buckeye, chestnuts and walnuts. Fruit-bearing plants such as dogwood, serviceberry, cherry and Red Mulberry will bring thrushes, Veery, robins, catbirds, Cedar Waxwings, tanagers, wrens, vireos and warblers. Seed-bearing plants include sunflowers, coneflowers, wildflowers (the birds may like these, but I'd warn that more than a few of you are allergic to them; I am), pine, maple and alder. Grosbeaks, finches, cardinals, Pine Siskins, juncos, titmice and doves will appreciate these seed-bearing plants. Always select plants well suited to your locality. Arranging these in your yard can provide you with an idyllic and visually striking back yard as well as providing food and shelter for the birds. Since moving to this house and planting lots of the above, we've recorded some 54 species of birds visiting us. Water attracts birds and wildlife. Bird baths placed around the yard add to the aesthetics while providing drinking and bathing opportunities for our feathered friends. Our bird bath is set so that the birds can quickly dive into the Rose-of-Sharon if surprised. If you have a natural source of water, open it up to the birds, to your advantage. The ravine behind our house provides little water but a rich and diverse environment that has attracted even the likes of Sharp-shinned and Red-Shouldered Hawks and Great Horned and Screech Owls.
              Devcon 2017 - Day Two   

    Day two dawns on what is a much better day in now-sunny Portugal. The team has emerged, blinking slowly in the unfamiliar daylight - copious amounts of espresso have been drunk to neutralise the effects of all those craft beers - and we're off once again. This is also probably an opportune time to publicly say thank you to Wetek, who joined us last night and whose generous sponsorship has helped to make this DevCon happen.

    Razze opened day two with a presentation of the translation/internationalisation (i18n) process in Kodi, specifically around the workflow required as strings move back and forth to Transifex.  

    We then moved on to one of our perennial topics: piracy boxes and how we can better distance ourselves from them. What you do with Kodi remains your business, but people will be well aware of the ongoing battle we have to defend our name, and break the popular link between Kodi, sub-standard hardware, and unlawful streaming.

    Next up was Paxxi with a presentation around the core Kodi architecture (form and function within the code) and how that could be improved. As examples, there are several areas in which code could be compartmentalised, or where platform abstraction could be made cleaner, as well as possible enhancements to inter-process messaging queuing. Taken together with other changes, these would give a much clearer model of "this code, this function, this way of talking to other threads". Paxxi also covered planned changes to how we handle threading within Kodi - particularly within the user interface - which would make the whole application feel much more responsive.

    We then moved on to platform status - where are we on the different operating systems we support.

    • First on was Memphiz with all things Apple. iOS - iTunes, jailbreaking, Cydia, Apple's end-of-support for 32-bit applications, TVOS; MacOS/OSX - hardware decoding, GPU support on older devices.
       
    • Koying (our long-standing Android developer) then covered Android:  decryption, OS release support (especially as we move to Leia).
       
    • Linux was the next in line, with Lrusak taking the stage. Linux, by its nature, is a very fragmented platform in terms of hardware support, and that's a challenge for e.g. video decoding APIs; it's the same issue for compositors and window managers when it comes to the presentation layer. We need to simplify how we deal with all the different SoCs out there because of the complexity it's driving into our code. The other major change we're seeing is the acceptance of CEC into the Linux kernel, which will remove our dependency on libcec at some point in the future.
       
    • Paxxi then rounded this section off with Windows: the desktop bridge/Windows Store version of Kodi, challenges around porting to UWP (it's effectively like a new OS), 64-bit (x86-64) support and the subsequent end-of-life of 32-bit.

    Next on, a discussion about attitude and communications within the team and with the community, courtesy of Lrusak. We're well aware that there's sometimes a fine line between "heated debate" and "abuse", and the Internet can bring out the worst in people (never read the bottom half of the Internet, and all that). Similarly, people can be terse when bashing out a quick response to something, especially on a mobile device; this can easily cause unintentional offence, which in turn reflects badly on everyone involved.

    What's meant as humour or sarcasm can easily be understood as a snide, unhelpful remark; too much of this, and you're well on your way to creating a hostile culture. Kodi is not a company with employees and HR rules that we can enforce, but is in no way the worst offender in the FOSS community when it comes to this behaviour. However, we also know that we're not the best either. We can and will improve.

    Changing gear a little back to more technical matters, Chewitt then took us through an update on LibreELEC: project principles, new relationships/contacts, active installation base, platforms in use, recent developments, future roadmap (especially security improvements and platform specifics, particularly how to deal with the increasing variations of ARM SoCs), project funding, project governance.

    Kwiboo then took the stage to talk about ongoing work to implement LibreELEC on the Tinker Board - a Rockchip-based alternative to the Raspberry Pi that was recently launched by ASUS. By some measures, it's twice as fast as the Pi, but maintains the same form factor and GPIO layout. While it was launched as very much a "plaything" (hence the name), and lacks the out-of-the-box software of the Pi, the implementation of LibreELEC has gone a long way to make it a workable HTPC platform. The project has generated much platform-specific work on various application and library code which now need to be merged back upstream.

    Phil65 talked about skin development, specifically the KodiDevKit plugin for Sublime Text 3 (ST3) and how this can be used to streamline the development process - live information in the editor regarding what state Kodi is in, or what images it has loaded and processed, for example.

    Nearing the end of the day, Natethomas gave an update on this year's Google Summer of Code (GSoC) process. We want to take part, and we want to "give back" by helping the students with their skills (as well as getting the benefit of their input, of course). However, it isn't necessarily a trivial task, and we need our development team to be more active as mentors - both during the development process, and afterwards, as we mop up and document what was achieved. We have our projects identified, so let's embrace them.

    Finally, to close the day, Keith covered Business Development and Conferences. The Kodi team has become much more active on the conference circuit, and we've had people attend many events over the past year: CES, VideoLAN Dev Days, Open Source Leadership Summit, Embedded Linux Conference (both Europe and North America) - and we have a couple of people attending Microsoft Build later this week. They're very effective ways to make new contacts, interact with peers in the FOSS/multimedia community, and raise the project's profile. These events have given us access to folks like Amazon, who have made huge strides in taking down the "fully loaded" boxes; eBay is lagging, but we're making slow progress there as well. On the BusDev side, we're still really keen to get official content owners on board, but that is heavily predicated on being able to protect the content rights, so there remains much to do in this space.

    A long and productive day, followed by some more team building - this time, with the Wetek guys, did I mention them enough? :) - and then on to another local watering hole... 

    PS We also broke Jenkins. Badly. Oops. No nightlies for a few days while we <cough> restore. Sorry about that.

              Not Dead Yet - Rick's Pick 2016 - The Best Radio You Have Never Heard Vol. 302   

    NEW FOR JANUARY 15, 2017

    For a full 10 years now (math properly checked this time), BRYHNH producer Rick From New York has been tasked with what has developed into one of each year's most anticipated shows. This annual event is Rick's perspective on the best tracks played on BRYHNH from the previous year, affectionately known as Rick's Picks.
    Now to be sure, this end of year "Best Of" list is not what you might expect. This is Rick From New York meticulously sifting through Volumes 274 - 300 and choosing the finest tracks, both new releases and classics you may or may not have ever heard before, that graced the BRYHNH feed from the past year.

    And again, Rick has picked a list of winners like a six year old can pick a booger.

    As you will read below, there is a dispute over the linear nature of the events that led up to the thematic content dispute between episodes 301 and 302 that Rick attributes to a Mexican cactus debauch. But I heartily say this can only be settled by a Mexican cactus summit on my turf. Where my people can provide protection.
    It was only business, Rick. I always liked you.
    So as the bell rings to start round one, I present to you: Not Dead Yet - Rick's Picks 2016

    We now join Rick in progress . . .

    I've been doing this for a while now.

    In December, every year since 2006, I get a little kernel of anxiety in the pit of my stomach as the deadline for the January 15th "Rick's Picks" approaches. I want to do a good job of representing BRYHNH, staying true to the Brand while inserting my own take on the year's offerings, but... am I up to it?

    A couple of times I thought about trying to back out because, after all, December can be a crazy-busy time and, believe it or not, putting this podcast together has more to it than you might think, as I have written about previously. This year the kernel became a throbbing mass as I realized that my ears had died to most of my musical interests. Because the nature of my obsessions is not fully understood, or under control, I am constantly led down odd streets, and sometimes... blind alleys. How else to explain my year-long obsession with the music of "Hamilton: an American Musical"?

    December arrived once again, along with the gnawing realization that I had only a very short time to immerse myself in the BRYHNH universe to pull out an acceptable contribution. Well, inspiration arrived just like a shiny Christmas morning; how could I not acknowledge the passing of so many music icons in 2016? All I had to do was cherry-pick the years' podcasts for my dearly departed playlist! Perfect! And easy, I thought. I promptly emailed Perry Bax with my idea, to which he responded with a hearty thumbs up, and encouragement that I was "on a good track".

    Imagine my surprise when on 01.01.17 "In the Wake of 2016" drops. Bowie, Prince, Russell, Frey and Emerson...all there. Even a belated Leonard Cohen track I had planned on using on my fadeout! Was I upset? No. Taken aback? Definitely. But what's a bit of larceny between friends, eh? I may have emailed during a Mexican Holiday Tequila Debauch and in the cold light of day (ouch!), I'm sure Mr. Bax thought it was HIS idea.
    Just to prove that I remain light on my feet after all these years, here is my musical riposte to "In the Wake of 2016".

    Ladies and Gents, I give you "Rick's Picks for 2016: Not Dead Yet".

    Alive and kicking, Rick from New York

    Not Dead Yet - Rick's Picks 2016

    1. After Midnight (live) - Eric Clapton w/ Derek Truck and Doyle Bramhill Buy From iTunes*
    2. Street Fighting Man (live) - The Rolling Stones Buy From iTunes*
    3. After The Gold Rush (live) - Neil Young w/ The Promise of Real Buy From iTunes
    4. Woodstock (live) - Joni Mitchell Buy From iTunes*
    5. Wristband (live) - Paul Simon Buy From iTunes
    6. Tomorrow Never Knows (live) - Govt. Mule Buy From iTunes*
    7. Hey Joe (live) - Bad Company Buy From iTunes
    8. Big Yellow Taxi (live) - Joe Jackson
    9. Oops ! I Did It Again (live) - Richard Thompson Buy From iTunes*
    10. Peaches en Regalia (live) - Phish Buy From iTunes
    11. I Want More (live) - Tedeschi Trucks Band Buy From iTunes*
    12. Smash The Mirror / We're Not Gonna Take it (live) - The Who Buy From iTunes
    13. The Greatest Thing (live) - Elvis Costello and The Attractions Buy From iTunes*
    14. Beautiful (live) - Carole King Buy From iTunes
    15. There Will Be Time (live) - Mumford and Sons w/ Baaba Maal, Beatenberg, and The Very Best Buy From iTunes

    The Best Radio You Have Never Heard.
    Just another piece of billion year old carbon.
    Accept No Substitute.


              Crinkler secrets, 4k intro executable compressor at its best   
    (Edit 5 Jan 2011: New Compression results section and small crinkler x86 decompressor analysis)

    If you are not familiar with 4k intros, you may wonder how things are organized at the executable level to achieve this kind of packing performance. Probably the most important and essential component of 4k-64k intros is the compressor, and 4k intros have been surprisingly well equipped for the past five years: Crinkler is the best compressor developed so far for this category. It was created by Blueberry (Loonies) and Mentor (tbc), two of the greatest demomakers around.

    Last year, I started to learn a bit more about the compression technique used in Crinkler. It started from some pouet comments that intrigued me, like "crinkler needs several hundreds of megabytes to compress/decompress a 4k intro" (wow) or "when you want to compress an executable, it can take hours, depending on the compressor parameters"... I also observed bad compression results while trying to convert some parts of C++ code to asm using crinkler... With this silly experiment, I realized that in order to achieve a better compression ratio, you need code that is compression-friendly but not necessarily smaller. In other terms, the smallest asm code is not always the best candidate for better compression under crinkler... so right, I needed to understand how crinkler works in order to write crinkler-friendly code...

    I just had basic knowledge about compression; probably the last book I bought on the subject was more than 15 years ago, to make a presentation about JPEG compression for a physics course (that was a way to talk about computer-related things in a non-computer course!)... I remember that I didn't get far in the book, and stopped just before arithmetic coding. Too bad: that's exactly one part of crinkler's compression technique, and it has been widely used in recent years (and studied for the past 40 years!), especially in codecs like H.264!

    So wow, it took me a substantial amount of time to jump back on the compressor train and to read all those complicated statistical articles to understand how things work... but it was worth it! At the same time, I dissected crinkler's decompressor, extracting its code in order to comment it and compare its implementation with my own little tests in this field... I had a great time doing this, although, in the end, I found that whatever I did, under 4k, Crinkler is probably the best compressor ever.

    You will find here an attempt to explain a little bit more of what's behind Crinkler. I'm far from being a compression expert, so if you are familiar with context modeling, this post may sound a bit light, but I'm sure it could be of some interest to people like me, who are discovering things like this and want to understand how they make 4k intros possible!


    Crinkler main principles


    If you want a bit more information, you should have a look at the "manual.txt" file in the crinkler archive. You will find lots of valuable information there, ranging from why the project was created to what kind of options you can set for crinkler. There is also an old but still accurate and worth-a-look PowerPoint presentation from the authors themselves, available here.

    First of all, you will find that crinkler is not, strictly speaking, an executable compressor but rather an integrated linker-compressor. In the intro dev toolchain, it's used as part of the build process, in place of your traditional linker... while also having the ability to compress its output. Why is crinkler better suited at this level? Most notably because, at the linker level, crinkler has access to the individual portions of your code and your data, and is able to move them around in order to achieve better compression. Though, about this choice, I'm not completely sure: this could also be implemented as a standard exe compressor, relying on the relocation tables in the PE sections of the executable and a good disassembler like beaengine in order to move the code around and update references... So crinkler, cr-linker, compressor-linker, is a linker with an integrated compressor.

    Secondly, crinkler uses a compression method that is far more aggressive and efficient than the old dictionary-coder LZ methods: context modeling coupled with an arithmetic coder. As mentioned in the crinkler manual, the best place I found to learn about this is Matt Mahoney's resource website. This is definitely the place to start when you want to play with context modeling, as there is lots of source code and previous versions of the PAQ program, from which you can learn gradually how to build such a compressor (particularly the earlier versions, when the design was still simple to follow). Building a context-modeling based compressor/decompressor is accessible to almost any developer, but one of the strengths of crinkler is its decompressor size: around 210-220 bytes, which makes it probably the most efficient and smallest context-modeling decompressor in the world. We will also see that crinkler made one of the simplest choices for a context-modeling compressor, using a semi-static model in order to achieve better compression on 4k of data, resulting in a less complex decompressor as well.

    Lastly, crinkler optimizes the usage of the PE file (the Windows Portable Executable format, the binary format of a Windows executable file; the official description is available here), mostly by removing the standard import table and DLL loading in favor of a custom loader that exploits internal Windows structures, and by storing function hashes in the header of the PE file to recover the DLL functions.

    Compression method


    Arithmetic coding


    The whole compression problem in crinkler can be summarized like this: what is the probability that the next bit to compress/decompress is a 1? The better this probability matches the actual bit, the better the compression ratio. Hence, crinkler needs to be a little bit psychic?!

    First of all, you probably wonder why probability is important here. This is mainly due to a compression technique called arithmetic coding. I won't go into the details here and encourage the reader to read the Wikipedia article and related links. The main principle of arithmetic coding is its ability to encode into a single number a set of symbols for which you know the probability of occurrence. The higher the probability of a known symbol, the fewer bits are required to encode its compressed counterpart.

    At the bit level, things get even simpler, since the symbols are only 1 or 0. So if you can provide a probability for the next bit (even if this probability is completely wrong), you are able to encode it through an arithmetic coder.

    A simple binary arithmetic coder interface could look like this:
    /// Simple ArithmeticCoder interface
    class ArithmeticCoder {

        /// Decode a bit for a given probability.
        /// Returns the decoded bit, 1 or 0.
        int Decode(Bitstream inputStream, double probabilityForNextBit);

        /// Encode a bit (nextBit) with a given probability.
        void Encode(Bitstream outputStream, int nextBit, double probabilityForNextBit);
    }

    And a simple usage of this ArithmeticCoder could look like this:
    // Initialize variables
    Bitstream inputCompressedStream = ...;
    Bitstream outputStream = ...;
    ArithmeticCoder coder;
    Context context = ...;

    // Simple decoder implementation using an arithmetic coder
    for (int i = 0; i < numberOfBitsToDecode; i++) {
        // Make use of our psychic, alias the Context class
        double nextProbability = context.ComputeProbability();

        // Decode the next bit from the compressed stream, based on this
        // probability
        int nextBit = coder.Decode(inputCompressedStream, nextProbability);

        // Update the psychic and tell him how wrong or right he was!
        context.UpdateModel(nextBit, nextProbability);

        // Output the decoded bit
        outputStream.Write(nextBit);
    }

    So a binary arithmetic coder is able to compress a stream of bits if you are able to tell it the probability of the next bit in the stream. Its usage is fairly simple, although implementations are often really tricky and sometimes quite obscure (a real arithmetic coder implementation has to face lots of small problems: renormalization, underflow, overflow... etc.).

    Working at the bit level here wouldn't have been possible 20 years ago, as it requires a tremendous amount of CPU (and memory for the psychic context) to calculate/encode a single bit, but with today's computing power, it's less of a problem... Lots of implementations work at the byte level for better performance; some work at the bit level while still batching decoding/encoding results at the byte level. Crinkler doesn't care about this and works at the bit level, implementing the arithmetic decoder in fewer than 20 x86 asm instructions.

    The C++ pseudo-code for an arithmetic decoder is like this:

    int ArithmeticCoder::Decode(Bitstream inputStream, double nextProbability) {
        int output = 0; // the decoded symbol

        // range and value are the decoder state, kept across calls
        // (unsigned 32-bit members of ArithmeticCoder).
        // renormalization
        while (range < 0x80000000) {
            range <<= 1;
            value <<= 1;
            value += inputStream.GetNextBit();
        }

        // split the range according to p(1); the top subRange encodes a 1
        unsigned int subRange = (range * nextProbability);
        range = range - subRange;
        if (value >= range) { // we have the symbol 1
            value = value - range;
            range = subRange;
            output++; // output = 1
        }

        return output;
    }

    This is almost exactly what is used in crinkler, but done in only 18 asm instructions! The crinkler arithmetic coder uses 33-bit precision. The decoder only needs to handle renormalization up to the 0x80000000 limit, while the encoder needs to work on 64 bits to handle the 33-bit precision. It is much more convenient to work at this precision for the decoder, as it can easily detect renormalization (0x80000000 is in fact a negative number when interpreted as signed; the loop could have been formulated as while ((int)range >= 0), and this is how it is done in asm).

    So the arithmetic coder is the basic component used in crinkler. You will find plenty of arithmetic coder examples on the Internet. Even if you don't fully understand the theory behind them, you can use them quite easily. I found, for example, an interesting project called flavor, which provides a tool to produce arithmetic coder code from a formal description (for example, a 32-bit precision arithmetic coder description in flavor), pretty handy for understanding how different coder behaviors translate into code.

    But, ok, the real brain here is not the arithmetic coder... it's the psychic context (the Context class above), which is responsible for providing a probability and updating its model based on the previous expectations. This is where a compressor makes the difference.

    Context modeling - Context mixing


    This is one great thing about using an arithmetic coder: it can be decoupled from the component responsible for providing the probability of the next symbol. This component is called the context modeler.

    What is the context? It is whatever data can help your context modeler evaluate the probability of the next symbol. The most obvious approach for a compressor-decompressor is to use previously decoded data to update its internal probability tables.

    Suppose the following sequence of 8 bytes, 0x7FFFFFFF, 0xFFFFFFFF, is already decoded. What will the next bit be? It is almost certainly a 1, and you could bet on it with a probability as high as 98%.

    So it is no surprise that using the data history is the key for the context modeler to predict the next bit (and, well, we have to admit that our computer psychic is not as good as he claims, as he needs to know the past to predict the future!).

    Now that we know that producing a probability for the next bit requires historic data, how does crinkler use it? Crinkler in fact maintains probability tables keyed on up to 8 previous bytes plus the bits of the current byte already read. In context-modeling jargon, this is often called the order (before context modeling, there were techniques like PPM, Prediction by Partial Matching, and DMC, Dynamic Markov Compression). But crinkler uses not only the last x bytes (up to 8): like the sparse models in the PAQ compressors, it uses arbitrary combinations of the last 8 bytes plus the current bits already read. Crinkler calls such a combination a model; it is stored in a single byte (a sketch of how such a mask can select the history bytes follows the list):
    • The 0x00 model says that it doesn't use any previous bytes other than the current bits being read.
    • The 0x80 model says that it is using the previous byte + the current bits being read.
    • The 0x81 model says that it is using the previous byte and the -8th byte + the current bits being read.
    • The 0xFF model says that all 8 previous bytes are used.
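    As a small sketch (my own illustration, based on my reading of the examples above: bit 7 of the mask selects the previous byte, down to bit 0 for the -8th byte), here is how such a model mask could select the history bytes:

    #include <cstdint>
    #include <vector>

    // Gather the history bytes selected by a model mask. Bit 7 selects
    // the byte at offset -1 (the previous byte), bit 6 the byte at
    // offset -2, ... bit 0 the byte at offset -8. The bits already read
    // from the current byte are always part of the context on top of this.
    std::vector<uint8_t> selectContext(const uint8_t *data, size_t pos, uint8_t model) {
        std::vector<uint8_t> context;
        for (int i = 0; i < 8; ++i)
            if (model & (0x80 >> i))
                context.push_back(data[pos - (i + 1)]);
        return context;
    }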
    You probably don't see yet how this is used. Let's take a simple case here: use the previous byte to predict the next bit (the 0x80 model).

    Suppose this sequence of data:

    0xFF, 0x80, 0xFF, 0x85, 0xFF, 0x88, 0xFF, ???nextBit???
    (0)         (1)         (2)         (3)   <= decoder position

    • At position 0, we know that 0xFF is followed by bit 1 (0x80 <=> 10000000b). So n0 = 0, n1 = 1 (n0 denotes the number of 0s seen after 0xFF, n1 the number of 1s).
    • At position 1, we know that 0xFF is still followed by bit 1: n0 = 0, n1 = 2.
    • At position 2, same again: n0 = 0, n1 = 3.
    • At position 3, we have n0 = 0, n1 = 3, giving the probability of a one p(1) = (n1 + eps) / (n0 + eps + n1 + eps). With eps(ilon) = 0.01, we have p(1) = (3+0.01)/(0+0.01 + 3+0.01) ≈ 99.67%.

    So at position (3) we have a probability of 99.67% that the next bit is a 1.

    The principle here is simple: for each model and each historic value, we associate n0 and n1, the number of 0 bits (n0) and 1 bits (n1) observed. Updating those n0/n1 counters needs to be done carefully: a naive approach would be to just increment the corresponding counter whenever a training bit is found... but recent bits are more likely to be relevant than older ones... Matt Mahoney explains this in The PAQ1 Data Compression Program, 2002 (describing PAQ1), where he shows how to efficiently update those counters for a non-stationary source of data:
    • If the training bit is y (0 or 1) then increment ny (n0 or n1).
    • If n(1-y) > 2, then set n(1-y) = n(1-y) / 2 + 1 (rounding down if odd).

    Suppose for example that n0 = 3 and n1 = 4 and we see a new bit 1. Then n0 = n0/2 + 1 = 3/2 + 1 = 2 and n1 = n1 + 1 = 5.
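    In code, this update rule could look like the following (my own transcription of Mahoney's rule, not crinkler's actual implementation):

    #include <cstdint>

    // One pair of bit counters, kept per (model, context).
    struct Counter { uint8_t n0 = 0, n1 = 0; };

    // Non-stationary update: increment the counter of the observed bit,
    // then halve (plus one) the opposite counter if it exceeds 2.
    void updateCounter(Counter &c, int bit) {
        uint8_t &ny     = bit ? c.n1 : c.n0; // counter of the observed bit
        uint8_t &nOther = bit ? c.n0 : c.n1; // counter of the opposite bit
        ny++; // note: crinkler deliberately skips the 255 overflow check
        if (nOther > 2)
            nOther = nOther / 2 + 1; // integer division rounds down
    }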

    Now, we know how to produce a single probability for a single model... but working with a single model (for example, only the previous byte) wouldn't be enough to evaluate the next bit correctly. Instead, we need a way to combine different models (different selections of historic data). This is called context mixing, and this is the real power of context modeling: whatever your method of collecting and calculating a probability, you can, at some point, mix several estimators to produce a single probability.

    There are several ways to mix those probabilities. In pure context-modeling jargon, the model is the way you mix probabilities, and each model has a weight:
    • static: you determine the weights up front, whatever the data are.
    • semi-static: you perform a 1st pass over the data to determine the best weights for each model, then a 2nd pass with those weights.
    • adaptive: weights are updated dynamically as new bits are discovered.

    Crinkler uses semi-static context mixing, but is somewhat "semi-adaptive" as well, because it uses different weights for the code of your exe and for its data, as they have different binary layouts.

    So how is this mixed? Crinkler needs to determine the best context models (combinations of historic data) to use, and to assign each of those contexts a weight. The weights are then used to calculate the final probability.


    For each selected historic model (i) with an associated model weight wi and bit counters ni0/ni1, the final probability p(1) is calculated like this:

    p(1) = Sum(  wi * ni1 / (ni0 + ni1))  / Sum ( wi )

    This is exactly what is done in the code above for context.ComputeProbability();, and this is exactly what crinkler does.
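    As a rough sketch (my own illustration; the names are made up), the mixing could be implemented like this:

    #include <vector>

    struct ModelEstimate {
        int    n0, n1;   // bit counters for this model's current context
        double weight;   // wi (crinkler actually derives it as 2^w, see below)
    };

    // Weighted mix of the per-model estimates into a single p(1),
    // following p(1) = Sum(wi * ni1 / (ni0 + ni1)) / Sum(wi).
    double computeProbability(const std::vector<ModelEstimate> &models) {
        double sum = 0.0, totalWeight = 0.0;
        for (const ModelEstimate &m : models) {
            if (m.n0 + m.n1 == 0) continue; // no history yet for this context
            sum         += m.weight * m.n1 / (m.n0 + m.n1);
            totalWeight += m.weight;
        }
        return totalWeight > 0.0 ? sum / totalWeight : 0.5; // 0.5 = no information
    }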

    In the end, crinkler is selecting a list of models for each type of section in your exe: a set of models for the code section, a set of models for the data section.

    How many models does crinkler select? It depends on your data. For example, for the ergon intro, crinkler selects the following models:

    For the code section:
    0 1 2 3 4 5 6 7 8 9 10 11 12 13
    Model {0x00,0x20,0x60,0x40,0x80,0x90,0x58,0x4a,0xc0,0xa8,0xa2,0xc5,0x9e,0xed,}
    Weight { 0, 0, 0, 1, 2, 2, 2, 2, 3, 3, 3, 4, 6, 6,}

    For the data section:
    0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
    Model {0x40,0x60,0x44,0x22,0x08,0x84,0x07,0x00,0xa0,0x80,0x98,0x54,0xc0,0xe0,0x91,0xba,0xf0,0xad,0xc3,0xcd,}
    Weight { 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 4, 5,}
    (note that in crinkler, the final weight used to multiply n1/(n0+n1) is 2^w, not w itself).

    Wow, does that mean that crinkler needs to store those data in your exe, (14 bytes + 20 bytes) * 2 = 68 bytes? Well, the crinkler authors are smarter than this! The models are indeed stored, but the weights are stored in a single int (32 bits for each section). Yep, a single int to store those weights? Indeed: if you look at those weights, they are increasing, and sometimes equal... so they found a clever way to store a compact representation of those weights in 32-bit form. Starting with a weight of 1, the 32-bit word is consumed one bit at a time: if the bit is 0, currentWeight doesn't change; if the bit is 1, currentWeight is incremented by 1 (in the following pseudo-code, the shift is done to the right):

    int currentWeight = 1;
    int compactWeight = ....;
    foreach (model in models) {
        if (compactWeight & 1)
            currentWeight++;
        compactWeight = compactWeight >> 1;

        // ... use currentWeight for the current model
    }

    This way, crinkler is able to store a compact form of the (model, weight) pairs for each type of data in your executable (code or pure data).

    Model selection


    Model selection is one of the key processes of crinkler. For a particular set of data, what is the best selection of models? You start with 256 models (all the combinations of the 8 previous bytes) and need to determine the best subset. You have to take into account that each model used costs 1 byte in your final executable. Model selection is part of the crinkler compressor but not of the decompressor: the decompressor just needs to know the list of final models used to compress the data and doesn't care about intermediate results. The compressor, on the other hand, needs to test combinations of models and find an appropriate weight for each model.

    I tested several methods in my own code, trying to recover the method used in crinkler, without achieving a comparable compression ratio... I tried some brute-force algorithms without any success... The selection algorithm is probably a bit more clever than the ones I tested, and recovering it would probably require laying out the mathematics/statistics of model combinations to select an accurate method.

    Finally, Blueberry has given away their method (thanks!):

    "To answer your question about the model selection process, it is actually not very clever. We step through the models in bit-mirrored numerical order (i.e. 00, 80, 40, C0, 20 etc.) and for each step do the following:

    - Check if compression improves by adding the model to the current set of models (taking into account the one extra byte to store the model).

    - If so, add the model, and then step through every model in the current set and remove it if compression improves by doing so.

    The difference between FAST and SLOW compression is that SLOW optimizes the model weights for every comparison between model sets, whereas FAST uses a heuristic for the model weights (number of bits set in the model mask).
    "


    On the other hand, I tried a fully adaptive context-modeling approach, using the dynamic weight calculation explained by Matt Mahoney, with neural networks and stretch/squash functions (look up PAQ on Wikipedia). It was really promising, as I was sometimes able to achieve a better compression ratio than crinkler... but at the cost of a decompressor 100 bytes heavier... and even if I was able to save 30 to 60 bytes on the compressed data, I was still off by 40-70 bytes... so under 4k, this approach was definitely not as efficient as the semi-static approach chosen by crinkler.

    Storing probabilities


    If you have correctly followed the model selection above, crinkler now works with a set of models (selections of historic data); for each bit that is decoded, each model's probabilities must be updated...

    But think about it: if, to predict the following bit, we use the probabilities keyed on the 8 previous bytes, it means that for every combination of 8 bytes already found in the decoded data, we need a pair of n0/n1 counters.

    That would mean we could have the following probabilities to update for the 0xFF context (all 8 previous bytes):
    - "00 00 00 00 c0 00 00 50 00" => some n0/n1
    - "00 00 70 00 00 00 00 F2 01" => another n0/n1
    - "00 00 00 40 00 00 00 30 02" => another n0/n1
    ...etc.

    and if we have other models, like 0x80 (the previous byte) or 0xC0 (the last 2 previous bytes), we also have different counters for them:

    // For model 0x80
    - "00" => some n0/n1
    - "01" => another n0/n1
    - "02" => yet another n0/n1
    ...

    // For model 0xC0
    - "50 00" => some bis n0/n1
    - "F2 01" => another bis n0/n1
    - "30 02" => yet another bis n0/n1
    ...

    In the previous model-context description, I slightly oversimplified: not only are the previous bytes used, but also the bits already read from the current byte. When we use, for example, the 0x80 model (the previous byte), the context is composed not only of the previous byte but also of the bits read so far from the current octet. This implies that for every bit read, there is a different context. Suppose we have the sequence 0x75, 0x86 (10000110b in binary), that the position of the encoded bits is just after the 0x75 value, and that we are using the previous byte + the bits currently read:

    First, we start on a byte boundary:
    - 0x75 with 0 bits read (we start with 0) is followed by bit 1 (the leading 1 of 0x86). The context is 0x75 + 0 bits read.
    - We read one more bit; we have a new context: 0x75 + bit 1. This context is followed by a 0.
    - We read one more bit; we have a new context: 0x75 + bits 10. This context is followed by a 0.
    ...
    - We read one more bit; we have a new context: 0x75 + bits 1000011, which is followed by a 0 (and we end on a byte boundary).

    Reading 0x75 followed by 0x86, with a model using only the previous byte, we end up with 8 contexts, each with its own n0/n1 to store in the probability table.
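    One standard way to carry the "bits already read in the current byte" around as a context component (this is how PAQ-style coders usually do it; I am not claiming it is crinkler's exact layout) is a value that starts at 1 on a byte boundary and shifts decoded bits in:

    #include <cstdint>

    // Bit-level context within the current byte: starts at 1 on a byte
    // boundary; after k bits it holds (1 << k) | bits_read_so_far, so
    // every partial byte maps to a distinct value. Reset after 8 bits.
    struct BitContext {
        uint32_t c = 1;
        void push(int bit) {
            c = (c << 1) | bit;
            if (c >= 256) c = 1; // a full byte has been read
        }
    };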

    As you can see, it is obviously difficult to store all the contexts found (i.e. for each single bit decoded, there is a different context of historic bytes) and their respective exact probability counters without exploding the RAM, especially if you consider the number of models used by crinkler: 14 different selections of previous bytes for ergon's code alone!

    This kind of problem is often handled with a hashtable that handles collisions, which is what some of the PAQ compressors do. Crinkler also uses a hashtable to store the probability counters, with the association context_history_of_bytes => (n0/n1), but it does not handle collisions, in order to keep the decompressor size minimal. As usual, the hash function used by crinkler is really tiny while still giving really good results.

    So instead of storing the association context_history_of_bytes => n0/n1 directly, we use a hash function: hash(context_history_of_bytes) => n0/n1. The dictionary storing all those associations then needs to be dimensioned correctly: large enough to store as many as possible of the associations found while decoding/encoding the data.
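    A minimal sketch of such a dictionary (hypothetical; collisions are simply ignored, as in crinkler):

    #include <cstdint>
    #include <vector>

    struct Counter { uint8_t n0 = 0, n1 = 0; }; // 2 bytes per entry

    struct CounterTable {
        std::vector<Counter> entries; // e.g. a 100 MB table holds 50M entries
        explicit CounterTable(size_t count) : entries(count) {}

        // hashValue = hash(model, history bytes, current partial byte).
        // Two different contexts hashing to the same slot silently share
        // (and disturb) the same counters: no collision handling.
        Counter &lookup(uint32_t hashValue) {
            return entries[hashValue % entries.size()];
        }
    };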

    Like the PAQ compressors, crinkler uses one byte for each counter, so n0 and n1 together take 16 bits, 2 bytes. So if you instruct crinkler to use a 100Mo hashtable, it can store 50 million different keys, i.e. different historic byte contexts and their respective probability counters. One remark about crinkler and the byte counters: in the PAQ compressors, limits are handled, meaning that if a counter goes above 255, it sticks at 255... but crinkler chose not to test the limits in order to keep the code smaller (although that would take less than 6 bytes). What is the impact of this choice? Well, if you know crinkler, you are aware that it doesn't handle large sections of "zeros" or other empty initialized data well. This is just because the probabilities wrap from 255 to 0, meaning that you jump from a 100% probability (probably accurate) to an almost 0% probability (probably wrong) every 256 bytes. Does this really hurt the compression? It would hurt a lot if crinkler were used for larger executables, but in a 4k it doesn't hurt much (although it could if you really have large portions of initialized data). Also, not all contexts are reset at the same time (an 8-byte context will not reset as often as a 1-byte context), so the final probability calculation remains reasonably accurate: while one model's probability has just been reset, the other models are still counting... so this is not a huge issue.

    What happens if the hashes of two different contexts give the same value? Well, the model then updates the wrong probability counters. If the hashtable is too small, the probability counters may be disturbed so much that they provide a less accurate final probability. But if the hashtable is large enough, collisions are less likely to happen.

    Thus, it is quite common to use a hashtable as large as 256 to 512Mo, although 256Mo is often enough; the larger your hashtable, the fewer the collisions, and the more accurate your probabilities. Recall the comment from the beginning of this post, and you should now understand why "crinkler needs several hundreds of megabytes to decompress"... simply because of this hashtable, which stores the next-bit probabilities for all the model combinations used.

    If you are familiar with crinkler, you already know the option to find the best possible hash size for an initial hashtable size and a number of tries (the hashtries option). This part is responsible for testing different hashtable sizes (for example, starting from 100Mo and reducing the size by 2 bytes, 30 times) and checking the final compression result. This is a way to empirically reduce collision effects, by selecting the hash size that gives the best compression ratio (meaning fewer collisions in the hash). This option is only able to help you save a couple of bytes, though, no more.


    Data reordering and type of data


    Reordering or reorganizing data for better compression is a common technique in compression methods. Sometimes, for example, it's better to store deltas of values than the values themselves... etc.

    Crinkler uses this principle to perform data reordering. At the linker level, crinkler has access to the individual portions of data and code, and is able to move those portions around in order to achieve a better compression ratio. This is really easy to understand: suppose you have a series of zero-initialized values in your data section. If those values are interleaved with non-zero values, the counter probabilities will keep switching from "there are plenty of zeros here" to "ooops, there is some other data"... and the final probability will oscillate between 90% and 20%. Grouping similar data is a way to improve the overall probability accuracy.

    This part is the most time consuming, as it needs to move all the portions of your executable around and test which arrangement gives the best compression result. But it pays to use this option, as you may be able to save 100 bytes in the end with this option alone.
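
    A naive version of such a search could look like the following greedy loop (my own sketch, with compressed_size standing in for a full compression pass, which is exactly why this step dominates the linking time):

    #include <stddef.h>

    /* Placeholder: compress the layout made of order[0..count-1] and
       return the resulting size in bytes. */
    extern size_t compressed_size(const int *order, int count);

    void greedy_reorder(int *order, int n)
    {
        for (int placed = 0; placed < n; placed++) {
            int best = placed;
            size_t best_out = (size_t)-1;
            for (int cand = placed; cand < n; cand++) {
                /* try this candidate section next in the layout */
                int tmp = order[placed]; order[placed] = order[cand]; order[cand] = tmp;
                size_t out = compressed_size(order, placed + 1);
                if (out < best_out) { best_out = out; best = cand; }
                /* swap back before trying the next candidate */
                tmp = order[placed]; order[placed] = order[cand]; order[cand] = tmp;
            }
            /* commit the best candidate at this position */
            int tmp = order[placed]; order[placed] = order[best]; order[best] = tmp;
        }
    }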

    One thing that is also related to data reordering is the way crinkler handles the binary code and the data of your executable separately. Why? Because their binary representations are different, leading to completely different sets of probabilities. If you look at the selected models for ergon, you will find that the code and data models are quite different. Crinkler uses this to achieve better compression here. In fact, crinkler compresses the code and the data completely separately. Code has its own models and weights, data another set of models and weights. What does this mean internally? Crinkler uses one set of models and weights to decode the code section of your executable. Once finished, it erases the probability counters stored in the hashtable-dictionary and moves on to the data section, with new models and weights. Resetting all counters to 0 in the middle of decompression improves compression by 2-4%, which is quite impressive and valuable for a 4k (around 100 to 150 bytes).

    I found that even with an adaptive model (with a neural network dynamically updating the weights), it is still worth resetting the probabilities between code and data decompression. In fact, resetting the probabilities is an empirical way of telling the context modeling that the data is so different that it's better to start from scratch with new probability counters. If you think about it, an improved demo compressor (for larger executables, for example under 64k) could cleverly detect portions of data that are different enough that it would be better to reset the dictionary than to keep it as it is.
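
    Put together, the two-phase decompression described above boils down to something like this (a sketch with assumed names, not crinkler's actual code):

    #include <stdint.h>
    #include <string.h>

    typedef struct Models Models;      /* model masks + weights (opaque here) */
    extern Models code_models, data_models;
    extern uint8_t *hash_table;
    extern size_t hash_table_bytes;

    /* Placeholder for the context-mixing decoder loop. */
    extern void decompress_stream(const Models *m, uint8_t *dst, size_t n);

    void decompress_image(uint8_t *code, size_t code_size,
                          uint8_t *data, size_t data_size)
    {
        decompress_stream(&code_models, code, code_size);  /* phase 1: code */
        memset(hash_table, 0, hash_table_bytes);           /* reset all counters */
        decompress_stream(&data_models, data, data_size);  /* phase 2: data */
    }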

    There is just one last thing to mention about weight handling in crinkler. When decoding/encoding, it seems that crinkler artificially increases the weights for the first discovered bits. This little trick improves the compression ratio by about 1 to 2%, which is not bad. Having higher weights at the beginning gives the compressor/decompressor a better response, even when it doesn't yet have enough data to compute a correct probability. Increasing the weights helps the compression ratio at cold start.
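
    In code, the trick is as simple as starting the mixer with larger weights (the values here are made up, not crinkler's):

    #define NUM_MODELS 32        /* illustrative model count */
    #define BOOST      8.0       /* made-up initial boost */

    double weights[NUM_MODELS];

    /* Start every model with an artificially high weight, so that early
       predictions carry more influence before the mixer has learned anything. */
    void init_weights(void)
    {
        for (int i = 0; i < NUM_MODELS; i++)
            weights[i] = BOOST;
    }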

    Crinkler is also able to transform the x86 code of the executable part to improve the compression ratio. This technique is widely used, and consists of replacing the relative offsets of jumps (conditional jumps, function calls... etc.) with absolute addresses, leading to a better compression ratio.
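
    The usual form of this transform, as found in several exe packers (a sketch, not crinkler's exact code), scans for E8 CALL opcodes and rewrites their rel32 operand into an absolute target address, so that repeated calls to the same function produce identical byte sequences for the model; the decompressor applies the inverse transform after decompression:

    #include <stdint.h>
    #include <string.h>

    void transform_calls(uint8_t *code, uint32_t size, uint32_t image_base)
    {
        for (uint32_t i = 0; i + 5 <= size; i++) {
            if (code[i] == 0xE8) {                   /* CALL rel32 */
                int32_t rel;
                memcpy(&rel, code + i + 1, 4);
                /* target = address of next instruction + relative offset */
                uint32_t abs = image_base + i + 5 + (uint32_t)rel;
                memcpy(code + i + 1, &abs, 4);
                i += 4;                              /* skip the operand */
            }
        }
    }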

    Custom DLL LoadLibrary and PE file optimization


    In order to strip down the size of an executable, it's necessary to exploit the organization of a PE file as much as possible.

    The first thing crinkler exploits is that lots of parts in a PE file are not used at all. If you want to know how far a windows PE executable can be shrunk, I suggest you read the Tiny PE article, which is a good way to understand what is actually used by a PE loader. Unlike the Tiny PE sample, where the author moves the PE header into the DOS header, crinkler made the choice to use this unused space to store the hash values that are used to reference the DLL functions being imported.

    This trick is called import by hashing and is quite common in intro compressors. What probably makes crinkler a little more advanced is that, to perform the "GetProcAddress" step (which is responsible for getting the pointer to a function from a function name), crinkler navigates through internal windows process structures in order to get the addresses of the functions directly from the in-memory import table. Indeed, you won't find any import section table in a crinklerized executable: everything is rediscovered through internal windows structures. Those structures are not officially documented, but you can find some valuable information around, most notably here.
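
    A simplified C version of such a hash-based lookup could look like this (the hash function is illustrative, forwarded exports are ignored, and crinkler's real code works at a much lower level, as shown just below):

    #include <windows.h>
    #include <stdint.h>

    /* Illustrative name hash; the real hash function differs. */
    static uint32_t hash_name(const char *s)
    {
        uint32_t h = 0;
        while (*s) h = h * 31 + (uint8_t)*s++;
        return h;
    }

    /* Resolve a function by the hash of its name instead of by the name
       itself, so the executable only stores 4 bytes per import. */
    void *get_proc_by_hash(HMODULE dll, uint32_t wanted)
    {
        uint8_t *base = (uint8_t *)dll;
        IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)base;
        IMAGE_NT_HEADERS *nt  = (IMAGE_NT_HEADERS *)(base + dos->e_lfanew);
        IMAGE_EXPORT_DIRECTORY *exp = (IMAGE_EXPORT_DIRECTORY *)(base +
            nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_EXPORT].VirtualAddress);
        uint32_t *names = (uint32_t *)(base + exp->AddressOfNames);
        uint16_t *ords  = (uint16_t *)(base + exp->AddressOfNameOrdinals);
        uint32_t *funcs = (uint32_t *)(base + exp->AddressOfFunctions);

        for (uint32_t i = 0; i < exp->NumberOfNames; i++)
            if (hash_name((char *)(base + names[i])) == wanted)
                return base + funcs[ords[i]];
        return NULL;
    }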

    If you look at crinkler's code stored in the crinkler import section (the code injected to run just before the intro starts, in order to load all the DLL functions), you will find cryptic calls like this:
    (0) MOV EAX, FS:[BX+0x30]
    (1) MOV EAX, [EAX+0xC]
    (2) MOV EAX, [EAX+0xC]
    (3) MOV EAX, [EAX]
    (4) MOV EAX, [EAX]
    (5) MOV EBP, [EAX+0x18]


    This is done by going through internal structures:
    • (0) First, crinkler gets a pointer to the "PROCESS ENVIRONMENT BLOCK (PEB)" with the instruction MOV EAX, FS:[BX+0x30]. EAX is now pointing to the PEB