WoDotA Top 10 Weekly Vol.50
Hey DotA Holics, this video is WoDotA Top 10 Vol. 50.
Here it is:




Rundown:
10. Anti-Mage scores a Rampage with a single Ulti!
9. A nearly dead Akasha saves herself by Tossing Gondar twice.
8. Neutral creeps score a Triple Kill.
7. Storm Spirit kills Roshan out of nowhere, takes the Aegis, and scores a Double Kill.
6. A dying Meepo saves himself with Poof and a bit of trickery.
5. Morphling uses Waveform to dodge Ogre Magi's stun, retreats to base, then comes back with Replicate and kills the Ogre.
4. Puck dodges Venge's stun and Sniper's Assassinate, then kills them all.
3. Invoker scores an Ultra Kill with help from the neutral creeps.
2. Amazing teamwork by Sand King, Kunkka, Faceless Void, Magnus and Lina.
1. Sven uses Armlet to save himself and scores a Double Kill with a little help.
Democoding, tools coding and coding scattering
Not much posting here for a while... so I'm going to recap some of the coding work I have done so far. You will notice that it's going in lots of directions, depending on opportunities and ideas, sometimes not related to democoding at all... not really ideal when you want to release something! ;)

So, here are some of the directions I have been working on so far...


C# and XNA

I tried to work more with C# and XNA, looking for an opportunity to code a demo in C#. I even started a post about it a few months ago, but left it in a draft state. XNA is really great, but I had some bad experiences with it: I was able to use it without requiring a full install, but while playing with model loading I hit a weird bug known as the black model bug. Anyway, I might come back to C# for DirectX stuff... SlimDX, for example, is really helpful for that.

A 4k/64k softsynth

I have coded a synth dedicated to 4k/64k coding. Right now, though, I only have the VST and GUI fully working under Renoise... but not yet the asm 4k player! ;)



The main idea was to build an FM8/DX7-like synth with exactly the same output quality (excluding some fancy stuff like the arpeggiator). The synth was developed in C# using VST.NET, but it must be considered more a prototype in this language, because the asm code generated by the JIT is not really good when it comes to floating-point calculation. Anyway, it was really good to develop on this platform: I was able to prototype the whole thing in a few days (and, of course, many more days to add rich GUI interaction!).

I still have to add a sound library file manager and the importer for DX7 patches... Yes, you read that right: my main concern is to provide as many ready-to-use patches as possible for ulrick (our musician at FRequency). Decoding the DX7 patch format is well known around the net, but the more complex part was to decode it the way FM8 does, and that was tricky. Right now, all the transform functions are in an Excel spreadsheet, but I have to code them in C# now!

You may wonder why I developed the synth in C# if the main target is to code the player in x86 asm. Well, for practical reasons: I needed to quickly experiment with the versatility of this synth's sounds, and I'm much more familiar with .NET WinForms for easily building a complex GUI. Still, I designed the whole synth with the 4k limitation in mind, especially regarding data representation and the complexity of the player routine.

For example, in the 4k mode of this synth, waveforms are strictly restricted to only one: sin! No noise, no sawtooth, no square... What? A synth without those waveforms? But yeah: when I looked back at the DX7 implementation, I realized that it uses only a pure "sin"... yet with the complex FM routing mechanism plus the feedback on the operators, the DX7 is able to produce a large variety of sounds, ranging from strings, bells and bass to drum kits, and so on.
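To illustrate why sine-only operators are less limiting than they sound, here is a minimal standalone sketch (hypothetical names, C++ rather than the actual synth code) of the classic two-operator FM pair with feedback:

```cpp
#include <cmath>

static const double PI = 3.14159265358979323846;

// Two-operator FM voice: a sine modulator (with optional self-feedback)
// modulates the phase of a sine carrier. Both waveforms are pure sin(),
// yet varying 'ratio', 'index' and 'feedback' already covers bells,
// basses and metallic or percussive timbres, DX7-style.
struct FmOperatorPair {
    double carrierFreq; // carrier frequency in Hz
    double ratio;       // modulator frequency = carrierFreq * ratio
    double index;       // modulation depth, in radians
    double feedback;    // modulator self-feedback amount
    double prevMod;     // previous modulator output, fed back into its phase

    double sample(double t) {
        double modPhase = 2.0 * PI * carrierFreq * ratio * t + feedback * prevMod;
        double mod = std::sin(modPhase);
        prevMod = mod;
        return std::sin(2.0 * PI * carrierFreq * t + index * mod);
    }
};
```

With index and feedback at 0 this degenerates to a plain sine oscillator; raising the index brightens the spectrum, which is exactly the knob the DX7 drives with its envelopes.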

I also did a couple of effects, mainly a versatile variable delay line to implement chorus/flanger/reverb.

So basically, I should end up with a synth with two modes:
- 4k mode: only 6 oscillators per instrument, sin oscillators only, simple ADSR envelope, full FM8-like routing for operators, fixed key scaling/velocity scaling/envelope scaling. Per-instrument/global effects with at minimum a delay line + optional filters. And last but not least, polyphony: that's probably the thing I miss the most in 4k synths nowadays...
- 64k mode: up to 8 oscillators per instrument, all FM8 oscillators + filters + waveshaping + ring-modulation operators, 64-step FM8-like envelopes, dynamic key scaling/velocity scaling/envelope scaling. More effects, with better quality, 2 effect lines (parallel + serial) per instrument. Additional effect channels to route instruments to the same effects chain. Modulation matrix.

The 4k mode is in fact a restriction of the 64k mode, mostly at the GUI level. I'm currently targeting only the 4k mode, while designing the synth to be ready to support the 64k features.

What's next? Well, finish the C# part (file manager and DX7 import) and start the x86 asm player. I just hope to stay under 700 compressed bytes for the 4k player (while the 64k mode will be written in C++, with an easier limit of around 5 KB of compressed code)... but hey, until it's coded, it's pure speculation! And as you can see, the journey is far from finished! ;)

Context modeling Compression update

During this summer, I came back to the compression experiment I did last year. The current status is pending: the compressor is quite good, sometimes better than Crinkler for 4k... but the prototype of the decompressor (not working, not tested...) is taking 100 bytes more than Crinkler's. So in the end, I know that I would be off by 30 to 100 bytes compared to Crinkler... and this is not motivating me to finish the decompressor and get it really running.

The basic idea was to take the standard context modeling approach from Matt Mahoney (also known as PAQ compression; Matt did a fantastic job with his research and open source compressors, by the way), using a dynamic neural network with an order of 8 (8 bytes of context history), the same mask selection approach as Crinkler, plus some new context filtering at the bit level. The catch: the decompressor uses the FPU to decode the whole thing, as it needs ln2() and pow2() functions. So during the summer I thought about using another logistic activation function to get rid of the FPU: the standard sigmoid used in the neural network with a base of 2 is 1/(1+2^-x), and I found something similar with y = (x / (1 + |x|) + 1) / 2 from David Elliott (some references here). I didn't have a computer at the time to test it, so I spent a few days doing math optimizations on it, including calculating the logit function (the inverse of this logistic function).
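For reference, here is a small standalone sketch (C++ for clarity, not the actual asm decompressor) of the two activation functions discussed above, together with the inverse that had to be derived:

```cpp
#include <cmath>

// Standard base-2 logistic used by PAQ-style bit mixers.
// Computing 2^-x is what forces the tiny decompressor onto the FPU.
double logistic2(double x) {
    return 1.0 / (1.0 + std::pow(2.0, -x));
}

// David Elliott's approximation: only an abs, adds and one divide,
// so it needs no pow2()/ln2() at all.
double elliott(double x) {
    return (x / (1.0 + std::fabs(x)) + 1.0) / 2.0;
}

// Inverse ("logit") of the Elliott function, needed to map a
// probability back to the mixer's stretched domain.
double elliottInverse(double y) {
    double v = 2.0 * y - 1.0;        // back to [-1, 1]
    return v / (1.0 - std::fabs(v)); // inverse of x / (1 + |x|)
}
```

Both functions agree at x = 0 (probability 0.5) and share the same asymptotes, but their shapes differ enough in between to account for a hit on the compression ratio.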

I came back home very excited to test this method... but I was really disappointed: the function hurt the compression ratio by 20%, making it completely useless in the end!

If by next year I'm not able to release anything from this, I will open source all this work, at least for educational purposes... someone will certainly be cleverer than me on this and tweak the code size down!

A SlimDX-like DirectX wrapper in C++

Recall that for the ergon intro, I worked with a very thin layer around DirectX to wrap enums/interfaces/structures/functions. I did that for D3D10, a bit of D3D11, and a bit of D3D9 (which was the one I used for ergon). The goal was to achieve a DirectX C#-like interface in C++. While the code was written almost entirely by hand, I was wondering if I could not generate it directly from the DirectX header files...

So for the last few days, I have been working a bit on this. I'm using boost::wave as the preprocessor library... and I have to admit that the C++ guys from Boost lost their minds with templates. It's amazing how they made something simple so complex with templates. I wanted to use this in a C++/CLI managed .NET extension to ease my development in C#, but I ended up with a template error at the link stage: an incredible error with a line full of concatenated templates, even freezing Visual Studio when I wanted to see the errors in the error list!

Templates are really nice when they are not used too intensively... but when everything in your code is templatized, it becomes very hard to use a library fluently, and it's sometimes impossible to understand a template error when it is more than 100 lines full of cascading template types!

Anyway, I was able to plug boost::wave into a native DLL and call it from a C# library. The next step is to see how much I can get from the DirectX header files to extract a form of IDL (Interface Definition Language). If I cannot get something relevant in the next week, I might postpone this task until I have nothing more important to do! The good thing is that for the D3D11 headers, for example, you can see that those files were auto-generated from a mysterious d3d11.idl file used internally at Microsoft (although it would have been easier to get that file directly!)... so it means the whole header should be quite easy to parse, as the syntax is quite systematic.
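As a toy illustration of how systematic those headers are (the pattern and function name here are mine, not the real generator), a naive scan for COM interface declarations already yields a skeleton of the IDL:

```cpp
#include <regex>
#include <string>
#include <vector>

// Extract "Interface : Base" pairs from a DirectX-style header fragment.
// The auto-generated d3d11.h declares every interface with the systematic
// pattern MIDL_INTERFACE("guid") Name : public Base, which makes this
// kind of naive extraction workable as a starting point.
std::vector<std::string> extractInterfaces(const std::string& header) {
    static const std::regex decl(
        R"rx(MIDL_INTERFACE\("[^"]*"\)\s*(\w+)\s*:\s*public\s+(\w+))rx");
    std::vector<std::string> names;
    for (std::sregex_iterator it(header.begin(), header.end(), decl), end;
         it != end; ++it) {
        names.push_back((*it)[1].str() + " : " + (*it)[2].str());
    }
    return names;
}
```

A real extractor would of course also need the methods, parameters and GUIDs, but the point is that the declarations are regular enough to be machine-read.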

OK, this is probably not linked to intros... or probably only to 64k... and I'm not sure I will be able to finish it (much like rmasm). And this kind of work is keeping me away from working directly with DirectX, experimenting with rendering techniques and so on. Well, I also have to admit that for the past few years I have been more attracted to building tools to enhance coding productivity (not necessarily only mine). I don't like doing too many things manually, so every time there is an opportunity to automate a process, I can't refrain from making it automatic! :D


AsmHighlighter and NShader next update

Following my weakness for tools, I need to make some updates to AsmHighlighter and NShader: add some missing keywords, patch a bug, support the new VS2010 version... whatever. When you release this kind of open source project, well, you have to maintain it even if you don't use it much yourself, because other people are using it and asking for improvements. That's the other side of the picture...

So because I have to maintain those two projects, and they in fact logically share more than 95% of the same code, I have decided to merge them into a single one, which will be available soon on CodePlex as well. That will be easier to maintain, ending up with only one project to update.


The main features people are asking for are the ability to add keywords easily and to map file extensions to the syntax highlighting system. So I'm going to generalize the design of the two projects to make them more configurable... hopefully this will cover the main feature requests.

An application for Windows Phone 7... meh?

Yep... I have to admit that I'm really excited by the upcoming Windows Phone 7 Metro interface. I'm quite fed up with my iPhone's look and feel... and because the development environment is so easy with C#, I have decided to code an application for it. I'm starting with a chromatic tuner for guitar/piano/violin, etc., and it's working quite well, even if I have only been able to test it under the emulator. While developing this application, I have learned some cool things about pitch detection algorithms and so on.

I hope to finish the application around September, to be able to test it on real hardware when WP7 is officially launched, and then to put the application on the Windows Marketplace.

If this works well, I will look into developing other applications, like porting the softsynth I did in C# to this platform. We will see... and definitely, this last part is completely unrelated to democoding!


What's next?

Well, I have to prioritize my work for the next months:
  1. Merge AsmHighlighter and NShader into a single project.
  2. Play for a week with the DirectX headers to see if I can extract some IDL-like information.
  3. Finish the 4k mode of the softsynth and develop the x86 asm player.
  4. Finish the WP7 application.
I also still have an article to write about the making of ergon. There is not much to say about it, but it could be interesting to write those things down.

I also need to work on some new DirectX effects. I have played a bit with hardware instancing and compute shaders (including raymarching with global illumination for a 4k procedural compo that didn't make it to BP2010, because the results were not impressive enough and too slow to compute). I would really like to explore SSAO with plain polygons more, but I haven't taken the time for it... so yep, practicing more graphics coding should be at the top of my list, instead of all those time-consuming and (sometimes useful) tools!
Playing a MP3 in C++ using plain Windows API
While playing an MP3 is quite common in a demo, I have seen that most demos use 3rd-party DLLs like Bass or FMOD to perform this simple task under Windows. But if we want to get rid of this dependency, how can we achieve the same thing with the plain Windows API? And what are the requirements for a practical MP3 player for a demo?

Surprisingly, I was not able to find a simple code sample over the Internet explaining how to play an MP3 with the Windows API without using the too-simple Windows Media Player API. Why is WMP not enough (not even talking about MCI - the Media Control Interface - which is even more basic than WMP)?

Well, it's lacking one important feature: it can only play from a URL, so it's not possible, for example, to pack the song in an archive and play it from a memory location (although that's not a huge deal if you release the song alongside your demo). Also, I have never tested the timing returned by WMP (probably via IWMPControls3::getPositionTimeCode), and I'm not really sure it can provide reliable sync (at least, if you intend to use sync... but hey, can a demo without any sync still be called a demo? :)

So I started looking for pieces of code around the net, but they covered only part of the problem. The starting point was to rely on the Audio Compression Manager API, which provides conversion facilities to decode, for example, MP3 to PCM. Fortunately, I found code from a guy who was kind enough to post a whole converter for an MP3 file using ACM. In the meantime, I found that Mark Heath, the author of NAudio, posted a solution a few days ago to convert an MP3 to WAV using NAudio. Looking at his code, he was also using ACM, but he reported some difficulties implementing a reliable MP3 frame/ID3 tag decoder in order to extract sample rate, bitrate, channels, etc. I didn't want to use that kind of heavy code and was looking for a lighter, reliable solution: most people were talking about using the Windows Media Format SDK to get all this information from the file. The starting point there is the WMCreateSyncReader function, through which you can retrieve parts of the MP3 frame as well as the ID3 tags.

Finally, I came up with a patchwork solution:
  • using the SyncReader from WMF to extract the song duration,
  • using ACM to decode the MP3 to PCM,
  • using plain old waveOut functions to perform sound playback and retrieve the playback position.
Everything is inside a single .h with fewer than 300 lines, including comments. I don't really know if it's the best way to play an MP3 from a file or from memory with the Windows API while still providing reliable timing. I have tested it against a couple of MP3s, so it may still have some bugs... but at least it's working quite well, and it's pretty small code. Note that the code expects the input MP3 to have a 44100 Hz sample rate; if not, it will probably fail... although with the use of WMF, it's quite easy to extract the sample rate (I'm just not using it in the sample code provided here... I was not sure about the result, though :) ).

Also, the code does not decode and play the song in real time, but instead performs the decoding in a single pass and then plays the decoded buffer. This requires the full PCM song to be allocated, which can be around 20 MB to 50 MB depending on the length of your song (it's easy to calculate: durationInSecondsOfTheSong * 4 * 44100, so a 3-minute song requires about 30 MB). This is probably not the best solution, but it's not a huge task to transform this code to do real-time decoding/playback; the downside is that it will take some CPU in your demo. So in the end, it's just a tradeoff between memory and CPU, depending on your needs!
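Just to make that arithmetic explicit (a standalone helper for illustration, not part of MP3Player.h):

```cpp
// PCM buffer size for 16-bit stereo at 44100 Hz:
// bytes per second = 44100 samples * 2 channels * 2 bytes = 4 * 44100 = 176400.
unsigned long pcmBufferBytes(double durationInSeconds) {
    const unsigned long bytesPerSecond = 4UL * 44100UL;
    return (unsigned long)(durationInSeconds * bytesPerSecond);
}
```

For a 3-minute song, 180 * 176400 = 31,752,000 bytes, i.e. roughly 30 MB, which matches the figure above.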

/* ----------------------------------------------------------------------
* MP3Player.h C++ class using plain Windows API
*
* Author: @lx/Alexandre Mutel, blog: http://code4k.blogspot.com
* The software is provided "as is", without warranty of any kind.
* ----------------------------------------------------------------------*/
#pragma once
#include <windows.h>
#include <stdio.h>
#include <assert.h>
#include <mmreg.h>
#include <msacm.h>
#include <wmsdk.h>

#pragma comment(lib, "msacm32.lib")
#pragma comment(lib, "wmvcore.lib")
#pragma comment(lib, "winmm.lib")
#pragma intrinsic(memset,memcpy,memcmp)

#ifdef _DEBUG
#define mp3Assert(function) assert((function) == 0)
#else
//#define mp3Assert(function) if ( (function) != 0 ) { MessageBoxA(NULL,"Error in [ " #function "]", "Error",MB_OK); ExitProcess(0); }
#define mp3Assert(function) (function)
#endif

/*
* MP3Player class.
* Usage :
* MP3Player player;
* player.OpenFromFile("your.mp3");
* player.Play();
* Sleep((DWORD)((player.GetDuration()+1)*1000)); // GetDuration() is in seconds, Sleep() in ms
* player.Close();
*/
class MP3Player {
private:
HWAVEOUT hWaveOut;
DWORD bufferLength;
double durationInSecond;
BYTE* soundBuffer;
public:

/*
* OpenFromFile : loads a MP3 file and convert it internaly to a PCM format, ready for sound playback.
*/
HRESULT OpenFromFile(TCHAR* inputFileName){
// Open the mp3 file
HANDLE hFile = CreateFile(inputFileName, // open the input mp3 file
GENERIC_READ,
FILE_SHARE_READ, // share for reading
NULL, // no security
OPEN_EXISTING, // existing file only
FILE_ATTRIBUTE_NORMAL, // normal file
NULL); // no attr
assert( hFile != INVALID_HANDLE_VALUE);

// Get FileSize
DWORD fileSize = GetFileSize(hFile, NULL);
assert( fileSize != INVALID_FILE_SIZE);

// Alloc buffer for file
BYTE* mp3Buffer = (BYTE*)LocalAlloc(LPTR, fileSize);

// Read file and fill mp3Buffer
DWORD bytesRead;
DWORD resultReadFile = ReadFile( hFile, mp3Buffer, fileSize, &bytesRead, NULL);
assert(resultReadFile != 0);
assert( bytesRead == fileSize);

// Close File
CloseHandle(hFile);

// Open and convert MP3
HRESULT hr = OpenFromMemory(mp3Buffer, fileSize);

// Free mp3Buffer
LocalFree(mp3Buffer);

return hr;
}

/*
* OpenFromMemory : loads a MP3 from memory and convert it internaly to a PCM format, ready for sound playback.
*/
HRESULT OpenFromMemory(BYTE* mp3InputBuffer, DWORD mp3InputBufferSize){
IWMSyncReader* wmSyncReader;
IWMHeaderInfo* wmHeaderInfo;
IWMProfile* wmProfile;
IWMStreamConfig* wmStreamConfig;
IWMMediaProps* wmMediaProperties;
WORD wmStreamNum = 0;
WMT_ATTR_DATATYPE wmAttrDataType;
DWORD durationInSecondInt;
QWORD durationInNano;
DWORD sizeMediaType;
DWORD maxFormatSize = 0;
HACMSTREAM acmMp3stream = NULL;
HGLOBAL mp3HGlobal;
IStream* mp3Stream;

// Define output format
static WAVEFORMATEX pcmFormat = {
WAVE_FORMAT_PCM, // WORD wFormatTag; /* format type */
2, // WORD nChannels; /* number of channels (i.e. mono, stereo...) */
44100, // DWORD nSamplesPerSec; /* sample rate */
4 * 44100, // DWORD nAvgBytesPerSec; /* for buffer estimation */
4, // WORD nBlockAlign; /* block size of data */
16, // WORD wBitsPerSample; /* number of bits per sample of mono data */
0, // WORD cbSize; /* the count in bytes of the size of */
};

const DWORD MP3_BLOCK_SIZE = 522;

// Define input format
static MPEGLAYER3WAVEFORMAT mp3Format = {
{
WAVE_FORMAT_MPEGLAYER3, // WORD wFormatTag; /* format type */
2, // WORD nChannels; /* number of channels (i.e. mono, stereo...) */
44100, // DWORD nSamplesPerSec; /* sample rate */
128 * (1024 / 8), // DWORD nAvgBytesPerSec; not really used but must be one of 64, 96, 112, 128, 160kbps
1, // WORD nBlockAlign; /* block size of data */
0, // WORD wBitsPerSample; /* number of bits per sample of mono data */
MPEGLAYER3_WFX_EXTRA_BYTES, // WORD cbSize;
},
MPEGLAYER3_ID_MPEG, // WORD wID;
MPEGLAYER3_FLAG_PADDING_OFF, // DWORD fdwFlags;
MP3_BLOCK_SIZE, // WORD nBlockSize;
1, // WORD nFramesPerBlock;
1393, // WORD nCodecDelay;
};

// -----------------------------------------------------------------------------------
// Extract and verify mp3 info : duration, type = mp3, sampleRate = 44100, channels = 2
// -----------------------------------------------------------------------------------

// Initialize COM
CoInitialize(0);

// Create SyncReader
mp3Assert( WMCreateSyncReader( NULL, WMT_RIGHT_PLAYBACK , &wmSyncReader ) );

// Alloc With global and create IStream
mp3HGlobal = GlobalAlloc(GPTR, mp3InputBufferSize);
assert(mp3HGlobal != 0);
void* mp3HGlobalBuffer = GlobalLock(mp3HGlobal);
memcpy(mp3HGlobalBuffer, mp3InputBuffer, mp3InputBufferSize);
GlobalUnlock(mp3HGlobal);
mp3Assert( CreateStreamOnHGlobal(mp3HGlobal, FALSE, &mp3Stream) );

// Open MP3 Stream
mp3Assert( wmSyncReader->OpenStream(mp3Stream) );

// Get HeaderInfo interface
mp3Assert( wmSyncReader->QueryInterface(&wmHeaderInfo) );

// Retrieve mp3 song duration in seconds
WORD lengthDataType = sizeof(QWORD);
mp3Assert( wmHeaderInfo->GetAttributeByName(&wmStreamNum, L"Duration", &wmAttrDataType, (BYTE*)&durationInNano, &lengthDataType ) );
durationInSecond = ((double)durationInNano)/10000000.0;
durationInSecondInt = (int)(durationInNano/10000000)+1;

// Sequence of call to get the MediaType
// WAVEFORMATEX for mp3 can then be extract from MediaType
mp3Assert( wmSyncReader->QueryInterface(&wmProfile) );
mp3Assert( wmProfile->GetStream(0, &wmStreamConfig) );
mp3Assert( wmStreamConfig->QueryInterface(&wmMediaProperties) );

// Retrieve sizeof MediaType
mp3Assert( wmMediaProperties->GetMediaType(NULL, &sizeMediaType) );

// Retrieve MediaType
WM_MEDIA_TYPE* mediaType = (WM_MEDIA_TYPE*)LocalAlloc(LPTR,sizeMediaType);
mp3Assert( wmMediaProperties->GetMediaType(mediaType, &sizeMediaType) );

// Check that MediaType is audio
assert(mediaType->majortype == WMMEDIATYPE_Audio);
// assert(mediaType->pbFormat == WMFORMAT_WaveFormatEx);

// Check that input is mp3
WAVEFORMATEX* inputFormat = (WAVEFORMATEX*)mediaType->pbFormat;
assert( inputFormat->wFormatTag == WAVE_FORMAT_MPEGLAYER3);
assert( inputFormat->nSamplesPerSec == 44100);
assert( inputFormat->nChannels == 2);

// Release COM interface
// wmSyncReader->Close();
wmMediaProperties->Release();
wmStreamConfig->Release();
wmProfile->Release();
wmHeaderInfo->Release();
wmSyncReader->Release();

// Free allocated mem
LocalFree(mediaType);

// -----------------------------------------------------------------------------------
// Convert mp3 to pcm using acm driver
// The following code is mainly inspired from http://david.weekly.org/code/mp3acm.html
// -----------------------------------------------------------------------------------

// Get maximum FormatSize for all acm
mp3Assert( acmMetrics( NULL, ACM_METRIC_MAX_SIZE_FORMAT, &maxFormatSize ) );

// Allocate PCM output sound buffer
bufferLength = (DWORD)(durationInSecond * pcmFormat.nAvgBytesPerSec); // size of the full decoded PCM song
soundBuffer = (BYTE*)LocalAlloc(LPTR, bufferLength);

acmMp3stream = NULL;
switch ( acmStreamOpen( &acmMp3stream, // Open an ACM conversion stream
NULL, // Query all ACM drivers
(LPWAVEFORMATEX)&mp3Format, // input format : mp3
&pcmFormat, // output format : pcm
NULL, // No filters
0, // No async callback
0, // No data for callback
0 // No flags
)
) {
case MMSYSERR_NOERROR:
break; // success!
case MMSYSERR_INVALPARAM:
assert( !"Invalid parameters passed to acmStreamOpen" );
return E_FAIL;
case ACMERR_NOTPOSSIBLE:
assert( !"No ACM filter found capable of decoding MP3" );
return E_FAIL;
default:
assert( !"Some error opening ACM decoding stream!" );
return E_FAIL;
}

// Determine output decompressed buffer size
unsigned long rawbufsize = 0;
mp3Assert( acmStreamSize( acmMp3stream, MP3_BLOCK_SIZE, &rawbufsize, ACM_STREAMSIZEF_SOURCE ) );
assert( rawbufsize > 0 );

// allocate our I/O buffers
static BYTE mp3BlockBuffer[MP3_BLOCK_SIZE];
//LPBYTE mp3BlockBuffer = (LPBYTE) LocalAlloc( LPTR, MP3_BLOCK_SIZE );
LPBYTE rawbuf = (LPBYTE) LocalAlloc( LPTR, rawbufsize );

// prepare the decoder
static ACMSTREAMHEADER mp3streamHead;
// memset( &mp3streamHead, 0, sizeof(ACMSTREAMHEADER ) );
mp3streamHead.cbStruct = sizeof(ACMSTREAMHEADER );
mp3streamHead.pbSrc = mp3BlockBuffer;
mp3streamHead.cbSrcLength = MP3_BLOCK_SIZE;
mp3streamHead.pbDst = rawbuf;
mp3streamHead.cbDstLength = rawbufsize;
mp3Assert( acmStreamPrepareHeader( acmMp3stream, &mp3streamHead, 0 ) );

BYTE* currentOutput = soundBuffer;
DWORD totalDecompressedSize = 0;

// Seek back to the start of the mp3 stream before decoding
static ULARGE_INTEGER newPosition;
static LARGE_INTEGER seekValue; // static: zero-initialized, i.e. seek to offset 0
mp3Assert( mp3Stream->Seek(seekValue, STREAM_SEEK_SET, &newPosition) );

while(1) {
// suck in some MP3 data
ULONG count;
mp3Assert( mp3Stream->Read(mp3BlockBuffer, MP3_BLOCK_SIZE, &count) );
if( count != MP3_BLOCK_SIZE )
break;

// convert the data
mp3Assert( acmStreamConvert( acmMp3stream, &mp3streamHead, ACM_STREAMCONVERTF_BLOCKALIGN ) );

// write the decoded PCM to disk
//count = fwrite( rawbuf, 1, mp3streamHead.cbDstLengthUsed, fpOut );
memcpy(currentOutput, rawbuf, mp3streamHead.cbDstLengthUsed);
totalDecompressedSize += mp3streamHead.cbDstLengthUsed;
currentOutput += mp3streamHead.cbDstLengthUsed;
};

mp3Assert( acmStreamUnprepareHeader( acmMp3stream, &mp3streamHead, 0 ) );
LocalFree(rawbuf);
mp3Assert( acmStreamClose( acmMp3stream, 0 ) );

// Release allocated memory
mp3Stream->Release();
GlobalFree(mp3HGlobal);
return S_OK;
}

/*
* Close : close the current MP3Player, stop playback and free allocated memory
*/
void __inline Close() {
// Reset before close (otherwise, waveOutClose will not work on playing buffer)
waveOutReset(hWaveOut);
// Close the waveOut
waveOutClose(hWaveOut);
// Free allocated memory
LocalFree(soundBuffer);
}

/*
* GetDuration : return the music duration in seconds
*/
double __inline GetDuration() {
return durationInSecond;
}

/*
* GetPosition : return the current position from the sound playback (used from sync)
*/
double GetPosition() {
static MMTIME MMTime = { TIME_SAMPLES, 0};
waveOutGetPosition(hWaveOut, &MMTime, sizeof(MMTIME));
return ((double)MMTime.u.sample)/( 44100.0);
}

/*
* Play : play the previously opened mp3
*/
void Play() {
static WAVEHDR WaveHDR = { (LPSTR)soundBuffer, bufferLength };

// Define output format
static WAVEFORMATEX pcmFormat = {
WAVE_FORMAT_PCM, // WORD wFormatTag; /* format type */
2, // WORD nChannels; /* number of channels (i.e. mono, stereo...) */
44100, // DWORD nSamplesPerSec; /* sample rate */
4 * 44100, // DWORD nAvgBytesPerSec; /* for buffer estimation */
4, // WORD nBlockAlign; /* block size of data */
16, // WORD wBitsPerSample; /* number of bits per sample of mono data */
0, // WORD cbSize; /* the count in bytes of the size of */
};

mp3Assert( waveOutOpen( &hWaveOut, WAVE_MAPPER, &pcmFormat, NULL, 0, CALLBACK_NULL ) );
mp3Assert( waveOutPrepareHeader( hWaveOut, &WaveHDR, sizeof(WaveHDR) ) );
mp3Assert( waveOutWrite ( hWaveOut, &WaveHDR, sizeof(WaveHDR) ) );
}
};

#pragma function(memset,memcpy,memcmp)

The usage is then pretty simple :

MP3Player player;

// Open the mp3 from a file...
player.OpenFromFile("your.mp3");
// ...or from a memory location!
player.OpenFromMemory(ptrToMP3Song, bytesLength);

player.Play();

while (...) {
    // Do your demo sync here using:
    double playerPositionInSeconds = player.GetPosition();
}
player.Close();

And that's all! I hope someone will find this useful!

You can download a Visual Studio project using the MP3Player.h class.
The Afternoon Sound Alternative 01-19-2016 with Barry Roark
Playlist:

Mercury Rev- Emotional Free Fall - The Light In You
Stick Figure- Fire On The Horizon - Set In Stone
Shocking Pinks- Nostalgia - Dance The Dance Electric
- voicebreak -
Sheila Chandra- Lament - The Struggle
Jacuzzi Boys- Wildflower - Happy Damage EP
Black Masala- I Love You Madly - I Love You Madly
Lettuce- He Made A Woman Out Of Me - Crush
Luna- 23 Minutes In Brussels - Penthouse
Ginger Baker- Alamout - Middle Passage
Kasey Chambers- Im Alive - Bittersweet
The Balancing Act- Who Got The Pearls - New Campfire Songs
- voicebreak -
Mark McGuire- The Undying Stars - Beyond Belief
Pretenders- Dance Take 1 Bonus Track - Get Close Expanded Remastered
Tuka- You - Life Death Time Eternal
The Chills- Warm Waveform - Silver Bullets
The Chills- Silver Bullets - Silver Bullets
- voicebreak -
Red Martian- Air - Slow Motion Samurai
Son Volt- Too Early Demo - Trace Expanded Remastered
Son Volt- Windfall 2015 Remastered - Trace Expanded Remastered
Clockdva- Beautiful Losers - Breakdown 12
David Bowie- Lazarus - Blackstar
John Cale- The Sleeper - Artificial Intelligence
Tindersticks- Frozen - The Something Rain
- voicebreak -
HeCTA- We Bitched We Bovvered And We Buildered - The Diet
Coke Weed- Dandelion - Mary Weaver
Green On Red- Lost World - Gas Food Lodging Green On Red
Eleanor Friedberger- He Didnt Mention His Mother - New View
Pell Mell- American Eagle - Flow
Ty Segall- The Magazine - Emotional Mugger
Brooklyn Funk Essentials- Recycled - Funk Aint Ova
Givers- Record High Record Low - New Kingdom


playlist URL: http://www.afterfm.com/index.cfm/fuseaction/playlist.listing/showInstanceID/20/playlistDate/2016-01-19
Big Briar Etherwave Waveforms
There is currently a discussion on the Csound mailing list about how to emulate a theremin waveform. I put together this chart that displays the output from the Big Briar Etherwave with various settings.
AudioThing Phase Motion effect plugin updated to v2
AudioThing has launched version 2.0 of Phase Motion, a stereo phaser effect plugin for Windows and Mac. The update offers stereo parameters (with link), up to 32 stages (from 16), phase control over the waveforms, bipolar feedback control and more. Phase Motion 2 is a stereo phaser plugin featuring up to 32 phasing stages. The […]
AudioThing updates Phase Motion to v1.2.5
AudioThing has released version 1.2.5 of Phase Motion, a phaser effect plug-in for Windows and Mac. Phase Motion is a Phaser plug-in featuring up to 16 phasing stages. The Rate can be adjusted in Hertz or it can be synced to the host/DAW tempo. You can choose from 7 waveforms (sine, triangle, saw, square, smooth […]
AudioThing releases Phase Motion plugin
AudioThing has announced the release of Phase Motion, a phaser effect plug-in featuring up to 16 phasing stages. The Rate can be adjusted in Hertz or it can be synced to the host/DAW tempo. You can choose from 5 waveforms (sine, triangle, saw, square, random s&h) to modulate the phase, and you can tweak Depth […]
SPC Plugins releases Freek stereo frequency shifter effect plugin
SPC Plugins has introduced its Freek stereo frequency shifter effect plug-in for Windows and Mac. Stereo frequency shifter and barberpole phaser with tempo-locked delays, dual 96-waveform LFOs and phase-lock. All audio processing 64-bit. Freek features Stereo frequency shifting with independent or phase-locked shift. Stereo barberpole phasing with up to 32 stages. Advanced, dual 96-waveform LFOs. […]
MB-PlugIns releases Phaz-Zoar
MB-PlugIns has released yet another plug-in, Phaz-Zoar. Phaz-Zoar is a 6-stage phaser with adjustable amount for each stage which is fed into the next stage and can also be adjusted. Features 6 stages LFO: frequency, waveform (sine, triangle, saw and ramp) and range (in ms) Filters which affect the phase signal to modify the phasing […]
          Introduction to Curve 2 video by Dan Worrall + Cableguys plugins updated        
Cableguys has published a new tutorial video for its Curve 2 synthesizer instrument, in which Dan Worrall demonstrates how to design your own sounds. Curve is a software synthesizer with an irresistible waveform editor, huge sound library and slick interface, ideal for both experimentation and detailed tweaking at an excellent sound quality. Curve 2 is […]
          Australian Army Ground-Based Air Defence Capability        
Defence and security company Saab has signed a contract with the Australian Defence Force to upgrade the Army’s RBS 70 ground-based air defence weapon system and Giraffe AMB radar. The contract has a combined value of approximately AUD32.5 million. Delivered under the AIR 90 programme, the existing Identification Friend or Foe (IFF) capability of the RBS 70 and Giraffe AMB systems will be upgraded to include Mode 5 functionality. The Mode 5 waveform uses modern modulation, coding, and...
          Phaser Coding Tips 8        
Phaser Coding Tips is a free weekly email – subscribe here. Welcome! In the last issue we covered how to make a bullet pool and various shoot-em-up weapon types. This time we’re exploring a way to create waveforms, or paths for your baddies to follow. Motion paths don’t apply to just shoot-em-ups of course. They’re […]
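The newsletter's idea can be sketched independently of any engine: advance the sprite at constant speed along one axis and offset the other axis with a periodic function of the distance travelled. A minimal Python sketch (the function name and parameters are illustrative assumptions, not Phaser's actual API):

```python
import math

def wave_path(t, speed=120.0, amplitude=40.0, wavelength=200.0, y0=100.0):
    """Position of a baddie at time t (seconds): it advances along x at
    constant speed while y oscillates around y0, tracing a sine waveform."""
    x = speed * t
    y = y0 + amplitude * math.sin(2 * math.pi * x / wavelength)
    return x, y

# Sample the path as a game loop would, once per 16 ms frame:
points = [wave_path(frame * 0.016) for frame in range(60)]
```

Because the game loop just re-samples the function each frame, changing `amplitude` or `wavelength` reshapes the enemy's waveform path without touching any movement code.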

          Macroscopic scaling of high-order harmonics generated by two-color optimized waveforms in a hollow waveguide        
Jin, Cheng; Lin, C. D.; Hong, Kyung-Han. We present the macroscopic scaling of high harmonics generated by two-color laser pulses interacting with Ne gas in a hollow waveguide. We demonstrate that the divergence of harmonics is inversely proportional to the waveguide radius and harmonic yields are proportional to the square of the waveguide radius when the gas pressure and waveguide length are chosen to meet the phase-matching condition. We also show that harmonic yields are inversely proportional to the ionization level of the optimized two-color waveform with proper gas pressure if waveguide radius and length are fixed. These scaling relations would help experimentalists find phase-matching conditions to efficiently generate tabletop high-flux coherent soft x rays for applications.
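The scaling relations stated in the abstract are plain power laws in the waveguide radius; a toy helper makes them concrete (illustrative only, with arbitrary reference values, not the paper's data):

```python
def scale_harmonics(radius, ref_radius=1.0, ref_yield=1.0, ref_divergence=1.0):
    """Apply the abstract's phase-matched scaling: harmonic yield grows as
    radius**2 while harmonic divergence shrinks as 1/radius."""
    r = radius / ref_radius
    return {"yield": ref_yield * r ** 2, "divergence": ref_divergence / r}

# Doubling the waveguide radius quadruples the yield and halves the divergence.
```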
          BOSS DR-880 Drum Machine with COSM Effects        
Boss DR880 Drum Machine with COSM Effects - 5-Year Warranty. The DR-880 is a rhythm-programming powerhouse loaded with world-class drum, percussion, and bass sounds from Roland's famous SRX library, plus a stunning collection of original waveforms. You can get microscopic with the DR-880, but you can also take the simple route with its three EZ Compose buttons, which let you build original patterns without note-by-note programming hassles. Patterns can be taken deeper with the Groove Modify feature, where various groove and triplet feels can be applied; ghost notes and fills can also be added automatically. Guitar and bass players can join the action by plugging directly into the DR-880's Guitar/Bass Input jack and playing through the built-in COSM® Drive/Amp models and multi-effects. Features: innovative EZ Compose feature for quick, hassle-free programming; 440 world-class drum and percussion sounds, 40 bass sounds with COSM bass-amp models; guitar/bass input, multi-effects, COSM amp models; 3 independent insert effects (EQ and compression) plus TSC (Total Sound Control) featuring 3-band EQ and high-quality ambience; 1,000 patterns (500 preset, 500 user) with easily added fills, ghost notes, chord progressions, and more; 20 velocity-sensitive pads; four assignable footswitch and expression-pedal inputs; individual outputs, digital out, and a USB port.


          Bring Home a Classic Synth with the DIY Fairlight CMI        

[Davearneson] built a modern version of a classic synthesizer with his DIY Fairlight CMI. If there were a hall of fame for electronic instruments, the Fairlight CMI would be in it. An early sampling synth with a built-in sequencer, the Fairlight was a game changer; everyone from A-ha to Hans Zimmer has used one. The striking thing about the Fairlight was its user interface: it used a light pen to select entries from text menus and to interact with the audio waveform.

The original Fairlight units sold for £18,000 and up, and this was in 1979. Surviving units are well […]


          Sound Wave Art by Epic Frequency        
We all know a picture is worth a thousand words, but up until now the value of an audio file has been hazy. Epic Frequency clarifies this by turning sound visualizations into canvas artwork suitable for framing. Epic’s site creates oversized waveform images of famous speeches and quotes from the likes of John F. Kennedy, Martin Luther King, Albert Einstein, Ronald Reagan, […]
          Low compliance and high elastance        
The ICP waveform shown demonstrates a value greater than 20 mm Hg and is frankly triangular, with a low-compliance/high-elastance appearance. CSF is drained from the external ventricular drain (EVD) system (the trace goes flat for a while), and the drain is later reopened periodically. Draining CSF essentially changes the ICP waveform by moving down and left on the elastance curve. After the external ventricular drain is reopened, the ICP waveform returns and some P-wave components are seen. However, it is important to recognize that the ICP waveform still has an overall noncompliant morphology, indicative of a persistent abnormal intracranial pressure-volume state.
          Mildly abnormal intracranial pressure (ICP) waveform.        
Mild, abnormal ICP waveform during an external ventricular drain (EVD) clamp trial, with clustering of the P1 to P3 waves. Respiratory variations are noted, as well as the Valsalva maneuver: the patient undergoes muscle resistance testing of his deltoid, which causes him to perform the Valsalva maneuver and transiently increases ICP. Inspiration causes a decrease in ICP, and the Valsalva maneuver during segmental muscle strength testing increases it.
          January 2017 Meeting        

Speaker: Alexandre Shvartsburg, Wichita State University

Topic: High-Definition FAIMS for Proteomics, Metabolomics, and Structural Characterization Using Isotopologic Shifts

Date: Monday, January 23, 2017

Time: 6:15 pm Dinner, 7:15 pm Presentation

Location: Shimadzu Scientific Instrument, Inc. Training Center 7100 Riverwood Drive, Columbia, MD 21046 (Directions)

Dinner: Please RSVP to Katherine Fiedler (Katherine.L.Fiedler@fda.hhs.gov) before January 23 if you will be attending the dinner or are presenting as a vendor.

Abstract: With all the power of modern MS, most biological and environmental samples require substantial prior separations. Traditional chromatography and electrophoresis are now increasingly complemented by ion mobility spectrometry (IMS) in gases. The nonlinear method of differential or field asymmetric waveform IMS (FAIMS), based on the difference between ion mobilities at high and low electric fields, is much more orthogonal to MS than linear IMS based on absolute mobility, which enables exceptionally specific isomer separations.

We will review the prerequisites for high-resolution FAIMS/MS and its exemplary applications. A major topic in proteomics is the localization of post-translational modifications in mixtures of isomeric proteoforms (variants), where MS/MS is limited by the lack of unique fragments. Mixtures of variants up to ~6 kDa with various PTMs are effectively disentangled by FAIMS using synthetic standards and downstream ETD. All D-amino acid containing peptides (DAACP) are likewise resolved from L-analogs. A similar challenge in metabolomics is elucidating the isomeric diversity of lipids that comprises multiple isomer types including transacylation, double bond position, and cis/trans geometry. High-definition FAIMS developed in our lab generally resolves over ~80% of lipid isomers across types, and more in conjunction with OzID for double bond localization. Finally, FAIMS can resolve isotopic isomers (isotopomers) and isotopologues with peak shifts dependent on the geometry. That is conceptually parallel to NMR, enabling a fundamentally new approach to molecular structure characterization based on gas-phase isotopic shifts.

http://heartlandmassspec.weebly.com/


          Root Mean Square        

The first time I was in school for electrical engineering (long story), I had a professor who had never worked in the industry. I was in her class and the topic of the day was measuring AC waveforms. We got to see some sine waves centered on zero volts and were taught that the peak voltage was the magnitude of the voltage above zero. The peak to peak was the voltage from–surprise–the top peak to the bottom peak, which was double the peak voltage. Then there was root-mean-square (RMS) voltage. For those nice sine waves, you took the peak voltage […]
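For the sine waves described above the three measures are related by fixed factors, which is easy to check numerically. A quick Python sketch (mine, not from the article):

```python
import math

def waveform_stats(samples):
    """Peak, peak-to-peak and RMS of a zero-centred waveform."""
    peak = max(abs(s) for s in samples)
    peak_to_peak = max(samples) - min(samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return peak, peak_to_peak, rms

# A 10 V-peak sine sampled over one whole cycle:
sine = [10.0 * math.sin(2 * math.pi * n / 1000) for n in range(1000)]
peak, pp, rms = waveform_stats(sine)
# peak -> 10 V, pp -> 20 V, rms -> 10/sqrt(2), about 7.07 V
```

For a pure sine the ratios are exact: peak-to-peak is twice the peak, and RMS is the peak divided by the square root of 2 (about 0.707 times the peak).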


          Comments on "Vulcani il Vesuvio" by rdroma        
SEISMOLOGY: E. Auger*, M.L. Bernard, A. Bobbio*, M.T. Bonagura, L. Boschi*, P. Capuano*, V. Convertito, L. D'Auria, R. De Matteis*, A. Emolo*, G. Festa, P. Gasparini, A. Herrero*, G. Iannaccone*, L. Improta*, S. Judenherc, M. Lancieri, S. Nielsen*, V. Nisii*, R. Prevete, G. Russo, C. Satriano, A. Zollo

2002 Activity Report

Seismic tomography of Neapolitan volcanoes. The research activity mainly concerned the preliminary analysis of data acquired during the SERAPIS seismic campaign conducted in the Gulfs of Napoli and Pozzuoli in September 2002. In addition, the microearthquake database recorded during the 1984 bradyseismic crisis at Campi Flegrei has been reconstructed and archived. The first tomographic analyses of SERAPIS data led to the detection and geometrical definition of the buried rim of the Campi Flegrei caldera (whose roof is at about 1 km depth underneath the Pozzuoli bay area), which can be recognised as a high-velocity, high-density body of near-circular shape. The evidence for a buried caldera rim and for the occurrence of the carbonatic basement top at about 4 km depth beneath the Pozzuoli bay area opens new perspectives for the interpretation of volcanic phenomena and possible eruption mechanisms at Campi Flegrei. We also confirmed the occurrence of the 9 km deep magmatic sill underneath Mt. Vesuvius from a detailed analysis of microearthquake records collected by the OV permanent network.

Seismic source study and strong-motion simulation. The research activities in these fields concern the non-linear inversion of high-frequency seismograms and the simulation of the accelerometric field associated with the rupture of an extended fault (or fault system). One of the main goals of seismology is the estimation of earthquake source characteristics starting from ground motion records.
With this in mind, we developed two different approaches: the former is based on the optimization of a misfit function by a genetic algorithm, conceiving the inversion as a multi-step process; the latter directly relates the observed wave amplitudes to those zones on the fault which could have effectively slipped, by a back-projection technique. The simulation of accelerometric radiation can provide a helpful tool to mitigate seismic risk in a seismically active area. We produce shaking scenarios by modelling fault rupture using a kinematic approach and a heterogeneous final slip distribution. Synthetic seismograms are evaluated by solving the representation integral numerically. Many different rupture processes occurring on the same fault are simulated, and the seismic radiation field associated with each of them is then computed. Finally, a statistical analysis is performed to obtain estimates of ground motion parameters of engineering interest.

Exploration Seismics. During 2002, we continued the research activity (started a few years ago in cooperation with the oil company "Enterprise Oil Italiana") aimed at the development and application of pre-stack migration of seismic reflection and wide-angle data acquired in southern Apennines areas for oil exploration purposes. In particular, we developed and applied a method for the non-linear inversion of reflection arrival times aimed at the reconstruction of the morphology of upper-crustal reflectors, based on a background velocity medium inferred by tomographic inversion of wide-angle first-arrival times. We also started a cooperation with the research group at the GeoAzur institute in Nice to develop 2-D and 3-D migration techniques based on full-waveform inversion.

Study of the seismic input for seismic risk evaluation. The research activity essentially pursued two main goals.
The first concerns the production of seismic input using classical methodology, both for the evaluation of seismic hazard and for the generation of time histories. The second concerns the development of new methodology aimed at refining the classical results. In particular, the problem of introducing kinematic seismic source parameters as a-priori information into probabilistic seismic hazard analysis has been addressed. This approach has two main implications: on the one hand it allows the classical result to be refined; on the other it permits the classical definition of the "design earthquake" to be extended to seismic source parameters, and in particular to the focal mechanism. Within the same research activity, the study of out-of-phase motion has also been addressed; in particular, a methodology for full wave-field computation in complex media with topographic effects has been developed. This study is extremely important for lifeline structures such as bridges and viaducts.

The EduSeis Project. This project is carried out jointly with "Città della Scienza". Its aim is the creation of a telematic network of digital, broad-band, low-cost seismometers in high schools of Campania. The network design is conceived to work in a Web environment. The project's goal is the introduction of seismology teaching in high schools located in a region of high seismic and volcanic risk, such as Campania. Once the project is fully settled, the collected database can be used to obtain information on the small-scale heterogeneity of the lithosphere and on seismic source parameters. During this year the project has been financially supported by GNDT-INGV. During 2002, new seismic stations were installed in some high schools (equipped with 20s-20Hz broad-band and 4.5Hz short-period sensors), and new software for data acquisition and station management was tested. In the seismology lab, different seismic sensors were also tested for compatibility with the EduSeis seismic network.
Concerning the educational aims, the training of teachers and students in seismic station maintenance and in seismic data analysis and interpretation continued this year, and several web-oriented modules were implemented for the didactic activities.

2003 Program

The activities related to the GNV and GNDT projects concerning the structure of the Neapolitan active volcanoes and the development and application of methods for the evaluation of seismic hazard will be continued and completed (third year of the projects). New programs will be started in the framework of the Regional Center of Competences AMRA (Environmental Risks), centered on the implementation and installation of a multi-component seismic network in the Irpinia fault area, with the research objective of studying the fracture phenomena occurring on a causative fault system during the inter-seismic period. The EduSeis project will receive new impetus from the installation of further stations in the Irpinia area and from outreach initiatives promoted in cooperation with the science museum "Città della Scienza". We will also start the scientific activity related to the PON projects approved and financed by MIUR: Tecsas (development of a prototype system for the remote seismic monitoring of a building of public interest, the hospital of S. Angelo dei Lombardi, in partnership with "Consorzio Iside", INGV-OV and three other smaller companies) and Sisma (a system for the submarine seismic monitoring of an active volcanic area: a prototype to be installed in the Gulf of Naples, in partnership with Whitehead – Alenia Sistemi Subacquei and INGV-Osservatorio Vesuviano).
Francesco Calvi, Università di Pavia) nell’Unità di Ricerca “Pericolosità sismica ed Input sismico”. - “Indagini sismiche ad alta risoluzione per lo studio di strutture sismogenetiche in Appennino Meridionale” (Università di Napoli “Federico II”, Progetto Giovani Ricercatori 2002-2003, coordinatore Dott. Luigi Improta). - “Sviluppo e confronto di metodologie per la valutazione della pericolosità sismica in aree sismogenetiche: applicazione all’Appennino centrale e meridionale”.(Coordinatore Dott. Massimo Cocco, INGV, Roma). Task 4 “Validazione di metodologie per la simulazione di sismogrammi sintetici”(Coordinatore locale Prof. Aldo Zollo). - GNV-INGV (2002) “Metodologie sismiche integrate per lo studio della struttura dei vulcani attivi: applicazione alla caldera di campi flegrei”. (Coordinatore nazionale e locale: Prof. Aldo Zollo) - PRIN (2001-2002)“Studio dei fenomeni di fatturazione sismica”. (Coordinatore nazionale: Prof. M. Dragoni, Coordinatore locale: Prof. A.Zollo) Progetto Cee: “3F-Corinth”. (Coordinatore locale: Prof. Aldo Zollo)
          Replying to "浴巾"'s question: how to keep a measured signal displayed on the Graph without it disappearing        
I mentioned in the very first article on this site that questions should avoid shorthand or "Martian" text; your question itself is not hard — the hard part was figuring out what you were asking. Basically, to make a Waveform Graph display multiple sets of data, you just assemble the original 1D Array data into a 2D Array and feed that to the Waveform Graph. In practice, this means using a loop plus a shift register to accumulate the data.
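The same accumulation pattern (LabVIEW's loop plus shift register feeding a 2D array) can be sketched in ordinary code. This is a hypothetical Python illustration of the idea, not LabVIEW code — the function and variable names are mine:

```python
def append_scan(history, new_scan):
    # 'history' plays the role of the shift register: a list of 1D scans,
    # i.e. a growing 2D array; 'new_scan' is the data from one loop iteration.
    return history + [list(new_scan)]

# Simulated acquisition loop: three scans of four samples each
history = []
for i in range(3):
    scan = [i + x for x in (0.0, 0.5, 1.0, 0.5)]  # stand-in for real measurement data
    history = append_scan(history, scan)

# 'history' is now a 3x4 2D array: every retained trace can be plotted at once,
# which is exactly what feeding a 2D Array into a Waveform Graph achieves.
```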
          Noise Angels        

Graphics and Music by Peder Norrby

Cast: Peder Norrby

Tags: experimental, audio-reactive, trapcode, particular, waveform, particles, after effects, minimalistic and abstract


          Dynamics, the "Loudness Wars," or "Why is Your Album so Quiet Sometimes?"        
I've been meaning to write about this for a while; a couple of folks have asked me about this.  They wonder why my album sounds so much more "quiet" compared to other things they listen to. They notice, for example, that when a Flaud Logic song comes up in their iTunes playlist or something, they sometimes have to raise the volume on their device to hear it better. But then, a song from another artist comes on afterwards, and they have to lower the volume again!

I promise it's not some type of unique torture I devised!  It actually exemplifies the results of a much bigger issue--the fabled "Loudness Wars."  For those who are unfamiliar, it refers to the modern trend of "squashing" the waveforms that represent your album during the mastering process, thereby causing a reduction in dynamic range, but an increase in perceived loudness (Google 'Loudness Wars' for some great articles on the subject).  This effect can be desirable in some cases and can be implemented artfully. The trouble is that nowadays, there's this sort of "competition" where an artist may ask the mastering engineer to make his or her album "as loud as possible," so that when heard on the radio or on a playlist, it appears to stand out.  "You know that great album by ___insert artist name___?  Make my record rock harder than that!"
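
The trade described above can be shown numerically: clipping the peaks and applying make-up gain raises the average (RMS) level while shrinking the crest factor, the peak-to-RMS ratio that reflects dynamic range. A toy illustration in Python — not mastering-grade DSP, and all names are mine:

```python
import math

def rms(samples):
    """Root-mean-square level, a rough proxy for perceived loudness."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def squash(samples, ceiling=0.3):
    """Brutal 'brick wall' squash: clip the peaks, then apply make-up gain."""
    makeup = 1.0 / ceiling
    return [max(-ceiling, min(ceiling, s)) * makeup for s in samples]

# A dynamic signal: a quiet verse followed by a loud chorus
signal = [0.1 * math.sin(2 * math.pi * i / 50) for i in range(500)]
signal += [0.9 * math.sin(2 * math.pi * i / 50) for i in range(500)]
loud = squash(signal)

# Crest factor (peak / RMS) shrinks: louder on average, but less dynamic
crest_before = max(abs(s) for s in signal) / rms(signal)
crest_after = max(abs(s) for s in loud) / rms(loud)
```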

Studies have shown that listening to music that's been mastered this way can lead to a phenomenon known as "Aural Fatigue".  Think about it: With "squashed" music, this is the equivalent of stimulating a huge number of each ear's sensory receptors all at once, at high amplitudes, over long periods of time.  Though not everybody is aware of how this manifests when actually listening to the music, those who are often describe it as, "The music was really great, but for some reason, I can't make it through the whole record in one sitting," or, "The best album that I never listen to."  It's doing something on a subconscious level which can align the listener's preferences against that particular music.  

For the type of music I'm writing these days, a full spectrum of dynamics is important.  My hope is that listeners will "feel" something through my music, whatever that feeling may be, and I like to bring the listener on a journey.  That means there will be "loud" sections, "soft" sections, angry, calm, tranquil, explosive.  It made sense for me to attempt to maintain those aural contrasts in the music at the expense of it perhaps sounding like "the next best thing," or somehow dated.  Interestingly, people have said to me that Flaud Logic, "...reminds me of when I used to listen to Yes albums on vinyl. The really quiet parts you sometimes couldn't even hear because the record fuzz and pops were louder!"  

Is there a place in the world for rock music that still maintains its dynamic range in an era where noise on a record is conspicuously absent ("let's fill all of that now-empty sonic space!")?  Does it sound really really weird for a rock album to be made using the latest technology but still have quiet sections that get drowned out when listened to through iPod headphones?   I surely don't know the answer, but it is something that weighs on my mind in the production process.




          Subclavian Steal        
This first picture is a normal vertebral artery waveform. Compared to the next one (which is abnormal) it is easy to see the difference. In the abnormal one, the flow dips down to baseline immediately after systole. The pattern created is called a “bunny waveform” for resembling a crouched rabbit as viewed from the side. […]
          Carotid Spectral Doppler Waveform in Heart Failure        
The hemodynamics of advanced heart failure are quite complex. In this post, I present a visual demonstration of some of these changes by looking at the carotid artery spectral Doppler waveform in advanced heart failure. To begin let’s look at normal. The brain maintains a continuous supply of blood throughout the cardiac cycle. Thus, there is […]
          Dark Matter May Be Trapped in All the Black Holes - Facts So Romantic        

This isn’t the first time scientists have suggested black holes might be dark matter, but we thought the possibility had been decisively ruled out. The resurrection of the idea is but one example of the fertile creativity that follows a new discovery.Photograph by NASA Goddard Space Flight Center / Flickr

When, on February the 11th, 2016, the spokesperson for the Advanced Laser Interferometer Gravitational-Wave Observatory, or aLIGO for short, announced the discovery of gravitational waves, I was stunned. For sure, we expected aLIGO to, at some point, give us something interesting, but we thought it would be tentative. We expected that the project would, after a sophisticated and laborious look at months or years of data, show us a weak signal, popping its head feebly above the noise. 

But no, the plots that were shown that fateful day in February were so clear and unambiguous that I didn’t take any convincing. I could see, with my bare eyes, the unmistakable waveform of two large black holes coming together, merging into one and, as it settled down, bleeding gravitational waves into the ambient spacetime. 

And there was more. The black holes that aLIGO saw weren’t supposed to be…
Read More…

          Digital Music Studio 7.0        

Not only can you use Digital Music Studio to grab music from a CD, but you can also record audio in virtually any format. Stop there and burn your improved, converted audio files to disc, or use the advanced audio editor to create your own music and songs that you can transfer to CD or share with your friends at a party. With Digital Music Studio you can:
  • Digitize a sound recording to the hard disk in a form suitable for recording to an audio CD
  • Record audio data from a microphone or other available input device
  • Display a waveform window of an audio file and apply zooming
  • Edit audio files visually and apply various effects as well as different filters to any selected portion of an audio file
  • Convert an audio file from one format to another
You can enjoy all the features during the trial period, so just download it and try!

Download Digital Music Studio 7.0

          License terms for all files other than the 3-D model “mei”        
The materials in the example other than the 3-D model of “mei”, including motions, panel models, voice models, and the waveforms generated from voices, are covered under the Creative Commons license (CC BY). You are free to copy and distribute this model under the license terms. You must attribute the work by showing the copyright […]
          Alma S2200 Cx E        

MAIN FEATURES
  • Full HD digital satellite receiver
  • Conax embedded card reader
  • Channel recording to external storage devices
  • TimeShift support
  • MKV, AVI, MPG, MOV, MP3 playback support
  • Ethernet connection & WiFi support
  • Weather forecast & RSS reader functions
  • 4-digit (7-segment) LED display
  • Powerful channel management tools (Lock, Edit, Move, Skip, Delete)
  • YouTube favourites
  • User-friendly on-screen display (OSD)
  • Multi-language support
  • Subtitles support (DVB/TXT)
  • DiSEqC 1.0, 1.2 and USALS compatible
  • Dolby Digital bitstream out via S/PDIF & HDMI
  • Software upgrade support via USB & RS232
  • <1 W power consumption in stand-by mode

TECHNICAL SPECIFICATIONS
Tuner & Demodulation
  • System standard: fully MPEG-II/DVB compliant
  • Input frequency: 950~2150 MHz
  • RF input level: -65~-25 dBm
  • LNB control: DiSEqC 1.0/1.2
  • LNB power: 13 V/18 V (max. 400 mA)
  • LNB tone switch: 22 kHz
  • Waveform: 8PSK / QPSK (SCPC, MCPC capable)
  • Symbol rate: 2~90 Msps
A/V Mode
  • Video format: MPEG-II Main profile/Main level
  • Audio format: MPEG-II layer I & II
  • Aspect ratio: 16:9, 4:3
  • Audio sampling rate: 32, 44.1, 48 kHz
  • Audio type: Left / Right / Stereo / Mono
  • Graphic display: 1920×1080, 1280×720, 720×480
Microprocessor & Memories
  • Processor: 400 MHz based CPU
  • RAM: 128 Mb DDR
  • FLASH: 8 Mb
Power & Environment
  • Supply voltage: free voltage (175~250 V AC, 50/60 Hz)
  • Supply power: max. 30 W
  • Stand-by power: <1 W
  • Operating temp: 0°C~40°C
Physical Specification
  • Display: 4-digit (7-segment) LED display
  • Dimensions: 260(W) x 150(D) x 45(H) mm
Connectors
  • Satellite IF input: F-type (digital)
  • Satellite IF loop out: 950-2150 MHz
  • 1 SCART TV (RGB, CVBS)
  • RCA/Composite Audio Left & Audio Right
  • 0/12 V output
  • RS232: 9-way D RS232 DCE serial port
  • S/PDIF: coaxial
  • HDMI
  • USB
€90.00
          Genie in a Mouse Click: Indago Protocol Debug App        

Do you remember what life was like before the internet and smart phones? If you wanted to find information about something, you had to go to the library and find a book that could answer your question, which usually meant a delay of anything from a few hours to several days. Then the internet happened, and the world as we know it changed forever. Smart phones came along, and brought with them a new kind of freedom and unprecedented levels of productivity gains. Now, any information that we need to get our jobs done is but a measly click away. These seismic changes in our lifestyle have also brought about a big change in our attitude toward how we approach problems. We value our time more, and want to spend it on things that are creative, enjoyable, and productive. And this includes the way we debug code while designing complex Systems-on-Chip.

Several years ago, finding and root-causing bugs in chips meant staring at long lines of zeroes and ones in trace files until one went cross-eyed from all that strain. Waveform viewers made things a little better, but tracking down a bug on a complex standard bus was still a nightmare. There was no way to easily compare different transactions in a channel, trace the different packets, and compare data at different simulation times to see what went wrong and where. And to decode the complex bus state machine during debug meant drawing the whole machine on a white board in the office with multi-colored marker pens.

Not anymore! Cadence’s Indago Protocol Debug App is a productivity tool that lives up to every letter of its long name! It is truly a verification engineer’s dream come true. It has everything you could’ve wished for in a debug tool and several things that you never knew you needed. Take, for instance, its channel viewer. The channel viewer provides a visual representation of protocol traffic on all the instances of Verification IP on your SoC. It allows you to choose any interface, and immediately shows you all the traffic on the interface channels at any given time. You can scroll the timeline to see traffic at different points in the simulation. If you are trying to trace, say, a READ, you can click on it and choose to highlight linked content. This highlights the data that was returned during that READ transaction so you can look at it right away. You can also click on any command and see the contents of the packet fields, and – this is the best part – you can choose several packets to compare data side by side! This feature alone would’ve saved me days of debug and frantic scrolling of long waveforms back and forth trying to trace packets on interface channels and their contents.

The state machine viewer is another piece of magic that I could’ve really used when I was doing verification. Debugging state machines can be a real bear, especially when they are very complex. And let’s face it—none of the standard protocols are simple anymore, and their complexity is only increasing with every new generation. The state machine viewer essentially brings the entire protocol state machine to life—except it does it within the context of that particular simulation. This means that, instead of a generic protocol state machine diagram, you are now looking at one that shows you exactly what happened with the state machine in that particular simulation. It is the difference between looking at a generic Disney World map, and one that shows at all times where you are now, where you were before and which rides you’ve already been on! With the state machine viewer, you can see what state you were in at any given time, what the previous state was, how you got there, and what the next state will be. The states that were never visited during that simulation are greyed out. And all this is done in a pictorial representation so it’s very intuitive to grasp and navigate.

The memory viewer allows you to see the entire contents of the memory at any given time. The smart log viewer lets you search by string for error and warning messages that were generated during simulation, and includes a nifty little feature that allows you to save queries and share them with colleagues. The life story viewer allows you to choose any item in one of the other viewers and see what it did during the entire simulation.

Cadence is now offering a free trial for this productivity tool that will make your life so much easier. Watch a demo of the tool, and sign up for the free trial now. You can thank me later.

(Please visit the site to view this video)

Join the future of SoC verification. All that extra time you will have now—imagine the possibilities!


          How To Build a Pocket-Sized mBed Signal Generator        

Last month, I talked about how to get started with mBed and ARM processors using a very inexpensive development board. I wanted to revisit mBed, though, and show something with a little more substance. In particular, I often have a need for a simple and portable waveform generator. It doesn’t have to be too fancy or meet the same specs as some of the lab gear I have, but it should be easy to carry, power off USB, and work by itself when required.
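
The core of a simple generator like this is usually direct digital synthesis: step a phase accumulator through a precomputed waveform table, where the step size sets the output frequency. Here is a language-agnostic sketch of that technique in Python (not mbed C++ — all names here are hypothetical):

```python
import math

TABLE_SIZE = 256
# Precomputed one-cycle sine table, as would live in flash on a microcontroller
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def dds_samples(freq_hz, sample_rate_hz, n):
    """Generate n samples of a sine at freq_hz via a phase-accumulator DDS."""
    phase = 0.0
    step = freq_hz * TABLE_SIZE / sample_rate_hz  # table entries per output sample
    out = []
    for _ in range(n):
        out.append(SINE_TABLE[int(phase) % TABLE_SIZE])  # would go to the DAC
        phase += step
    return out

# 1 kHz tone at a 32 kHz update rate: one full cycle every 32 samples
cycle = dds_samples(1000, 32000, 32)
```

On real hardware the loop body would write each sample to the DAC on a timer tick; swapping the table contents (triangle, sawtooth, arbitrary) changes the output waveform without touching the loop.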

My requirements mean I needed a slightly more capable board. In particular, I picked up a …read more


          A Memory of Light free Reaktor Blocks set gets more presets        
Flintpope has released an updated version of A Memory of Light, a free Reaktor Blocks synth instrument featuring sounds that bypass the usual waveform synthesis. Instead it combines a wind-synth oscillator with a noise generator through a pair of reverbs and a tape delay to create a mix of ethereal, powerful and lonely textures. […]
          NGC Delivers Advanced Technology Simulator for Gripen JAS 39 Fighter Aircraft        
Northrop Grumman Corporation's (NYSE: NOC) Amherst Systems business unit has delivered a Combat Electromagnetic Environment Simulator (CEESIM) to SAAB AB, Surveillance in Jarfalla, Sweden. CEESIM will perform aircraft testing on the Gripen JAS 39 multirole fighter. The CEESIM system includes the advanced pulse generation (APG) capability, which uses the latest digital technology to generate advanced waveforms. It also features the ability to perform digital modeling of multiple active electro...
          Paper: PUI (1997) “Prosody Analysis for Speaker Affect Determination”        
Andrew Gardner and Irfan Essa (1997) “Prosody Analysis for Speaker Affect Determination” In Proceedings of Perceptual User Interfaces Workshop (PUI 1997), Banff, Alberta, CANADA, Oct 1997 [PDF][Project Site] Abstract Speech is a complex waveform containing verbal (e.g. phoneme, syllable, and word) and nonverbal (e.g. speaker identity, emotional state, and tone) information. Both the verbal and […]
          Kollaborate Server 2.0 - notification digests, hoverscrub thumbs, peek in folder, Web Hooks and more        

We've just launched Kollaborate Server 2.0 - a major update to our in-house workflow platform for video professionals. This is the in-house companion to Kollaborate 2.0 that launched on the cloud last month.

File management improvements


We've rebuilt the Files page to behave more like a desktop browser like OS X Finder, including arrow key navigation, click to select, shift-click to select a range of files, etc.


QuickLook is also back but this time it behaves a lot more like the one in OS X. Press Space on a selected file to preview it without opening it. You can even QuickLook folders to peek inside them.




Notification Changes

One of the key goals of Kollaborate is keeping people informed, but with the old system it could sometimes be difficult to balance the needs of people who wanted to know about every single change vs those who wanted a much looser connection to the project.

So we’ve now introduced the concept of subscriptions. By default, everyone will receive a periodic digest summarizing recent changes to the project in a single email. You can choose to receive this once an hour, every few hours, once a day or not at all. (If there are no changes to the project in that time you won’t receive an email.)

If you want to receive instant alerts like in the previous version, you can subscribe to a project, file or task. Subscribing to a project makes the behavior identical to the previous version and can be done by clicking the dropdown next to the project on the Projects page.

Subscribing to a file or task alerts you to instant changes involving that item but without switching on alerts for other files or tasks in the project. Some actions automatically subscribe you to a file - such as uploading or commenting on it. You can click the Subscribe button again on the file to unsubscribe.

 

Thumbnail view and hoverscrub

You can now toggle between the traditional list view and a new thumbnail view. This provides fewer power user options but is a great way of graphically browsing through your files. 


We've also added hoverscrub thumbnails like in FCPX so you can browse through a file just by scrubbing your cursor over it. You also get a thumbnail view when you hover your mouse over the playhead in the player.


Additionally, audio files now show a thumbnail of their waveform instead of a generic icon.

(Note: hoverscrub thumbs are currently only created by Kollaborate Encoder and the latest beta of Kollaborate Transfer.)

To-Do Comments


Tap the checkbox icon in a comment to turn it into a to-do comment, then tap it again to mark it done. 


You can filter by to-do and choose to only export to-do comments to your NLE. 


Adobe Premiere panel


We've also launched an extension for Adobe Premiere that functions as both a file browser and task manager.

You can browse files in the cloud and download them directly into your project by double-clicking them. You can also import  markers into your current sequence. 


View your current to-dos, click on them to jump to that position in your timeline and then click the tick to mark them as complete.

The extension is supported on Mac and Windows and Kollaborate Server is supported by version 1.0.2 and higher.

Dashboard

Dashboard is back - view recent project changes, assigned tasks, favorites, to-do comments and more at a glance. 


Web Hooks (Beta)

Kollaborate Server can send events such as uploads or new comments to a custom script either on the same server or another server for integrating with in-house databases and task managers.

Some examples of what you could use it for:

  • Logging every file uploaded to a database
  • Adding new tasks to your calendar management software
  • Adding comments or tasks to your in-house work tracking system
  • Sending custom email alerts to users
  • Creating keywords or hashtags in comments that trigger certain behavior in your in-house systems

Note that web hook events are not immediate and will normally arrive after a few minutes. This is a beta feature so we're keen to hear how customers plan to use this feature and what additional data they need for integrating in-house.

Web Hooks can be switched on from the Configure page in the admin area. Only super admins can access this page.
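
On the receiving end, a web hook endpoint is just an HTTP handler that parses the posted event and routes it to your in-house systems. A minimal stdlib Python sketch of such a receiver — the event names and payload fields here are assumptions for illustration, not Kollaborate's documented schema:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def dispatch_event(payload):
    """Route a parsed web hook event (hypothetical field names)."""
    event = payload.get("event", "unknown")
    if event == "file.uploaded":
        return "log to database: " + payload.get("filename", "?")
    if event == "comment.created":
        return "forward to task tracker: " + payload.get("text", "")
    return "ignored: " + event

class HookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        dispatch_event(payload)
        self.send_response(200)  # acknowledge quickly; do heavy work asynchronously
        self.end_headers()

def run(port=8080):
    # Blocking; run as its own process or service
    HTTPServer(("0.0.0.0", port), HookHandler).serve_forever()
```

Since events may arrive minutes after the fact and in bursts, it pays to return 200 immediately and queue the real work rather than processing inline.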

Major architectural overhaul


We've made significant under-the-hood modifications to the software, not all of which are immediately noticeable to users, but they pave the way for significant features in the future.

If you are upgrading from a previous version, be aware that this version moves a lot of things around so please be patient if the upgrade script takes a while. Playable proxies are now stored separately from original media files and thumbnails are now stored in your media storage area (expect thumbnails to take up 2-3 times more space than before due to the new sizes and hoverscrub options).

Another area we've made significant changes to is our API. This improves security by allowing you to approve or deny access to your account on a per-app basis. Did your laptop get stolen? Did you log into an app on your friend’s computer and accidentally forget to log out again? Not a problem - you will be able to revoke access on those devices remotely once we fully switch over to the new API.

We’ve made the changes in such a way that the current apps still work, but this is only intended to be temporary. We currently have beta versions of our apps available that support the new API and will be revoking access to older versions in the next few months. We’ll make a public announcement when that date is set so that users know when to update to newer versions.

Kollaborate Encoder 1.1

A corresponding Kollaborate Encoder update has been released that supports Kollaborate Server 2.0, can generate hoverscrub thumbnails and also fixes some bugs from the previous version. This is an essential update for Kollaborate Server 2.0 users with an encoding license.

Upgrade instructions

Full upgrade instructions are located in the Quick Start Guide but the basic workflow is as follows:

1. Download the new update (available after logging in).

2. Copy the contents of the Installation Files folder except the config folder to the root folder of your web server, overwriting any existing files.

3. Visit yoursite.com/upgrade in your web browser and click Upgrade.

4. Delete the "upgrade" and "install" folders from the root folder of your web server.

Kollaborate is an essential cloud workflow platform that allows you to share files with clients and team members while integrating with Digital Rebellion apps and services. Kollaborate Server allows you to host the platform in-house on your own servers and storage. To find out more, see the Kollaborate Server overview or register for the free cloud trial.


          Kollaborate 2.0 - notification overhaul, hoverscrub thumbs, peek in folder, Premiere extension, Web Hooks and more        

We've just launched Kollaborate 2.0 - a major update to our cloud workflow platform for video professionals.

Notification Changes

One of the key goals of Kollaborate is keeping people informed, but with the old system it could sometimes be difficult to balance the needs of people who wanted to know about every single change vs those who wanted a much looser connection to the project.

So we’ve now introduced the concept of subscriptions. By default, everyone will receive a periodic digest summarizing recent changes to the project in a single email. You can choose to receive this once an hour, every few hours, once a day or not at all. (If there are no changes to the project in that time you won’t receive an email.)

If you want to receive instant alerts like in the previous version, you can subscribe to a project, file or task. Subscribing to a project makes the behavior identical to the previous version and can be done by clicking the dropdown next to the project on the Projects page.

Subscribing to a file or task alerts you to instant changes involving that item but without switching on alerts for other files or tasks in the project. Some actions automatically subscribe you to a file - such as uploading or commenting on it. You can click the Subscribe button again on the file to unsubscribe.

File management improvements


We've rebuilt the Files page to behave more like a desktop browser like OS X Finder, including arrow key navigation, click to select, shift-click to select a range of files, etc.


QuickLook is also back but this time it behaves a lot more like the one in OS X. Press Space on a selected file to preview it without opening it. You can even QuickLook folders to peek inside them.




Thumbnail view and hoverscrub

You can now toggle between the traditional list view and a new thumbnail view. This provides fewer power user options but is a great way of graphically browsing through your files. 


We've also added hoverscrub thumbnails like in FCPX so you can browse through a file just by scrubbing your cursor over it. You also get a thumbnail view when you hover your mouse over the playhead in the player.


Additionally, audio files now show a thumbnail of their waveform instead of a generic icon.

(Note: we're still converting some of the existing files on the server so it might be a day or two before you see the hoverscrub thumbs show up for files you've already uploaded.)


To-Do Comments


Tap the checkbox icon in a comment to turn it into a to-do comment, then tap it again to mark it done. 


You can filter by to-do and choose to only export to-do comments to your NLE. 


Adobe Premiere panel


We've also launched an extension for Adobe Premiere that functions as both a file browser and task manager.

You can browse files in the cloud and download them directly into your project by double-clicking them. You can also import  markers into your current sequence. 


View your current to-dos, click on them to jump to that position in your timeline and then click the tick to mark them as complete.

The extension is currently Mac-only but will be available for Windows soon.

Dashboard

Dashboard is back - view recent project changes, assigned tasks, favorites, to-do comments and more at a glance. 


Storage boost

We've given both subscribers and trial users a storage boost.

Package: old storage → new storage
Trial: 1 GB → 2 GB
Freelance: 20 GB → 25 GB
Small Business: 40 GB → 50 GB
Business: 60 GB → 75 GB
Production: 120 GB → 140 GB
Studio: 200 GB → 225 GB
Studio Plus: 350 GB → 400 GB
Network: 750 GB → 1 TB
Network Plus: 1.5 TB → 2 TB

Web Hooks (Beta)

Kollaborate can send events such as uploads or new comments to a third-party server for integrating with in-house databases and task managers.

Some examples of what you could use it for:

  • Logging every file uploaded to a database
  • Adding new tasks to your calendar management software
  • Adding comments or tasks to your in-house work tracking system
  • Sending custom email alerts to users
  • Creating keywords or hashtags in comments that trigger certain behavior in your in-house systems

Note that web hook events are not immediate and will normally arrive after a few minutes. This is a beta feature so we're keen to hear how customers plan to use this feature and what additional data they need for integrating in-house.

Web Hooks can be switched on from your profile page and will only apply to projects you own.
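As an illustration of how an in-house system might consume these events, here is a minimal Python sketch of a dispatch routine. The payload field names (`event`, `filename`, `text`) are assumptions for illustration only; the post does not document Kollaborate's actual web hook schema.

```python
import json

def handle_event(raw_body, handlers):
    """Parse a web hook POST body and dispatch on its event type.

    The keys "event", "filename" and "text" are hypothetical --
    substitute whatever fields the real payload carries.
    """
    event = json.loads(raw_body)
    handler = handlers.get(event.get("event"))
    if handler is None:
        return None  # silently ignore event types we don't handle
    return handler(event)

# Example wiring: log uploads to a database, push comments to a task manager.
handlers = {
    "file.uploaded": lambda e: f"log upload: {e.get('filename')}",
    "comment.created": lambda e: f"new comment: {e.get('text')}",
}
```

Since events can arrive minutes after the fact, a receiver like this should treat them as an audit trail rather than a real-time feed.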

Major architectural overhaul


We've made significant under-the-hood modifications to the service, not all of which are immediately noticeable to users, but they pave the way for significant features in the future.

One area we've made significant changes to is our API. This improves security by allowing you to approve or deny access to your account on a per-app basis. Did your laptop get stolen? Did you log into an app on your friend’s computer and accidentally forget to log out again? Not a problem - you will be able to revoke access on those devices remotely.

We’ve made the changes in such a way that the current apps still work, but this is only intended to be temporary. We will be releasing new versions of our apps in the coming weeks that support the new API and will be revoking access to older versions in the next few months. We’ll make a public announcement when that date is set so that users know when to update to newer versions.

As always, we take all feedback on board so please let us know what you think using the Feedback link on the site or by contacting us.

Kollaborate is an essential cloud workflow platform that allows you to share files with clients and team members while integrating with Digital Rebellion apps and services. To find out more, see the overview or register for the free trial.


          ARRI Alexa dailies workflow with DaVinci Resolve 12        

I've been working on a feature film shot on the ARRI Alexa and Alexa Mini and had to come up with a workflow for syncing and rendering out dailies. DaVinci Resolve 12 proved useful because all of the prep work could be done in one application all at once, with the added bonus of being able to roundtrip again after picture lock.

Step 1: Offloading

I used our very own Auto Transfer tool to offload the memory cards to two hard drives at once with checksum authentication to ensure the copies were identical to the source files. The media files were then backed up to LTO tape at the end of the day.
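The checksum-verified copy that offload tools like Auto Transfer perform can be sketched in a few lines of Python. This is an illustrative stand-in, not Auto Transfer's actual implementation; MD5 is assumed here, though real offload tools may offer other hash types.

```python
import hashlib
import shutil

def md5sum(path, chunk=1024 * 1024):
    """Hash a file in chunks so large media files don't fill memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verified_copy(src, destinations):
    """Copy src to each destination and confirm every copy's
    checksum matches the source before declaring success."""
    source_hash = md5sum(src)
    for dest in destinations:
        shutil.copy2(src, dest)
        if md5sum(dest) != source_hash:
            raise IOError(f"checksum mismatch on {dest}")
    return source_hash
```

The key point is that each destination is re-read and re-hashed after the copy, so a faulty drive or cable shows up immediately rather than at the LTO stage.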

Step 2: Organize and sync clips

In Resolve 12, create a new project, then go to File > Project Settings and switch off Use local version for new clips in timeline from the Color page. This will be important later on.



Then bring the video and audio files into your Media Pool and organize them in whatever manner makes sense to you. I chose to create bins for each scene (be careful what names you choose for the bins).

On this particular movie no audio was shot in-camera so the only way to automatically sync audio and video is by timecode. Select the video and audio files in your bin, right-click and choose Auto-sync Audio Based on Timecode.



In theory this is all you should need to do to get perfect sync, but in practice timecode can drift or it may be set incorrectly (or not at all) in the camera or sound recorder.

An additional reason for splitting things into bins is because if you are shooting time-of-day timecode, you may have multiple clips with the same timecode that could confuse Resolve and cause it to sync clips up to audio from a different scene.

In the event that the audio is not synced correctly, open the video in the viewer, scrub to the exact frame that the slate hits on and write down the timecode for that frame. Then open up the audio file and stop it on the exact frame that you hear the clap of the slate (in a lot of cases this will be an obvious short spike in the waveform towards the beginning of the file). Then right-click the audio file in the Media Pool and select Clip Attributes. In the Timecode pane, enter the timecode from the video you noted down earlier. Then repeat the earlier step of selecting the video and audio files, right-clicking and choosing Auto-sync Audio Based On Timecode again.

(If you can't hear the slate, go to the Audio pane of Clip Attributes and make sure your extra audio channels aren't muted.)


This will modify the timecode of the audio file so it can then be matched up automatically. You may be wondering why automatic matching is so important when you could just manually sync clips in a sequence. There is an important reason for this that will be clear later on.
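The arithmetic behind this slate-sync trick is simple: the audio file's new start timecode is the video's slate timecode minus the clap's offset from the start of the audio file. A minimal sketch, assuming non-drop-frame timecode at a known frame rate (24 fps used here for illustration):

```python
FPS = 24  # assumed project frame rate

def tc_to_frames(tc, fps=FPS):
    """Convert "HH:MM:SS:FF" non-drop-frame timecode to a frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_to_tc(frames, fps=FPS):
    """Convert a frame count back to "HH:MM:SS:FF"."""
    f = frames % fps
    s = frames // fps
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}:{f:02d}"

def audio_start_tc(video_slate_tc, clap_offset_frames, fps=FPS):
    """New start timecode for the audio file so that its clap frame
    lands on the video's slate frame."""
    return frames_to_tc(tc_to_frames(video_slate_tc, fps) - clap_offset_frames, fps)
```

For example, if the slate hits at 01:00:10:00 in the video and the clap sits 10 seconds (240 frames at 24 fps) into the audio file, the audio's start timecode becomes 01:00:00:00.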

Now select all the video clips then right-click and select Create Timeline Using Selected Clips.

Step 3: Grading

Open the timeline then switch to the Color tab and grade the clips as you normally would.

Step 4: Render

In the Deliver tab, choose the Individual source clips option under Render timeline as and Use source filename under Save as. This will export each clip in the sequence as an individual movie file with the same filename as its original source file - this is important to make it easy to reconnect back to the high-res source files later. Because these are offline clips we're rendering as ProRes Proxy to keep file sizes small but keeping the resolution the same as the source files.



This is why it was necessary to auto-sync the clips in the earlier step. I could find no way to manually sync audio clips and then link the audio back to the original source file. That synchronization will only exist in the sequence itself and is ignored if you choose the Individual Source Clips option.

Mark in and out points on the timeline at the bottom of the Deliver page to make sure it's going to render out all of the clips, then click Add to Queue. It's easiest to queue up lots of sequences and render them all out in one go.

Step 5: Edit

Import the rendered proxy files into the NLE of your choice and edit.

Step 6: Roundtripping back to Resolve

After editorial, export an XML from your NLE and reimport back into your Resolve project. (With Avid you need to export an AAF and things become a bit more complicated but this is covered in the user manual.)

On the Load XML dialog, deselect Automatically import clips into media pool (because they already exist in the media pool) and deselect Use color information if you edited in FCPX. Then click Ok.

Resolve should present you with a timeline from your NLE, however often things will not translate fully and need to be fixed. A great way to do this is to render out the full sequence from your NLE and then navigate to it in the media pool's browser. Right-click the file and select Add as Offline Reference Clip. Then right-click the timeline in the media pool window and select Timelines > Link Offline Reference Clip and choose the clip you just added.

Switch to the Edit pane and click the icon that looks like a filmstrip underneath the left-hand viewer. Choose Offline and Resolve will show the file you rendered from your NLE. You can then scrub through or play your timeline and it will show the reference clip alongside the corresponding frame of your timeline so you can compare them.

If any clips are offline you can right-click the timeline in the browser and select Timelines > Reconform from Bin(s), then select the bins with your source media. If the clips still won't reconnect, select the relevant clip in the media pool then right-click the offline clip in the timeline and choose Force Conform with Selected Media Pool Clip.

(At this point you may want to media-manage the timeline onto another drive to save disk space but I opted not to.)

Now go to the Color tab. If you don't see the grades you did previously, select all of the clips (you may need to click the Clips button at the top to see them) then right-click and choose Use Remote Grades (you may need to right-click again and choose Update All Thumbnails to see the changes).

Because you switched off local grades by default at the start of the project your grades were remote, which means they will stick across different timelines and if you adjust the grade of a clip, any other copies of it on your timeline (and throughout the project) will also be updated. In some cases this may not be desired, so you can right-click and choose Copy Remote Grades to Local so that your changes only apply to that specific instance of the clip.

Step 7: Sending back to the NLE

After grading you'll probably need to send it back to your NLE again for titling and syncing with the finished audio. You can do this one of two ways: export each clip individually like in Step 4 and then reconnect in your NLE (media managing before doing so will help a lot) or render out a single QuickTime file of the entire timeline. If you don't expect many editorial changes at this point the latter is simpler, which is what I opted for.


          Innovative Laser Light to Stimulate Fundamental Physics Research        
By deploying extremely short and highly intense laser light pulses, scientists have been able to make significant strides in their attempts to watch and control particle motions outside the confines of atomic nuclei. This will enable data-processing operations performed at frequencies equivalent to the rate of visible-light oscillation, around 100,000 times faster than is feasible with current techniques. To make this happen, advances in laser technology are necessary. Physicists at the Lab for Attosecond Physics (LAP) at the Max Planck Institute of Quantum Optics (MPQ) have created a novel light source which helps to bridge the gap to optoelectronics. The team has described the new instrument in the journal Nature Communications.

A majority of the lasers used in research labs are based on titanium-sapphire crystals, the kind of instrument that has been the dominant tool for producing ultrashort light pulses for the past 20 years. However, this situation will most probably change soon: there are indications that rivals using rod- or slab-based crystal techniques will overtake them.

Physicists are at present able to control the waveform of the emitted pulses with considerable precision, and the new system extends this capacity even further. Such exquisite control of the temporal shape of electromagnetic fields is indispensable, as it makes it possible to switch electron flows in a very compact manner and within single atoms, which is the aim of optoelectronics.

Original Post Innovative Laser Light to Stimulate Fundamental Physics Research source Twease
          In Depth Look At The Canon C300 Mark II – An Operators Point of View        

There’s a lot of information starting to circulate on the Canon C300 Mark II as it finds its way into the hands of owners. I’ve had one in for the last few days and wanted to share some fresh thoughts as a working operator.

We know Canon has a good track record for color rendition, great single operator ergonomics and have all heard about that revolutionary new auto focus system.
But how well does it actually function as a camera? How well are the new features implemented, and how intuitive is the menu? I spent a few days with two Canon C300 Mark IIs; in this article I will share my findings from a working operator's point of view.
Let's start this off by first saying this is not an official review. Having only had two cameras with me for a few days (most of which was spent on the job/in transit) I haven't spent anywhere near the amount of time it would take to cast a full review-worthy opinion.
That said, I spent the little time I had spare with the cameras productively inspecting every aspect of the menu system and button layout, as if it were my own.
I’ve been looking for an in-house camera to replace my Canon C100 mark I for a while. Like many, the Sony FS7 and the Canon C300 Mark II are close to the top of my list. And like many, I’ve been trying to justify where the over 100% increase in cost comes with the Canon.
I’m not going to go over the same information that you can already find on the Internet. The spec can be found here, here’s the manual, here’s what we found when we did a lab analysis of the image and yes, the auto focus system is very impressive and will revolutionize the way many shoot.
So let's get started, and to kick things off I'll start with what I liked about the C300 Mark II.

Positives
1. Focus Guide
OK, so let's give this feature a little more credit than a passing comment at the end of a paragraph. The auto focus system is great, and if you're familiar with the old Dual Pixel AF you'll appreciate its potential.
I want to highlight the new Focus Guide feature. I'd heard about it a little in passing comments, but using it for 2 days on a shoot I can really see its worth.
It's a by-product of the intuitive Dual Pixel AF system. Activating it gives you a little square on screen that signifies the area the system is monitoring for focus.
Turning the focus wheel on your AF EF mount lens will display two markers that move in an arc shaped dial. When the focus is on point the box will go green.

What's great about this is the fact that when activated you can tell which way you need to turn the lens barrel to hit focus (markers sit above or below the box depending on fore/aft focus correction).
I found this really helpful when in an interview environment and wanting to re-adjust focus, hunting would ruin the shot but with this feature you know exactly which way to adjust.
Canon have Focus Guide set to the custom function button on the handle, and once activated you can use the joystick to navigate to your desired position on the screen.
2. 12bit 4444
This was one of the reasons I got these cameras in for the job, and it's a feature that has been skipped over a little since the announcement.
Many (including myself) have spent their time complaining about the lack of 4K 60p, but the inclusion of FullHD/2K 12bit 4444 is a very welcomed feature.
It's great for green screen work and a feature I'll probably use more myself than 4K 60p, although I still think it's a huge oversight to exclude higher 4K framerates and would like both features in my camera, please.
Here's a shot from my shoot; key away to your heart's content (TIFF download at bottom of page).
JPEG frame of 12bit 4444 image from C300 Mark II. ISO 800, C log 2
3. Custom Marker Displays
Sifting through the menu system I found this a nice feature, custom markers displays.
You have your standard grid lines, centre crosshair and aspect ratios. But navigate further down and you'll find a custom button for your own aspect ratio.
Custom Aspect Ratio Markers
4. Custom Options For Peaking/WFM/Zebra Outputs
There has been an overall improvement in terms of onscreen displays. Many complained that the mark I EOS cameras (both C100 and C300) didn’t display the waveform monitor on outputs (SDI, HDMI and even the inbuilt EVF) but you can now choose which output you wish to view WFM on (and which you don’t) and this translates to all assisting display features; WFM, peaking, Zebras etc.
Output options for peaking, same for zebras and WFM
5. WFM Displays Log When Viewing LUT
An overlooked feature in many existing cameras and monitors is the ability to monitor what you are recording, not viewing.
When you are shooting LOG, but viewing a LUT you’ll know by the color shift of the menu and a little overlay on the screen (not to mention the fact that the contrast and colour of your image will change) but the most important aspect of this workflow is that your Waveform Monitor adheres to the recorded log, not the viewed LUT.
Here is an image below of the camera recording in Canon C Log 2 but viewing a BT.709 LUT; notice the WFM with a little overlay explaining its reference to log, nice.
WFM displays log exposure whilst displaying LUT
6. LUT & OSD In Assignable Buttons Menu
Another LUT/log related feature I like is the inclusion of LUT enable in the assignable buttons menu.
This means you can switch your LUT on/off with the press of a button of your choice; I also found the inclusion of OSD (on-screen displays) very handy in this list.
On set I can often find myself referring to bigger/better monitors for the most part of operation (compared to camera screen), which can often be shared by an output recording and/or viewed by other crew members and/or the client.
For both these reasons it is not possible to have your OSD on this output, so you often have to switch back to your inferior on-camera monitor to check settings before recording again.
OSD and LUT good to have in Assignable Buttons menu
Having the OSD on/off by one button is a really nice way to quickly check them on your output before hitting record; the same applies to switching between your LUT and your log.
It’s a shame I have to talk about this sort of feature as a reason I like a specific camera, I feel every camera should have every feasible feature accessible via assignable buttons as it completely personalizes the user experience.
Unfortunately this is not the case for the C300 Mark II as with 90% of other cameras on the market; not all features are available on the assignable buttons menu.
7. Dual Level Control For Single Audio Source
The audio options on the camera aren't too surprising; 24-bit audio is nice and you can change all settings in the designated audio menu, most of them physically on the audio/monitor module.
Like previous EOS cameras you have the ability to choose what Channel 2 records (its own channel or a clone of Channel 1). Cloning Channel 1 still gives you independent control on Channel 2, however.
This feature may have been on the C300 mark I, so forgive me if this is old news but as a C100 Mark I user I found this feature handy.
Single audio source for both channels, independent channel control
It means that you can record a source on Channel 1 and set your levels, then clone the input to Channel 2 set a few dB lower for redundancy, in case there's a spike in your input and you haven't time to adjust your levels.

These are just a few of the features I selected as things I liked about the C300 Mark II that you wouldn’t find in usual online information. You have to experience the camera first hand to discover the little gems hidden in the menus or work out how the software works for a given feature.
Speaking generally about the camera I am a big fan of the ergonomics, like previous EOS cameras it sits well in the hand (despite being a little on the heavier side). I really like the proxy recording to SD cards too, as well as the prospect of using a more advanced Dual Pixel AF for gimbal use (more on that below).
That said there’s also many things I don’t like about the camera. I’m a bit of a pessimist with kit, I always expect more from it, whether it’s a tripod, monitor, camera etc.
I therefore wasn’t surprised that I spent most of the two days that I was shooting with the C300 Mark II complaining about a lack of this & that, the placement of that button, the hard to find feature buried in the menu etc.
The lack of 4K 60p is a big blow for me (like many), the price is also another big factor, but keeping this article on track here’s some specific quirks I found that I wasn’t immediately expecting.

Negatives
1. Dual Pixel AF doesn’t work in Slowmo Mode
This is perhaps something you could have predicted ahead of working it out for yourself, but just so you know Dual Pixel Autofocus does not work in slow motion mode.
I guess this is down to the additional processing involved when sampling more frames and processing AF information.
That doesn’t mean you can’t achieve off speed auto focus however. The camera shoots up to 50/60p in normal mode (contrast to slow motion mode where the camera conforms higher framerates to your working framerate) and Dual Pixel AF works in this mode.
As with many other cameras with high framerate options, audio is also disabled in slow motion mode.
2. No Shortcut To Recording Mode
I found this very odd, and I'm still scratching my head thinking, “am I doing this right, there's surely a way?”, but you can't access slow motion via a single button press.
Sony has had this feature on the button for a long time (FS100 comes to mind but probably before that also). You press S&Q, it goes into slow mode and you can select your framerate, simple.
Not on the C300 Mark II: you have to go into the menu and change the recording mode to slow motion.
Slow motion function buried in the menu
Once in this mode, there’s a specific button on the side that when pressed you can choose your desired framerate; out of the box, this button serves no other purpose (it’s an assignable button also so can be configured to anything on the custom function menu).
Why it doesn't activate slow motion mode first is beyond me, and to top it off Recording Mode doesn't feature in the assignable buttons menu, so you can't set the shortcut manually.
The closest you can get is placing Recording Mode at the top of your My Menu (a custom menu that allows you to add pretty much any feature to it), and assigning My Menu to one of the custom function buttons.
2. No Custom Look Up Tables
This was a feature I just assumed was in the camera, but at the time of writing this the Canon C300 Mark II does not support custom LUTs.
There are 4 LUTs installed on the camera: BT.709, BT.2020, DCI and ACESproxy10.

I can see exactly what they’ve done here, they’ve included relevant LUTs to correctly display gamma on various different monitors, both current and future. I’m sure Canon will argue this is the correct implementation of Look Up Tables.
I feel this is the bare minimum however; LUTs have evolved from this in recent times. They’re more than just a profile to display a correct image.
Both the Arri Amira and Sony FS7 support custom load LUTs and provide a decent stock LUT pre-loaded on the camera.
You can record the Rec 709 LUT on the Amira and the LC (low contrast) Rec 709 LUT on the Sony FS7 and yield results that are good out of the box to look at and record to, with a little room for tweaking in post.
The advantage of this over just selecting an appropriate picture profile is that you can still record log to the internal cards/external recorder as a backup should you require extra post processing for incorrectly exposed/white-balanced images, or if you wanted to revisit a project when you have more time and apply a more dynamic grade.
I’ve gotten used to recording dual system on larger projects: log to internal cards and a LUT to an external recorder. It’s great for the above reasons plus puts your rushes in a ballpark as to where they should be for the grade, which is useful if handing over files to a different editor.
In my opinion, you cannot do this with the in built LUTs on the C300 Mark II. It’s clear they are designed merely to correct the gamma curve for correct viewing.
I experimented with all of them, the BT.709 most of all. I found it too rich in contrast and it aliased fine detail very quickly, therefore unfit for recording.
There are of course workarounds: use LUTs as they were originally intended and shoot with a single, correct picture profile to start with, and/or loop out to a monitor that supports custom LUTs.

3. Waveform Monitor Cannot Be Re-Positioned
I thought Panasonic were really onto something with their move-at-the-touch-of-a-finger waveform monitor on the GH4.
A waveform monitor (WFM) takes up a lot of real estate on the screen. The inability to move it around was something I felt was missing on the C100/C300 Mark I (it's big and actually covers up the audio levels on the C100), and it was a shame to find there was no change on the C300 Mark II.
The waveform monitor when active resides half way up on the right hand side of the screen. You can’t move it, only turn it off. No biggie, maybe Panasonic and SmallHD spoilt us (you can move it on the 502, too).
4. Monitor Out/HDMI Out Are Grouped
The Canon C300 Mark II has two SDI outputs, one HDMI port and one proprietary video connection for the monitor module.
I find it a little frustrating that the HDMI and two SDI ports are not simply three independent outputs; one SDI is labeled Rec Out, the other Monitor Out.
Only the Rec Out SDI can display OSDs (on-screen displays), and the HDMI port is paired with it. This means if you want OSDs on SDI 1 (the Monitor Out), they're on the HDMI feed also.
There’s usually a work around with your monitors, especially if more than one of them can cross convert signals. But working out which monitor should go on what monitor/recorder is brainpower on set that I’d rather spend on something else. The Sony FS7 annoyingly is similar in grouping outputs.
I’d prefer a stand-alone menu that labels each of the 4 outputs independently (SDI 1 & 2, HDMI and camera monitor) and you simply select which ones are active, feature OSDs and LUTs.
You’ll be delighted to know that all these features are hidden on different menus on the C300 Mark II. LUTs are buried in with exposure/focus assist, OSDs and port activation on another.

5. Perimeter Viewing Mode Impacts Other Outputs
This is an extension of the former point; due to grouping certain outputs you can’t use Perimeter View on the camera monitor and receive a clean feed on HDMI or SDI 1.
Perimeter View mode is a lovely feature on the C300 Mark II that shifts all-important overlays to the edge of your screen.

This creates a nice un-interrupted view of your image on the same screen that is displaying your OSD.
However when active it sends a letterboxed image to SDI 1 (Monitor Out) and HDMI even when OSDs are turned off; a problem if you want to record via the HDMI and use this mode.
6. The Battery Charger Is Huge?!
I won't get too hung up on this as the batteries have doubled in voltage to 14.4V and the unit charges two simultaneously, but the size is a consideration for travelling shooters. The charger is a chunky slab and, like the C300 Mark I, there's a separate AC adaptor unit. Here's the charger next to the popular LP-E6 battery charger for Canon DSLRs.

C300 Mark II And The Movi M5

The C300 Mark II makes a fantastic gimbal camera. The Dual Pixel AF provides face tracking detection, as well as movable zonal AF (select the position of your AF zone by toggling the white box around the screen). It’s a big upgrade from the C100/C300 Mark I that only offers a static, small centre portion of the screen as an active AF zone.
For those wondering about whether the C300 Mark II fits on the Movi M5, the answer is both yes and no.
The listed payload for the Movi M5 is 5 lbs, however it's been documented in the past as being physically capable of more, capped by fewer components than you may think (it shares the same motors as the larger M10).
Here’s the C300 Mark II with the 16-35mm f/4 IS lens, single Cfast card and Really Right Stuff EOS C Arca Swiss plate. Converted that’s 6.2 lbs., 5.8lbs if you go with the BP-A30 battery.
The Movi seems to take the weight just fine, balancing and tuning took little effort.
The problem lies in the physical size of the M5. The protruding EVF of the C300 Mark II clips the back of the gimbal when tilting:
Unfortunately, unlike the C300 Mark I, the EVF is not quick release. I opened it up to have a look at how easy it is to remove and it involves detaching two ribbon cables, so not ideal.
I’d consider the option of extending the Movi cradle arms a little forward. You’d need less than an inch to make this setup work.
The smaller BP-A30 battery also fits the profile of the camera body so won't protrude like the above BP-A60. Unfortunately this was the only battery size the hire company supplied.
Of course smaller, lighter lenses would mean the camera sits further forward therefore creating more room at the rear. I’m sure a Pancake lens like the 24mm STM would result in a fuss free fit.
I’d say cine lenses are out of bounds for the M5 and C300 Mark II; the camera would sit too far back and it would likely enter nervy territory extending the arms of the Movi that far forward.
In order to get the top cage of the M5 to fit as pictured you need to upgrade the 8” rods to 10” rods. These give you enough height to clear the top of the camera (you need these for all EOS cinema cameras including both C100s).

Image
For me operation is a huge factor when considering a camera; it's one so many people overlook, and as a result there's less online information about this side of things (hence this article).
With this said, I didn’t have much time to spend testing the image side of things for the public eye.
I plugged it into my Atomos Shogun; on the latest firmware the C300 Mark II is listed in compatible raw cameras for the recorder.
It indeed accepted a 4K raw feed just fine, below is a JPEG of a frame grab taken from the ProRes HQ recording of the Shogun. C-Log 2 ISO 800, F/2.8. TIFF download at the bottom of the page.

And here’s a very quick test looking at the crop factor of the slow motion mode.
Standard Shooting Mode (200mm focal length):

Slow motion Mode (x2 crop of 200mm focal length):

And here’s Standard Shooting Mode at 100mm to compare the two for sharpness:

And at 400%, slow motion mode (100mm at x2 crop):

400% Standard Shooting Mode (200mm):

It's a very brief and unscientific test (100 & 200mm focal lengths on a zoom lens will have discrepancies) but I wanted to see roughly how close/far out they were. Settings for all are C-log 2, ISO 800, f/2.8 on a 70-200 f/2.8 IS II.
You can see a slight dip in quality on the 100p frame at 400%, and technically it should be sharper at 100mm compared to 200mm.
Post for all stills was: import into Premiere Pro CC, export to TIFF, then convert to 100% JPEG in Lightroom. Here is the download link for all related TIFF files:
download link
I’d like to do some more testing on the C300 Mark II particularly in terms of the image. Real world slow motion and dynamic range would be top of my list as well as some Movi M5 and Dual Pixel AF fun.
Due to our extensive lab test for dynamic range that you can check out here, image quality wasn’t on the top of my list for tests with the short period I had the cameras.

Spending a few days with the camera hasn't provided my quest for a new camera body with any more clarity, but it's answered a few questions surrounding Canon's latest offering.
I hope you found my thoughts of some use, if you have any questions regarding the C300 Mark II please ask below and I’ll do my best to put my 4 day experience with the camera to some use.
If you’ve already used the camera and have noticed any quirks or gems of your own, please share them. It would be great to hear some alternative views on what is becoming a very talked about camera.

The post In Depth Look At The Canon C300 Mark II – An Operators Point of View appeared first on cinema5D.


          Let The Invasion Commence [Part 10]        
1st October - 8th October

The week was really quiet on the Let's Invade project, apart from the weekend, where a lot of activity happened on this game. I was hunting for some handy information on generating random bullet firing for the aliens. Well, it turned out very badly last week, where I really got stuck. This week however was a completely different kettle of fish ... for the better too :)

I was thinking that the way I did the alien bullet random position subroutine shouldn't be based on wrapping in memory the low and hi-bytes of the alien colour, position, and sprite X position. Instead, it would be better to generate a 256-byte table of values ranging from 1 to 25. So I wrote a simple BASIC listing to place a series of random values in memory $1000-$1100, saved the table, and extracted the generated file (RandTable.PRG) to the KickAssembler project folder. Next I added a command to import the C64 file for assembling.

The file imported into the source okay; however, I needed to write some more code to loop through the values of 1 to 25. So a couple of subroutines were called in: one for moving to the NEXT value read from the random table (RandTable.PRG) and storing it to the actual pointer, then a subroutine that fetches from the table and checks which alien the bullet should be placed on. I also created a check that commands an alien not to drop a bullet if it is dead. I had a better result, but for some strange reason the bullets appeared from the upper border. I played around with the subroutine to work out what was causing this problem, then set up a boundary for the bullet: basically, if the bullet tries to leave the top border and enter the screen, zero the X + Y position of it. That solved the problem.

Another small bug occurred in the game. After finishing the first game, I tested the pause/quit function to ensure the game works as it should. Unfortunately the game didn't quit, but caused a CPU JAM in VICE (a crash). So I did a little bug fix to sort out the problem, and the game then seemed to be working okay. It looks as if the game could now be finished, but I'll see what problems the testers find.

While I was waiting for results, I composed some loading music for the game's tape loader system (Thunderload). Since the theme of the game is 'Trance' based, the loading music ended up at a pumping speed. The drums, however, didn't use the filtered kick and snare; instead I set a different waveform to use a filter. The result turned out quite nicely. I have also been working on making a tape master for the game, and I am hoping that the result will turn out pretty good in the end. There was a slight error with the tape mastering, where I placed the loading text in the wrong place, but everything else worked quite nicely. That bug can easily be fixed, no problem. :)

I'm not ready to release the game just yet. Hopefully SOON!




          Comment on Steve Roach: The Delicate Forever (CD) by Reviews Editor        
From <a href="http://expose.org/index.php/articles/display/steve-roach-the-delicate-forever-13.html" target="_blank" rel="nofollow">Exposé</a> Where are we? Some subterranean grotto? An alien garden? In the hands of ambient artist Steve Roach it’s all up to the listener’s imagination. Roach – with umpteen releases under his belt – has had a few years to explore, stretch, and hone his craft. Lately he’s been trending toward greater use of analog gear, which makes sense given that he started out as a pioneering electro-acoustic artist way back when. Not that you can tell the difference, anyway. In the hands of someone this well versed in waveforms, filters, and a mixing console, he could probably put a microphone up to my cat and the results would be just as good. As for the specifics of this release, it’s typically mammoth, comprising five pieces ranging from over nine minutes on the low end up to the title piece at almost 25 minutes. Each segues into the next, so it really feels like one massive, 75-minute journey. And there’s enough variation and development that the hand of the creator is almost always present. “The Well Spring” evokes old-school electronica via sequenced synth melodies, whereas pieces like the title track or “Perfect Sky” take a more organic approach with irregular accents that burble or skitter across them. Ingeniously, the first and final tracks are mirrors, so the entire thing can be put on continuous, seamless replay. Whether it’s enjoyed for itself, or as an aid to meditation, relaxation or study, Roach has delivered yet another in a long series of sublime and expertly crafted sonic excursions. -Paul Hightower
          “Great” Hardware Design in a Wireless World        

As part three of the “Making Hardware Design Great Again” series, let’s see how high-level synthesis (HLS) is helping designers create SoCs for WiFi, Bluetooth, and 5G.

A common challenge in all three wireless spaces is that their standards are evolving… rapidly. Each new specification, or sub-specification, opens new opportunities. It's a chance for companies to establish themselves as providers for the new standard. On the flip side, it can be a negative opportunity for the current leaders if they give up their leadership position by not being among the first providers.

For example, let’s look at Wi-Fi. As of April 2017, IEEE has eight additional 802.11 projects (evolving standards) underway. Each of these presents opportunities, good or bad, for SoC providers.

IEEE Project          Enhancement                    Expected Date
IEEE Std P802.11ba    Wake Up Radio                  July 2020
IEEE Std P802.11az    Next Generation Positioning    March 2021
IEEE Std P802.11ay    Next Generation 60GHz          November 2019
IEEE Std P802.11ax    High Efficiency WLAN           July 2019
IEEE Std P802.11ak    General Link                   November 2017
IEEE Std P802.11aq    Pre-Association Discovery      August 2017
IEEE Std P802.11aj    China Millimeter Wave          December 2017
IEEE Std P802.11ah    Sub 1 GHz                      December 2016

(Source: IEEE, http://www.ieee802.org/11/Reports/802.11_Timelines.htm)

Similarly, Bluetooth is undergoing rapid development. Bluetooth 5 was just adopted in December.

5G is still in the early phases, but targeting a much-anticipated 2020 rollout. To that end, 3GPP anticipates a functional freeze for the 5G standards, with stable protocols, in September 2018.

For now, it’s the wild wild west when it comes to determining which protocols and waveforms will be used in 5G. A Google search about this will give you all sorts of 5G information. If you prefer an “all-in-one” collection, I have found Signal Processing for 5G: Algorithms and Implementations, edited by Fa-Long Luo and Charlie Zhang to be a good resource.

(Interesting note from Wikipedia: With its hit song “Wild Wild West,” The Escape Club became the only British artist to achieve number 1 on the Billboard charts in the U.S. but never make the charts in the U.K. Who knew?)

How does HLS help make hardware design great for emerging wireless standards?

It’s tempting (and easy) to suggest that it’s all about a productivity boost, the implication being that HLS-based design and verification enables a shorter turnaround time from specification to hardware. But when you talk to these designers, you realize that’s not the actual value…

So let’s go back to the our five design steps once again, and I’ll relate some anecdotal data.

1)      Get requirements from management and marketing

In this case, everyone knows the requirements are most likely going to change, so this isn’t a surprise.

In some cases, part of the requirement is to produce an SoC in advance of the final standard. Sometimes this is simply a business requirement, as in "we shall be first to market." In other cases, it's even more interesting: hardware developed in the early phases of a new standard is done, in part, to help advance the standardization process. It moves the process forward from theoretical analysis to empirical analysis.

2)      Understand and clarify the requirements

When the early hardware is part of the standardization process, the design team may be explicitly asked to make (and document) pragmatic decisions required to flesh out the hardware. For example, they may need to fill holes in the standard, or consider multiple options.

By working with high-level models (SystemC, C, C++), the design team can analyze a broad spectrum of options in detail, getting hard data on each. Feedback from this design analysis directly or indirectly becomes feedback into the standards body. This feedback is especially valuable when backed with empirical data from the next two steps…

3)      Consider multiple options to implement the hardware

…and…

4)      Prototype the best options

In some cases, the empirical feedback on the options is the highest value deliverable, as it helps steer the standardization process.

5)      Implement hardware

As I previously explained, after the above steps you already have an implementation that can be committed to hardware. That also means you already have an implementation with which the verification team can begin work. In the case of hardware design for an emerging (pre-ratification) standard, the likelihood of a significant change to requirements is high… very high.

To make it worse, that “lightning bolt of change” can hit at any moment, and it’s likely it will hit more than once. That is, multiple changes throughout this process.

By using high-level synthesizable models throughout the process, from design analysis through implementation, these inevitable changes become manageable. Or at least, they are more manageable than before. Make the changes in the original model, and then “turn the crank” with HLS to synthesize one or more implementations.

THAT, I’m told, is the true value of HLS when designing hardware for emerging standards. The entire design and verification process becomes more nimble.

Final word

This three-part series focused on how hardware design is better when using high-level models and high-level synthesis. It produces better results, and it is also more fun than coding thousands of lines of RTL.

I have one other suggestion for some real fun: put on your headphones and click on The Escape Club's “Wild Wild West” music video. Allow yourself four minutes and five seconds of nostalgia during your otherwise hectic day.


          RE: How to probe and view the variable wave in class?        

Class variables are "automatic variables." This means that they appear when a class object is created and disappear when the object is reclaimed. The SimVision databases aren't set up for that kind of object; they expect static objects (ones that exist throughout the simulation).

So, in the current releases of SimVision, you can't save a waveform of a class object.


Originally posted in cdnusers.org by TAM

          NAudio Tutorial 5 - Recording Audio        
Time for another installment of the NAudio tutorials; this week we will be looking at how to record audio using NAudio in two different scenarios. The first is using NAudio to record any and all sound coming from the local sound card input, whether that be from a microphone, the line-in device, or the sound card's on-board wave mixer. The second approach is recording only the audio that has been mixed by NAudio, regardless of what other audio is being played on the system at the time. This is useful for scenarios where you want to play over a backing track, or play your samples against a click track from another program but don't want to record the click track. The additional advantage of recording the audio mixed directly by NAudio is that there is zero degradation in quality through the process: no audio playing means pure silence, rather than the almost-silence your average audio hardware would produce, since there is always some level of noise when working with an analog signal.

This NAudio recording tutorial builds upon the concepts presented in previous NAudio tutorials; if you haven't yet had the opportunity to review them, I suggest you venture there first and resume reading this tutorial after you have understood the basic NAudio concepts.

http://opensebj.blogspot.com/2009/02/introduction-to-using-naudio.html
http://opensebj.blogspot.com/2009/02/naudio-tutorial-2-mixing-multiple-wave.html
http://opensebj.blogspot.com/2009/03/naudio-tutorial-3-sample-properties.html
http://opensebj.blogspot.com/2009/03/naudio-tutorials-minor-note.html
http://opensebj.blogspot.com/2009/03/naudio-tutorial-4-sample-reversing.html

Time for another disclaimer: the second approach discussed here, recording the mix directly from NAudio, has been suggested as a feature for inclusion in the main branch. I'm not sure if it fits into the long-term direction for the WaveMixerStream32 class; in any case, the code for these modifications has been included in this tutorial and, thanks to the open-source nature of NAudio, you can make these same changes to an instance of the library yourself. You can find the specific details of this suggestion in this forum post:

http://naudio.codeplex.com/Thread/View.aspx?ThreadId=52296

If you have any feedback on this tutorial, drop me a line or post a question in the comments section below.

Download the full article (AbiWord and RTF Format), example C#.Net Source Code and tutorial program here.

Recording from the Sound Card

This is remarkably simple to achieve in NAudio, short of having a big red button which we push before it leaves the factory. First step is to set up... ah, forget the steps, here is the code:

// WaveIn Streams for recording
WaveIn waveInStream;
WaveFileWriter writer;

waveInStream = new WaveIn(44100,2);
writer = new WaveFileWriter(outputFilename, waveInStream.WaveFormat);

waveInStream.DataAvailable += new EventHandler<WaveInEventArgs>(waveInStream_DataAvailable);
waveInStream.StartRecording();

No joke, that's almost it. The only interesting thing here is that we have added an EventHandler that needs to be set up to handle data when it's ready to be handed off to the WaveFileWriter:

void waveInStream_DataAvailable(object sender, WaveInEventArgs e)
{
   writer.WriteData(e.Buffer, 0, e.BytesRecorded);
}

Er, that's it to start recording. We can stop the recording like so:

waveInStream.StopRecording();
waveInStream.Dispose();
waveInStream = null;
writer.Close();
writer = null;

See, it would have been too simple a tutorial if we stopped here, but feel free to stop reading and give it a crack. Using this method, any audio which isn't muted on your input mixer will be recorded; it's up to you and the Windows mixer API to decide what you want to record. Isn't that nice? Except that you can't record audio from only your audio application if other applications are playing sounds in the background: say you get a call on your VOIP connection right in the middle of the hottest composition ever, or someone PMs you in IRC, or you click around on your PC looking for that cool new sample to load, with all the button clicks and other useless sounds being saved into your mixed composition - oh no. Let's now look at how this unfortunate situation can be avoided.

Direct-To-Disk Recording via the NAudio WaveMixerStream32 Class

Now this is slightly more complicated, but much more fun, and presents you with a superior audio recording (especially on lousy or average audio hardware, say my PC for instance). We will cover the code required within the calling application first, and then review the changes required within the NAudio library, just so we have some comparison of the amount of effort required for both approaches.

mixer.StreamMixToDisk(outputFilename);
mixer.StartStreamingToDisk();

Assuming you already have the mixer defined, that's all that is required to start recording. We can pause the streaming to disk with:

mixer.PauseStreamingToDisk();

Or resume by:

mixer.ResumeStreamingToDisk();

Finally stopping by:

mixer.StopStreamingToDisk();

Easy enough, but we should cover what's required for this to actually work, right? So let's dive into the modifications in the WaveMixerStream32.cs file and hack till our hearts are content. In the declaration section of the class we need to add the following:

// Declarations to support the streamToDisk recording methodology
private bool streamToDisk;
private string streamToDiskFileName;
WaveFileWriter writer;

Now we add in the methods that support our calls:

/// <summary>
/// Starts the Stream To Disk recording if a file name to save the stream to has been set up
/// </summary>
public void StartStreamingToDisk()
{
   if (!string.IsNullOrEmpty(streamToDiskFileName))
   {
       streamToDisk = true;
   }
}

/// <summary>
/// Pauses the stream to disk recording (no further blocks are written during the mixing)
/// </summary>
public void PauseStreamingToDisk()
{
   streamToDisk = false;
}

/// <summary>
/// Resume streaming to disk
/// </summary>
public void ResumeStreamingToDisk()
{
   streamToDisk = true;
}

/// <summary>
/// Stop the streaming to disk and clean up
/// </summary>
public void StopStreamingToDisk()
{
   streamToDisk = false;
   writer.Close();
}

/// <summary>
/// Set up the StreamMixToDisk file and initialise the WaveFileWriter
/// </summary>
/// <param name="FileName">FileName to save the mixed stream</param>
public void StreamMixToDisk(string FileName)
{
   streamToDiskFileName = FileName;
   writer = new WaveFileWriter(FileName, this.WaveFormat);
}

/// <summary>
/// Uses the final set of data passed through in the overridden Read method to also be passed to the WaveFileWriter
/// </summary>
/// <param name="buffer">Data to be written</param>
/// <param name="offset">The Offset, should be 0 as we are taking the mixed data to write and want it all</param>
/// <param name="count">The total count of all the mixed data in the buffer</param>
private void WriteMixStreamOut(byte[] buffer, int offset, int count)
{
   // Write the data to the file
   writer.WriteData(buffer, offset, count);
}

All that's left is the modification to the Read method to pass the data to the WriteMixStreamOut method. Rather than pasting in the whole Read method, even though it may make it look like I've done some extra work, I'll just copy in the last 8 or so lines:

position += count;
// If streamToDisk has been enabled the mixed audio will be streamed directly to a wave file, so we need to send the data to the wave file writer
if (streamToDisk)
{
   WriteMixStreamOut(readBuffer, 0, count);
}
return count;
}

Having jammed in the check for streaming out to disk, after the final calculation and before the method is exited, gives us everything we need to stream to our file. So now we have two methods of recording audio data, and you want to know what my favorite part is?

You can actually use both at the same time and get multi-track / multi-channel audio recording on the same machine with a fairly standard sound card!

I normally refrain from using exclamation points, but I was actually quite excited when I tested this. It means that someone can be jamming along on, say, a C# audio synthesizer / beat box or composition tool like OpenSebJ while another person is singing vocals or playing in a guitar riff through line-in. I guess if you're really talented you could be doing both at the same time; perhaps singing to the jam is more likely. Whatever it is, it can actually work: you can record both sets of audio separately, because the NAudio stream-to-disk method is not actually using your sound card to save the mixed result. Cool, well I think so.

Download the example program and have a look for yourself.






Conclusion

As per usual, I've packaged up a copy of the entire article, along with a copy of the example program and source, for your consumption. For the modifications required to the NAudio library, I have also copied into the zip the modified version of WaveMixerStream32.cs for your convenience. Let me know if you have any questions or comments, or if you're keen to contribute to a project like OpenSebJ.

Until next time, when we look at - well, I haven't actually decided yet. There are two things on the list from Tutorial 3; however, I don't think they are currently the items piquing my interest, so let's assume it will most likely be something from the list below:


  •  Adding Audio Effects to a Stream
  •  Transposing the frequency of the stream being played back
  •  Using MIDI to trigger audio samples
  •  Playing compressed Audio (MP3 & OGG)
  •  Or something else that takes my fancy; write to me and suggest what that may be.



If you haven't already, download the full article (AbiWord and RTF Format), example C#.Net Source Code and tutorial program here.
          NAudio Tutorial 4 - Sample Reversing        
Welcome to the next edition of the NAudio Tutorials series. In this tutorial we will be looking at how a sample can be reversed and played back.

This tutorial builds upon the previous tutorials; if you haven't had a chance to review them, I suggest that you read them first before attempting this tutorial:

http://opensebj.blogspot.com/2009/02/introduction-to-using-naudio.html
http://opensebj.blogspot.com/2009/02/naudio-tutorial-2-mixing-multiple-wave.html
http://opensebj.blogspot.com/2009/03/naudio-tutorial-3-sample-properties.html
http://opensebj.blogspot.com/2009/03/naudio-tutorials-minor-note.html

A bit of a disclaimer for this approach: we will be overriding the Read method to achieve playback of the reversed sample. Please don't misinterpret this approach as the most suitable for implementation, or as close to an appropriate design pattern. Mark's suggestion for implementing this feature was to create a WaveStreamReverse stream derived from WaveStream. I've taken this approach because I am looking to demonstrate how wave data stored in a byte array can be manipulated and passed back to the mixer for playback.

Also note that during the writing of this tutorial I uncovered what looks to be a minor bug that was preventing this function from working on samples longer than a second. A complete post of the details is available here: http://naudio.codeplex.com/Thread/View.aspx?ThreadId=50867
Due to this, I have packaged a modified version of the NAudio DLL with this tutorial, as per the suggested modification in the NAudio thread.

Hopefully that is suitable for the readership in the crowd out there, but if you want to see it set up in a derived WaveStreamReverse class, then drop me a line and let me know.

You can download a complete copy of all of the source files and this documentation in AbiWord format from here.


Reversing The Sample

Let's open the floor to how we can actually reverse a wave file. Basically, a wave file is made up of samples; for argument's sake we will consider a wave file with two channels (stereo), where the data for both channels at the same position is considered a sample. A simple example below:

Sine Wave Points

Here we have a single-channel sine wave. The six boxes highlight six points in this sine wave; these are samples. Now consider an exact copy of this image for our second channel of audio, and we would have two points for each sample. Conceptually that's certainly simple enough; now we need to discuss how this data is stored.

In a wave file, the beginning of the file has a set of data explaining what the format of the file is: frequency, bit size, number of channels, etc. Once we go past this header information, the main wave data starts. Based on the preceding information we can determine how to read the wave file. NAudio completes that read and load operation for us (thankfully, because if you don't have to look at it, don't; it's not lots of fun) and provides us with a byte array of the actual waveform data. Now, depending on the preceding information, we need to adjust the way we consider and utilise this data: if we have only a single channel, the byte array needs to be read in accordance with that; if we have a greater precision of samples (16-bit vs. 8-bit) then we also need to take that into account. So specifically for reversing, we are interested in the number of bytes per sample, which is calculated as such:

bytesPerSample = (channelStream.WaveFormat.BitsPerSample / 8) * channelStream.WaveFormat.Channels;

Taking the number of bits per sample and dividing by 8 gives us the number of bytes per sample point (which is very important considering that the wave file is stored in a byte array), and then we multiply this by the number of channels of data we need to handle. So in effect, considering only the number of bytes per sample and then reversing the order in which each complete sample appears in the byte array allows us to reverse the whole recording.
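As a quick sanity check of that idea (toy data only, independent of NAudio): for 16-bit stereo audio, bytesPerSample works out to 4, and reversing a buffer means mirroring whole 4-byte samples while leaving the bytes inside each sample untouched:

```csharp
using System;

class ReverseSanityCheck
{
    static void Main()
    {
        // 16 bits per sample point / 8 = 2 bytes, times 2 channels = 4 bytes per sample
        int bytesPerSample = (16 / 8) * 2;

        // Two complete samples: A = {1,2,3,4}, B = {5,6,7,8}
        byte[] src = { 1, 2, 3, 4, 5, 6, 7, 8 };
        byte[] dst = new byte[src.Length];

        for (int i = 0; i < src.Length; i += bytesPerSample)
        {
            int b = src.Length - bytesPerSample - i; // mirrored sample position
            for (int q = 0; q < bytesPerSample; q++)
                dst[b + q] = src[i + q];             // move the sample as a whole
        }

        // Samples are swapped, but the byte order inside each sample is intact
        Console.WriteLine(string.Join(",", dst)); // 5,6,7,8,1,2,3,4
    }
}
```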

Let's have a look at how that works, in a new class called NAudioBufferReverse, which takes in sampleToReverse as a byte array, the length of the source file in bytes, and the number of bytes per sample. Notice that for this class, as long as we have pre-calculated the number of bytes per sample, we don't actually need to know why there are that many bytes per sample, only that there are, and that each sample needs to be moved as a whole, in reverse order, to another byte array.

class NAudioBufferReverse
{
   // Length of the buffer
   private int numOfBytes;

   // The byte array to store the reversed sample
   byte[] reversedSample;

   public byte[] reverseSample(byte[] sampleToReverse, int SourceLengthBytes, int bytesPerSample)
   {

       numOfBytes = SourceLengthBytes;

       // Set the byte array to the length of the source sample
       reversedSample = new byte[SourceLengthBytes];
       
       // The mirrored location; starts at the end and works to the beginning
       int b = 0;

       // Used for the embedded loop to move the complete sample
       int q = 0;

       // Moves through the stream one whole sample at a time
       // (assumes the buffer length is a multiple of bytesPerSample)
       for (int i = 0; i < numOfBytes; i = i + bytesPerSample)
       {
           // Effectively a mirroring process; b will equal i (or be out by one
           // sample for an even sample count) when the middle of the buffer is reached.
           b = numOfBytes - bytesPerSample - i;

           // Copies the 'sample' in whole to the opposite end of the reversedSample
           for (q = 0; q < bytesPerSample; q++)
           {
               reversedSample[b + q] = sampleToReverse[i + q];
           }
       }

       // Sends back the reversed stream
       return reversedSample;
   }
}

Yes, over-commented if anything, but remember this is a tutorial so you can learn what's going on, right? With this class implemented, we now have a function available to help us reverse a byte array sample by sample.

Setup the Sample

So now we have to call this and set up our sample class, which we introduced in previous tutorials, to access this function. Enter stage right:

// Byte array to store the reversed sample
byte[] reversedSample;

bool _sampleReversed = false;

public AudioSample(string fileName)
{
   _fileName = fileName;
   WaveFileReader reader = new WaveFileReader(fileName);
   channelStream = new WaveChannel32(reader);
   muted = false;
   volume = 1.0f;
   
   // Reverse the sample
   NAudioBufferReverse nbr = new NAudioBufferReverse();
   
   // Setup a byte array which will store the reversed sample, ready for playback
   reversedSample = new byte[(int)channelStream.Length];

   // Read the channelStream sample in to the reversedSample byte array
   channelStream.Read(reversedSample, 0, (int)channelStream.Length);
   
   // Calculate how many bytes are used per sample; whole samples are swapped
   // in position by the reverse class
   bytesPerSample = (channelStream.WaveFormat.BitsPerSample / 8) * channelStream.WaveFormat.Channels;
   
   // Pass in the byte array storing a copy of the sample, and save back to the
   // reversedSample byte array
   reversedSample = nbr.reverseSample(reversedSample, (int)channelStream.Length, bytesPerSample);
}

So the main difference here is that we have an additional byte array in our sample class, which will be used to store the reversed sample. We cheat a bit by doing this, because we don't have to set up the wave format of the reversed sample: it will be in exactly the same format as the source sample, which also means there is no requirement to set up any header information. The land of byte array bliss.

Read What?

That's all well and good, but now we need to be able to use this reversed sample during wave playback, and how do you suppose some additional wave bytes are going to help us achieve this? Well, like the good little superhero NAudio is, it flies in stage left, and flapping under its giant red cape is a Read method which has the override directive and ties us back to the actual stream reading function. Those astute people in the audience who have read the previous tutorials and committed every word to memory should be nodding like the good bobble heads they are, recalling that we used a similar approach in the previous tutorial to provide looping functionality to our samples. And in rides our override:

public override int Read(byte[] buffer, int offset, int count)
{
   if (_sampleReversed)
   {
       //need to understand why this is a more reliable offset
       offset = (int)channelStream.Position;

       // Have to work out our own number. The only time this number should be
       // different is when we hit the end of the stream but we always need to
       // report that we read the same amount. Missing data is filled in with
       // silence
       int outCount = count;

       // Find out if we are trying to read more data than is available in the buffer
       if (offset + count > reversedSample.Length)
       {
           // If we are then reduce the read amount
           count = count - ((offset + count) - reversedSample.Length);
       }

       for (int i = 0; i < count; i++)
       {
           // Individually copy the samples into the buffer for reading by the overridden method
           buffer[i] = reversedSample[i + offset];
       }

       // Setting this position lets us keep track of how much has been played back.
       // There is no other offset used to track this information
       channelStream.Position = channelStream.Position + count;

       // Regardless of how much is read, the count expected by the calling method is
       // the same number as was originally provided to the Read method
       return outCount;
   }
   else
   {
       // Normal read code, sample has not been set to loop
       return channelStream.Read(buffer, offset, count);
   }
}

What's going on again? Well, we check if the sample has been requested to be played in reverse (every time the Read method is called) and then work out from where we should be playing back the reversed sample. At the moment this code just assumes you play it in one direction, from start-to-end or end-to-start; it's not handling the transposing of position for the stream when the reversed flag is set mid-playback - that's a small addition for another day.

Now the actual trick here comes by way of us not using the channelStream.Read method when the reverse flag is set for playback; instead we write the samples which are ready for playback directly to the byte buffer. Notice that if the stream is not reversed, there is no need to do this; we let it go on using the channelStream.Read method as it always has. So why does this work? Well, instead of relying on a standard stream method to read back the data, we copy in the data we deem necessary, and because this data is in the same format (remember, all the reversal occurs after the channelStream has been created) we don't need to do any conversion on the byte array. There were a few little oddities to deal with, like always saying we read as much as was requested even if we didn't; if we don't, an exception gets thrown somewhere else. Nice.

Thats actually about it for this tutorial. I haven't linked this back to the form inside the text here but you can download the sample project and have a look at how it all hangs together (there is really nothing new going on there except for a reverse check box and I don't want to insult any ones intelligence by explaining how that works here). There are two little omissions from this tutorial which I will leave you as homework (that I'll undoubtedly have to do for OpenSebJ at some point in time)
1. Looping of the reversed sample for playback
2. Transposing playback position so a dynamic switching between reversed and non-reversed sample playback can be handled.
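The direct buffer-writing trick described above ultimately boils down to copying sample frames in reverse order. Here is a minimal standalone sketch of that idea - the class and method names are mine, not part of the NAudio API - which reverses frame by frame so that multi-byte samples and interleaved channels don't get byte-swapped:

```csharp
using System;

static class ReverseDemo
{
    // Reverse a PCM byte buffer one frame at a time, so that multi-byte
    // samples (and interleaved stereo channels) keep their internal order.
    // frameSize is the number of bytes per frame, e.g. 4 for 16-bit stereo.
    public static byte[] ReverseFrames(byte[] source, int frameSize)
    {
        if (source.Length % frameSize != 0)
            throw new ArgumentException("Buffer is not a whole number of frames.");

        byte[] result = new byte[source.Length];
        int frames = source.Length / frameSize;
        for (int i = 0; i < frames; i++)
        {
            // Frame i of the source becomes frame (frames - 1 - i) of the result.
            Array.Copy(source, i * frameSize,
                       result, (frames - 1 - i) * frameSize, frameSize);
        }
        return result;
    }
}
```

A reversed Read implementation can then serve slices of the reversed buffer instead of calling channelStream.Read.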

Conclusion

That's a wrap for this tutorial on how to reverse a wave sample in C# using the NAudio framework. All of this content and the example project are available for download from here. Let me know how you go and where you use it - I'm always interested in hearing about audio development in C#.

Until next time, when we look at recording wave files, direct to disk.
          Introduction to Using NAudio        
About NAudio
NAudio is an Open Source* audio mixing library written in C# for the Windows platform. It supports P/Invoke methods for WaveOut, ASIO, DirectX and some other functions only available on Vista (WASAPI - Windows Vista Core Audio). It doesn't currently include abstraction support for PortAudio, OpenAL or GStreamer, but here's hoping for some cross-platform compatibility in the future.

The Subversion repository for NAudio is very clean, and it's easy to distinguish what you may need. My development environment is Windows XP x64, so there was an additional step required for the 64-bit environment: setting the platform to build as x86 by default.

You can do this by:

Clicking Build > Configuration Manager, then changing the platform of every project to x86

With the platform set, everything was ready to go. I recompiled and had a working x86 build that runs on the 64-bit OS.
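Alternatively, the same setting can be written straight into each project file. This is a sketch of the relevant MSBuild property (the surrounding property group will vary from project to project):

```xml
<PropertyGroup>
  <!-- Force a 32-bit build so the x86 build runs consistently on a 64-bit OS -->
  <PlatformTarget>x86</PlatformTarget>
</PropertyGroup>
```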

Introduction: Playing a Sample
This post is going to focus on loading an audio sample, playing the file and being able to reset the position of playback for the sample.

Start a new project, copy the NAudio.dll file over and add it in as a reference. Then set-up the using statements:

using NAudio.Wave;
using NAudio.CoreAudioApi;

In the class, we need to define the items which are going to be visible to all of our methods. We have two declarations we are concerned with in this introduction:
public partial class NAudioTest : Form
{
    IWavePlayer waveOutDevice;
    WaveStream mainOutputStream;
    string fileName = null;

waveOutDevice, an instance of IWavePlayer, defines the interface to the device that we will be using to play our audio.

mainOutputStream stores the audio sample we load and provides a level of abstraction over the stream, so that we don't need to manually move the stream data around when we want to adjust properties such as position - which lets us easily seek within the sample.

fileName will just store the file name of the wave.

With these class declarations in place, the next logical step is to set up the device that will output the audio. At an appropriate location (under a button click event, etc.) we can now initialise the instance. In this tutorial we are going to use the default available ASIO interface. This may not be suitable for all sound cards - it doesn't work on mine by default, but thanks to ASIO4ALL it does. Grab a copy; it's free and works wonders, providing a low-latency interface for mixing audio on Windows.

try
{
    waveOutDevice = new AsioOut();
}
catch (Exception driverCreateException)
{
    MessageBox.Show(String.Format("{0}", driverCreateException.Message));
    return;
}
The default ASIO device has hopefully been created (if it didn't work for you, have you downloaded ASIO4ALL yet?). We now have an audio device declared, but nothing audible has happened yet - for that we need to load an audio file, and first we need to find one. For the purposes of this tutorial you can either assume a fixed location and assign it to the fileName variable, or add something similar to the OpenFileDialog snippet below:
OpenFileDialog openFileDialog = new OpenFileDialog();
openFileDialog.Filter = "Wave Files (*.wav)|*.wav|All Files (*.*)|*.*";
openFileDialog.FilterIndex = 1;
if (openFileDialog.ShowDialog() == DialogResult.OK)
{
    fileName = openFileDialog.FileName;
}
I'll assume that needs no explanation of its own. This sample now needs to be loaded and stored in a wave stream, which we have called mainOutputStream. We will use the CreateInputStream method from the NAudio demo application to complete this task.
mainOutputStream = CreateInputStream(fileName);
We pass in the name of the file and have the input stream returned. This doesn't require a lot of understanding if you're just looking for the wave file to be loaded and the stream returned - let's assume that's all you care about from this tutorial.
private WaveStream CreateInputStream(string fileName)
{
    WaveChannel32 inputStream;
    if (fileName.EndsWith(".wav"))
    {
        WaveStream readerStream = new WaveFileReader(fileName);
        if (readerStream.WaveFormat.Encoding != WaveFormatEncoding.Pcm)
        {
            readerStream = WaveFormatConversionStream.CreatePcmStream(readerStream);
            readerStream = new BlockAlignReductionStream(readerStream);
        }
        if (readerStream.WaveFormat.BitsPerSample != 16)
        {
            var format = new WaveFormat(readerStream.WaveFormat.SampleRate,
                16, readerStream.WaveFormat.Channels);
            readerStream = new WaveFormatConversionStream(format, readerStream);
        }
        inputStream = new WaveChannel32(readerStream);
    }
    else
    {
        throw new InvalidOperationException("Unsupported extension");
    }
    return inputStream;
}
Now to assemble all of the pieces. We have a waveOutDevice defined, where we will send the audio, and a wave file loaded into our mainOutputStream. All that's left is to connect the two and hit play:

try
{
    waveOutDevice.Init(mainOutputStream);
}
catch (Exception initException)
{
    MessageBox.Show(String.Format("{0}", initException.Message), "Error Initializing Output");
    return;
}
waveOutDevice.Play();
Assuming everything was set up right, you should now hear a wave file being played. If you have difficulties, check that ASIO4ALL is installed. The finishing touch is to reset the wave file to play back from the beginning. Find a friendly button click event and add the following:

mainOutputStream.CurrentTime = TimeSpan.FromSeconds(0);

Ahem. Done.

Thanks to Mark Heath for this great library - check out Mark's blog here. All of this code was ripped from the NAudio demo application and is available within the source code package on CodePlex.

Next Time

I'll be looking at loading multiple wave files simultaneously and how to use the included Mixing functions. Stay tuned.

* It is licensed under the Microsoft Public License (Ms-PL), which I personally am not too familiar with. However, according to the FAQ there don't seem to be any issues distributing it along with another open source application. As I use the GPL I need to look a little further into the licensing compatibility, but based on what I have read I am hopeful for the moment that it is fine.

          21inch 3G Broadcast Monitor (1920x1080) Class A 3Gb/s Ready        
The PBM-3G Series offers an elegant slim design with a fast response time for smooth video streaming, in 7", 17", 20", 24", 32", 40", 46" and 55" display sizes, with native full HD resolution, a high contrast ratio, wide viewing angles, accurate color reproduction and the quality picture consistency that your HD/SD monitoring application demands. It features an intelligent connection for calibration alignment, adjustable colorimetry and gamma correction. Multiple monitors can be controlled by a centralized wall control system, which can connect different sized monitors in any combination. The PBM-3G Series also provides many powerful display functions, such as dual 3Gb/s input display; auto calibration; advanced waveform and vectorscope; Closed Caption (708/608) or Teletext 801 and Subtitle OP-47 for the North American and Australian markets respectively; VPID; IMD; underscan/zeroscan/overscan/zoom; 1:1 pixel mode; PiP and PaP; various digital audio metering scales and digital audio decoding; a built-in speaker; time code display; a tally lamp; and a wall control system.

Features: 3Gb/s ready (1080/60P); advanced waveform (Y, Cb, Cr selectable) and vectorscope; 2 x auto-detect HD-SDI and SDI with active loops (3Gb/s / 1.485Gb/s / 270Mb/s); complies with EBU TECH-3320, SMPTE-C and ITU-R BT.709 standards; (ICAC) Plura Intelligent Connection for Alignment and Calibration; dual HD-SDI YPbPr 4:2:2; dual HD-SDI YPbPr/RGB 4:4:4 and 2K; RGB, YUV, Y/C, composite, UXGA, DVI (HDCP) and HDMI (HDCP) inputs; cutting-edge de-interlacing and scaling technology; fast response time for high-motion video; RGB 10-bit digital signal processing; 178° viewing angle display; Closed Caption (608/708); color temperature (User, VAR, 11000K, 9300K, 7500K, 6500K, 5000K and 3200K); various and user-defined markers and safe-area display in HD and SD; Picture and Picture (PaP) and Picture in Picture (PiP) display/blend; analog and embedded audio input with digital audio decoding; built-in stereo speaker and 16-channel audio metering display; IMD, time code display and wall control system; pixel-to-pixel mode, tally and DC operation; underscan/overscan/normal/zoom; programmable front pushbutton controls, GPI and RS-232 remote control; and an intuitive graphic-based OSD in 6 languages (Unicode).

Options: battery mount (PBM-217-3G & PBM-220-3G), carry case, and rack mount (PBM-217-3G, PBM-220-3G & PBM-224-3G).


          24inch 3G Broadcast Monitor (1920x1080) Class A 3Gb/s Ready        
The PBM-3G Series offers an elegant slim design with a fast response time for smooth video streaming, in 7", 17", 20", 24", 32", 40", 46" and 55" display sizes, with native full HD resolution, a high contrast ratio, wide viewing angles, accurate color reproduction and the quality picture consistency that your HD/SD monitoring application demands. It features an intelligent connection for calibration alignment, adjustable colorimetry and gamma correction. Multiple monitors can be controlled by a centralized wall control system, which can connect different sized monitors in any combination. The PBM-3G Series also provides many powerful display functions, such as dual 3Gb/s input display; auto calibration; advanced waveform and vectorscope; Closed Caption (708/608) or Teletext 801 and Subtitle OP-47 for the North American and Australian markets respectively; VPID; IMD; underscan/zeroscan/overscan/zoom; 1:1 pixel mode; PiP and PaP; various digital audio metering scales and digital audio decoding; a built-in speaker; time code display; a tally lamp; and a wall control system.

Features: 3Gb/s ready (1080/60P); advanced waveform (Y, Cb, Cr selectable) and vectorscope; 2 x auto-detect HD-SDI and SDI with active loops (3Gb/s / 1.485Gb/s / 270Mb/s); complies with EBU TECH-3320, SMPTE-C and ITU-R BT.709 standards; (ICAC) Plura Intelligent Connection for Alignment and Calibration; dual HD-SDI YPbPr 4:2:2; dual HD-SDI YPbPr/RGB 4:4:4 and 2K; RGB, YUV, Y/C, composite, UXGA, DVI (HDCP) and HDMI (HDCP) inputs; cutting-edge de-interlacing and scaling technology; fast response time for high-motion video; RGB 10-bit digital signal processing; 178° viewing angle display; Closed Caption (608/708); color temperature (User, VAR, 11000K, 9300K, 7500K, 6500K, 5000K and 3200K); various and user-defined markers and safe-area display in HD and SD; Picture and Picture (PaP) and Picture in Picture (PiP) display/blend; analog and embedded audio input with digital audio decoding; built-in stereo speaker and 16-channel audio metering display; IMD, time code display and wall control system; pixel-to-pixel mode, tally and DC operation; underscan/overscan/normal/zoom; programmable front pushbutton controls, GPI and RS-232 remote control; and an intuitive graphic-based OSD in 6 languages (Unicode).

Options: battery mount (PBM-217-3G & PBM-220-3G), carry case, and rack mount (PBM-217-3G, PBM-220-3G & PBM-224-3G).


          32inch 3G Broadcast Monitor (1920x1080) Class A 3Gb/s Ready        
The PBM-3G Series offers an elegant slim design with a fast response time for smooth video streaming, in 7", 17", 20", 24", 32", 40", 46" and 55" display sizes, with native full HD resolution, a high contrast ratio, wide viewing angles, accurate color reproduction and the quality picture consistency that your HD/SD monitoring application demands. It features an intelligent connection for calibration alignment, adjustable colorimetry and gamma correction. Multiple monitors can be controlled by a centralized wall control system, which can connect different sized monitors in any combination. The PBM-3G Series also provides many powerful display functions, such as dual 3Gb/s input display; auto calibration; advanced waveform and vectorscope; Closed Caption (708/608) or Teletext 801 and Subtitle OP-47 for the North American and Australian markets respectively; VPID; IMD; underscan/zeroscan/overscan/zoom; 1:1 pixel mode; PiP and PaP; various digital audio metering scales and digital audio decoding; a built-in speaker; time code display; a tally lamp; and a wall control system.

Features: 3Gb/s ready (1080/60P); advanced waveform (Y, Cb, Cr selectable) and vectorscope; 2 x auto-detect HD-SDI and SDI with active loops (3Gb/s / 1.485Gb/s / 270Mb/s); complies with EBU TECH-3320, SMPTE-C and ITU-R BT.709 standards; (ICAC) Plura Intelligent Connection for Alignment and Calibration; dual HD-SDI YPbPr 4:2:2; dual HD-SDI YPbPr/RGB 4:4:4 and 2K; RGB, YUV, Y/C, composite, UXGA, DVI (HDCP) and HDMI (HDCP) inputs; cutting-edge de-interlacing and scaling technology; fast response time for high-motion video; RGB 10-bit digital signal processing; 178° viewing angle display; Closed Caption (608/708); color temperature (User, VAR, 11000K, 9300K, 7500K, 6500K, 5000K and 3200K); various and user-defined markers and safe-area display in HD and SD; Picture and Picture (PaP) and Picture in Picture (PiP) display/blend; analog and embedded audio input with digital audio decoding; built-in stereo speaker and 16-channel audio metering display; IMD, time code display and wall control system; pixel-to-pixel mode, tally and DC operation; underscan/overscan/normal/zoom; programmable front pushbutton controls, GPI and RS-232 remote control; and an intuitive graphic-based OSD in 6 languages (Unicode).

Options: battery mount (PBM-217-3G & PBM-220-3G), carry case, and rack mount (PBM-217-3G, PBM-220-3G & PBM-224-3G).


          40inch 3G Broadcast Monitor (1920x1080) Class A 3Gb/s Ready        
The PBM-3G Series offers an elegant slim design with a fast response time for smooth video streaming, in 7", 17", 20", 24", 32", 40", 46" and 55" display sizes, with native full HD resolution, a high contrast ratio, wide viewing angles, accurate color reproduction and the quality picture consistency that your HD/SD monitoring application demands. It features an intelligent connection for calibration alignment, adjustable colorimetry and gamma correction. Multiple monitors can be controlled by a centralized wall control system, which can connect different sized monitors in any combination. The PBM-3G Series also provides many powerful display functions, such as dual 3Gb/s input display; auto calibration; advanced waveform and vectorscope; Closed Caption (708/608) or Teletext 801 and Subtitle OP-47 for the North American and Australian markets respectively; VPID; IMD; underscan/zeroscan/overscan/zoom; 1:1 pixel mode; PiP and PaP; various digital audio metering scales and digital audio decoding; a built-in speaker; time code display; a tally lamp; and a wall control system.

Features: 3Gb/s ready (1080/60P); advanced waveform (Y, Cb, Cr selectable) and vectorscope; 2 x auto-detect HD-SDI and SDI with active loops (3Gb/s / 1.485Gb/s / 270Mb/s); complies with EBU TECH-3320, SMPTE-C and ITU-R BT.709 standards; (ICAC) Plura Intelligent Connection for Alignment and Calibration; dual HD-SDI YPbPr 4:2:2; dual HD-SDI YPbPr/RGB 4:4:4 and 2K; RGB, YUV, Y/C, composite, UXGA, DVI (HDCP) and HDMI (HDCP) inputs; cutting-edge de-interlacing and scaling technology; fast response time for high-motion video; RGB 10-bit digital signal processing; 178° viewing angle display; Closed Caption (608/708); color temperature (User, VAR, 11000K, 9300K, 7500K, 6500K, 5000K and 3200K); various and user-defined markers and safe-area display in HD and SD; Picture and Picture (PaP) and Picture in Picture (PiP) display/blend; analog and embedded audio input with digital audio decoding; built-in stereo speaker and 16-channel audio metering display; IMD, time code display and wall control system; pixel-to-pixel mode, tally and DC operation; underscan/overscan/normal/zoom; programmable front pushbutton controls, GPI and RS-232 remote control; and an intuitive graphic-based OSD in 6 languages (Unicode).

Options: battery mount (PBM-217-3G & PBM-220-3G), carry case, and rack mount (PBM-217-3G, PBM-220-3G & PBM-224-3G).


          46inch 3G Broadcast Monitor (1920x1080) Class A 3Gb/s Ready        
The PBM-3G Series offers an elegant slim design with a fast response time for smooth video streaming, in 7", 17", 20", 24", 32", 40", 46" and 55" display sizes, with native full HD resolution, a high contrast ratio, wide viewing angles, accurate color reproduction and the quality picture consistency that your HD/SD monitoring application demands. It features an intelligent connection for calibration alignment, adjustable colorimetry and gamma correction. Multiple monitors can be controlled by a centralized wall control system, which can connect different sized monitors in any combination. The PBM-3G Series also provides many powerful display functions, such as dual 3Gb/s input display; auto calibration; advanced waveform and vectorscope; Closed Caption (708/608) or Teletext 801 and Subtitle OP-47 for the North American and Australian markets respectively; VPID; IMD; underscan/zeroscan/overscan/zoom; 1:1 pixel mode; PiP and PaP; various digital audio metering scales and digital audio decoding; a built-in speaker; time code display; a tally lamp; and a wall control system.

Features: 3Gb/s ready (1080/60P); advanced waveform (Y, Cb, Cr selectable) and vectorscope; 2 x auto-detect HD-SDI and SDI with active loops (3Gb/s / 1.485Gb/s / 270Mb/s); complies with EBU TECH-3320, SMPTE-C and ITU-R BT.709 standards; (ICAC) Plura Intelligent Connection for Alignment and Calibration; dual HD-SDI YPbPr 4:2:2; dual HD-SDI YPbPr/RGB 4:4:4 and 2K; RGB, YUV, Y/C, composite, UXGA, DVI (HDCP) and HDMI (HDCP) inputs; cutting-edge de-interlacing and scaling technology; fast response time for high-motion video; RGB 10-bit digital signal processing; 178° viewing angle display; Closed Caption (608/708); color temperature (User, VAR, 11000K, 9300K, 7500K, 6500K, 5000K and 3200K); various and user-defined markers and safe-area display in HD and SD; Picture and Picture (PaP) and Picture in Picture (PiP) display/blend; analog and embedded audio input with digital audio decoding; built-in stereo speaker and 16-channel audio metering display; IMD, time code display and wall control system; pixel-to-pixel mode, tally and DC operation; underscan/overscan/normal/zoom; programmable front pushbutton controls, GPI and RS-232 remote control; and an intuitive graphic-based OSD in 6 languages (Unicode).

Options: battery mount (PBM-217-3G & PBM-220-3G), carry case, and rack mount (PBM-217-3G, PBM-220-3G & PBM-224-3G).


          55inch 3G Broadcast Monitor (1920x1080) Class A 3Gb/s Ready        
The PBM-3G Series offers an elegant slim design with a fast response time for smooth video streaming, in 7", 17", 20", 24", 32", 40", 46" and 55" display sizes, with native full HD resolution, a high contrast ratio, wide viewing angles, accurate color reproduction and the quality picture consistency that your HD/SD monitoring application demands. It features an intelligent connection for calibration alignment, adjustable colorimetry and gamma correction. Multiple monitors can be controlled by a centralized wall control system, which can connect different sized monitors in any combination. The PBM-3G Series also provides many powerful display functions, such as dual 3Gb/s input display; auto calibration; advanced waveform and vectorscope; Closed Caption (708/608) or Teletext 801 and Subtitle OP-47 for the North American and Australian markets respectively; VPID; IMD; underscan/zeroscan/overscan/zoom; 1:1 pixel mode; PiP and PaP; various digital audio metering scales and digital audio decoding; a built-in speaker; time code display; a tally lamp; and a wall control system.

Features: 3Gb/s ready (1080/60P); advanced waveform (Y, Cb, Cr selectable) and vectorscope; 2 x auto-detect HD-SDI and SDI with active loops (3Gb/s / 1.485Gb/s / 270Mb/s); complies with EBU TECH-3320, SMPTE-C and ITU-R BT.709 standards; (ICAC) Plura Intelligent Connection for Alignment and Calibration; dual HD-SDI YPbPr 4:2:2; dual HD-SDI YPbPr/RGB 4:4:4 and 2K; RGB, YUV, Y/C, composite, UXGA, DVI (HDCP) and HDMI (HDCP) inputs; cutting-edge de-interlacing and scaling technology; fast response time for high-motion video; RGB 10-bit digital signal processing; 178° viewing angle display; Closed Caption (608/708); color temperature (User, VAR, 11000K, 9300K, 7500K, 6500K, 5000K and 3200K); various and user-defined markers and safe-area display in HD and SD; Picture and Picture (PaP) and Picture in Picture (PiP) display/blend; analog and embedded audio input with digital audio decoding; built-in stereo speaker and 16-channel audio metering display; IMD, time code display and wall control system; pixel-to-pixel mode, tally and DC operation; underscan/overscan/normal/zoom; programmable front pushbutton controls, GPI and RS-232 remote control; and an intuitive graphic-based OSD in 6 languages (Unicode).

Options: battery mount (PBM-217-3G & PBM-220-3G), carry case, and rack mount (PBM-217-3G, PBM-220-3G & PBM-224-3G).


          Here's to Future Sound Effects Designers        
You have been around at a time when sound effects cannot get any louder. If you look at the waveform of many modern sound effects, the waveform covers the screen. When working on the sound for the pistol in Wrack, I decided to use parts of numerous gunshot effects I had. But first, this ....

Do you like to travel? Are you interested in creating sound effects? OK. Start travelling for the purpose of collecting raw sounds for your own sound effects library. Check with your tax expert, but you should be able to deduct the cost of these trips off of your Schedule C "Sound Designer" business taxes. The trip has to be for the purpose of collecting your sound effects. As I understand it, if you do anything as a tourist, you cannot write that off.

But, if you open your ears, there are worlds of sound effects waiting to be captured. Say you go to the Eiffel Tower. There are plenty of environmental sounds to be recorded. Has anyone ever recorded the sound of the tower using a contact mike? I'll bet there are some "unworldly" sounds there. Could you capture the same sound from some other tower? I doubt it. It certainly wouldn't have the same frequencies. Plus it wouldn't have the same "bragging rights."

Making something you love to do a business is not a secret, but lots of people don't yet know about it.

Check out the possibilities!

.... Back to the Wrack pistol. What I did was take my own and stock recordings of guns firing (not just pistols). There are twenty-one different pistol sounds layered into the one effect. There's one lightning effect (recorded during a Florida thunderstorm). There's a bit of cannon fire, too. I used no compression and was careful not to add too much of any single sound. The waveform is recognizable if you look at typical single pistol audio waveforms.

Sometimes you have to use compression, but I prefer not to if I can get away with it.
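The layering approach described above can be sketched in code. This is a hypothetical float-sample mixer (none of these names come from any real tool): each layer gets its own modest gain so that no single sound dominates, and clipping is avoided without reaching for a compressor.

```csharp
using System;

static class LayerMixer
{
    // Mix several same-length float sample layers, each scaled by its own gain.
    // Keeping the summed gains modest avoids clipping without a compressor.
    public static float[] Mix(float[][] layers, float[] gains)
    {
        int length = layers[0].Length;
        float[] mix = new float[length];
        for (int layer = 0; layer < layers.Length; layer++)
            for (int i = 0; i < length; i++)
                mix[i] += layers[layer][i] * gains[layer];

        // Hard-clamp as a safety net; ideally the gains keep us inside [-1, 1].
        for (int i = 0; i < length; i++)
            mix[i] = Math.Max(-1f, Math.Min(1f, mix[i]));
        return mix;
    }
}
```

With twenty-one layers, gains of roughly 1/21 or less per layer keep the sum inside the legal range by construction.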

Now, take a trip for the purpose of recording some new raw material for your sound effects.
          Aliasing and the Heisenberg uncertainty principle.        
TL;DR

The Dirac comb is an example of a wavefunction whose position and momentum aren't fuzzy.

Introduction

The Heisenberg uncertainty principle says that if you have a particle in some state and observe either its momentum or its position, then the standard deviations of the distributions of the outcomes satisfy this inequality:

σx σp ≥ ℏ/2

I think many people have a mental picture a bit like this:

You can know the position and momentum with some degree of fuzziness and you can trade the fuzziness between the two measurements as long as the product of their sizes is larger than ℏ/2.

Here's another way of thinking about that kind of picture (assuming some units I haven't specified):

position=123.4???
momentum=65?.???
The idea is that the question mark represents digits we don't know well. As you move towards the right in the decimal representation our certainty in the accuracy of the digit quickly goes downhill to the point where we can't reasonably write digits.

But this picture is highly misleading. For example, the following state of affairs is also compatible with the uncertainty principle, in suitably chosen units:

position=...???.123...
momentum=...???.654...

In other words, it's compatible with the uncertainty principle that we could know the digits beyond the decimal point to as much accuracy as we like as long as we don't know the digits before the point. It trivially satisfies Heisenberg's inequality because the variance of the position and the momentum aren't even finite quantities.

But being compatible with Heisenberg uncertainty isn't enough for something to be realisable as a physical state. Is there a wavefunction that allows us to know the digits to the right of the decimal point as far as we want for both position and momentum measurements?

Sampling audio and graphics

Maybe surprisingly, the worlds of audio and graphics can help us answer this question. Here's what a fraction of a second of music might look like when the pressure of the sound wave is plotted against time:

But if we sample this signal at regular intervals, eg. at 44.1 kHz for a CD, then we can graph the resulting signal as something like this:

The red curve here is just to show what the original waveform looked like. The black vertical lines correspond to regular samples and we can represent them mathematically with Dirac delta functions multiplied by the amplitude measured at the sample.

There is a well known problem with sampling like this. If you sample a signal that is a sine wave sin(ωt) at rate f then the signal sin((ω+2πnf)t) will generate exactly the same samples for any integer n. The following illustration shows what might happen:

The two waveforms are sampled at the same regular intervals (shown by vertical lines) and give exactly the same amplitudes at those samples.

This forms the basis for the famous Nyquist-Shannon sampling theorem. You can reconstruct the original signal from regularly spaced samples only if it doesn't contain frequency components higher than half your sampling rate. Otherwise you get ambiguities in the form of high frequency parts of the signal masquerading as low frequency parts. This effect is known as aliasing. As a result, the Fourier transform of a sampled function is periodic with the "repeats" corresponding to the aliasing.
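This identity is easy to check numerically. The following sketch (the 1 kHz tone, the 44.1 kHz rate and the sample count are arbitrary choices of mine) measures how far the alias sin((ω+2πnf)t) deviates from sin(ωt) at the sample instants t = k/f:

```csharp
using System;

static class AliasDemo
{
    // Sample sin(ωt) and its alias sin((ω + 2πnf)t) at rate f and
    // return the largest difference over the first few sample instants.
    // Up to floating-point rounding, the two signals agree exactly there.
    public static double MaxAliasError(double omega, double rate, int n, int samples)
    {
        double maxError = 0;
        for (int k = 0; k < samples; k++)
        {
            double t = k / rate;  // sample instant t = k/f
            double original = Math.Sin(omega * t);
            double alias = Math.Sin((omega + 2 * Math.PI * n * rate) * t);
            maxError = Math.Max(maxError, Math.Abs(original - alias));
        }
        return maxError;
    }
}
```

The extra phase at sample k is exactly 2πnk, a whole number of cycles, which is why the samples coincide.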

In the audio world you need to filter your sound to remove the high frequencies before you sample. This is frequently carried out with an analogue filter. In the 3D rendering world you need to do something similar. Ray tracers will send out many rays for each pixel, in effect forming a much higher resolution image than the resolution of the final result, and that high resolution image is filtered before being sampled down to the final resulting image. The "jaggies" you get from rendering polygons are an example of this phenomenon. It seems like jaggies have nothing to do with the world of Fourier transforms. But if you compute the Fourier transform of a polygonal image, remove suitable high frequency components, and then take the inverse Fourier transform before sampling you'll produce an image that's much more pleasing to the eye. In practice there are shortcuts to achieving much the same effect.

The connection to physics

Now consider a particle whose wavefunction takes the form of the Dirac comb:

This is a wavefunction that is concentrated at multiples of some quantity a, ie. ∑δ(x-an) summing over n = ...,-1,0,1,2,... If the wavefunction is ψ(x) then the probability density function for the particle position is |ψ(x)|². So the particle has a zero probability of being found at points other than those where x=na. In other words, modulo a, the particle position is given precisely.

But what about the particle momentum? Well, the wavefunction has, in some sense, been sampled onto the points na, so we expect that whatever the momentum distribution is, it'll be ambiguous modulo b, where ab=h. In fact, if we take the Fourier transform of the Dirac comb we get another Dirac comb. So in the frequency domain we get the same kind of phenomenon: the momentum is concentrated at integer multiples of b. So now we know we have a wavefunction whose uncertainty precisely fits the description I gave above. We know the position precisely modulo a and the momentum precisely modulo b. In some sense this isn't contrived: we know the momentum modulo b precisely because of the aliasing that results from knowing the position modulo a.
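The Fourier-transform step can be written out explicitly. Up to convention-dependent constants, the Poisson summation formula gives, for a comb of spacing a:

```latex
\mathcal{F}\left[\sum_{n=-\infty}^{\infty} \delta(x - na)\right](k)
= \frac{2\pi}{a} \sum_{m=-\infty}^{\infty} \delta\!\left(k - \frac{2\pi m}{a}\right)
```

So a comb of spacing a in position space transforms to a comb of spacing 2π/a in k-space: the finer the position comb, the coarser its transform, and vice versa.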

What this means

The message from this is that position-momentum uncertainty isn't fuzziness. At least it's not fuzziness in the ordinary sense of the word.

And in reality

I'm not very experienced in attaching numbers to results from theoretical physics so I'd find it hard to say how accurately we can create a Dirac comb state in reality. When we measure a position using interferometry techniques we automatically compute the position modulo a wavelength so this isn't an unusual thing to do. Also an electron in a periodic potential may take on a form that consists of a train of equally spaced lumps. Even if not described exactly by a Dirac comb, we can still know the position modulo a and the momentum modulo b much more accurately than you might expect from a naive interpretation of the Heisenberg uncertainty principle as fuzziness.

Exercises
1. Investigate approximations to the Dirac comb: eg. what happens if we sum only a finite number of Dirac deltas, or replace each delta with a finite width Gaussian, or both.
2. Investigate the "twisted" Dirac comb: ∑δ(x-an)exp(inθ) where θ is some constant.

          Echocardiography        
Introduction
Echocardiography, in its current form, has become an invaluable tool in the modern cardiac intensive care unit environment. Coupled with clinical examination and monitoring techniques, echocardiography can provide rapid, reliable, real-time diagnostic answers that are invaluable to patient care. This noninvasive test can be used to reliably evaluate the cardiac anatomy of both normal hearts and those with congenital heart disease, and it has replaced cardiac angiography for the preoperative diagnosis of the majority of congenital heart lesions. In congenital or acquired cardiac disease, echocardiography may further be used to estimate intracardiac pressures and gradients across stenotic valves and vessels, determine the directionality of blood flow and the pressure gradient across a defect, and examine the coronary arteries. Within the realm of critical care, echocardiography is useful for quantifying cardiac systolic and diastolic function, detecting the presence of vegetations from endocarditis, and examining the cardiac structure for the presence of pericardial fluid and chamber thrombi. As with all tools, however, a thorough understanding of its uses and limitations is necessary before relying upon the information it provides.

Principles of Echocardiography
Echocardiography uses ultrasound technology to image the heart and associated vascular structures. Ultrasound is defined as sound frequencies above the audible range of 20,000 cycles per second. The primary components of an ultrasound machine include a transducer and a central processor. The transducer converts electrical to mechanical (sound) energy and vice versa. Electrical energy is applied to piezoelectric crystals within the transducer, resulting in the generation of mechanical energy in the form of a series of sinusoidal cycles of alternating compression and rarefaction. The energy produced travels as a directable beam which may be aimed at the heart. The sound beam travels in a straight line until it encounters a boundary between structures with different acoustical impedance, such as between blood and tissue. At such surfaces, a portion of the energy is reflected back to the same crystals within the transducer, and the remaining attenuated signal is transmitted distally. Within the ultrasound machine is circuitry capable of measuring the transit time for the beam to travel from the transducer to a given structure and back again, and then calculating the distance traveled. A cardiac image is constructed from the reflected energy, or so-called ultrasound echoes.
Differing properties of tissues affect the portion of acoustic energy transmitted versus reflected. For example, air reflects the majority of the signal it receives and, therefore, prevents images from being obtained through windows where it is present. Anything hindering or augmenting the reflection of this acoustic signal, such as air, bone, dressings, an open chest, or lines, tubes, or other foreign bodies, will diminish the overall quality of the examination. Therefore, in the intensive care unit, an ultrasound study may be limited by difficulty in finding a good acoustic window to allow for accurate analysis.

The Anatomical Echocardiographic Examination
In order to obtain the best imaging windows, whenever possible, patients are placed in a left lateral decubitus position during a transthoracic echocardiogram. During two-dimensional (2D) echocardiography, all planes are described in reference to the heart and not the heart’s position within the body. For a complete pediatric study, standard views (see Fig.1a–f) are obtained from the high left chest just lateral to the sternum (parasternal window), the left lateral chest just inferior and lateral to the nipple (apical window), the sub-xyphoid area (subcostal window), and the suprasternal notch (suprasternal window). In patients with more complex anatomy, additional windows, such as the high right parasternal border, may be used to obtain additional information.
Fig.1 Standard echocardiographic image planes from the high left chest just lateral to the sternum (parasternal window (a) and (b)), the left lateral chest just inferior to the nipple (apical window (c)), sub-xyphoid area (subcostal window (d)), and the suprasternal notch (suprasternal window (e) and (f)). RA right atrium; RV right ventricle; LA left atrium; LV left ventricle; Ao aortic valve; CS coronary sinus; RVOT right ventricular outflow tract; SVC superior vena cava (drawing from Steven P. Goldberg, MD) 

1.  Parasternal Window
In the anatomically normal heart, the parasternal window allows visualization of the heart aligned along its long axis and short axis. In the long axis (Fig.1a), the left ventricular inflow and outflow tracts can be seen well. As a result, comments can be made from this view regarding the aorta, including its annulus, the sinuses of Valsalva, and the proximal portion of the ascending aorta, as well as its relationship to the mitral valve. Additionally, the ballet-slipper appearance of the left ventricle is featured as the inferoposterior wall and interventricular septum are visualized. The anterior and posterior leaflets of the mitral valve can be visualized. By angulating the transducer and performing a sweep, the right ventricle is brought into focus and an examination of both its inflow, including the right atrium and tricuspid valve, and its outflow tract, including the pulmonary valve, can be performed.
The transducer may be rotated 90° providing a series of short-axis views (Fig.1b) that assist in the evaluation of the chambers of the heart, the semilunar and atrioventricular valves, and the coronary arteries. Sweeping from the apex of the heart toward the base will allow a close cross-sectional examination of the ventricular chambers. The normal left ventricle has circular geometry with symmetric contraction, whether it is visualized at the level of the mitral valve, papillary muscles, or apex. In contrast, the normal right ventricle appears as a more trabeculated crescent-shaped structure when visualized at or below the level of the mitral valve. Sweeping farther toward the base of the heart, the mitral valve’s papillary muscles and the valve itself are viewed. Progressing to the base of the normal heart, the tri-leaflet aortic valve takes center stage with the right ventricular outflow tract and pulmonary artery wrapping in an inverted “U” anteriorly and leftward. Additionally, a portion of the atrial septum and the tricuspid valve may be profiled. Finally, continuing the sweep allows for the examination of the atrial appendages, the ascending aorta in cross-section, and the branch pulmonary arteries.
Fig.1b (continued)

2.  Apical Window
For those not trained in echocardiography, the images obtained with the transducer in the apical position (Fig.1c) are perhaps the most intuitive as it allows for visualization of all four chambers and valves in the heart with a simple left-to-right orientation. Imaging is begun in the four-chamber view, in which the anatomic right and left ventricles may be identified. Sweeps of the transducer from this position identify the posterior coronary sinus and may indicate abnormalities such as a left superior vena cava or unroofed coronary sinus. Proceeding more anteriorly to a five-chambered view, the atrial and ventricular septa may be visualized looking for defects and the left ventricular outflow tract and ascending aorta may be examined. The four chamber view allows for the examination of the anterior and posterior mitral valve leaflets and pulmonary veins as they enter the left atrium. By rotating the transducer to 90° from the four-chamber view, a two-chamber view of the left ventricle and left atrium can be obtained to evaluate the anterior and posterior left ventricular wall function.
Fig.1c (continued)

3.  Subcostal Window
For pediatric patients with complex cardiac anatomy, the subcostal position (Fig.1d and Fig.1e) provides the most detailed information and is often the best starting place. In order to obtain images in this position, patients are placed supine with the transducer in the subxiphoid position. In larger cooperative patients beyond the infancy period, image quality may be improved by having the patient participate in the examination with held inspiration that allows the heart to move downward toward the transducer. Initial views in this position should determine visceral situs as well as the relationship of the inferior vena cava and aorta. Subsequent views and sweeps will provide detailed analysis of the atrial septum as well as the images related to the ventricular septum, the atrioventricular valves, atrial and ventricular chambers, and drainage of systemic veins. With rotation of the transducer, both ventricular outflow tracts may be visualized. Additionally, in some patients the branch pulmonary arteries and the entire aorta may be examined from this position.
Fig.1d  (continued)
Fig.1e (continued)

4. Suprasternal Window
The views are obtained in this position by placing the transducer in the suprasternal notch (Fig.1f) with the neck extended. The suprasternal long- and short-axis views provide detailed information regarding arch sidedness, anomalies in the ascending and descending aorta and head and neck vessels, the size and branching of the pulmonary arteries, as well as anomalies of the systemic and pulmonary venous systems.
Fig.1f (continued)

M-Mode Imaging
One of the earliest applications of ultrasound technology that remains an important tool in the evaluation of cardiac function, dimension, and timing, the M-mode echo provides an “ice-pick” view of the heart. An M-mode echo is obtained with the ultrasonic transducer placed along the left sternal border and directed toward the part of the heart to be examined. A single line of interrogation is repeatedly produced and the resultant image is displayed with time along the x-axis and distance from the transducer along the y-axis (see Fig. 2). M-mode obtains an estimate of ventricular function by measuring the short axis shortening fraction and wall thickness.
Fig.2 M-mode echocardiography obtained in the parasternal short axis through the right and left ventricular chambers at the level of the papillary muscles. LVEDD left ventricular end-diastolic dimension; LVESD left ventricular end-systolic dimension

Doppler Evaluation
Frequently in an intensive care setting the clinician is concerned with new or residual flow disturbances from shunt lesions, an abnormal cardiac valve, or narrowing of a blood vessel. While 2D echocardiography determines anatomical relationships, additional information regarding movement of the blood or myocardium is provided by looking for Doppler shifts in the reflected ultrasound waves. The Doppler principle, first described by Johann Christian Doppler, states that for a stationary object, the frequency of ultrasound reflected is identical to the transmitted frequency. Inherently, the heart and the blood it pumps do not fit this basic definition. Therefore, when performing a cardiac ultrasound, the moving objects alter the frequency of the reflected signal (the Doppler shift) according to the direction and velocity with which they are moving in relation to the fixed transducer. Additional insights into intracardiac and vascular hemodynamics may be obtained when velocity data are collected. Doppler data are typically displayed as velocity rather than the actual frequency shift. The velocities can then be translated into pressure data using the modified Bernoulli equation: P1 − P2 = 4[(V2)² − (V1)²]. If one assumes that the velocity proximal to the level of obstruction (V1) is negligible compared with the velocity at the obstruction (V2), the formula becomes even simpler: ΔP = 4(Vmax)². Although the modified Bernoulli equation can only be applied in appropriate situations, it does help predict the pressure drop across an abnormal valve or septal defect to give a general estimate of the severity of the lesion, which can prove to be valuable information to help manage patients in the intensive care setting.
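The simplified Bernoulli relation is plain arithmetic, so it is easy to sanity-check at the bedside. A minimal sketch (the velocities below are illustrative, not patient data):

```python
def bernoulli_gradient(v2, v1=0.0):
    """Modified Bernoulli equation: P1 - P2 = 4*(v2**2 - v1**2),
    with velocities in m/s and the resulting gradient in mmHg.
    With v1 negligible this reduces to dP = 4*Vmax**2."""
    return 4.0 * (v2 ** 2 - v1 ** 2)

# A 4 m/s jet across a stenotic valve implies roughly a 64 mmHg gradient:
print(bernoulli_gradient(4.0))  # 64.0
```

Note how quickly the estimated gradient grows with velocity: the squared term means a jet of 2 m/s suggests only 16 mmHg, while 4 m/s suggests 64 mmHg.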
Of note, during Doppler imaging it is clinically important to recognize the angle of interrogation of blood flow and its impact on the accuracy of our velocity measures. It is important when performing Doppler studies that the line of beam interrogation should be directly in the line of flow, resulting in as little distortion of data as possible. The more off-angle the approach, the more severe the underestimation of the true velocity. For practical purposes, an angle of interrogation of less than 20° is essential to ensure clinically accurate information.
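The angle dependence above follows from the fact that Doppler measures only the velocity component along the beam, i.e. the reading is the true velocity times cos(θ). A short sketch quantifying why 20° is an acceptable cutoff:

```python
import math

def fraction_underestimated(theta_deg):
    """Doppler reads true_velocity * cos(theta), where theta is the
    angle between the beam and the flow; return the fractional shortfall."""
    return 1.0 - math.cos(math.radians(theta_deg))

# At 20 degrees the error is only ~6%, but it grows quickly beyond that:
for angle in (0, 20, 45, 60):
    print(angle, round(fraction_underestimated(angle), 3))
```

At 0° there is no error, at 20° about 6%, at 45° about 29%, and at 60° the measured velocity is only half the true value, so the derived 4V² gradient would be off by a factor of four.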
Two commonly used techniques are pulsed and continuous wave Doppler. Pulse wave Doppler allows determination of direction and velocity at a precise point within the imaged cardiac field. However, it is limited in its maximum detectable velocity by the Nyquist limit making it unusable for quantification of high-velocity flow (e.g., as seen with severe obstruction). In contrast, continuous wave Doppler interrogates all points along a given beam. Continuous wave Doppler imaging is not constrained by velocity limits and can hence record velocities exceeding those of pulsed Doppler imaging. The drawback is that while the line of interrogation is identifiable, knowledge of anatomy must already be obtained to identify the precise location of the maximum velocity. Clinically these two techniques are commonly used sequentially to identify the area of interest and then to obtain the maximum velocity.

1. Color Flow Doppler
Color flow Doppler is a powerful technique for obtaining additional hemodynamic and anatomic data for patients undergoing echocardiography in the intensive care unit. Color flow Doppler allows velocity information to be overlaid on a 2D anatomic image, thereby providing data regarding intracardiac and extracardiac shunts, valvar insufficiency or stenosis, and vessel obstruction. By convention, shades of red are used in identifying blood flowing toward the transducer and blue to indicate blood flowing away from the transducer. Therefore, color flow Doppler defines the presence and direction of shunts and is used to grade the severity of valvar insufficiency.

Current Clinical Applications
Clinical applications of echocardiography within the intensive care unit may be divided into the following major areas:
1. The diagnosis and post-intervention evaluation of anatomic lesions.
2. Evaluation of cardiac function.
3. Diagnosis of intracardiac masses and extracardiac effusions.
4. Guidance of intervention within the intensive care unit.

Anatomic Lesions Pre and Post Intervention
Advances in technology have enabled most congenital heart defects to be diagnosed by echocardiography, avoiding the risks, time, and cost of invasive cardiac catheterization. In addition, for infants and pediatric patients admitted to an intensive care unit in shock, echocardiography may be useful for differentiating anatomic causes of shock from functional causes. Patients with obstruction to outflow on the left side of the heart who go undiagnosed at birth frequently present with signs of diminished cardiac output (CO) or frank shock. These lesions, including aortic valve stenosis, coarctation of the aorta, and variations of hypoplastic left heart syndrome, may be identified and defined by echocardiogram alone.
Following surgical or catheter-based intervention, patients convalesce in the intensive care unit. Most patients undergo a postprocedural echo before discharge home to document the adequacy of the repair and the lack of significant complications. In postoperative patients this assessment may prove more complicated, as access to the patient and the correct windows may be severely compromised by dressings, intracardiac lines, and chest tubes. Occasionally, postoperative patients in the intensive care unit may be found to have unexpected residual lesions (see Fig.3). For example, following repair of septal defects, echocardiography may be useful to screen for the presence of residual shunts, which may be less well tolerated secondary to myocardial changes following cardiopulmonary bypass. Often, the presence of a residual lesion is known in the operating room through transesophageal echocardiography or direct discussion with the surgeon. An important role of echocardiography is to distinguish those lesions with hemodynamic consequences from those whose presence has no impact on postoperative care. Transthoracic echocardiography may be used to diagnose and assess the hemodynamic sequelae of shunt lesions, residual stenosis, and function. More complicated is the assessment of coronary flow, right ventricular dynamics, and distal obstruction following intervention. In patients who are experiencing arrhythmias postoperatively, special attention should be paid to the flow within the coronary arteries to ensure that it has not been compromised or that a line or mass in the heart is not causing ectopy.
Fig.3 Parasternal short axis image in a patient with pulmonary atresia/VSD who acutely decompensated. White arrows demonstrate the large residual VSD that resulted when a patch dehisced. RA right atrium; RV right ventricle; AV aortic valve
Fig.4 (a) and (b): Four chambered view demonstrating color Doppler of tricuspid regurgitation and the corresponding spectral Doppler pattern. The velocity obtained by spectral Doppler may be utilized to estimate pulmonary artery pressures in the absence of downstream obstruction. A complete envelope by pulse wave or continuous wave Doppler provides the velocity of the regurgitant jet which may be translated into pressure data using the equation: ΔP = 4(Vmax)². RA right atrium; RV right ventricle; LA left atrium; LV left ventricle.

Unanticipated pulmonary arterial hypertension may slow the progress of a patient in the intensive care unit. In the absence of a Swan-Ganz catheter or direct pulmonary arterial monitoring, echocardiography may be used to estimate the pulmonary artery pressures. There are several methods that may be used to determine the pulmonary artery pressures. In a patient with tricuspid regurgitation, the velocity of the jet estimates the difference in pressure between the right atrium and the right ventricle (see Fig.4). If there is no stenosis of the pulmonary arteries, pulmonary valve, or right ventricular outflow tract, the difference in pressure between the right atrium and right ventricle plus the right atrial pressure (CVP) provides an estimate of the pulmonary arterial pressures. In the absence of tricuspid valve insufficiency, interventricular septal geometry may be used to help quantify the degree of pulmonary hypertension.
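Putting the two pieces together (the Bernoulli gradient across the tricuspid valve plus the right atrial pressure), the estimate can be sketched as follows; the jet velocity and CVP below are illustrative values, not patient data:

```python
def estimate_rvsp(tr_vmax, cvp):
    """Estimate right ventricular (and, absent RV outflow or pulmonary
    stenosis, pulmonary artery) systolic pressure in mmHg from the
    tricuspid regurgitant jet velocity (m/s) and right atrial
    pressure (CVP, mmHg): RVSP = 4*V^2 + CVP."""
    return 4.0 * tr_vmax ** 2 + cvp

# A TR jet of 3 m/s with a CVP of 8 mmHg suggests an RVSP of 44 mmHg:
print(estimate_rvsp(3.0, 8.0))  # 44.0
```

The caveat in the text matters: any obstruction between the right ventricle and the pulmonary artery breaks the assumption that RV systolic pressure equals pulmonary artery systolic pressure.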

Analysis of Ventricular Function
One of the most frequent uses of echocardiography in the ICU is related to the evaluation of ventricular performance. Improvements in technology allow assessment of both systolic and diastolic function with increasing accuracy.
1. Systolic Function
Accurate and timely assessment of systolic function should be an integral part of the medical management of the hemodynamically unstable critically ill patient. Global assessment of LV contractility includes the determination of ejection fraction (EF), circumferential fiber shortening, and cardiac output (CO). There are several methods that may be used to garner this information. Each has its limitations and assumptions, which are paramount to understand prior to clinically applying the information gathered. For assessment of left ventricular function, perhaps the simplest quantitative approach is to use M-mode echocardiography (see Fig.2) in either the parasternal short axis at the level of the papillary muscles or in the parasternal long axis at the tips of the mitral valve leaflets to measure the left ventricular end-diastolic dimension (LVEDD) and left ventricular end-systolic dimension (LVESD) for the determination of the fractional shortening (FS) percentage.
Fractional shortening is derived as follows:
FS (%) = [(LVEDD − LVESD) / LVEDD] × 100
Normal values for fractional shortening in children and infants vary slightly with age, typically falling between 28 and 44%.
Fractional shortening, therefore, provides a method of assessing circumferential change but has several obvious drawbacks. This method assumes that the ventricle being examined has a circular shape in the axis in which it is examined. As a result, changes in diameter may be mathematically related to circumferential fiber shortening, providing an estimate of ventricular function. Therefore, anything that alters the circular shape of the left ventricle (anatomic abnormalities intrinsic to congenital heart disease, preload and afterload changes, or ventricular–ventricular interactions) may affect the assessment of fractional shortening by altering the movement of the septum and causing an under- or overestimation of either the end-systolic or end-diastolic dimension.
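The fractional shortening calculation itself is a one-liner; a small sketch with hypothetical M-mode dimensions (illustrative values, not patient data):

```python
def fractional_shortening(lvedd, lvesd):
    """FS (%) = (LVEDD - LVESD) / LVEDD * 100, from M-mode
    end-diastolic and end-systolic dimensions (same units, e.g. cm)."""
    return (lvedd - lvesd) / lvedd * 100.0

fs = fractional_shortening(lvedd=4.0, lvesd=2.6)
print(round(fs, 1))       # 35.0
print(28 <= fs <= 44)     # True: within the typical normal range
```

Because FS depends only on the ratio of the two dimensions, any systematic error that shifts both measurements together (for example, septal flattening distorting the cavity) propagates directly into the result, which is the drawback the text describes.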
A second method of assessing ventricular function is via the ejection fraction. Ejection fraction is a volumetric appraisal of ventricular fiber shortening. Echocardiographically, the most common method of calculating ejection fraction is the biplane estimation of volumes from the apical four- and two-chamber views. One of the more commonly used mathematical algorithms is the Simpson method, in which the left ventricle is traced manually at end diastole and end systole along the endocardium. Using the method of disks, the left ventricle is divided into a series of parallel planes and the resultant disks are individually summed to create each volume. Ejection fraction is calculated using the following equation: EF (%) = [(EDV − ESV) / EDV] × 100, where EDV and ESV are the end-diastolic and end-systolic volumes, respectively.
Unfortunately, the determination of an accurate ejection fraction is also subject to ventricular shape, with the left ventricle assumed to have its normal prolate elliptical shape. Variations from this shape, which occur frequently in pediatrics, significantly alter the relationship between fiber shortening and volume upon which this equation depends. In addition, patients in the intensive care environment frequently have suboptimal imaging windows, making the endocardium difficult to distinguish and trace.
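The method of disks can be sketched in a few lines. This is a deliberately simplified single-plane illustration with hypothetical traced radii (a real biplane Simpson calculation combines two orthogonal apical views); it is meant only to show how disk volumes sum into EDV and ESV:

```python
import math

def simpson_volume(radii_cm, slice_height_cm):
    """Method-of-disks volume: a stack of cylindrical disks traced from
    the endocardial border; returns volume in mL (cm^3)."""
    return sum(math.pi * r ** 2 * slice_height_cm for r in radii_cm)

def ejection_fraction(edv, esv):
    """EF (%) = (EDV - ESV) / EDV * 100."""
    return (edv - esv) / edv * 100.0

# Hypothetical disk radii traced at end diastole and end systole:
edv = simpson_volume([1.8, 2.0, 2.0, 1.8, 1.4, 0.8], 1.0)
esv = simpson_volume([1.1, 1.3, 1.3, 1.1, 0.8, 0.4], 1.0)
ef = ejection_fraction(edv, esv)
print(round(ef))  # 61
```

The shape dependence the text warns about is visible here: the result is driven entirely by the traced radii, so a distorted cavity or an indistinct endocardial border feeds directly into the volume estimates.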
Not infrequently in active pediatric intensive care units, a patient’s heart and/or lung function must be supported for a period of time. Two such modalities of support are extracorporeal membranous oxygenation and ventricular assist devices. Often the pediatric echocardiographer is asked to assist in the management of these patients by providing insight into the recoverability of cardiac function. This request can be one of the more challenging uses of echo in an intensive care setting. As discussed above, many of the techniques commonly used to determine ventricular systolic function and CO are dependent on the loading conditions of the heart as well as contractility. As a result, both of these support systems, which unload the heart in an effort to allow recovery time, severely limit echo’s utility as a prognostic indicator. Several newer methods of determining myocardial function, including Tissue Doppler Imaging (TDI), strain and strain rate, color M-mode, calcium gating, and three-dimensional (3D) echocardiography, are entering the realm of echo in the intensive care unit. These newer modalities may prove to be more efficacious than current standard echocardiography.

2. Diastolic Function
Accurate assessment of diastolic function by echocardiography is an evolving field that has made great strides in the past few years. Diastolic heart failure and its impact on postoperative management also deserve consideration. Spectral Doppler evaluation is a relatively easy and useful method for evaluating diastolic function noninvasively at the bedside. A prominent pulmonary vein atrial reversal wave (a wave) is a marker of diastolic dysfunction. This finding represents marked flow reversal into the pulmonary veins during atrial systole in response to a noncompliant ventricular chamber. The mitral inflow Doppler pattern can also be a useful marker for diastolic dysfunction. Mitral inflow is composed of two waves: an E wave representing early passive ventricular filling (preload dependent) and an A wave representing active filling as a result of atrial systole. The E:A ratio, the velocity of E-wave deceleration, and the duration of the A wave can be altered in patients with diastolic dysfunction.
Tissue Doppler imaging (TDI) is a newer technique for assessing diastolic ventricular function. TDI allows recording of the low Doppler velocities generated by ventricular wall motion and directly measures myocardial velocities. In spectral TDI, pulsed Doppler is placed along the myocardial wall (mitral, septal, or tricuspid annulus) recording the peak myocardial velocities. Three waveforms are obtained: a peak systolic wave (Sa), an early diastolic wave (Ea), and an end-diastolic wave (Aa) produced by atrial contraction. The tissue Doppler systolic mitral annular velocity has been shown to correlate with global LV myocardial function [14]. TDI has also been used to estimate diastolic function and is relatively independent of preload conditions. The pulsed Doppler peak early mitral inflow velocity (E) divided by the TDI early diastolic mitral annular velocity (Ea) results in a ratio that correlates with the pulmonary capillary wedge pressure. The E/Ea ratio is also useful in estimating mean LV filling pressure. At this time, TDI represents one of the most accurate techniques to assess diastolic function and is therefore of particular interest in the critical care population, in whom abrupt changes in preload and afterload are common, making Doppler evaluation of diastolic function less reliable.

Detection of Intracardiac Masses and Extracardiac Effusions
An abnormal, well-localized area of dense reflectance within an echo may represent a mass, thrombus, or calcification. In the postoperative or critical care patient with multiple lines in place, especially in the setting of low flow, care must be taken to evaluate these areas for thrombus formation. Echo is the imaging modality of choice for elucidating and evaluating cardiac mass lesions. Differentiating an area of concern from artifact can be challenging. Areas that move appropriately throughout the cardiac cycle and the presence of an abnormality in more than a single view suggest a mass rather than an artifact (see Figs. 5a–d). These findings must in turn be distinguished from such anatomical variations as a prominent Eustachian valve or Chiari network.
Fig.5 Demonstrate a thrombus in the right ventricle seen in parasternal short axis (a) and modified four-chamber (b) views. RV right ventricle; LV left ventricle. (c) and (d): Demonstrate a thrombus in the left atrial appendage in both parasternal short axis and a modified four chamber views. RA right atrium; RV right ventricle; AV aortic valve; AO ascending aorta; LV left ventricle.

Major factors that predispose a patient to the development of intracardiac thrombi are the presence of intracardiac lines, diminished CO, and localized stasis in addition to changes within the clotting cascade from sepsis, bypass, intrinsic clotting disorders, or heparin use. Echocardiographic evaluation of patients within the intensive care setting must include an awareness of the increased incidence of thrombus formation and a careful evaluation of areas predisposed to become a nidus for thrombus.
Following cardiac surgery it is not uncommon for patients to develop small collections of fluid in the pericardial space (see Fig.6). Typically, this is of little concern to the clinician; however, in a postoperative patient experiencing tachycardia and/or hypotension, the necessity of recognizing the potential for and screening for cardiac tamponade becomes paramount. In young infants and children, it is frequently difficult to rely on physical exam findings of increased jugular venous pressure or the late finding of pulsus paradoxus. In this instance, a directed and easily performed 2D and Doppler echocardiography can confirm the presence of an effusion and provide accurate assessment of its hemodynamic significance.
Fig.6 Subcostal image demonstrating a large circumferential pericardial effusion (green arrows)

The size and extension of a pericardial effusion may be diagnosed from the parasternal, apical, or subcostal windows. The apical view is the easiest for obtaining information regarding the effusion’s hemodynamic significance. From the apical four-chamber view, both the mitral and tricuspid valve flow patterns are evaluated with respiratory monitoring in place. Examining the changes in inflow hemodynamics with respiration allows for the evaluation of tamponade physiology. Greater than 25% variability in the maximal E-wave velocity of the mitral valve with inspiration, or 50% in the E-wave velocity of the tricuspid valve (see Figs.7a, b), is indicative of significant hemodynamic compromise resulting from the effusion. Additionally, collapse (as differentiated from contraction) of the free wall of the right and left atrium (see Figs.8a, b) when the pericardial pressure exceeds the atrial pressure may be seen from this view in a patient with a significant effusion.
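The respiratory-variation criterion above is simple arithmetic once the inspiratory and expiratory E-wave velocities are measured. A small sketch with hypothetical velocities (illustrative values, not patient data):

```python
def respiratory_variation(v_exp, v_insp):
    """Percent fall in inflow E-wave velocity from expiration to
    inspiration, relative to the expiratory (larger) value."""
    return (v_exp - v_insp) / v_exp * 100.0

# Mitral inflow falling from 1.0 to 0.7 m/s with inspiration:
mitral = respiratory_variation(v_exp=1.0, v_insp=0.7)
print(round(mitral, 1), mitral > 25)  # 30.0 True -> exceeds the 25% threshold
```

The same calculation applies to the tricuspid inflow, against the higher 50% threshold quoted in the text.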
Fig.7 (a) and (b): Respiratory changes in the mitral and tricuspid valve e wave Doppler patterns consistent with tamponade physiology. The tricuspid valve inflow demonstrates more than 50% variability between inspiration and expiration (a). During mitral valve inflow Doppler, the peak E wave velocity alters more than 30% between inspiration and expiration (b).
Fig.8 (a) and (b): Four chambered views demonstrating right atrial and right ventricular collapse (green arrows) as a finding of tamponade physiology. RA right atrium; RV right ventricle; LA left atrium; LV left ventricle.

Echocardiography-Guided Procedures
1.  Pericardiocentesis
Performing “blind” percutaneous pericardiocentesis as a treatment for significant pericardial effusion dates back to the early eighteenth century, and the procedure is historically fraught with complications. Improved techniques in the 1970s, with the advent of 2D echo, allowed more accurate localization of the fluid and the development of echo-guided pericardiocentesis. Echo-guided pericardiocentesis (see Fig.9) has been found to be a safe and effective procedure, with insertion of a catheter for drainage used to reduce the rate of recurrence that complicates simple needle drainage, and is considered the primary and often definitive therapy for patients with clinically significant effusions.
Fig.9 Echoguided pericardiocentesis. Green arrow is in the pericardial space demonstrating the large fluid collection. Blue arrow is pointing to the needle that has been advanced into the pericardial space to drain the fluid collection. The large effusion allows the echocardiographer to direct the individual performing the pericardiocentesis away from areas that could lead to complications such as perforation of the myocardium.

2.  Balloon Atrial Septostomy (BAS)
Part of any echocardiographic assessment of a patient with congenital heart disease should include evaluation of the atrial septum. Cardiac lesions such as transposition of the great arteries, hypoplastic left heart syndrome, and tricuspid atresia require an adequate atrial communication. In the setting of a restrictive atrial septal communication or intact septum, a BAS is required to improve mixing and CO. In the past, the procedure, originally described by William Rashkind, was performed in the cardiac catheterization laboratory under fluoroscopic guidance. However, during the last decade BAS has been routinely performed at the bedside in the intensive care unit under echocardiographic guidance (see Figs.10a–d). Most commonly, either a subcostal view that includes a focused look at the atrial septum, pulmonary veins, and mitral valve or an apical four-chamber view is used. For the echocardiographer, the primary role is to provide continued visualization of the catheters and communicate well with the interventionalist. Advantages of this technique are multifactorial: echocardiography is superior to fluoroscopy during BAS due to the lack of radiation, the ability to perform the procedure at the bedside rather than transporting the patient to a catheterization laboratory, and direct, continuous visualization of the atrial septum, pulmonary veins, and mitral valve. The disadvantages of this technique include the potential for interference with maneuverability for both the echocardiographer and the catheter operator around a small neonate, and therefore the risk of contamination of the sterile field. Additionally, there is the possibility of poor acoustic windows in an ill neonate who may be mechanically ventilated. However, with proper planning and communication, the limitations of transthoracic echocardiographic guidance of BAS may be minimized.
Fig.10 Subcostal images demonstrating echo-guided balloon atrial septostomy (BAS). (a): shows the initial small atrial communication in both 2 dimensional (2D) and color Doppler imaging. (b): reveals the deflated balloon that has been advanced across the atrial communication. It is important during this portion of the procedure for the echocardiographer to ensure that the balloon has not been advanced across the left atrioventricular valve. (c): demonstrates the inflated balloon within the left atrium. It is important to note the balloon’s position away from the mitral valve and pulmonary veins. (d): demonstrates the atrial communication following septostomy using both 2D and color Doppler imaging. RA right atrium; RV right ventricle; LA left atrium; LV left ventricle; Green arrows atrial communication.

Future Directions
There are several areas of advanced imaging that are becoming more commonplace in the practice of pediatric echocardiography. Primary assessment of cardiac mechanics by evaluating myocardial motion, strain, and strain rate has been validated in healthy children and provides additional information regarding myocardial performance. Three-dimensional real-time echocardiography has a growing role in evaluating anatomic defects, valves, and right and left ventricular function independently of geometric assumptions that constrained the previous methods.

1. Myocardial Mechanics
In the past several years, myocardial strain and strain rate have emerged as promising quantitative measures of myocardial function and contractility. Strain (e) is a dimensionless parameter defined as the deformation (change in length) of an object relative to its original length (Lo), and is expressed as a percentage. Strain rate (SR) is defined as the local rate of deformation, or strain (e) per unit of time, and is expressed in 1/s. Strain and strain rate measurements can be obtained from data acquired by Doppler Tissue Imaging or 2D tissue tracking. Strain and strain rate should be of great help in the future evaluation of ventricular function, since conventional M-mode and 2D echocardiography are limited by the complex morphology of the right ventricle and the altered left ventricular morphology that occurs in complex congenital heart defects. Left and right ventricular reference values of strain and strain rate are available for healthy children.
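To make the definitions concrete, here is a small illustrative calculation; the segment lengths and timing below are invented for the example, not patient data:

```python
# Illustrative strain / strain-rate calculation with made-up numbers.
def strain(original_length, deformed_length):
    """Strain e = (L - Lo) / Lo: deformation relative to original length (dimensionless)."""
    return (deformed_length - original_length) / original_length

def strain_rate(e, seconds):
    """Strain rate SR = strain per unit time, expressed in 1/s."""
    return e / seconds

Lo, L = 10.0, 8.0         # mm: a segment that shortens during contraction
e = strain(Lo, L)         # -0.20, i.e. -20% (shortening gives negative strain)
sr = strain_rate(e, 0.2)  # -1.0 1/s if the deformation occurs over 200 ms
print(f"strain = {e:.0%}, strain rate = {sr:.1f} 1/s")
```

Note that shortening produces a negative strain value, which is why normal systolic longitudinal strain is reported as a negative percentage.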

2.  3D Echocardiography
Off-line 3D reconstruction consists of acquisition of sequential 2D slices which are converted to a rectangular coordinate system for 3D reconstruction, and provides accurate anatomic information suitable for quantitative analysis. Left ventricular volume, mass, and function can be accurately assessed using real-time 3D echocardiography (RT3D) independently of geometric assumptions, and ejection fraction can be calculated. The wide-angle mode is often used to acquire the entire LV volume, from which further analysis allows determination of global and regional wall motion. Wall motion is evaluated from base to apex with multiple slices from different orientations. The advantage of 3D over 2D is the ability to manipulate the plane to align the true long axis and minor axis of the LV, thus avoiding foreshortening and oblique image planes. LV volume assessment by RT3D is rapid, accurate, reproducible, superior to conventional 2D methods, and comparable to MRI, which represents the gold standard. Three-dimensional reconstruction of the tricuspid valve has been shown to be helpful for anatomical assessment of Ebstein’s malformation or after atrioventricular septal defect repair. 3D echocardiography is a useful adjunct to standard 2D imaging and should be increasingly used in the future.

          Hioki Releases iPad App for Memory HiCorders        

Wireless Application Streamlines and Enhances Analysis of Waveform Data

(PRWeb November 22, 2013)

Read the full story at http://www.prweb.com/releases/2013/11/prweb11361196.htm


          ABU Podcast #66 - 1970s Special DOWNLOAD        





Here's the latest ABU Podcast and as promised it's a 1970s special.

It was a blast going through tons of old tunes, and trying to pick out stuff was extremely difficult as there was just so much choice - there will definitely have to be another one.

Listen in the Player above or click the Download option (also in the above Player).

The tracklisting is below, but I recommend listening 'blind' first and seeing if you can guess the audio.

Cheers,

Mr Repo

\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\

Welcome along to this edition of the ABU Podcast #66.

We're trying something different this time and taking an excursion way into the past, right back to the 1970s. So expect a trip down Memory Lane as we delve into some long-forgotten television theme tunes - some you may remember and others you may well have forgotten for good reason.

The 1970s is having something of a renaissance recently, and I'm more drawn to the spooky side of it. So that involves the worrying public information films that used to be shown very frequently during kids' TV, and also the emergence of a musical genre called Hauntology, which is a modern throwback to the unnerving synthesizer pieces you may have heard during those PIFs.

Anyway, sit back and enjoy the show. I've given this a 5-headphone rating so you'll enjoy more in a quiet darkened room with a pair of headphones on. Preferably with a can of Tizer and some Spangles.

01
John Leach - Sun-Ride (Theme from Ask the Family)
Album: Ole Jensen And His Music - Chappell ‎– LPC 1047
Year: 1970

02
Mike Mansfield - Funky Fanfare
Album: Flamboyant Themes Volume II - KPM Music ‎– KPM 1038
Year: 1978

03
Placebo - Balek
Album: 1973 - CBS ‎– S 65683
Year: 1973

04 
Play Safe - 'Frisbee'
Public Information Film
Year: 1979

05
Bernard Herrmann - Theme From Taxi Driver
Album: Taxi Driver - Original Soundtrack Recording - Arista ‎– AL 4079
Year: 1976

06
Tany Turens - High Life (Theme from Stop Look Listen)
Album: Rockin' In Rhythm - Regency Line ‎– RL 1021
Year: 1975

07
Franco Micalizzi  - Affanno
Album: Violence! - Cometa Edizioni Musicali ‎– CMT 1005/13
Year: 1977

08
Ted Heath Orchestra - Johnny One Note (Theme from John Craven's Newsround)
Album: Big Band Percussion
Year: 1961

09
Syd Dale - Marching There And Back (BBC Screen Test Theme) (1970)
Album: Strictly For The Birds - Programme Music ‎– PM 007
Year: 1975

10
Ruby - BART (BBC Schools 'Dots' intermission music)
Album: Ruby - PBR International ‎– PBRL 5001
Year: 1976

11
Hoyt Curtin - Hair Bear Bunch Theme
Year: 1971

12
Laurie Johnson - The New Avengers (1976)
Album: The Avengers & The New Avengers / The Professionals - Unicorn-Kanchana ‎– KPM 7009
Year: 1980

13
John Scott/ The Saint Orchestra - Return of the Saint (1978)
Single: Pye Records ‎– 7N 46127
Year: 1978

14
Lonely Water (Public Information Film)
Year: 1973

15
Structures Sonores Lasry-Baschet - Manège (Theme from Picture Box)
Album:  N° 4 - BAM ‎– LD 098
Year: 1965

16
Unknown - Theme from Near And Far
Year: Unknown

17
Dudley Simpson - The Tomorrow People Theme
Album:  The Tomorrow People Original Television Music - Trunk Records ‎– JBH017LP
Year: 2006

18
Theme from Sapphire & Steel
Year: 1979


19
Ron Grainer - Tales of the Unexpected Theme
Single: RK Records ‎– RK 1021
Year: 1979

20
Godiego - The Birth Of Odyssey / Monkey Magic
Album: Monkey - BBC Records ‎– REB 384
Year: 1980

21
Richard Denton and Martin Cook - Tomorrow's World
Album: Hong Kong Beat & Other BBC TV Themes - BBC Records ‎– REH 385
Year: 1980

22
Sparks - Number 1 Song in Heaven
Album: No. 1 In Heaven - Virgin ‎– V 2115
Year: 1979

23
Space - Magic Fly
Album: Magic Fly - Pye International ‎– NSPL 28232
Year: 1977

24
Yellow Magic Orchestra - Technopolis
Album: Solid State Survivor - Alfa ‎– ALF 85664
Year: 1979

25
Barry DeVorzon - Wonder Wheel (Main Title)
Album: The Warriors (Music From The Motion Picture) Waxwork Records ‎– WW010
Year: 2016/1979

26
Andre Previn - Executive Party
Album: Rollerball (Original Soundtrack Recording) - United Artists Records ‎– UAS 29865
Year: 1975

27
John Baker - New Worlds (Theme from Newsround)
Album: BBC Radiophonic Music - BBC Radio Enterprises ‎– REC 25M
Year: 1968

28
Ian Humphris - Theme from Words and Pictures
Year: 1970

29
Joe Campbell - Mr Men Theme
Year: 1976

30
Brian Cant Meets The Fabreeze Brothers

31
The Persuaders - Grange Hill
Album: Scored 1 - 0 - JTI Records ‎– JTI 001 LP
Year: 1996

32
Door Chain (Public Information Film)
Year: 1976

33
Gelg - Look Around You (Opening Titles)
Year: 2002

34
Peter Moss - Think Of A Number Theme
Year: 1977

35
Alan Hawkshaw - Its All At The Co-Op Now (1972)
Album: 27 Top T.V. Themes & Commercials - Columbia ‎– TWO 391
Year: 1972

36
Boards of Canada - Roygbiv
Album: Music Has The Right To Children - Warp Records ‎– warplp55
Year: 1998

37
Never Go With Strangers (Public Information Film)
Year: 1971


38
Actress - N.E.W. (2012)
Album: R.I.P
Year: 2012

39
We Are The Champions
Year: 1973

40
Focus - House Of The King
Album: Focus 3 - Polydor ‎– 2344-038
Year: 1972

41
Stepasaur - Stepisode 5
Year: 2015

42
Charley - Mummy Should Know (Public Information Film)
Year: 1973

43
John Carpenter - Assault On Precinct 13 (Main Title)
Album: ‎Assault On Precinct 13 (The Original Motion Picture Score) - Record Makers ‎– Rec-12
Year: 2003

44
Brian Bennett - Discovery
Album: Fantasia - Bruton Music ‎– BRI 10
Year: 1980

45
Telltale - Rainbow
Album: Songs From The Thames Television Children's Programme Rainbow - Music For Pleasure ‎– MFP 50087
Year: 1973

46
Solid'N'Mind Featuring MC Whirlwind D* & Johnny F ‎- An Original Break
Single: Liberty Grooves ‎– LIB 001
Year: 1990

47
Juice MCs - Spydaman
Single:
Year: 2017

48
Theme from the Dukes of Hazzard (Good Ol' Boys)
Album: Music Man -  RCA ‎– PL13602
Year: 1980

49
Sunburst - Theme From Paramount TV Series Mork & Mindy (Mork & Mindy Rock)
Single: Logo III Records ‎– Logo 5/79
Year: 1979

50
John Gregory - Six Million Dollar Man
Album: The Detectives -  Philips ‎– 6308 255
Year: 1976

51
Charles Fox - Wonderwoman
Album: Superfriends - Warner Bros. Records ‎– 56582
Year: 1978

52
Corniche ‎- Theme From Chips
Single:
Year: 1979

53
Rhythm Heritage - Theme From S.W.A.T. (1975)
Album: Disco-Fied - ABC Records ‎– ABCL-5174
Year: 1976

54
Ennio Morricone  - My Name Is Nobody
Album: My Name Is Nobody - Cerberus Records (2) ‎– CEM-S 0101
Year: 1979




          Traktor Certification for Rane MP2015 Mixer        

Rane is pleased to announce our collaboration with Native Instruments to bring you Traktor certification for the new Rane MP2015 rotary mixer. This incredible mixer is generating a lot of excitement on its own, but now with Traktor Scratch certification people are freaking out, as this is the first Rane mixer to have Traktor Scratch certification! Now, Traktor Scratch users can control two or more virtual decks with Traktor control vinyl or control CDs. It is also the first Traktor Scratch certified mixer with dual USB ports, allowing easy back-to-back DJing and quick changeovers.

Setup is easy with the MP2015's class compliant Core Audio drivers for Mac. Windows users simply need to install the included ASIO driver. The MP2015's control surface is MIDI mappable to Traktor, giving you software control directly on the mixing console. Traktor isn't bundled with the MP2015, but it is available for easy download from the Native Instruments website: www.native-instruments.com

The #futureofdjing has never been so bright!

 

 

Traktor Pro 2.8.0 Release Notes

 

TRAKTOR PRO 2.8.0 contains some substantial changes to the core of the software, along with numerous other feature enhancements, improvements, and bug fixes. The list below provides a comprehensive overview of all changes since the last public release (TRAKTOR PRO 2.7.3), including the improvements in the two Public Beta releases (TRAKTOR PRO 2.7.4 and 2.7.5):


1.1. 64-Bit Application Architecture

TRAKTOR PRO now has a 64-bit architecture. Making TRAKTOR PRO a 64-bit application allows it to access all the available RAM on computers with 64-bit operating systems; previously, TRAKTOR PRO could only access a maximum of 2GB of RAM regardless of how much RAM was actually installed and available on the computer. Access to more RAM increases the performance of the software by allowing management of more items (larger Track Collections, more Remix Deck samples, better playback caching, etc.).


Are you still using a 32-bit version of Windows? With this release, the 32-bit version of TRAKTOR PRO can now also access an additional 1GB of RAM (if the computer has it available) providing additional performance and stability.


ATTENTION 64-BIT WINDOWS USERS: If you are using an audio interface which only has 32-bit drivers, be sure you are also using the 32-bit version of TRAKTOR PRO in order to access the low-latency ASIO drivers for the audio interface; otherwise your audio interface won’t be selectable in the TRAKTOR PRO Preferences. If you have a 64-bit operating system, the Installer will have installed the 64-bit version of the application by default, and you will need to take a few extra steps to run the 32-bit version, as detailed in the readme.


1.2. Multi-Processor Improvement

A significant update has just been made to TRAKTOR PRO where multi-processor support is concerned: TRAKTOR PRO’s old audio threading model has been completely updated and optimised. For users of multi-processor computers who were experiencing degraded audio performance since the release of TRAKTOR PRO 2.7.0, the new threading model should fix these issues automatically.


1.3. Automatic Deck Flavor Switching

TRAKTOR PRO now features full Automatic Deck Flavor Switching for all users; previously, this was only working when loading items via the KONTROL S8 Browser. Now, loading or dragging a Track or Remix Set onto a Deck will cause the Deck to automatically switch to the appropriate flavor to play the content.


1.4. Parallel Audio Analysis

Also new in this version is a special analysis mode called “Parallel Processing”. This option can be found at the bottom of the Analysis window which appears when you right-click on tracks and choose “Analyze (Async)” from the context menu. If you enable the Parallel Processing checkbox before clicking “OK”, TRAKTOR PRO will then use multiple threads to process many tracks simultaneously. Our tests show that processing a large collection of files can now be done three times faster with this option enabled. Be aware, however, that TRAKTOR PRO will use a lot of your computer’s resources to do this and it may affect playback of tracks. We therefore recommend using this feature only in an offline situation rather than during a live performance.
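As a generic illustration of the worker-pool idea behind this kind of batch analysis (this is a sketch, not Traktor's actual implementation, and `analyze` is a hypothetical stand-in for per-track analysis):

```python
# Worker-pool sketch of parallel batch analysis: several tracks are
# processed in flight at once instead of one after another.
# Illustration only; not Traktor's actual code.
from concurrent.futures import ThreadPoolExecutor

def analyze(track):
    # Hypothetical stand-in for per-track analysis (BPM, key, gain, ...).
    return {"track": track, "bpm": 120.0}

tracks = [f"track_{i:02d}.mp3" for i in range(8)]

# Sequential analysis: one track at a time.
sequential = [analyze(t) for t in tracks]

# Parallel analysis: a pool of workers processes tracks concurrently,
# trading higher CPU load for shorter total analysis time.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(analyze, tracks))

assert sequential == parallel  # same results, produced concurrently
```

The trade-off in the sketch mirrors the release note: more workers finish the batch sooner but load the CPU harder, which is why running it alongside live playback is discouraged.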


1.5. Support for the New TRAKTOR KONTROL D2

TRAKTOR PRO 2.8.0 supports the new KONTROL D2 hardware controller. The D2 will be officially released on May 4th, 2015.


1.6. Spin, Scratch & Hold a Playing Deck with KONTROL S8 or D2 Touchstrip

New is a preference for the KONTROL S8 and KONTROL D2 which changes the SHIFT-behavior of the Touchstrip while a Deck is playing. Previously, holding SHIFT and touching the Touchstrip would perform an Absolute Seek (the Deck’s position would jump to the location corresponding to the touch on the Touchstrip). With this new preference enabled, this behavior is changed so holding SHIFT will allow you to perform spins, scratches, and holds with the Touchstrip while the Deck is playing.


NOTE: Backspins are enhanced by the fact that TRAKTOR PRO will stop the spin as soon as you release the SHIFT button. You can therefore perform a backspin effect for 2 beats by turning on FLUX mode, holding SHIFT, and swiping backwards on the Touchstrip. Two beats later, release the SHIFT button and the spin will stop and normal playback will resume right on the beat you desire.


1.7. KONTROL S8 and D2 Beat Grid Edit Mode Zoom

When enabling Beat Grid Edit Mode on the KONTROL S8 or KONTROL D2, the left-most Performance Button will now be active. Pressing this button will zoom in on Beat 1 allowing you to set the position of the Beat Grid with greater precision. Press the button again to exit the zoom.


1.8. KONTROL S8 and D2 Position-Aware Beat Grid Tempo Adjustment

When in Beat Grid Edit Mode on the KONTROL S8 or KONTROL D2, the two center Performance Knobs are used for adjusting the Tempo of the Beat Grid—the left knob is a coarse adjustment while the right knob is a fine adjustment. The new improvement is that these knobs are scaled based on the viewing position of Beat Grid Edit Mode so that adjustments made far away from the Grid Marker don’t result in abrupt changes to the waveform position. For example, if you are near the Grid Marker at the start of a song and change the Tempo of the Beat Grid, you will see the waveform move under the Beat Grid by a particular amount. If you then scan later into the track, adjusting the Tempo will create a similar amount of motion on the waveform (rather than a large amount of motion) thus allowing for precise setting of the Beat Grid Tempo over the length of the song.


1.9. KONTROL S8 and D2 MIDI Controls

We've added a new feature to KONTROL S8 (which is also available on KONTROL D2) that allows you to use the Performance Knobs, Performance Buttons, and the Performance Faders below the Displays as MIDI output controls. You can therefore use these controls to send MIDI messages to other software or external gear. This feature is not enabled by default and requires some configuration, detailed in release notes.


1.10. Pioneer CDJ-900NXS and XDJ-1000 Integration

Full native support for these two Pioneer players, including all available new functions provided on the XDJ’s touch screen interface, is now integrated into TRAKTOR PRO.


1.11. Rane MP2015 Scratch Certification


The Rane MP2015 rotary mixer is now Scratch Certified and can be used as an audio interface for TRAKTOR PRO in conjunction with Timecode vinyl and/or CDs.

 

 


1.12. Additional Bugfixes
 

Beta #2 (2.7.5) had a problem with FX routing modes—this issue has now been fixed.

Beta #2 also had a problem with playback of long M4A (AAC) files in the 32-bit version of the application. This has been fixed.

Beta #2 sometimes exhibited crackling when loading new tracks into Decks. This version resolves this issue.

Beta #2 could have high CPU spikes when used on some low-performance systems. We have made a change which prevents this.

We fixed a problem where TRAKTOR PRO would unnecessarily update the tags of tracks which are in the Preparation List at startup.

An issue was reported where TRAKTOR PRO could hang when accessing the Explorer node of the Browser Tree. This issue has been fixed.

At the same time, we also fixed an issue that could cause TRAKTOR PRO to crash when opening an Archive folder containing over 2500 .nml files.

We fixed an issue where TRAKTOR PRO would unnecessarily update all file tags when clicking on a Playlist .nml in the Explorer node of the Browser Tree.

We also fixed an issue where TRAKTOR PRO would unnecessarily update file tags when deleting items from a Playlist.

We fixed the CPU Load spike that can sometimes occur when engaging Keylock or Filter for the first time.

There was a problem where you sometimes couldn't re-order tracks in a Playlist without first clicking the "#" column twice and this has been fixed.

The Battery Indicator in TRAKTOR PRO's header was broken on some 64-bit systems and wouldn't show the battery level. This is now fixed.

An improvement has been made to MP4 (AAC) audio handling on Windows which should remove crackling during playback of these file types.

When adjusting the Master Clock Tempo via the S8 or D2, we have removed the "Hold BACK to Reset" text since this function wasn't valid for the Master Clock.

This version contains a fix for some crashes on startup which were part of the first Beta (2.7.4).

There were also reports of crashes or hangs on shutdown on some Windows systems and we have made a fix for it.

A bug was reported in 2.7.3 where a Deck would stop when loading a track into the playing Deck regardless of preference settings. This issue has now been fixed—loading a track into a playing Deck will leave the Deck playing so you immediately hear the newly-loaded track.

We also fixed an issue where jumping out of an Active Loop via a HotCue would disable the Loop—the Loop will now remain active when doing this just like in TRAKTOR PRO 2.6.8.

Fixed a problem where memory corruption could occur when browsing and sorting the Explorer node under very specific conditions with the S8 or D2.

Lastly, we fixed two issues which occurred when making changes to tracks (such as changing the track Rating) in the Explorer node while that same track was already playing in a Deck; doing so could result in the analyzed tempo being lost (causing the track to fall out of sync) or removal of the “played” checkmarks from the track. These issues should no longer occur.


1.13. Controller Editor

Controller Editor has been upgraded to version 1.8.0.262 to support KONTROL D2 in addition to other improvements and bugfixes.

 

About Native Instruments

Native Instruments is a leading manufacturer of software and hardware for computer-based music production and DJing. The company's mission is to develop innovative, fully integrated solutions for all professions, styles and genres. The resulting products regularly push technological boundaries and open up new creative horizons for professionals and amateurs alike. www.native-instruments.com

 

 


          To The Keppe Motor Team - What Is This Waveform ?        
To the Keppe Motor Team: The following image is the scope capture from your video entitled "Keppe Motor going open source". I would like to know exactly what coil arrangement, power and control arrangement was used to generate the complementary pulse waveform shown in the scope capture. Is this showing a motor running and, if so, is it under load? I would like to duplicate this. Please explain. Sincerely, Greg -from the video "Keppe Motor going open source"
          Remstar Pro Cpap        
stop snoring




REMstar® Pro with C-Flex™ is the perfect combination of the advances that have made us the leader in sleep therapy. On the surface, it's easy to trace this device's lineage: it's pure REMstar, from its sleek design down to its integrated heated humidification unit. But therapy with C-Flex is a totally unique experience. The difference is the C-Flex waveform, which offers a more comfortable way to deliver sleep therapy by taking the work out of exhalation. To do this, C-Flex tracks and reacts to every breath throughout the night. This gives the device the ability to make breath-by-breath adjustments to ensure the optimal level of pressure relief during exhalation and deliver more comfortable therapy. Other features include Encore® Pro SmartCard® compatibility, onboard FOSQ, automatic altitude compensation, auto on/off, leak tolerance that makes treatment easier, and flexibility and comfort for patients on the go. The Encore Pro SmartCard and onboard FOSQ (Functional Outcomes of Sleep Questionnaire) help you track quality-of-life improvements, so you can better manage treatment outcomes - for healthier patients and a healthier business.

Remstar Pro Cpap
          MEASURING VOLTAGE        
Most of the readings taken with a multimeter will be VOLTAGE readings.
Before taking a reading, select the highest range; if the needle barely moves up the scale (to the right), you can switch to a lower range.
Always switch to the highest range before probing a circuit and keep your fingers away from the component being tested.
If the meter is Digital, select the highest range or use the auto-ranging feature by selecting "V." The meter will automatically produce a result, whether the voltage is AC or DC.
If the meter is not auto-ranging, you will have to select the DC voltage range (marked V with a straight and dashed line) if the voltage is from a DC source, or the AC voltage range (marked V~) if the voltage is from an AC source. DC means Direct Current, where the voltage comes from a battery or supply and is steady and not changing; AC means Alternating Current, where the voltage is rising and falling.
You can measure the voltage at different points in a circuit by connecting the black probe to chassis. This is the 0v reference and is commonly called "Chassis" or "Earth" or "Ground" or "0v."
The red lead is called the "measuring lead" or "measuring probe" and it can measure voltages at any point in a circuit. Sometimes there are "test points" on a circuit and these are wires or loops designed to hold the tip of the red probe (or a red probe fitted with a mini clip).
You can also measure voltages ACROSS A COMPONENT. In other words, the reading is taken in PARALLEL with the component. It may be the voltage across a transistor, resistor, capacitor, diode or coil. In most cases this voltage will be less than the supply voltage.
If you are measuring the voltage in a circuit that has a HIGH IMPEDANCE, the reading will be inaccurate, up to 90% !!!, if you use a cheap analogue meter.


Here's a simple case.
The circuit below consists of two 1M resistors in series. The voltage at the midpoint will be 5v when nothing is connected to the midpoint. But if we use a cheap analogue multimeter set to 10v, the resistance of the meter will be about 100k (for a sensitivity of 10k/v) and the reading will be incorrect.
Here's how it works:
Every meter has a sensitivity. The sensitivity of the meter is the sensitivity of the movement and is the amount of current required to deflect the needle FULL SCALE.
This current is very small, normally 1/10th of a milliamp and corresponds to a sensitivity of 10k/volt (or 1/30th mA, for a sensitivity of 30k/v).
If an analogue meter is set to 10v, the internal resistance of the meter will be 100k for a 10k/v movement.
If this multimeter is used to test the following circuit, the reading will be inaccurate.
The reading should be 5v, as shown in diagram A.
But the analogue multimeter has an internal resistance of 100k, and it creates the circuit shown in diagram C.
The top 1M and the 100k of the meter create a combined PARALLEL resistance of about 90k. This forms a series circuit with the lower 1M, and the meter will read less than 1v.
If we measure the voltage across the lower 1M, the 100k of the meter forms a parallel resistance with the lower 1M and it will again read less than 1v.
If the multimeter is 30k/v, the reading will be about 2v. See how easy it is to get a totally inaccurate reading.
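The meter-loading arithmetic above can be checked with a few lines of code (a sketch using the resistor and meter values from the example):

```python
# Verify the meter-loading arithmetic from the example above.
# An analogue meter's internal resistance = sensitivity (ohms/volt) x range.
def parallel(r1, r2):
    """Combined resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def midpoint_reading(supply, r_top, r_bottom, r_meter):
    """Voltage at the divider midpoint with the meter across the lower resistor."""
    loaded = parallel(r_bottom, r_meter)
    return supply * loaded / (r_top + loaded)

supply = 10.0   # volts
r = 1_000_000   # each divider resistor is 1M

print(midpoint_reading(supply, r, r, 10_000 * 10))  # 10k/v meter on 10v range: ~0.83v, not 5v
print(midpoint_reading(supply, r, r, 30_000 * 10))  # 30k/v meter on 10v range: ~1.9v
print(midpoint_reading(supply, r, r, 10_000_000))   # 10M DMM: ~4.76v, close to the true 5v
```

The last line shows why a DMM, with its fixed 10M input resistance, barely disturbs a high-impedance circuit while the cheap analogue meter drags the reading down to under 1v.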




This introduces two new terms:
HIGH IMPEDANCE CIRCUIT and "RESISTORS in SERIES and PARALLEL."

If the reading is taken with a Digital Meter, it will be more accurate as a DMM does not take any current from the circuit (to activate the meter). In other words it has a very HIGH input impedance. Most Digital Multimeters have a fixed input resistance (impedance) of 10M - no matter what scale is selected. That's the reason for choosing a DMM for high impedance circuits.  It also gives a reading that is accurate to about 1%.


MEASURING VOLTAGES IN A CIRCUIT
You can take many voltage-measurements in a circuit. You can measure "across" a component, or between any point in a circuit and either the positive rail or earth rail (0v rail). In the following circuit, the 5 most important voltage-measurements are shown. Voltage "A" is across the electret microphone. It should be between 20mV and 500mV. Voltage "B" should be about 0.6v. Voltage "C" should be about half-rail voltage. This allows the transistor to amplify both the positive and negative parts of the waveform. Voltage "D" should be about 1-3v. Voltage "E" should be the battery voltage of 12v.



          Tracktion Waveform now included with all ROLI Seaboards        
Tracktion and ROLI have announced that Waveform, the newest and most advanced digital audio workstation from Tracktion, will be bundled with every ROLI Seaboard instrument including the new Seaboard Block. The hardware-software combination will allow more musicians to easily create and edit projects with the full expressivity of MIDI Polyphonic Expression (MPE). Waveform offers a […]
          Oscarizor spectrum analyzer plugin updated to v3.5.0        
Sugar Audio has released an update of Oscarizor, a 2D/3D multi-channel spectrum analyzer effect plugin for Windows and Mac. The new version features technical changes, UI resizing up to Ultra-HD & Retina, RMS/Peak/Peak-hold metering, UI Lock and other changes. Parallel waveform, spectrum & stereo field comparison in 2D and 3D using side-chaining and […]
          Fragments for Kontakt released by Homegrown Sounds        
The Fragments instrument library is an attempt at exploring granular style synthesis within the Kontakt system. It uses Long Evolving Waveforms and the sound is generated by playing Looped Sections from these WAVs. The Loop Position and Length can be Sequenced in real-time via the Dedicated Loop sequencers which can create both Glitchy and interesting […]
          Digital Exploration for UVI Falcon released by Oberheim 8000        
Oberheim 8000 has released Digital Exploration, a soundset for the Falcon hybrid synthesizer instrument by UVI: 200 presets for UVI Falcon created only with Falcon synths (no samples). That means VA, Wavetable, Noise and Pluck oscillators are used for generating sound. The soundset contains Bass, Drums, FX, Instruments, Leads, Pads, Synths and resources (single waveforms and […]
          Analog-to-digital video preservation and a site visit to a repair facility at the NASA Ames Research Center        
Hello everyone, happy day of digital archives! I'm Lauren Sorensen, Preservation Specialist working at Bay Area Video Coalition, a technology non-profit devoted to inspiring social change by enabling the sharing of diverse stories through art, education and technology. Our preservation department specifically works to preserve and provide support to archives with film, video, and moving image and audio material in their collections. We are one of the only non-profit vendors for high-quality preservation of video and audio in the country.

My blog entry can be found on our home site blog here, and involves a site visit from earlier this year, when I met in person for the first time Ken Zin, who works on the NASA Ames Research Center campus (at the site of a former McDonald's there!) repairing obsolete reel-to-reel videotape machines. His work is essential to us because he is one of the only experts left in the country doing this specialized type of work. Because these decks feature heads (the part of the machine that reads the magnetic waveforms on the tape) that are proprietary to the companies that made them in the 1970s and 1980s, it is a real challenge keeping them in proper working order for high-quality preservation. Realignment and regular maintenance are important in maintaining a facility that is appropriate for preservation services; we maintain these decks as we would museum artifacts because they are some of the last working machines available to transfer 1/2" open-reel tapes. After these decks are no longer operational, any magnetic recordings held in archival collections will be lost.

Please enjoy the photos!
          DaVinci Resolve 12.5 Is Ready for Your Grading Pleasure        

The all new Blackmagic DaVinci Resolve 12.5 is out and, as always, it’s available as a free download. The company claims to have implemented 1,000 enhancements and 250 new features, so this new version is definitely worth a closer look!

The video below highlights some of the new and enhanced features.

DaVinci Resolve 12.5
1,000 enhancements and 250 new features, now this is a bold statement! Sure, not every one of these bits and pieces will change your workflow, but there are some very exciting improvements to be found in this new version of DaVinci, which was initially introduced as a beta version at this year’s NAB show in Las Vegas.

For a quick overview, these are the major enhancements of DaVinci Resolve 12.5:
First up is a brand new feature called Fusion Connect. With this you can send clips from Resolve straight to Fusion 8 (also available as a free download) for further effects work. Once the FX work is finished, the clip is sent straight back to Resolve. This feature is built in natively, so there are no more painful round-tripping breaks. The Adobe Premiere & After Effects tag team can expect some competition here!
There’s no doubt that DaVinci Resolve is the gold standard in colour grading, but the same can’t really be said of its edit tab. It’s become obvious that Blackmagic wants to change its reputation in this regard. A lot of editing features have been added in the 12.5 release of Resolve; some of them may be pretty much standard elsewhere, but at least they are available now. And it doesn’t seem that Blackmagic will stop developing its flagship software anytime soon.
New editing features for version 12.5
Blackmagic has put a lot of effort into the edit page. As the whole editing capability is relatively new to Resolve, it’s nice to see the edit page flourishing in such rapid fashion. Some of the highlights include:
Improved clip retiming, including 2 new curves
Drag and drop for clip reorganisation on the timeline
Ability to view and edit clip metadata right from the edit page
Power Bins: like Smart Bins but spanning across multiple projects
Advanced editing features like ripple overwrite, ripple cut and paste/insert
Markers now can work as duration markers, too
New dissolves, wipes and other transitions
Ability to edit keyframes directly on the timeline
Audio waveforms can be displayed as an overlay in the source viewer
New text tool. Edit text directly in the viewer

New colour features
The colour tab, an already advanced workspace, has also been improved. Now you can control the colour temperature and tint of a clip via dedicated sliders. Finally! Another improvement is the new Resolve FX effects library which is built right into DaVinci. The effects range from things like glow or film grain to weird stuff like JPG Damage, and they work either as a CPU or GPU effect. Other features include:

New point tracker for hard to track clips (works with Resolve FX, too)
Improved node editor: copy, swap, extract and insert nodes easily. Even compound nodes are possible
New HDR mode for nodes
Log contrast control can be set to either linear or S-curve mode
New layering composite modes like colour dodge, colour burn, exclusion, luminosity and others

Some of the new features can only be found in the paid studio version of Resolve 12.5:

Enhanced temporal and spatial noise reduction
Automatic (and manual) lens correction for minimizing lens distortion

New deliver page with presets
The all new deliver page finally sports presets for popular video platforms like YouTube or Vimeo. Also, Premiere Pro XML presets are available, and even audio-only options can be found for further audio editing. All in all, the deliver module gets a much-needed update and is really easy to use now. Everything you need is right there.
Conclusion
It’s impossible to walk you through each and every new feature of this massive update. You really should try it for yourself and explore the myriad enhancements in DaVinci Resolve 12.5. I must say I’m really impressed by the development speed and all the new features delivered in such a brief period of time. Not long ago, there was no such thing as an edit page in Resolve at all! We will see what the future holds for DaVinci, but it seems that it will be bright.
Check out Blackmagic Design’s official website for all the details. There are plenty of videos to be found, too.

The post DaVinci Resolve 12.5 Is Ready for Your Grading Pleasure appeared first on cinema5D.


          Evidence Gaps in the Use of Spinal Cord Stimulation for Treating Chronic Spine Conditions        
Study Design. A review of literature. Objective. The aim of this study was to define and explore the current evidence gaps in the use of spinal cord stimulation (SCS) for treating chronic spine conditions. Summary of Background Data. Although over the last 40 years SCS therapy has undergone significant technological advancements, evidence gaps still exist. Methods. A literature review was conducted to define current evidence gaps for the use of SCS. Areas of focus included 1) treatment of cervical spine conditions, 2) treatment of lumbar spine conditions, 3) technological advancement and device selection, 4) appropriate patient selection, 5) the ability to curb pharmacological treatment, and 6) methods to prolong efficacy over time. New SCS strategies using advanced waveforms are explored. Results. The efficacy, safety, and cost-effectiveness of traditional SCS for chronic pain conditions are well-established. Evidence gaps do exist. Recently, advancements in waveforms and programming parameters have allowed for paresthesia-reduced/free stimulation that in specific clinical areas may improve clinical outcomes. New waveforms such as 10-kHz high-frequency stimulation have resulted in an improvement in back coverage. To date, clinical efficacy data are more prevalent for the treatment of painful conditions originating from the lumbar spine in comparison to the cervical spine. Conclusion. Evidence gaps still exist that require appropriate study designs with long-term follow-up to better define and improve the use of this therapy for the treatment of chronic spine pain in both the cervical and lumbar regions. Level of Evidence: N/A
          Unexpected Optimization #2: fixed point arithmetic        

I’ve wanted to add a ‘unison’ feature to Twytch for a while, but not at the cost of a significant performance hit. Unison works by running many oscillators per voice, so improving the oscillator CPU usage was a must.

The way many oscillator implementations work is by having a ‘phasor’, which is a phase value that cycles through the values from 0 to 1. When the phase value is 0, we are at the beginning of the waveform; at 0.5 we’re halfway through; and when we get to 1, we’re at the end and we set the value back to 0. Checking if we have to wrap the phase every sample for every oscillator can get pretty costly. One way to improve this is by using the modf function instead of a comparison and an if/else branch, but it still has a relatively large CPU hit.

int table_size = 2048;
double phase = 0.0;
double phase_diff = ...; // Usually we don't know what value this is.

for (int i = 0; i < samples; ++i) {
  phase += phase_diff;
  if (phase >= 1.0)
    phase -= 1.0;

  // Lookup value with phase. No interpolation for this example.
  int index = phase * table_size;
  output_buffer[i] = lookup[index];
}

// Alternatively with modf (from <math.h>), which is slightly more efficient.
double integral_part = 0.0;
for (int i = 0; i < samples; ++i) {
  phase = modf(phase + phase_diff, &integral_part);

  // Lookup value with phase. No interpolation for this example.
  int index = phase * table_size;
  output_buffer[i] = lookup[index];
}

There’s another solution though, and it’s using ‘fixed-point’ instead of ‘floating-point’ numbers. Floating-point numbers can represent a crazy large range of values, but for our phasor implementation we only care about numbers between 0 and 1. What we can do is use an unsigned integer type to represent these values. 0 will still remain the beginning of our waveform, but UINT_MAX will represent the end of our waveform. The cool thing about unsigned integers is that when we add to our phase and go past UINT_MAX, we get the wrapping for free! Another benefit is that if our wave lookup table is a power of two, we can get the lookup index by bit shifting our current phase down, which is another (albeit small) performance improvement.

int table_bits = 11;
int shift = 8 * sizeof(unsigned int) - table_bits;
unsigned int phase = 0;
unsigned int phase_diff = ...; // Usually we don't know what value this is.

for (int i = 0; i < samples; ++i) {
  // Automatically wraps :D (unsigned overflow is well-defined, unlike signed)
  phase += phase_diff;

  // One bit shift is more efficient than a multiply and cast I believe.
  int index = phase >> shift;
  output_buffer[i] = lookup[index];
}

After this improvement I would say the DSP is at a releasable level of efficiency. Comparing Twytch to other similar synths on the commercial market, it’s in the middle of the pack. The next thing I’ll be focusing on is improving the UI efficiency, as there are a lot of moving parts in Twytch and most of them are not CPU friendly.


          Waveform 50% off during Steam Sale!        
Spread the word and tell your friends! During the Steam sale you can grab Waveform at 50% off. The Mac version is just about ready to go too for all those who have been waiting. And of course if you own the Windows version you get the Mac version for free and vice versa.

Enjoy!
          Weekly Leaderboard Challenges have begun!        
Hey everyone, you may have noticed in Monday's update that a new button has appeared on the level select screen saying "Weekly Challenge". This represents the first challenge in a series of week-long challenges we're planning.

This first one has you take on the Sun's Deep Space Mode, which, for anyone who hasn't managed to get there already, will reveal a ton of the awesome content that awaits you throughout Waveform. Compete for a high score on the leaderboards, and the top 5 finishers will receive a free Steam copy of the fantastic indie game Atom Zombie Smasher!

So what are you waiting for? The competition is already underway! :)
          Waveform nominated for Best Design        
The finalists in the 2012 Independent Propeller Awards were announced today, and we’re incredibly honoured to see Waveform nominated for Best Design!

Here's hoping it wins the award when all is said and done :)
          Free DLC for early supporters!        
Hey everyone, I hope you're enjoying Waveform so far! I just want to say a huge thank-you to everyone who has supported Waveform. It truly means a ton to us!

But instead of just saying thank-you, we have a gift we'd like to give you! As we speak, a DLC package is being prepared that will be offered free to anyone who has purchased Waveform at the time of its release. It'll feature a new planet to explore, which means new levels, a new object to interact with, and of course a new Deep Space Mode to experience!

So look forward to that and keep riding the wave!

Edit: Just to clarify, the DLC will be free for anyone who buys Waveform up until the time the DLC is released, and for a short time thereafter as well. Thanks!
          Podrid's Real-World ECGs: Volume 3, Conduction Abnormalities 3: A Master's Approach to the Art and Practice of Clinical ECG Interpretation        
Volume 3, Conduction Abnormalities, explores the essentials of AV nodal and intraventricular conduction abnormalities seen in everyday clinical practice:
  • AV conduction abnormalities, including first-, second-, and third-degree AV block and enhanced AV conduction
  • Intraventricular conduction abnormalities, including intraventricular conduction delay, fascicular block, and bundle branch blocks

Podrid's Real-World ECGs combines traditional case-based workbooks with a versatile Web-based program to offer students, health care professionals, and physicians an indispensable resource for developing and honing the technical skills and systematic approach needed to interpret ECGs with confidence. ECGs from real patient cases offer a complete and in-depth learning experience by focusing on fundamental electrophysiologic properties and clinical concepts as well as detailed discussion of important diagnostic findings and relevant management decisions. Six comprehensive volumes encompass more than 600 individual case studies plus an online repository of hundreds more interactive case studies (Podrid's Real World ECGs website) that include feedback and discussion about the important waveforms and clinical decision-making involved. From an introductory volume that outlines the approaches and tools utilized in the analysis of all ECGs to subsequent volumes covering particular disease entities for which the ECG is useful, readers will take away the in-depth knowledge needed to successfully interpret the spectrum of routine to challenging ECGs they will encounter in their own clinical practice. 

          Poetry in the Electronic Environment        
by
Stephanie Strickland
1997-04-15

Talk given at Hamline University, St. Paul, MN, April 10, 1997

I want to start by evoking some of the many times that poetry is not a “book” of poetry: for instance, Prospero’s Books, a film, itself a version of Shakespeare’s theater poem, “The Tempest”; poetry videos, poetry spots on the radio; and many kinds of live performance, from slams to sonic poetry. We have also, today, for the first time, hypertext. Poems, and collections of poems, can be composed as, or into, hypertext, using the many specific capabilities of hypertext software, which itself comes in many flavors. I will be describing how I composed my book of poems, True North, into hypertext in a moment.

First, I would like to talk a little about the electronic environment. Hypertext becomes possible in an electronic environment, and it is only possible there. The best known example of a hypertext is the World Wide Web, an enormous structure, almost biological in the way it communicates and propagates by proliferating links. The electronic space, often called cyberspace, has some very unusual qualities, to judge by pre-electronic categories. It is characterized as tidal sea, web, sky, and solid. Thus, people surf it, send out web-crawlers to explore it, gophers to tunnel through it, engines to mine data from it, and they fly through and above it in game simulations. They establish “home” pages in it, as though it were rooted, although at their own location distance has disappeared - New Zealand, New York, St. Paul, equally present, and equally speedily present.

All these metaphors suggest a great freedom of movement, but electronic space is also where you lock up, if the power goes down, if the network crashes, if your machine fails to harmonize with its software. Maybe space metaphors are not the right ones to choose; maybe time is more to the point, and you will think so as you wait for your host connection, or wait for sound to download, a graphic to paint.

What actually happens when one goes from a print to an online environment? Let me give an example from outside literature. Most of us remember card catalogs in libraries. Very roughly speaking a card catalog is a system with 3 cards for each physical book - author, subject, title. When that card catalog goes online, the book collection, as experienced, becomes enormously larger for the person who can search the electronic catalog. Each word, each date, each descriptor, and logical combinations all become gateways to the collection. One gains access at once more precise and more far-reaching if one is able to search the electronic space.

Similarly, when a set of poems is composed in or into hypertext, the space in which they exist literally opens up. Released from the printed page into this floating space, readers are often uneasy. What is the poem? Is it the sum of every possible way to proceed, the sequence of such journeys, or one particular path privileged as a saved reading? Only slowly does one assimilate the truth that one may return each time differently.

With print, one does encounter pages 1 - 85 differently at each reading because of being in a different frame of mind, but in hypertext, the pages change too - both you and your counterplayer, the hypertext, bring difference to the table. What you find are not really pages, of course. In hypertext, the unit that replaces the page is called the writing space, or lexia, which not only holds text as does a page, but unlike the page, has its own title and also an embeddable interior. New little text spaces can be implanted in it to the memory depths of the program. Rather roughly, what pages and their numbering are to books, writing spaces and their titles are to hypertext. They could be thought of as labeled file folders, able to hold anything from less than one poem to many; but, as with folders, they not only hold poems but also other folders, and possibly folders within folders, within folders.

As a way to begin thinking about the nature of electronic poetry, I would like to describe specific technological features that I made use of in the Eastgate Storyspace software, but I need first to tell you a little bit about True North, and why hypertext was appropriate for it. True North, as a manuscript, rings the changes on two image/themes - that is, themes which are also images.

The first of these is Embeddedness or Nestedness. In True North embeddedness appears on a continuum from the most embodied example, the pregnant body of a woman who is trying to speak, to the most abstract example, the numbers as we know them on the number line.

The second theme is an American heritage of formal structuring devices that are at once abstract and graphical, a heritage equally of American science and American poetry. I chose two contemporaries from the Connecticut Valley, Willard Gibbs, the country’s first mathematical physicist, and the poet, Emily Dickinson, to focus on. Gibbs invented a new way of modeling and talking about a multiplicity of dimensions, and he invented the notation for physics. Dickinson’s formal innovations include the truly radical preservation of simultaneous alternative readings by her use of a cross-shaped footnote mark to reference the additional versions inscribed at the bottom of the page in her hand-made booklets.

I play with these image/themes across five different registers, so of course each of them echoes the other. Here, the possibility of having direct access from any poem to any other was an immediate advantage of hypertext. How exactly does this access occur?

First, by the link structure, which I will speak more about in a minute; second, by the ability to search for any word in the text; third, by searching keywords assigned to the text (like bell, or pole, in True North); fourth, by following color cues; and, fifth, as in print, by choosing poem titles from the Contents. A sixth option is to choose the more readily available writing space, or lexia, title.

The writing space title, the file folder label in the previous analogy, is a form of on-screen address. Since each of my poems already had a title, I did not wish simply to repeat that title as their screen-address - so I chose a “second” title which would resonate with the first and with the poem. For instance, “Real Life Is White in Connecticut” became “Bramblepoints”; “Heaven and Earth, 1666” became “Isaac Newton”; and “Holding the Other Hostage” became “Boundary.” Since it is possible to locate both the poem-title list and the lexia-list online, it is possible to arrive at a poem you have already read, but this time as it appears under another name. In this way, the text acquires a double, or shadow - provided exclusively by the way it is named. This kind of shadow is a persistent concern in True North, and its formal implementation occurs quite naturally in a hypertext environment, whereas double-titling is so unconventional, and so unhappily accommodated on the print page, as to be unreadable on paper.

What about hyperlinks, how do they work? There are two sorts and Storyspace hides both of them unless the reader intervenes to reveal them. One is a basic, or lexia-to-lexia link. Each writing space links to one primary other in a manner that yields a default page-through (or click-through) of the work, but unlimited links of this sort beyond the primary one may be created.

The second type of link is called a text-link. Any number can be created, and I used as many as five per lexia, but on average about three. To reveal these, which appear as red outline boxes around the linked words, the reader must hold down the control key. I worked against the current World Wide Web convention of using color for words or phrases that are linked, since another link-display mechanism, the red boxes, was already forced by the software.

Instead, I used the possibility of coloring words to create non-electronic links. Colors are different on different monitors and different systems. On my system the seven basic colors provided by the software palette were red, blue, an apple green, gray, dull gold, magenta, and what was called cyan, a sort of aquamarine sea-green. I colored from one to 15 words per lexia, depending on the length of the poem, to represent something like leitmotifs in music, the same color claiming a subliminal unity for items perhaps not otherwise seen as similar by the reader. For instance, “shadow,” “fallout” and “cave” are gray; “All,” “subsumed,” and “Exact” are gold; “Sermon,” “slaveship,” and “promise” are red; “mother,” “grave” and “freemen” are green; “true,” “equal,” and “star,” blue; “pole,” “compass,” and “vortex,” magenta; and “secret,” “echoing” and “starrier” are cyan. The colors do have a flowing-through kind of meaning throughout the piece but are also modulated by the local context. In particular, different forms of the same word, active or passive, singular or plural, may be differently colored.

Storyspace, as I’ve said, allows for many sorts of text access, but the most beautiful formal devices it provides are graphical and map-like. One such device is the ability to make embedded meta-poems out of the presentation of navigational tools. For instance, the display of lexia titles corresponding to a given keyword can itself be composed to form an inner sort of poem, or meta-poem.

Another graphical device is the Storyspace map, which can be actively shaped by dragging and dropping with the mouse. The map shows the lexia as shadowed boxes and the links that connect them as loopy lines. I was able to shape these elements into two-dimensional emblematic images that are themselves graphical cues to the poem. Of course, if one does not actively shape these maps, the software will present them in a default, cleaned-up, rectilinear display.

Graphical cues to text, before hypertext, have been mainly absent from contemporary poems apart from artist’s books and the realm of concrete poetry, but the older traditions of manuscript illumination, emblem books, and the work of William Blake are a rich heritage in this respect. Each of the five True North registers - and the overall poem - have emblem maps.

“The Mother-Lost World” section of the book evokes the pressure language practice has put on women’s bodies. Its map, a shape that suggests a breast, a part of the DNA spiral, a cornucopial basket, repeats the shape of the image given to the overall True North map one logical level up, and, in turn, is repeated as the profile or contour shape of many of the poems.

The “Blue Planet Blues” section, concerned with the pressure scientific language puts on the earth, is mapped as an irregular blue sphere.

The third section, “Language Is a Cast of the Human Mind,” deals with the language-makers, Gibbs and Dickinson. The image that maps it evokes the two independent planes of any natural language and the mediating gesture ongoingly required to activate the distance between those planes.

The fourth section concerns the human side of numbers and refers to the history of their discovery. Its map is a graph.

And finally, “There Was an Old Woman,” the fifth section, concerns a different sort of navigation, a recaptured time/space that connects prehistory with the present. Here, the old woman tossed up in a basket who sweeps cobwebs from sky, is the presiding spirit. She is mapped by a feather-like form.

To summarize, then, even though True North is a purely textual hypertext (by which I mean it contains no voice or music files, no scanned-in images, only words), it is still true that to compose it many aesthetic choices had to be made that weren’t required for its printed version. These include choices about scaling and color, choices about mapping, choices about links, and choices that orient the text to the electronic world.

One of the ways I made True North hypertextual was undertaken unconsciously. In fact, I had begun something of this procedure in my previous book, The Red Virgin: A Poem of Simone Weil, which dealt with a woman writer who had no control over the editing, production, or publication of her own work. Her situation, as it turns out, has some formal similarities to the state of electronic text, which is also only available in fragments which are re-combinable at will by many different readers with many different agendas. To accommodate that state of Weil’s texts and to represent how I was forced to encounter them, I wrote The Red Virgin so it could be entered at any point, as suggested by the index-like appearance of its table of contents. This mode of composition, which I carried over to True North, inadvertently forced the solution to a problem that arises in hypertext. If every poem is potentially a point of entry, it must be written to that need. Of course, on-line, all solutions do not need to occur at the level of the text; solutions at the level of the link, or map, are also possible.

In general, I think one could say about contemporary hypertext poetry that radical innovation does not reside at the level of the alphabetic text, with the major exception of authors who are themselves programmers. For those using off-the-shelf products, the changes reside in how to structure and divide text and how to accommodate the powerful set of co-players the text has acquired, that make for on-screen reading experiences both more radically individual and more adventurous than page-reading.

What gives the sense of adventure? I would suggest that the reader’s active involvement is structured by the following five dynamic elements:

1) the myriad transformative choices permitted by the buttons, keys, icons, menus, tiles, zooms, and cascades of the Windows software;

2) an elusive inwardness behind the glowing screen which has a drawing power similar to a husky or sultry voice, and the same sounding board quality of seeming to testify to the structure of what lies within - be that vocal cord, smoking habit, or spirit, in the case of the voice; be it digital code, programming choices, or composing identity, in the case of hypertext;

3) the power of secrets in hidden links, whether these are displayable on command - as in True North - or are the guard links commonly found in hyperfiction;

4) the power of non-hidden links, the power of providing a wealth of prominently displayed links that invite what Barthes would call an erotic reading, the ability to follow some detail which draws me in and, in hypertext, draws me on, shifting the focus away from interpretation and toward co-composition; and

5) the accommodation of an old-style nostalgia by the ability at any time to go home, to go back, to keep an electronic history - a sort of orienting Ariadne’s thread - as well as the accommodation of a new need to be in touch with the web’s vibrations which allows me to be in my past and my present, as well as to go forward, simultaneously, by means of keeping several Windows open, but inactive, on my screen.

As I worked in electronic space, I felt the book come to resemble an album, whose chronology may be strict or casual or a combination of both, its main use being to prompt conversation about shared experiences. Movement through the electronic screens, by comparison with print, was more whimsical, more riffling, more waiting for something to catch your eye, more like riding melodies through silence, and less like lockstep. Since the thematic sections of True North are meant to play the same melody, in different arrangements as it were, this aspect of the cybertext pleased me very much.

The radical transiency of the electronic medium allies it to an old world of oral culture, but except for the real-time on-line gatherings called MOOs, the warrant of the actual assembled community does not make its presence felt on-line, or its choices known. The old oral world also kept people on their toes - its messages, too, disappeared into the air without a trace - but people didn’t get lost, because they could always recover their connection by recovering their place in the assembled community. How do conditions in the electronic environment contrast with this?

To answer this question, I want to go beyond the present state of hypertext on disk to the larger world of electronic art, and I’d like to suggest some ethical questions.

What does poetry in the medium of electrons mean to the technically advanced? I judge by the work, often designated poems, at Digital Mediations, a recent exhibition at the Art Center College of Design in Pasadena. First of all, the artwork or poem had become, there, an interactive installation. In one case, a world of electronically-generated waveforms came crashing across a bed of salt, a sort of large raised sandbox filled with salt which one could pile up, spread out, trace patterns in. The images that came across it were only partially decipherable, fragmented bodies doing exactly “what” one needed to observe them for a long time to know.

It was never clear whether one’s own movement in the room, or some algorithmic formula, varied the output on the saltbox; whereas, in other installations, viewers had explicit input into wall display scenes by the use of a sort of flashlight that set off varying cascades of images. Again, only long experimentation yielded the secret of the repetitions.

One of the most interesting pieces was based on Goethe’s novel, Elective Affinities. Four separate pedestal stands were erected in a pattern representing the four seats of a car. On each pedestal was a screen on which was projected a film of a person speaking. On the wall behind the four, a moving highway was projected, giving the convincing sensation that “the car” was moving. Coming close to any one pedestal enabled you to hear that person’s voice tape. As we came to understand, each was a member of one, or more, of the possible couples. These cinematic images that don’t leave their separate pedestal screen stands but do continuously “move” in their car, that appear to glance at each other, refer to each other, but never touch, never engage in conversation, made a powerful and disquieting image of electronic “connectedness.” Overhearing was the main action permitted the spectator in this case.

Often, at installations that permitted and invited more extended kinds of interaction, I would have a different feeling. At one point I felt like I was playing on a monkeybars, or in a stand of treetops, as my choices simultaneously activated cinematic flows, streams of text, and a reader’s voice, not to mention my own sense of balance being trained and tested.

What really is the analog, I wondered, in this tremulous world, to picking up the turtle shell, to stringing it with gut, to remaking the Aeolian harp? And what do screens themselves reference? This is a question being actively investigated by digital artists. In True North, there are several analogs to the screen: the written page, the Western number system - with its historical growth by extension and embedding, and the wall of the prehistoric cave, all treated as earlier “screens,” as the site of codes that have both formal and emotional significance.

I take it that Lascaux and similar caves were sites of cultural instruction about the most important forms of orientation for a nomadic people, whose path intersected with their main food supply but a few days in each year. The fact that their winter temperatures were lower than minus sixty degrees Fahrenheit also put a premium on springtime birth, both of humans and animals. These people needed a practical astronomy that tracked weather and timed their migrations.

The cave drawings, which make use of cave protuberances for three-dimensional effects, are positioned so high on the cave wall that extensive scaffolding was required to paint them. The animals portrayed are sometimes given two heads, or heads in two positions. Since the images can only be seen by a flickering lamplight, this yields a sort of primitive animation, making them the world’s first known form of “infotainment,” a spectacle designed to be scary - a herd of giant aurochs bearing down on you in the dark, when you have never seen a simulation of any kind. Did the cave child react to that spectacle the way we do to frightening films or to immersive interactive digital simulations?

The cave drawings appear to refer to the life cycle of a fertile world system and the place of young humans in it - the footprints of children are fossilized on the floors of several of the caves in Southern France. The drawings do not seem to represent hunting magic, one explanation that some scholars have proposed, since the animals pictured, though many and various, in no case include the one whose bones, cast aside after eating, line the cave floor. They are perhaps an attempt to initiate growing children into the need to reverence the lifeworld. In particular, anyone who views them has been taught to look up in wonder, to treat animals and their habitats as important, to decode images from tangled lines, and to note such matters as the angle of inclination of the ecliptic, that belt of the sky which they must inspect for seasonally changing constellations.

What is the comparably responsible use of the computer screen? How can we use it to orient, rather than disorient, our children? The work of the architect Christian Moeller offers one example. He created a building facade in Frankfurt, Germany, which interacts with its environment so as to mirror it, not escape from or distract from it. Temperature, wind speed and direction, hours of daylight, and ambient street noise are all reflected in the proportion of yellow and blue on the face of the building, the patterned flow of color-change across it, the hours of operation. One can imagine a great need for information of the form, “this is where you are,” in the virtual age, but surely the task of the 21st-century child is not more daunting than that of the child of 17,000 years ago in this respect. From a similar concern, True North attempts to look at the ways in which we find and orient ourselves, the ways in which we navigate.

We will necessarily henceforth orient ourselves by digital means. And we should not forget that electronic text is created with many hidden collaborators, the most powerful of whom are the people who designed the authoring software. In my own case I found that the software I used gave me many opportunities, but it also failed to support me in ways I had thought it would. As I said a few moments ago, Storyspace offers two kinds of links, but I can think of at least 5 kinds of links I wanted to use, and to track separately. Other options I explicitly refused, for instance the use of guard links or the ability to name a path - a sequence of links. For me, if a path is other than a narrative one in a suspense-adventure story, “This Way to the Treasure,” naming it destroys the reader’s pleasure in discovery.

Pleasures in discovery are close to the heart of hypertext poetics. And the resulting poetic structures will seem quite different to those who prefer being pointed outward, or inward, when they stop reading. Those who conquer, solve, or “get” texts will also tend to feel differently in this environment than those who meander, muse, or take delight. Both kinds of readers may enjoy discovering patterns of connectivity.

Beyond delight - which I am not sure we should go beyond - hypertext allows for different experiences than are available in print, and I believe that some of these have an ethical importance. For instance, they force us to reevaluate our bipolar categories, a reevaluation which I believe we need to undertake if we want to live together on the globe, appreciating differences. Perhaps even our survival will depend on our learning to use categories of considerably greater subtlety than the simplistic “us”/”them” sort we have become used to. But how are we to acquire an openness to new categories? It does no good to command it, if people have no context or experience to help them feel what non-bipolar thinking might be. I know of no better place for people to be eased through the first shock of learning something so different than the world of story, the world of art and games, a good place to gain confidence about a new way of being in the world.

At the beginning of my talk, I mentioned how the electronic environment undercut our old ideas about space and time, in some respects collapsing time and space, or allowing them to stand in for each other. Let me give another example of how this bypass, or collapse, of the bipolar becomes part of our experience when reading hypertexts.

In a hypertext, the devices of metonymy and metaphor slide into one another, depending on whether the links (the gaps) are experienced as adjacencies or arcs of flight. In a hypertext, one really understands that these are but aspects of a single process, and most of us can experience both the link and the gap without needing to oppose them, or call them contraries or opposites. Indeed they seem to be the same, but can be experienced as one and the other.

It is also possible, in a hypertext, to experience what it is like to abandon Cartesian space. The difference between feeling this abandonment, and talking about it, is considerable.

Far in advance of present societal need, both Emily Dickinson and Willard Gibbs experimented with new strategies to navigate the multi-dimensional spaces they had contrived. I believe we should follow their lead and investigate how to shape our intuitions about digitized data, how to learn to “read” meaning in geometries of representation, how to understand more fully the meaning of numbers, number-systems, and the modes of number-use which we are invoking to incarnate data, literally to construct virtual bodies.

I will leave you with a humble image for electronic hypermedia - that of a tumbleweed, which represents itself and is also identifiable with the process that is changing and enhancing it. And, since this is about hypertext, and thus about having many choices, I will leave you with another image, that of Salman Rushdie’s sea of stories. The sea is not a storeroom - the sea is an ocean composed of the streams of story held in fluid form. Now if to that metaphor we add the oceanographer’s knowledge, gained only in the last 35 years, of how the oceans store and exchange energy through the movement of water masses from basin to basin and through the activity of eddies, which hold more than 90 percent of the ocean’s energy, we can amplify the metaphor and see that to access energy and life the stories must move from basin to basin and swirl in the eddies, becoming new versions of themselves. I suggest to you that hypertext supports exactly such movement.

This talk is indebted to many. My chance, as a non-academic poet, to develop hypertext came as a result of Professor N. Katherine Hayles’s NEH seminar in 1995. I am deeply grateful to her and to the National Endowment for the Humanities. I would also like to refer interested readers to Michael Joyce’s print book, Of Two Minds: Hypertext Poetry and Poetics, and to three excellent sites, Marcel O’Gorman’s “How to Wread Hypertext” and John Cayley’s two sites, “Hypertext/Cybertext/Poetext” and the frames version of his presentation.


          The Minds Of Many        
Released by: UKW Records
 
Release/catalogue number: ukw 3008
 
Release date: Mar 24, 2010 

Download Link:
themindofmany.zip
 
ENJOY IT, PLAY IT, SPREAD IT !
 

sounds by: 48oooooooooo, Certifiedsick, Coronium, D.L.I.D, Dastie, Drop The Lime, Flex Rock, Greg Shin, Hayashi, jhk, Jumping Jack Flash, Kim Moyes, L33ch, MAMM, Memorex, Nina Lou, Noize Generation, Pablo Decoder, Plastique de Reve, Brodinski, Prozac Polka, Randee Bugga, Shir Khan, Smashsnap, Solar Explosion, Springboard , Swam, Textbreak, The Model, Tocadisco, Uboot, uπit, Was, Yibn

Artwork was sent by hannes at freieradikale.at
Song name “the mind of many” was proposed by Stephanie Nazywalskyj

In CD:
1. the-mind-of-many (original)
2. the-mind-of-many (MAMM-remix)
3. the-mind-of-many (Noize-Generation-remix)
4. the-mind-of-many-(Doberman-rmx)
5. the-mind-of-many-(Operette-Remix)
6. the-mind-of-many-(Plastique-de-Reve-rmx)
7. the-mind-of-many-(Obi-blanche-remix)

Pre-listen here:

          What is PQ? Whose responsibility is it?        
India and most developing countries continue to struggle for 24x7 power supply, a good Power Quality (PQ) environment, and an Energy Efficient (EE) economy. See the earlier APQI blog that tries to quantify the economic impact of a poor PQ environment - Are developing economies at risk due to power quality issues and challenges?

A natural follow-up question is: WHO IS RESPONSIBLE FOR MAINTAINING GOOD POWER QUALITY? The answer is usually divided and depends on whom we ask. The Network Operator blames the end-customers, the Device Manufacturer blames both the Network Operator and the end-customer, and the end-customer, who usually has little awareness of PQ, believes it is a supply problem caused by the Network Operator.

In addition to the three key stakeholders above, others such as Designers, Commissioning Engineers and Maintenance Engineers also play a role in sustaining a good PQ environment. The Commissioning Engineer ensures the quality of the installation against design standards; a good installation is one of the prerequisites for maintaining power quality during operation and for minimizing voltage drop, sparking, overheating, etc. Maintenance Engineers are subsequently responsible for preventing operational glitches that can cause PQ issues: for example, loose connections lead to sparking, and poor-quality wires cause voltage drops at the customer's premises.

In what follows, we focus on the three key stakeholders: Customers, Network Operators and Equipment Manufacturers.

INTERCONNECTEDNESS OF OUR GRID AND PQ

Simply put, Power Quality is a measure of the quality of the power supply on the grid. A PQ disturbance occurs whenever the voltage or current waveform deviates from the ideal. Voltage disturbances commonly originate in the network and affect the customers; current disturbances, on the other hand, originate at a customer's installation and affect the network components and other installations. Therefore, VOLTAGE QUALITY is considered to be the primary responsibility of the network operator, while CURRENT QUALITY is the primary responsibility of the end-customer.

Because the grid is interconnected, PQ disturbances are caused by both upstream and downstream elements. Across various PQ disturbances, it is observed that customers are responsible for roughly 70% of the PQ problems, while the remaining 30% come from the network. One study by LPQI covering 25 European Union countries in 2005-06 reported the following distribution of poor-PQ electrical manifestations: transients and surges (29%), voltage dips (23.6%), short interruptions (18.8%), long interruptions (12.5%), harmonics (5.4%) and others (10.7%). (See the European Power Quality Survey Report)

‘CUSTOMERS’ AND PQ INTERCONNECTION

Today’s customers are highly dependent on digital technology and power electronic devices, with increasing use of various types of electronic appliances, ballasts, variable speed drives, etc. Because of their non-linear operating characteristics, these devices produce current distortions in the network. These disturbances then travel upstream because individual customers are insufficiently isolated from the grid. The increased current in turn causes additional energy losses in the system, raises the apparent-power demand of individual customers, and exposes the entire network to the risk of premature aging and failure.

Some commonly reported PQ complaints from end-customers:

| Equipment affected by poor PQ | External manifestation of poor PQ | Electrical manifestation of poor PQ |
| IT equipment | Computer lock-ups and data loss | Earth leakage current causing small voltage drops in earth conductors |
| Variable speed drives, telecom equipment, arc furnaces, welding equipment, relays, static converters, security and access control systems, etc. | Motors and drives malfunctioning, computer screen freeze, loss of data | Capacitor bank failure, shocks due to neutral voltage, flickering of lights, noise in telecom lines |
| Motors and process devices | Malfunctioning of motors and process devices; extra heating, decreased operational efficiency and premature aging of the equipment | Voltage and current harmonics in the power supply |
| Relays, circuit breakers and contactors | Nuisance tripping of protective devices | Distorted voltage waveform because of voltage dips |
| Sensitive measurements of process control equipment | Loss of synchronization in processing equipment | Severe harmonic distortion creating additional zero-crossings within a cycle of the sine wave |

Table 1. Customer-reported problems due to a poor PQ environment (Source: 2. Sharmistha Bhattacharyya and Sjef Cobben, Technical University of Eindhoven)
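The last effect in Table 1 - severe harmonic distortion creating extra zero-crossings within a cycle - is easy to reproduce numerically. The sketch below is purely illustrative (a 50 Hz fundamental with an exaggerated 60% third-harmonic component, numbers of my own choosing, not from the cited study):

```python
import numpy as np

f0 = 50.0                                              # fundamental frequency, Hz
t = np.linspace(0.0, 1.0 / f0, 2000, endpoint=False)   # one full cycle
clean = np.sin(2 * np.pi * f0 * t)
# Hypothetical severe distortion: a 3rd harmonic at 60% of the fundamental.
distorted = clean - 0.6 * np.sin(2 * np.pi * 3 * f0 * t)

def zero_crossings(x):
    """Count sign changes between consecutive samples."""
    signs = np.sign(x)
    signs[signs == 0] = 1.0    # treat exact zeros as positive
    return int(np.sum(signs[:-1] != signs[1:]))
```

The clean sine crosses zero once within the open cycle sampled here, while the distorted wave picks up several additional crossings per cycle - exactly the condition that confuses equipment synchronizing on zero-crossings.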

With increased awareness, customers can take the precautions below to support a healthy PQ environment:
  • Maintain power factor within prescribed limits to reduce reactive power demand, which in turn balances the voltage both at their premises and across the overall network
  • Reduce harmonic currents by using more energy-efficient equipment at their premises
  • Keep a log of power disturbances experienced at the premises, which may come in handy when looking for effective solutions
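The first precaution, maintaining power factor to limit reactive power demand, can be made concrete with a standard textbook calculation. The sketch below is illustrative only; the function names and the 100 kW / 0.8-to-0.95 example are my own, not from APQI:

```python
import math

def reactive_power_kvar(p_kw, pf):
    """Reactive power Q = P * tan(phi) drawn at real power P and power factor pf."""
    return p_kw * math.tan(math.acos(pf))

def correction_kvar(p_kw, pf_now, pf_target):
    """Shunt-capacitor kvar needed to raise the power factor from pf_now to pf_target."""
    return reactive_power_kvar(p_kw, pf_now) - reactive_power_kvar(p_kw, pf_target)

# A 100 kW load at 0.8 power factor draws 75 kvar from the network;
# correcting it to 0.95 takes roughly 42 kvar of shunt capacitance.
q_before = reactive_power_kvar(100.0, 0.8)
q_capacitor = correction_kvar(100.0, 0.8, 0.95)
```

Every kvar supplied locally by the capacitor bank is a kvar the feeder no longer carries, which is why power factor correction reduces both voltage drop and network losses.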
‘NETWORK OPERATORS’ AND PQ INTERCONNECTION

The Network Operators design and maintain key network characteristics such as feeder length, number and sizing of Distribution Transformers (DT), DT load balancing, etc., which in turn determine the grid impedance and so influence the PQ level in the network. With high impedance in the network, PQ issues (mainly flicker and harmonics) become more prominent. Further, DT winding configurations and earthing problems also add to the harmonic behavior and voltage dips in the network.
Thus, technical loss reduction and improving the PQ environment are strongly interrelated, and could be addressed through the same investments. The main network components that suffer faster wear and tear from PQ disturbances are:
  • Transformers
  • Cables
  • Power-factor correction (PFC) Capacitors
  • Protective Devices, Digital Relays
  • Revenue Meters
The Network Operator can align its loss-reduction initiatives with PQ improvement in the following ways:
  • Controlling the voltage level at the customer's point of connection through reactive power management, and taking appropriate steps at the broader network level.
  • Maintaining load balance at feeder and DT level to reduce current losses; this will also increase power availability.
  • Taking regular PQ measurements, using technologies such as SCADA and Smart Metering, and designing relevant dashboards to facilitate timely actions.
  • Isolating customer loads and their variations from the main grid through capacitor banking; variants such as automatic power factor correction devices, switched capacitors, static VAR compensators and dynamic voltage regulators are available.
‘EQUIPMENT MANUFACTURERS’ AND PQ INTERCONNECTION

Organized and branded Equipment Manufacturers usually specify the PQ immunity (EMI/EMC) of their equipment in terms of harmonic current emission and other parameters, as required by the applicable Standards. In a real-life situation, however, the network voltage is already distorted (non-sinusoidal) because of harmonic current emissions from other loads and customers in the network. This can result in the distortions from their devices exceeding the ‘compatibility level’ of the system.

The optimum performance of a manufacturer's device is not guaranteed when the supply voltage is distorted. Experiments show that devices draw higher harmonic currents when the supply voltage is distorted. The table below compares the total harmonic current distortion (THD) of some household devices under sinusoidal and distorted supply-voltage conditions.

| Device | THD under clean voltage | THD under distorted voltage (THD = 6%) |
| TV | 48% | 55% |
| Personal Computer | 87% | 89% |
| Refrigerator | 10% | 18% |
| CFL | 72% | 79% |

THD is given with respect to the total RMS current drawn by the device.

Table 2. THD of devices under clean and distorted network voltage conditions (Source: 2. Sharmistha Bhattacharyya and Sjef Cobben, Technical University of Eindhoven)
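Note that these figures express THD relative to the total RMS current, not the more common definition relative to the fundamental alone. A small sketch of that calculation (illustrative; the harmonic amplitudes in the example are made up, not taken from the cited study):

```python
import math

def thd_vs_total_rms(amplitudes):
    """THD as a fraction of total RMS current.

    `amplitudes` lists the RMS current of the fundamental followed by the
    harmonics: [I1, I2, I3, ...].
    """
    fundamental, harmonics = amplitudes[0], amplitudes[1:]
    distortion = math.sqrt(sum(i * i for i in harmonics))   # RMS of harmonic content
    total_rms = math.sqrt(fundamental ** 2 + distortion ** 2)
    return distortion / total_rms

# Example: a device drawing 30% 3rd and 15% 5th harmonic (relative to the
# fundamental) shows a THD of about 32% of its total RMS current.
thd = thd_vs_total_rms([1.0, 0.3, 0.15])
```

Because the harmonic content also inflates the denominator, this definition saturates below 100%, which is why even the heavily distorting PC in Table 2 reads 87% rather than several hundred percent.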

The device manufacturer, however, cannot be blamed directly for such a situation, as he is not responsible for his devices' operation under a distorted supply voltage. At the same time, manufacturers also need to ensure the immunity of the equipment they manufacture against Electro Magnetic Interference (EMI) and to specify the corresponding tolerance limits.

The question is: should the Manufacturer build this isolation into its equipment, with the resulting price increase, or should the customer isolate the whole facility from main-grid interference? While the answer will be driven by the market and by regulation, Manufacturers, together with their installation teams, could start giving weight to PQ issues during installation and checking some basics: capacitor banks, cables with larger neutral conductors, under-voltage relay settings, etc.

CO-CREATING AN ECOSYSTEM FOR BETTER PQ MANAGEMENT

As we saw, a poor PQ environment is caused by Customers, Network Operators and also Equipment Manufacturers. At the same time, poor PQ impacts all of them negatively: the Network Operator faces high losses, customers face more frequent equipment breakdowns and higher bills, and Equipment Manufacturers face increased warranty costs. Therefore, to implement PQ mitigation, a systematic approach needs to be followed, starting with identifying the responsibilities of each stakeholder in the network. (A good, detailed ‘Decision making flow-chart’ on PQ solutions is shown in the report ‘Consequences of Poor Power Quality – An Overview’)

The figure below illustrates how responsibilities are shared among the various stakeholders in the network.

Figure 1. Mutual responsibilities among various stakeholders in the network (Source: 1. Sharmistha Bhattacharyya)
With growing complexity on both the end-customer side (via increasing use of electronic appliances) and the Network Operator side (via higher adoption of smart grids, smart meters, etc.), it is important that each stakeholder understands its contribution to, and the impact it suffers from, poor PQ, and takes appropriate measures. End-customers have to become more aware and demanding when procuring both power and equipment, since in the end it is they who pay for these inefficiencies. In the US and Europe, there are clear SLAs between the Network Operator and end-customers for voltage supply and harmonic emission at the point of supply. Improved regulation, policies, standards, end-customer awareness and enforcement will play a key role in guiding the market toward a good equilibrium for a healthy PQ environment.

References
  1. Sharmistha Bhattacharyya and Sjef Cobben, Technical University of Eindhoven, “Consequences of Poor Power Quality – An Overview”
  2. A. de Almeida, L. Moreira. J. Delgado, ISR – Department of Electrical and Computer Engineering, “Power Quality Problems and New Solutions”
  3. Ministry of Power, GoI, “Strategic Blueprint” 
  4. Central Electricity Authority, “Regulations for Grid Standards”
Asia Power Quality Initiative (APQI) aims to create and build awareness of issues related to Power Quality (PQ). The group continues to spread the essential message of PQ to various stakeholders, helping businesses and industries toward improved understanding and insights. pManifold is supporting the APQI team in content generation and wider sharing of the message.

Posted by: Kunjan Bagdia @ pManifold

          OG Status: Feel Me        
Electronic music has a new supergroup to contend with, hailing from none other than the creative hotbed of Boulder. This summer, Crushendo’s Julian Garland, DYNOHUNTER’s Clark Smith, and Two Fresh’s Colby Buckler teamed up to make “future ratchet” music as OG Status. The trio’s latest release, “Feel Me,” is a glitchy, trappy track with deep waveform dips and cymbal riffs. The track exudes the polished Colorado chill we’ve come to expect from its undeniably hip-hop influenced creators and combines everything from distorted vocals to dubby drops to chirpy “ays.” A nice departure from in-your-face trap, “Feel Me’s” successful blend gives OG Status, well, OG status.
          Newly formed Waveform Agency introduces its roster each Wednesday        
Since the early days of TheUntz.com, we have been given a leg up here and there from members of the industry who saw what we were trying to do and gave us a shot. One of those was agent and manager Jonathan Griffin. A well-known figure in the underground electronic scene, he has amassed quite a roster of UNTZ faves including Random Rab, ill-esha, saQi, and more.
          Raytheon demos new protected tactical waveform on a small, lightweight, low-cost modem        
MARLBOROUGH, Mass. -  Raytheon Company (NYSE: RTN), whose terminals protect the military's most sensitive satellite communications, recently held a demonstration that proved sensitive data could be passed through small, low-cost satellite terminals using an unclassified but secure waveform. A benefit of this approach would be that front-line tactical users, such as forward deployed forces or remotely piloted aircraft, could execute missions more securely and reliably than is now done in environments where ...
          Uncovering Invisible UFOs        

Uncovering Invisible UFOs


The Reality Behind a Large Percentage of Current UFO Sightings in the US and around the World.

by Robert Hughey (GOOGLE+)

4UFOs.com Search for Real UFOs | UFO Sightings Today


UFO Reports of a New Kind of Unknown Craft making appearances circa mid-to-late 2012


Atlanta, GA
Current research has been seeking proof of, and finally identifying, the reality behind a type of UFO that has steadily become more and more common. It has made several appearances in video Sightings, and the number of UFO Sightings of this type has increased in the reports filed to MUFON, NUFORC and, on a smaller scale, to my own network of ufology web properties.


A suspicion about these particularly confusing UFOs: I am trying to lay this information out logically, to present something that does not wish to be seen in the most coherent way possible. The mechanics of this subject involve a level of physics and math I barely remember hearing about, and certainly never fully comprehended, but with the help of a dictionary and the Internet I can understand the important "gist" of most scholarly reports and research.

So let's start at the beginning, regarding the types of UFO Sightings and the types of known UFOs.

Known Types of UFOs and UFO Sightings


There are a few types of UFO Sightings that happen more than others.
Perhaps you're familiar with them as well:

  • the Black Triangle Sightings
  • the Flying Saucers or the disc-shaped craft were the first
  • the great variety of the Spherical UFOs, which are by far the most common.
    • Spheres that are metallic and methodical in their movements
    • the floating fiery orbs that come in many colors, green and red being the most common, and these also are where I'd place the incorrect identification of beautiful Chinese Lanterns, as they can reach high elevations and confuse many people.
  • the Cigar-shaped UFOs mostly cylindrical in shape, and these have been seen in greater and greater numbers recently
  • the Jellyfish UFOs or the Organic Living UFOs that, in my personal opinion, are space-faring organisms / extremophiles that have been known to exist for some time now, but it was deemed dangerous to reveal as it might mess with the "common" man's perception of the Universe.
There are also ways of separating the different kinds of Sightings other than by shape. You can separate Sightings of unidentified phenomena by their potential to be a hoax, a delusion, or the influence of drugs or alcohol. You can differentiate between Aerial and Stellar Phenomena, or label them Terrestrial, and, though as yet unproven, it is possible that some could be labeled Extraterrestrial, as some Sightings were when the US Government first investigated the UFO Phenomenon in the mid-20th Century. Alas, that conclusion was quickly reversed, but it is still the reality behind a small percentage of UFO Sightings.


I found an answer to a question about the kind of Sighting most prevalent in UFO videos from this past year, particularly over nations that are perhaps less than friendly to the US.

There are several exceptional daytime videos of this particular type, so perhaps I should actually show you what I'm referring to, if you're curious about this common type of Unidentified Flying Object:

UFO Sighting Video


Here, watch this example of the type of UFO Sightings I mean: the Invisible UFO.




While I don't know for sure whether the craft in this video is an American aircraft, I do know that the technology has been researched and developed since the 1990s to further the stealth technologies used in the stealth aircraft of the 1960s and 1970s. I also remember a story that went something like this: after watching the Arnold Schwarzenegger movie Predator, a very high-ranking US Military official supposedly asked his support staff loudly, "Why don't we have that?" in reference to the Alien's ability to refract light and appear invisible. Supposedly, research began that year on light-bending and refracting technologies.



Partially Invisible UFOs

You see, what I did was trace the innovations from 1970s stealth technology forward, to see what each inventor or scientist would attribute to the next. These are called citations, and they work backwards. I take your invention or discovery (or math problem, or whatever) and use it to solve some electrical engineering issue, so I cite your work for its help or inspiration, or to give credit. The point is, if my work is supposed to be hidden from the public... then maybe I shouldn't cite anyone or publish in Google Scholar.

Though now that I think about it: you don't publish to Google Scholar. It finds your work and indexes it for you based on the formatting. So I wonder whether they are aware, or whether the teams behind the separate pieces of the technology aren't aware of what they've created:




I'd call that video exactly what it would look like if part of a masked, stealth or cloaked ship were exposed due to some form of machine failure. Wouldn't you? A search through YouTube brings up a few other videos with very similar answers to the question: "what would partially malfunctioning 21st-century stealth technology look like from the ground?"


The answer, dear readers? Well, according to the mountain of scholarly evidence I have from the inventors of these cloaking devices (as well as their patent applications), we've had full visual and electromagnetic cloaking technology in the skies since last Summer.

They're ours. They were compelled to finish their studies quickly, but they didn't think anything of setting up a long series of Google Scholar citations - something to be proud of that no one actually pays any attention to except giant nerds in the pursuit of knowledge, since it's a way of gauging the publishing and research experience of other giant nerds...

...giant nerds like me: a nerd perfectly happy to read through a mountain of physics and optics journals to find the patent applications and the experiment reports, of which my favorite of course is called:


 Wenshan Cai, Uday K. Chettiar, Alexander V. Kildishev, and Vladimir M. Shalaev, "Designs for optical cloaking with high-order transformations," Opt. Express 16, 5444-5452 (2008) 




Abstract of the Paper on the Results from the work by Wenshan et al.


Recent advances in metamaterial research have provided us a blueprint for realistic cloaking capabilities, and it is crucial to develop practical designs to convert concepts into real-life devices. We present two structures for optical cloaking based on high-order transformations for TM and TE polarizations respectively. These designs are possible for visible and infrared wavelengths. This critical development builds upon our previous work on nonmagnetic cloak designs and high-order transformations.



It turns out that in 2009, the field of Optics decided it was seeing the birth of a new specialty/subset, and it is called Transformational Optics.

So what's been developed since then?


That's just the first of 32 citations, all leading to an advanced technology that will blow your mind. Truly. I have several of the other papers, or at least their abstracts, available down in the Bibliography/References of this article. Heck, just from the titles alone... I have a whole path from stealth tech into Transformational Optics and on down the line to an experiment that successfully rendered a large object both invisible and super invisible.

If you're like me, you'd love to know what "super invisible" means. It means that not only is the "object" cloaked in visible light; it is also invisible to all electromagnetic radiation in general - from the energetic gamma wave to the slow, methodical waveform of... of whatever this one is going to be called.


Metamaterials and the problem of creating invisible objects. 2. Invisible shells that conceal the objects contained in them from an external observer
   Journal of Optical Technology, Vol. 76, Iss. 6, pg. 350 (2009).

At the bottom of this page is a citation/reference list that should enable Google Scholar to index this page and, at the same time, place citations from my article to the articles themselves. I'd love to have a link from this article to the researchers, as an online way for me to say, "hey, I know what your research was used for. Way to go, guys," without actually saying it to anyone in particular. I wonder if it's even wise for me to be flippant about the whole matter.


If you want to just get future updates and posts from 4UFOs.com delivered to your inbox, you can get that by adding your email to the location at the following link:
Subscribe to 4UFOS.com UFO Feature Articles by Email

Thanks for visiting my little corner of the Internet.

Robert Hughey
Atlanta, GA
4UFOs.com Search for Real UFOs



Citations | Bibliography


Wenshan Cai, Uday K. Chettiar, Alexander V. Kildishev, and Vladimir M. Shalaev, "Designs for optical cloaking with high-order transformations," Opt. Express 16, 5444-5452 (2008)

Optical source transformations. Optics Express, Vol. 16, Iss. 26, pg. 21215 (2008).

Invisible cloak design with controlled constitutive parameters and arbitrary shaped boundaries through Helmholtz’s equation. Optics Express, Vol. 17, Iss. 5, pg. 3581 (2009).

Designs for electromagnetic cloaking a three-dimensional arbitrary shaped star-domain. Optics Express, Vol. 17, Iss. 22, pg. 20494 (2009).

Electromagnetic localization based on transformation optics. Optics Express, Vol. 18, Iss. 11, pg. 11891 (2010).

Cylindrical optimized nonmagnetic concentrator with minimized scattering. Optics Express, Vol. 21, Iss. S2, pg. A231 (2013).

Metamaterials and the problem of creating invisible objects. 2. Invisible shells that conceal the objects contained in them from an external observer. Journal of Optical Technology, Vol. 76, Iss. 6, pg. 350 (2009).

          WAVEFORMATEX        
WAVEFORMATEX was most recently changed by: -87.2.12.102 (8/26/2009 6:01:22 PM), Max Wild-81.187.101.89 (9/3/2007 2:49:08 PM), dimzon541@gmail.com-195.38.23.42 (11/30/2005 11:42:57 AM)
          WAVEFORMATEXTENSIBLE        
WAVEFORMATEXTENSIBLE was most recently changed by: -84.150.145.180 (12/26/2007 4:45:35 PM), Max Wild-81.187.101.89 (9/3/2007 2:52:30 PM)
          Exelis SideHat Radio to be Tested by US Army        
The U.S. Army Contracting Command has awarded Exelis (NYSE: XLS) a task order to deliver a limited quantity of SideHat® SRW (Soldier Radio Waveform) Applique radios for evaluation. SideHat provides a critical networking capability specifically developed for the vehicular electromagnetic and physical environment experienced on the battlefield. The task order was from an indefinite-delivery, indefinite-quantity contract awarded to the company in April for the specific purpose of providing SRW ...
          Graphs depend on simulation time; rebooting PC broke circuit        

Hello. I'm facing a big problem. Almost all of the circuits I built lose their waveform when I change the simulation time. I.e., I get a perfect signal from 60-100 ms, but no signal at 0-100 ms or 60-200 ms. Is that normal?

And one more thing. I made a circuit and used it for almost a week; I had a great waveform and it worked fine, but today I rebooted my PC and the circuit stopped working. I have no signal now. I'm shocked and disappointed, because my supervisor already approved this circuit and now I'm trying to figure out a new one, but it doesn't work. I know these circuits work in real life and in LTspice, but in PSpice I get no signal.

As you can see in the last picture, when I try to increase the frequency (by reducing R4 or C2), I lose the signal.

P.S. Also, I'm interested in what error ORCAP-15052 means.


                  
The "Spooky Science" of Heart Rate Variability (HRV)

Over at Track Your Plaque we are always pushing the envelope on heart disease prevention and reversal.  I have been in the program so long that (barring another ground breaking find of which there have been many over the years) I am near the end of what I can do physically.  However, I have always been intrigued by how the mind might be used to alter the body.

Throughout history there have been many wild claims of yogis and mystics doing incredible things with their minds and living to ripe old ages disease free so I have always kept on the look-out for some hard science to back it up.  But I am a skeptic and, like Houdini, skeptical of whether there was any scientific truth to the wild claims - until recently.

I started to read about findings that the heart and other organs have clusters of neuronal cells (rudimentary "brains") and independent nervous systems that interact with the brain.  In fact, the heart has been reclassified in some precincts as an endocrine organ because of research that indicates it produces hormones.  Sound a little spooky?  Hang on, it gets even spookier.  I have begun to uncover literature that suggests these "brains" in conjunction with the "master brain" in our heads can even alter the transcription of certain genes.  It does not change your DNA but the suggestion is that these "brains" can "order" the body to either up-regulate or down-regulate the production of specific proteins created by DNA, proteins that literally govern how our body behaves.

So where does HRV come in?  Recent research has revealed that the beat to beat variation in heart rate (the time between "R to R" peaks in the QRST waveform of a typical EKG) is an exquisitely sensitive measure of the functioning of the Sympathetic Nervous System (SNS) and Parasympathetic Nervous System (PNS), the two major components of our body-wide Autonomic Nervous System (ANS).  In a nutshell, when all the body's "brains" are communicating coherently they exert minute variations in the heart rate.  However, in situations of stress or disease the heart essentially runs on auto-pilot with little beat to beat variation.
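For the curious, "beat to beat variation" can be made concrete. Below is a minimal, purely illustrative Python sketch using RMSSD, one common time-domain HRV metric (not necessarily the measure HeartMath's emWave computes); the interval numbers are invented:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences: a common time-domain
    HRV metric computed from beat-to-beat (R-R) intervals in milliseconds."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical interval series: a "choppy" trace varies more beat to beat
# than a "steady" one, so it scores a higher RMSSD.
choppy = [812, 745, 901, 768, 880, 760]
steady = [800, 805, 798, 802, 801, 799]
```

A perfectly metronomic heart (identical intervals) would score an RMSSD of zero, which in this framework would signal the "auto-pilot" state rather than coherence.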

We were so intrigued at Track Your Plaque over the possibilities we approached the HeartMath people, the leaders in HRV monitoring, to provide us with their HRV product so we could put it to the test - and it did not disappoint.  Once again, I was the guinea pig.  The next few posts will chronicle my personal experience with the basic PC Desktop version of the HeartMath emWave HRV monitor and training device.

The graphic below illustrates my baseline HRV waveform and it is very instructive of what most people will experience.



Notice how choppy and irregular it is, which is exactly how the HeartMath people said it would be for a "noob" like myself.  A "coherent" nervous system, with the brain in your head in coherence with the rest of the body's "mini-brains," produces a smooth, sinusoidal trace.  The HeartMath "emWave" device is essentially a training tool with a built-in "coherence coach" to help you practice reaching a coherent state, with multiple challenge levels (kind of like resistance training for the body).

After several sessions and a little frustration at not being able to instantly master it (what can I say, I'm impatient) I did indeed improve coherence - but that will be the subject of my next blog!

Looking out for your heart health,


HeartHawk
          SocaLabs Releases 3 Free VST/AU Instrument Plugins        

SocaLabs has announced the release of three new free VST/AU instrument plugins for Windows and Mac. SN76489 emulates the Texas Instruments SN76489 of the Sega Master System and other consoles. It consists of 3 square wave channels with 3-note polyphony, plus 1 noise channel with 2 noise types and a waveform display. [...]

The post SocaLabs Releases 3 Free VST/AU Instrument Plugins appeared first on flstudiomusic.com - Free & Fresh Sounds For Musicians.


          Create a Waveform Image with ffmpeg        

Waveform images have a variety of uses, and I’ve started seeing waveform images overlaid at the bottom of videos.  That type of feature seems useful if you want to identify music in a video or specific spaces in a video which feature action.  If you’re creating an audio-centric app, you may have a dozen […]
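While the post itself covers the ffmpeg side, the core idea behind any waveform image is peak reduction: squeeze many audio samples into one min/max pair per pixel column. A rough, illustrative Python sketch of that reduction step (not code from the original post):

```python
import numpy as np

def waveform_peaks(samples, width=640):
    """Reduce raw audio samples to one (min, max) pair per pixel column.

    This is the reduction that waveform renderers perform before drawing:
    each column shows the peak excursion of its slice of audio.
    """
    cols = np.array_split(np.asarray(samples, dtype=float), width)
    return [(c.min(), c.max()) for c in cols]

# One second of a 440 Hz sine at 44.1 kHz, reduced to 640 columns.
sine = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
peaks = waveform_peaks(sine, width=640)
```

Drawing is then just a vertical line per column from min to max, scaled to the image height.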

The post Create a Waveform Image with ffmpeg appeared first on David Walsh Blog.


          ITT Exelis teams with SAIC to counter future radar threats        
CLIFTON, N.J. -   (NYSE: XLS) has been selected by Science Applications International Corporation (SAIC) [NYSE: SAI] to provide engineering support for the Adaptive Radar Countermeasures (ARC) program. The five-year contract could be worth $15.6 million if all options are exercised. Administered by DARPA (the Defense Advanced Research Projects Agency), the ARC program will enable U.S. airborne electronic warfare (EW) systems to detect and counter digitally programmable radar systems whose waveforms a...
          Fairly Confusing Waveforms releases Cracklebot & Cracklebot Red for Kontakt        
Fairly Confusing Waveforms has released Cracklebot, a free vinyl noise machine for Native Instruments Kontakt. Cracklebot is an automated drum machine based on vinyl noise samples. I sampled worn and dirty, empty grooves from my vinyl collection. Lead-ins, lead-outs, silent parts in between tracks; those have been chopped into 115 short slices of pops, crackles, scratches […]
          Hill IFC Interferential Unit        
Hill IFC Interferential, Premod, Russian Stim, Galvanic and TENS

Product Description

Four Channel Multi-Waveform Electro-Therapy Unit

The Hill IFC is a state-of-the-art interferential therapy device cleared by the FDA for multi-waveform, multiple channel use. Both outputs can be programmed to Start/Stop independently. The clinician can pro..

Price: $995.00


          Speaker Types        
If you want to listen to music or any other sound, you'll need a speaker, because a speaker is the output device for audio. There are various kinds of speakers on the market, in different shapes and sizes, from small speakers up to large outdoor ones. How loud a speaker sounds depends on the frequency of the speaker's own vibration. A speaker is considered good when the sound it produces matches the original signal. Speaker sound effects can be adjusted to taste, such as bass, treble and other effects.

Speakers themselves can be divided into 3 main types based on their major groupings, namely:
1. Dual Cone Speakers
This type of speaker has very standard sound quality; its body is fitted with two cones. Speakers of this type are usually used as the factory-standard speakers in every car. They can also be called full-range speakers because they are indeed capable of producing a wide frequency range, though not optimally, especially for OEM car speakers.
2. Coaxial Speakers
This type is designed with the woofer, midrange and/or tweeter mounted on a single axis. Coaxial speakers come in 2-way (tweeter, woofer), 3-way (tweeter, midrange, woofer) and even 4-way (2 tweeters, midrange, woofer) versions.
3. Split / Component Speakers
This type comes with separate midbass, midrange and tweeter drivers combined into one speaker system. A split speaker set also includes an electronic device called a passive crossover, which sorts and divides the audio frequencies so that the tweeter, midrange and midbass each receive the frequencies suited to their capabilities. This type is again divided into 2-way (midbass, tweeter) and 3-way (midbass, midrange, tweeter) sets. It is the best type for getting the finest sound quality in a car audio system, since the drivers can also be positioned separately.
All speakers work the same way; roughly, this is how a speaker produces sound. When you listen to sound from a sound card, the digital audio data, a waveform such as .wav or mp3, is sent to the sound card. This digital data is processed by the DSP (Digital Signal Processor) working with the DAC (Digital-to-Analog Converter), which turns the digital signal into an analog signal; the analog signal is then amplified and output through the speaker. When you record sound through a microphone, your voice, which is analog, is processed by the DSP in ADC (Analog-to-Digital Converter) mode, converting the continuous analog signal into a digital one. This digital signal is stored in waveform-table format on disk, or compressed into another form such as mp3.
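The digital side of that chain is easy to see in code. A minimal illustrative Python sketch (the file name tone.wav is arbitrary) that synthesizes one second of a 440 Hz tone and stores it as the kind of 16-bit PCM .wav data a sound card's DAC later converts into an analog speaker signal:

```python
import math
import struct
import wave

SR = 44100  # sample rate in Hz

# One second of a 440 Hz sine, scaled to 16-bit signed integer range.
samples = [int(32767 * math.sin(2 * math.pi * 440 * n / SR)) for n in range(SR)]

# Write the samples as mono 16-bit PCM; this byte stream is the digital
# "waveform table" that the DAC reconstructs as an analog signal.
with wave.open("tone.wav", "wb") as wav:
    wav.setnchannels(1)   # mono
    wav.setsampwidth(2)   # 2 bytes = 16-bit samples
    wav.setframerate(SR)
    wav.writeframes(struct.pack("<%dh" % len(samples), *samples))
```

Playing tone.wav through any media player sends exactly these numbers to the sound card, which performs the DAC step described above.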

          TV SERIES SHAMELESS DROPS A FEW NUGGETS OF INFO ABOUT WAVEFORM THEORY. Realllly??        
In the latest episode of SHAMELESS [U.S.] (season 6, episode 11), 'Lip and the Professor get heated over wave theory...

While watching the latest U.S. episode of SHAMELESS, a twisted modern-day dysfunctional family series in its 6th season, the family anomaly Philip, or 'Lip as he goes by, got into it with his physics professor about wave theory. Lip had been teaching the class for his consistently drunk professor, as his teacher's assistant. In this particular scene the teacher managed to show up and began writing on the blackboard and narrating:
The Klein-Gordon Equation
The Dirac Equation
and the Free Field Einstein Equation

Lip enters the class and condescendingly interrupts, as he and the prof had an unsettled personal rift earlier that morning,

"Orr...we could use the Scale Relativity Theory to explain the origin of how wave functions are generated."

After a long uncomfortable pause the prof says,

"That's an option of course"
Lip quickly interrupts, " Yeah, uh' that's the standard. People stopped using field theory for this shit back in the 90's...which is a..'bout when you started pickling your frontal cortex with scotch."

Prof thinks for a moment and says, " Both ways of reconciling wave functions are perfectly valid"

Lip, " Nope. Uh I mean not any more........you know you should read up on it." Lip smirks and puts cigarette in his mouth, " Maybe you'll learn something."

The Professor takes Lip to the hall and they argue hardcore like they did last episode, but they made up with each other last time. This time it looks like it could be the end of this turbulent relationship of student and teacher. Wait, didn't Lip just come out of another turbulent relationship with his anthropology professor...

-------------------------------------------------

Well, we all know that the characters of Shameless are difficult to analyze, but perhaps we can attempt to see if the Professor is right, or Lip is right, or they are both right. And also, can we see from our findings if this can somehow benefit my/the (the narcissist in me, sorry) all-new musical instrument science of MEMMIS (Micro-Electro-Mechanical-Musical-Instrument-Systems)?

First of all today I feel like a character from Shameless. Going on little sleep and so this robs my brain power. I shall continue this later, but do feel free to comment away:-)
          Bitbuf delivers some of the best chiptune effects around        

Wow. And furthermore, WOW! Just looking at that clean prototype you know that a lot of work has gone into the project, but when you hear this chiptune MIDI device you’ll really be impressed. We know what you’re thinking, but really, you’ve got to hear this to appreciate the quality [Linus Akesson] achieved in this synthesizer. You can catch it after the break.

He does a great job of showing off the different waveforms that can be produced by the ATmega88 on this board. But there’s much more. It also serves as a 16 frame, 16 channel sequencer for creating …read more


          Tapehead - Sample Playback for Sound Designers        



For this, the first post, we're going to look at a simple device which utilises the sample playback capabilities of Max/msp - essentially we are making a playback device. It's fairly basic, but can be expanded on in the future to create a more complex system (more on that later). I'm not going to cover how to re-create this device step-by-step, so this post assumes that you have a basic competence with both Max and msp. If you've gone through some of the tutorials or spent a bit of time noodling around with Max you should feel at home here.

In part, this was inspired by a story which stuck in my head about Frank Warner, sound editor on Raging Bull, Close Encounters and a whole host of other great films. Here is a section from it, part of an interview with Walter Murch in the book Soundscape: The School of Sound Lectures:


'He [Frank Warner] would take one of the reel-to-reel tapes from his huge library and put it on almost at random and move it with his fingers. He'd just move it manually, at various speeds, backwards and forwards across the playback head. Meanwhile, he'd have the recorder turned on, capturing all these random noises.... But he was in the darkness, letting the breeze blow through his brain, waiting for the kernel of an idea that would emerge, fragments of unique sounds on which he could build everything else'


(Murch, 1998)


Being able to play sounds back at different rates is one of the oldest, but still one of the most useful techniques for creative sound design. This device is designed to facilitate simple pitch manipulation in a way that is playful and experimental, embracing a bit of randomness and the unexpected along the way. The idea is to load up a recording, and experiment with just playing back the sample data at different rates and in different directions. There is no musical tuning, no measurements in semitones and cents, just the waveform, playback time and the playback pattern over that time. 

Here is the link to download the patch:

Tapehead 1.0 

You will need either a full version of Max/msp 6 or the Max/msp 6 runtime. Both are available from Cycling '74 here. The runtime version of Max allows you to run patches but not edit them.

(This patch is tested up to Max version 6.08, the current version 6.12 has issues with the replace message to [buffer~] so will not work, if you do have problems try an earlier version)

The best thing to do is load a sound and flick through the presets, try selecting different areas of the file, different playback times and shapes. With the breakpoint editor shift-click removes points and alt-click+drag adjust the curve if in curve mode. You can also drag the manual scrub bar at the top and scan over manually.

So this doesn't exactly break new ground, as this is all possible in most DAWs, but it does provide a convenient tool for experimentation. Also, within your DAW this is usually achieved through editing and off-line effects such as the pitch-bender. This player is capable of changing playback direction and position very quickly and specifically, and can control this using complex playback curves. The other key factor here is that as this process is live, there's no waiting for offline processing. 

I'm not going to explain how every single part of this patch works, but we are going to look at the main playback mechanism at its heart. Max has a range of different objects which can be used for sample playback, all which have slightly different attributes and capabilities. When I first started using Max I remember finding this quite confusing and overly complex, as sample playback is considered a really basic capability of any audio system. However, I soon learnt that with this complexity comes versatility, and that through this Max is capable of creating a range of sample driven instruments or playback systems. 

These are the objects associated with sample playback:

sfplay~
groove~
wave~
2d.wave~
play~

The first on the list, [sfplay~] is the odd one out here, as it plays back from disk. The others all play from an object called [buffer~] so the audio they use is stored in memory, like a sampler. 

With Max I often find that making a connection between two different objects is the inspiration for a device, and that's what happened here. I was tinkering with an object called [function] which is usually used for creating envelopes of different kinds and thought of a slightly unorthodox use for it; driving a [play~] object to playback samples in interesting ways.

Here is a simple patch below which demonstrates the core of this mechanism:




Here's a link to the patch itself:

Tapehead Basic


And here's a step by step rundown of what happens inside:

1. You load a sample into the [buffer~] called soundA  

2. This triggers [info~] to spit out some information about the sample we have stored in our [buffer~]. In this case we are interested in the total time of the sample, or how long it is, at the sample rate which it was recorded.

3. Moving over to the [function] object (the XY graph), we first set a duration which it will cover using the setdomain message. The message box here will add the text setdomain onto the beginning of any message which passes through its left inlet.

4. Trigger playback using the button at the top of the patch. This causes function to pass on information about the breakpoints you've created to [line~]

5. [Line~] generates a signal matching the shape and time which you set for the function. So a straight line from 0-1, left to right is linear playback, forwards. The opposite - a straight line from 1-0, left to right is linear playback, backwards. Between this you can set a playback shape which scans the wave in any way you see fit, backwards or forwards.

6. As the output from [line~] is between 0-1 we use [scale~] to scale the signal up to the length of our sample.

7. The signal then drives the [play~] object, playing back sample data from the [buffer~] and outputting the sound you have created through the [ezdac~].
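For readers who think in code rather than patch cords, the [function] -> [line~] -> [scale~] -> [play~] chain can be approximated in a few lines of NumPy. This is a purely illustrative sketch, not part of the Max patch:

```python
import numpy as np

def tapehead_play(buffer, breakpoints, duration, sr=44100):
    """Render audio by scanning `buffer` along a breakpoint envelope.

    breakpoints: list of (time_fraction, position) pairs, both 0..1,
    mimicking the [function] -> [line~] -> [scale~] -> [play~] chain.
    """
    n_out = int(duration * sr)
    t = np.linspace(0.0, 1.0, n_out)      # normalized output time
    times, positions = zip(*breakpoints)
    env = np.interp(t, times, positions)  # [line~]: piecewise-linear ramp 0..1
    idx = env * (len(buffer) - 1)         # [scale~]: scale up to buffer length
    lo = np.floor(idx).astype(int)        # [play~]: interpolated sample lookup
    hi = np.minimum(lo + 1, len(buffer) - 1)
    frac = idx - lo
    return buffer[lo] * (1 - frac) + buffer[hi] * frac

# Linear forward playback at double speed: a 1-second buffer rendered in 0.5 s.
buf = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
out = tapehead_play(buf, [(0.0, 0.0), (1.0, 1.0)], duration=0.5)
```

A reversed sweep is just `[(0.0, 1.0), (1.0, 0.0)]`, and any wigglier breakpoint list gives the back-and-forth scrubbing behaviour described above.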


I've expanded on the device further by adding the manual scrub option, as that can often be a good way of discovering new sounds, and adds more of a physical dimension to the process. I expect everyone who uses this in Protools has accidentally discovered a sound which is more interesting backwards than forwards in this way! The rest of the completed application is composed of UI objects (menus, sliders etc) and other control objects like [preset]. The beauty of this patch is the potential for expandability here. Now we have the main control mechanism in place we can duplicate it to add in other parameters. Multimode filter with envelope control over cutoff and resonance? envelope driven adjustable delay line? Amplitude envelope? Envelope controlled pitchshift? LFO controlled vibrato? Envelope controlled LFO speed? A bank of presets for each effect? Randomised preset recall? It's all possible.

Please feel free to comment. I'll also be expanding the system in a future post, so keep an eye out for that. 






          The Annotated Artwork: “Preoccupied Waveforms”        
An installation turns synesthesia into something you can visit.
          Comment on Impulse Response by How to Disappear Completely: My Year Without EQ | Flying Eye Productions        
[…] channels to pop in the mix. This technique improved when I switched to the Earthworks mics. The transient response of the capsule accurately captures the very quickly changing waveform, impossible with the much […]
          Sugar House Review: Review of Macgregor Card's Duties of an English Foreign Secretary        


Duties of an English Foreign Secretary by Macgregor Card (2009 Fence Books)
Reviewed by Curtis Jensen

An electric generator is a device that converts mechanical energy into electrical energy. A simple AC generator consists of a strong magnetic field, conductors that rotate through that magnetic field, and a means by which a continuous connection is provided to the conductors as they rotate. Each time a complete turning-over is made by the rotor, a cycle of alternating current is created. Thus rotational energy is converted into electrical energy. Rotation over time can be graphed as a sine wave, fixed points along the wave’s curve corresponding to events along a rotation’s unfolding in the flow of time. If such a waveform is centered on 0, its point of equilibrium, and its high peak is 1, then its low peak must be -1. The line of a sine wave turns and returns (or returns and turns) to its high and low peak as it unfolds in time.
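For the literal-minded, the rotation-to-sine correspondence above can be checked numerically; a purely illustrative Python fragment (the choice of 8 sample points per rotation is arbitrary):

```python
import math

# One full rotor turn sampled at 8 evenly spaced angles, mapped onto a sine
# wave centered on 0: the trace rises to its high peak of 1, falls back
# through equilibrium to its low peak of -1, and returns.
points = [round(math.sin(2 * math.pi * k / 8), 3) for k in range(8)]
```

Each value is the instantaneous "voltage" at one event along the rotation's unfolding, exactly the turning and returning the review describes.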

In the poem, “Nary A Soul” in Macgregor Card’s Duties of an English Foreign Secretary, Card’s speaker states:

If I could
If I no could

If I could: high peak. If I no could: low peak. Here the waveform is centered on I, the couplet’s subjective equilibrium. The peak to peak voltage of the couplet is something like the relative value of could + the relative value of no could. In this case, the peaks are understood to be of a class of subjective possibilities, If I could: the speaking subject in the conditionally possible mode; If I no could: the speaking subject in the conditionally impossible mode.

As the figure rotates its conductive high and low peaks through the charged field of the poem unfolding in time, energy is generated. Of course various devices might be operationalized to conserve and/or also generate more energy:

If I could
If I no could...

If I could could could
No, could NO could could...

The figure of the first waveform is present in the second couplet, but its material spine has been reordered in rhythm, repetition, and variation. If oscillation can be understood as repetitive variation in time about a central value (a point of equilibrium) or inversely between two or more different states (in this example could and no could, but the states need not be opposing), then oscillation is what’s happening here.

From “The Merman’s Gift”:

“Take care.”
“Take care forever, no!”

Another reversal, another oscillation. From “The Libertine’s Punishment”:

Something is moving beside me
Nothing’s supposed to be there

Equilibrium here is the position between the something that is and the nothing that is not. Oscillation occurs in the charged field of presence, absence, expectation, fear, doubt... Cartesian geometry is insufficient to the task of this field’s mapping as there are too many planes for it to express.

In Duties of an English Foreign Secretary, Macgregor Card searches for (and finds!) those figural planes capable of expressing and so transmitting the energy of his nimble, terrifying, hilarious, melodic and significant poetic oscillations between sets of peak values: contemporary cityscapes to depth charges of historical conventions and texts; plunges into the complexities of a relationship (romantic and platonic modes both) to recoilings back from the social milieu; the subjective plane of present earth to the objective heights of the air, which turns out to be just as contingent in its flickering phenomena as anything perceived at the firmament. In the wash of the work’s music, points of equilibrium blister out of the text as certain subjective perspectives. Often roles such as juror, maudit, and my favorite: the sun’s own paned ajudicant. Roles are taken up or avoided, embraced or shunned, constituting another oscillational plane of the text. Oscillations set into the fields of other oscillations, e.g. in “Gone to Earth” a social interaction in the air permutates to a private kind of night in the tomorrow possible on the ground.

Often feeling talked about
or bored
I’ll start to count, but it will pass
Haven’t seen one beast today
Gone to Earth
It is too near–maybe I can tell
It’s difficult to clear the air

Tomorrow I will find a kind of private night

Card is at all times clearly conducting the oscillations of the poems in Duties. He does not do so from behind a shroud, like an idiot tractor-driver with a paper bag over his head expecting the children at the field’s edge watching him to believe the field plows itself; nor is he standing on one foot on the tractor seat, with his scalp dyed red and his clapping hands, screaming at the children over the knocking engine to collectively acknowledge a projection of his self. Card is clearly present as the conductor within each poem of Duties, driving the works’ turns and returns phrase by phrase. Card shows the movements of his hands in his struggle with the material of the text in its necessarily non-Cartesian geometry, and Card’s secret suit lies in this open handling of the poems’ material. Furthermore, through motif, melody, pathos, humor, rhyme and theme and variation, and other devices, Card beckons the reader to join him in the poems’ oscillations and transmission of energy, in the working out of their movements. It is in this aspect of his work that Card draws his cues most significantly from the Spasmodics, the group of Victorian era poets characterized by their verse dramas and lengthy introspective soliloquies. The Spasmodics ascended quickly to popularity, and just as quickly to derision, their namesake taking on a derogatory aspect in most modern criticism in spite of its link to canonical figures like Tennyson and Browning. Sydney Dobell is a Spasmodic poet whom Card has promoted outside the text at firmilian.blogspot.com and acknowledged within by Duties’ title and inscription.

Card’s struggle to manage the sonic/linguistic material of the poem is something that can be heard and read throughout Duties. In essence, Card shows his work at every turn (or return); thus his authority is transparent in his open struggle with the text’s material. We see, in fact, we hear and therefore feel, phrase by phrase, how Card made his compositional choices. Paradoxically it is Card’s quickness and poetic skill, his nimbleness in music, word play, and phrasal movement that makes the book wholly his own. So we have another oscillation, between transparency and mastery. But at certain moments it is this mastery that can sling the reader from the text. Certain moves perhaps might be considered over-nimble, moves so quick as to wrench the reader from the poem and into the dirt of pragmatics’ arena. Perhaps that is the cost of such productive experiments in the generation of energy through poetic oscillation. Nevertheless, through his precise management of affective devices, the motifs, melody, pathos, humor, rhyme and theme and variation mentioned previously (devices of which Dobell was a master), Card by and large supports the reader through Duties’ interrelational unfolding, and in so doing he harnesses Duties’ high-charge oscillations to powerful poetry.

What geometries, then, could describe the energy dynamics of interrelational oscillations such as those that Card executes in Duties of an English Foreign Secretary?
          AO-ATOMNJB001 Ninja Blade HDMI Recorder-Monitor for Filmmakers        
AO-ATOMNJB001 Ninja Blade HDMI Recorder-Monitor for Filmmakers


ATOMOS AO-ATOMNJB001 (AO-ATOMNJB 001 AO-ATOMNJB-/001 AO-ATOMNJB001) Ninja Blade - the ultimate portable HDMI recorder-monitor for filmmakers.

In the box: Ninja Blade works right out of the box, you can be using it within minutes! We supply everything (except storage media): Ninja Field Recorder, NP Series-compatible battery, battery charger, 110-240V AC adaptor, D-Tap adaptor, 2 x Master Caddy cases, 2.5” HDD/SSD docking station, USB 2/3.

YOUR 10-BIT HDMI SMART PRODUCTION WEAPON
The new Ninja Blade offers a stunning 1280 x 720 5” SuperAtom IPS touchscreen; with 325ppi, 179-degree viewing, 400nit brightness and multi-frequency (48/50/60Hz) operation, you can expect crisp, super smooth monitoring and playback. The capacitive touch panel gives lightning-quick response times and gesture capability. Controlling AtomOS 5.1 on Ninja Blade is silky smooth. The Ninja Blade is the world’s most advanced smart production recorder, monitor and playback deck. Every part of its physical and operational design has been carefully crafted to deliver the ultimate in simple operation and mission critical reliability. The Blade combines multiple devices – external monitor, capture card, playback deck and cut edit suite – into a single affordable tool. It’s lightweight, tough and robust for operation in the field.

10-BIT 4:2:2 QUALITY FOR THE MASSES
Your affordable DSLR or camcorder is ripping the life out of your sensor with MPEG 4:2:0 8-bit recording. Increase the quality of any camera to professional edit friendly codecs – the files are large and the result amazing; only Atomos deliver the ultimate in quality recording to the masses. So why record 10-bit from the sensor of a camera, if it's 8-bit? Well, if you want to edit, use CG or 3D effects, green screen or add titles and transitions, these will all be crushed to 8 bit. We bypass the 8-bit and record 10-bit colour registries to ensure your video plays nicely with all computer effects. 
Your Camera's Best Friend
Today’s camera sensors and lenses are quite spectacular, and we harness even large 36 MP sensors on mirrorless, DSLR and video cameras to ensure you get the very best from your investment. YES - we support your make and model of Canon, Sony, Nikon, Panasonic!

Powerful Monitor and Test Suite
Packed into the light and mobile camera-mountable Ninja Blade are all the essential tools you need to set up on-site accurate colors and exposure, including full waveform monitor functions - Vectorscope, RGB and LUMA parades. Each provides an intuitive transparent interface offering full screen, lower 3rd or bottom right corner positioning to give you a range of options when setting up, recording and playing back your shot.

INTRODUCING THE ATOMOS SPYDER COLOR CALIBRATION FOR NINJA BLADE
Colors You Can Trust
Atomos is the world's first to offer a portable calibration unit for the Ninja Blade - a monitor, recording device and deck. No other competing device offers this level of color precision. Developed in partnership with Datacolor, the Atomos Spyder gives the Ninja Blade one-button color calibration normally only found on high-end monitors. With Spyder, the Ninja Blade gains the ability to accurately calibrate to the SMPTE Rec 709 color space with a D65 white point and 100% gamut. Being able to trust the colors on a monitor while setting up the shot with RGB, Luma Parade and Vectorscope tools means perfect results every time. 
Professionals will spend far less time color correcting and finishing in post helping you save time and money.
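The 8-bit versus 10-bit difference described above is easy to quantify: each extra bit doubles the number of tonal steps per colour channel. A quick illustrative sketch (not Atomos code):

```javascript
// Tonal levels per colour channel for a given bit depth.
const levels = bits => 2 ** bits;

console.log(levels(8));  // 256 steps per channel (typical DSLR output)
console.log(levels(10)); // 1024 steps per channel, 4x finer gradation
```

Those extra steps are what keep gradients and green-screen keys from banding once effects are applied.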


          AO-ATOMSAM002 Samurai Blade 10-bit HD/SDI Field Rec.&HD Monitor        

Atomos AO-ATOMSAM002 Samurai Blade 10-bit HD-SDI Field Recorder and HD Monitor (Retail Kit)

What's in the box:
1x Samurai Blade 10-bit HD-SDI recorder/monitor, retail box (carry case is optional)
1x 2600mAh 2-cell battery (Sony N/L series compatible)
1x D-Tap adaptor (no cable)
1x 1000mA single-plate AC battery charger
1x USB 2.0/3.0 docking station including cables
2x Master Caddy (HDD/SSDs not included)
1x AC adapter

Super sharp, super bright, super blacks... down to the last atom! At 325 dpi and 1 million pixels (1280x720), this 5" SuperAtom IPS panel delivers amazing resolution, super-accurate colours and super-deep blacks, with an image representation that oozes atmosphere. When you see this screen you will not believe your eyes: OLED seems lifeless and dull by comparison at normal brightness levels. In-Plane Switching technology really brings your images to life! The capacitive touch panel gives lightning-quick response times and gesture capability; controlling AtomOS 5.0 on the Samurai Blade is silky smooth. Fully adjustable gamma, brightness and contrast, plus a built-in waveform monitor, make this a must-have video tool for professionals.

The Atomos Samurai Blade is the world's most advanced smart production recorder, monitor and playback deck. Every part of its physical and operational design has been carefully crafted to deliver the ultimate in simple operation and mission-critical reliability, converging separate devices (monitors, capture cards, playback decks and cut-edit suites) into one affordable unit. It's lightweight, tough and robust for operation in the field.

YOUR CAMERA'S BEST FRIEND
The camera is king, and today's sensors and lenses are quite spectacular. The Samurai Blade harnesses large 5K+ sensors on professional video cameras to ensure you get the best from your camera. Atomos supports your make and model of Canon, Sony, Nikon, Panasonic, JVC, RED and Arri.

CONSTRUCTION
Designed using aircraft-grade aluminium, the Samurai Blade delivers durability and portability. Locking mechanisms for each removable part mean sturdy, reliable operation. Weighing a mere 380 grams, this pocket rocket will never weigh you down and is completely at home on top of any camera, whether in the studio or up a mountain.

COMPLETELY CUSTOM
Atomos doesn't believe in jigsaw-puzzling together a best-of-breed product: they have designed every circuit, coded every function and invested thousands of hours in testing, quality control, design and manufacturing. They do not buy standard IP cores such as codecs and do not lease HDMI/SDI interfaces; they write everything from scratch to deliver a finely tuned thoroughbred video machine!

UNBELIEVABLY LOW POWER CONSUMPTION
At 6 watts, nearly five times less than the nearest competitor, the Samurai Blade is perfect for battery-powered, in-the-field operation. It boasts no fewer than 4 power options, including the supplied NP-series batteries, a DC power adapter for larger batteries, AC mains power and Atomos' patented Continuous Power dual-battery system. You'll never be without power when you need it most; the batteries even lock into place so you don't lose power in any event.

AFFORDABLE 2.5" HDD OR SSD
2.5" hard disks are the most affordable digital storage on the planet. They outperform SSDs, SD cards, SxS and P2 cards in cost and reliability for video use. In normal video shooting environments the 2.5" HDD is your new tape: extremely low running costs, long record times (up to 30 hours) and endless supply. For vibration-sensitive shoots, around the race track or in the helicopter, Atomos supports modern SSDs. Bang for buck, you can't beat hard disk for 90% of video shooting.

Like the rest of their hardware, the OS used on Atomos products is smart. It's lightning fast and snappy, childishly simple to operate, and robust and reliable. It encompasses all aspects of video production, from recording to playback and review, monitoring assistance and even simple cut-and-tag editing. Atomos have spent thousands of hours refining and updating their OS and have released over 50 updates in 2 years with functionality improvements and operational enhancements, all free to every customer; they don't believe in paying for software upgrades. AtomOS will be the video partner you can't live without. New to AtomOS 5: alpha channel and transparency support, full waveform monitor with RGB and luma parade, and a vectorscope with zoom.

RECORD DIRECT FROM THE SENSOR
Direct from the sensor means the best quality from your lens and sensor, no matter when the camera was made. Old and new cameras alike benefit from higher-quality recording, enhancing new cameras and breathing new life into older ones.


          AO-ATOMSHSTU01 Shogun Studio        

EVENTS
Shogun Studio solves many of the problems faced by multi-camera events with long run times. Our Master Caddy delivers affordable and reliable media, while features like time lapse, pre-roll cache and 4K/HD simultaneous recording make this the perfect choice for event professionals. Our open approach to media means you can record up to 2.5 hours of 4K 30p or 5 hours of 1080p60 onto a single 1TB SSD, keeping media costs under $150 per hour. The Shogun Studio can monitor the input, view waveforms/scopes, set up complex time lapses and simultaneously record a proxy, underpinned by the safeguard of pre-roll cache recording (8 sec for HD, 2 sec for 4K) to ensure you never miss the action.

MOBILE PRODUCTION
With rack space at a premium and resolution on the rise, mobile production professionals will welcome the ability to record, monitor, test, measure, convert and play out from a single 3RU device. Dual-codec, dual-resolution recording maximises flexibility in mobile situations: combine the ability to record 4K-resolution ISO feeds with a hardware-down-converted HD proxy, record ProRes on one channel and DNxHR on the other, or even play back and convert from one codec/resolution combination to another. Connecting to infrastructure is simple too, with built-in bi-directional HDMI and SDI conversion. The Shogun Studio is ready for the 4K future, but 4K acquisition doesn't have to mean 4K delivery: when 4K can be achieved on an HD budget and you can downscale or choose the recording codec to suit, you have the perfect product now and for the future in the tight space of mobile production.

ON SET / DIT
A Shogun Studio in the rack of a DIT cart arms the set with the fastest path from acquisition to editing, recording direct to 10-bit 4:2:2 ProRes, DNxHR and RAW, all housed within a world-class monitor with test measurement for the highest-quality 4K/2K/HD viewing and setup. For colour analysis, the RGB and vector scopes provide real-time feedback and can be overlaid in several positions on the colour-accurate 7.1" 325ppi Full HD monitor. You can double-check focus pulls with 2:1 zoom, focus peaking and edges-only view, and with 8 LUT slots on board and an infinite number on disk, previewing the final result is easy. These decisions can be made with the confidence that you are using a 100% sRGB factory-calibrated monitor that can be recalibrated over time using the optional Spyder unit. Outside of monitoring, DITs can start the editing process on set with favourite and reject tagging to begin forming a playlist and XML.

SECURITY
In security applications, a wide-angle lens on a 4K camera can mean a single camera replaces the equivalent of 4 x HD cameras. The Shogun Studio, with its long record times, time lapse, record scheduling and monitor zoom capability, makes the perfect weapon for 4K security applications. In ProRes LT at 25 fps the Shogun Studio can record 6 hours of Ultra High Definition 4K per channel; this not only provides superior resolution but also more frames captured per second, giving a finer-grained approach to high-resolution security recording. Where the additional frames aren't important, the time-lapse functionality can be set up to record specific frames at specific points over a specified duration. All of this can be reviewed on the dual 7.1" Full HD monitors and easily zoomed for closer inspection of areas of interest.
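The quoted record times follow directly from codec bitrate and drive capacity. A quick sketch (the bitrate used here is an assumed round figure for illustration, not an official Atomos or Apple specification):

```javascript
// Hours of footage that fit on a drive, given an average codec bitrate.
function recordHours(driveGB, bitrateMbps) {
  const bits = driveGB * 8e9;            // drive capacity in bits (decimal GB)
  return bits / (bitrateMbps * 1e6) / 3600;
}

// Assuming ~440 Mb/s average for a 1080p60 ProRes HQ stream,
// a 1 TB SSD holds about 5 hours, matching the figure quoted above:
console.log(recordHours(1000, 440)); // ≈ 5.05
```

Plugging in other bitrates shows why higher-resolution or higher-quality codecs shrink record time so quickly.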


          AO-ATOMSHSTU01ED Education Only Shogun Studio        

Synchronized 4K, 2K and HD simultaneous play-out to high-resolution projector, jumbotron or signage systems.

EVENTS
Shogun Studio solves many of the problems faced by multi-camera events with long run times. Our Master Caddy delivers affordable and reliable media, while features like time lapse, pre-roll cache and 4K/HD simultaneous recording make this the perfect choice for event professionals. Our open approach to media means you can record up to 2.5 hours of 4K 30p or 5 hours of 1080p60 onto a single 1TB SSD, keeping media costs under $150 per hour. The Shogun Studio can monitor the input, view waveforms/scopes, set up complex time lapses and simultaneously record a proxy, underpinned by the safeguard of pre-roll cache recording (8 sec for HD, 2 sec for 4K) to ensure you never miss the action.

MOBILE PRODUCTION
With rack space at a premium and resolution on the rise, mobile production professionals will welcome the ability to record, monitor, test, measure, convert and play out from a single 3RU device. Dual-codec, dual-resolution recording maximises flexibility in mobile situations: combine the ability to record 4K-resolution ISO feeds with a hardware-down-converted HD proxy, record ProRes on one channel and DNxHR on the other, or even play back and convert from one codec/resolution combination to another. Connecting to infrastructure is simple too, with built-in bi-directional HDMI and SDI conversion. The Shogun Studio is ready for the 4K future, but 4K acquisition doesn't have to mean 4K delivery: when 4K can be achieved on an HD budget and you can downscale or choose the recording codec to suit, you have the perfect product now and for the future in the tight space of mobile production.

ON SET / DIT
A Shogun Studio in the rack of a DIT cart arms the set with the fastest path from acquisition to editing, recording direct to 10-bit 4:2:2 ProRes, DNxHR and RAW, all housed within a world-class monitor with test measurement for the highest-quality 4K/2K/HD viewing and setup. For colour analysis, the RGB and vector scopes provide real-time feedback and can be overlaid in several positions on the colour-accurate 7.1" 325ppi Full HD monitor. You can double-check focus pulls with 2:1 zoom, focus peaking and edges-only view, and with 8 LUT slots on board and an infinite number on disk, previewing the final result is easy. These decisions can be made with the confidence that you are using a 100% sRGB factory-calibrated monitor that can be recalibrated over time using the optional Spyder unit. Outside of monitoring, DITs can start the editing process on set with favourite and reject tagging to begin forming a playlist and XML.

SECURITY
In security applications, a wide-angle lens on a 4K camera can mean a single camera replaces the equivalent of 4 x HD cameras. The Shogun Studio, with its long record times, time lapse, record scheduling and monitor zoom capability, makes the perfect weapon for 4K security applications. In ProRes LT at 25 fps the Shogun Studio can record 6 hours of Ultra High Definition 4K per channel; this not only provides superior resolution but also more frames captured per second, giving a finer-grained approach to high-resolution security recording. Where the additional frames aren't important, the time-lapse functionality can be set up to record specific frames at specific points over a specified duration. All of this can be reviewed on the dual 7.1" Full HD monitors and easily zoomed for closer inspection of areas of interest.


          CD-ODYSSEY-7Q+ Odyssey7Q+ 7.7" OLED Quad Monitor & Multi-Format Recorder        

Convergent Design CD-ODYSSEY-7Q+ Odyssey7Q+: 7.7" OLED Quad Monitor and Multi-Format Recorder. Having the best professional monitor/recorder is a big plus! The Odyssey7Q+ is the most advanced, most capable, most versatile monitor/recorder in the world. It can record HD/2K/UHD/4K via SDI and HDMI, in Apple ProRes, uncompressed DPX, and RAW (with Record Options).

4K ProRes recording for A7S and GH4 included free
4K/UHD capture over HDMI or SDI
SDI single/dual/quad link
Low power, light weight, and rugged magnesium case
Intuitive touchscreen OLED interface
Apple ProRes 422 (HQ) / Apple ProRes 422 / Apple ProRes 422 (LT)

THE MOST VERSATILE MONITOR/RECORDER IN THE INDUSTRY
An OLED monitor with tools, a RAW and video recorder for HD through 4K, and a multi-stream monitor/switcher. The Odyssey7Q+ features an OLED 1280x800 monitor with true blacks, accurate colors, extended color gamut and a 176-degree viewing angle. Along with the best image in the industry, it also features an extensive array of image-analysis tools, including an RGB waveform, RGB histogram, false color, pixel zoom with finger drag, three-mode focus assist and monitoring LUTs. The unique Multi-Stream Monitoring mode allows up to four HD video inputs to be viewed at once in a quad-split view, or live-switched between in full screen. The Odyssey7Q+ weighs a little over one pound, is just one inch thick and can run on any power source from 6.5-34 volts.

MONITORING
The Odyssey7Q+ features a 7.7" OLED screen with 1280x800 resolution; the OLED display provides true blacks and accurate colors. A full complement of image-analysis tools includes a waveform (luma or RGB parade), a histogram (luma or RGB parade), zebra, programmable false color, pixel zoom with finger drag, a three-mode focus assist and LUTs. The included LUTs provide proper viewing of the LOG and RAW modes available from numerous popular cameras, with programmable LUTs coming in the future. These tools are controlled through the easy-to-use touchscreen interface and can be viewed on the OLED or, optionally, on the SDI and HDMI video outputs of the Odyssey7Q+.

RECORDING
The Odyssey7Q+ records more formats and signal types than any other recorder in the world. Included is Apple ProRes recording in HD/2K up to 60p and UHD/4K up to 30p, plus uncompressed 10/12-bit DPX video recording in HD/2K up to 60p. Record Options, available for purchase and/or rent via this website, add ARRIRAW (ALEXA), Canon Cinema RAW (C500), Sony FS RAW (FS7/FS700) and POV RAW (IO Industries, Indiecam). FS RAW and Canon Cinema RAW can also be converted from RAW signals into video and recorded as Apple ProRes. Additional recording formats will be added via firmware updates. Recording is to fast, reliable Odyssey SSDs, available in 256GB, 512GB and 1TB sizes; two SSD slots allow for extended record times and high-data-rate recording.

PLAYBACK
All formats are available for immediate playback directly on the Odyssey7Q+. Full deck controls and tablet-style touchscreen scrubbing through clips make playback quick and easy. In ProRes files, clip markers quickly pinpoint important material, and the image-analysis tools are also available in play mode.

MULTI-STREAM MONITORING
View up to four HD video signals at once in a quad-split screen, or live-switch between inputs. Great for multi-camera shoots and internet-streamed productions. Future Record Options will allow recording of four separate simultaneous Apple ProRes files, including a live-switch file and an edit decision list (EDL) in an XML file.

FUTURE-PROOF
Use the Odyssey7Q+ as a monitor today on any modern camera; add an Odyssey SSD and it is ready to record. When shooting with a RAW-output camera, Record Options can be purchased from the website at any time, and firmware updates continue to expand and refine the capabilities of the Odyssey7Q+.

AFFORDABLE HAS NEVER BEEN THIS VERSATILE
At a US list price of $2295, the same as comparable OLED monitors on the market today, the Odyssey7Q+ is the best on-camera monitor in the industry, plus it's the most capable and versatile recorder available.

LIGHTWEIGHT, LOW POWER, SMART DESIGN
Clever design and a magnesium-alloy case keep the Odyssey7Q+ to just over 1 lb (560 g). It consumes 8-19W depending on mode, can run on anything from 6.5-34V, and has no fans or venting to make noise or let in the elements.
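To see why uncompressed recording benefits from two SSD slots, consider the raw data rate of a 10-bit 4:2:2 HD stream. This is a back-of-the-envelope sketch; the numbers are illustrative, not Convergent Design specifications:

```javascript
// Approximate data rate of uncompressed 4:2:2 video, in Mb/s.
// 4:2:2 carries 2 samples per pixel (luma plus alternating chroma).
function uncompressedMbps(width, height, fps, bitsPerSample) {
  return width * height * fps * bitsPerSample * 2 / 1e6;
}

console.log(uncompressedMbps(1920, 1080, 60, 10)); // ≈ 2488 Mb/s for 10-bit 1080p60
```

At roughly 2.5 Gb/s, a single 256GB SSD fills in well under 15 minutes, which is exactly the kind of load that extended-record and dual-drive modes are designed for.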


          PCM-D100 High Resolution Portable Audio-Recorder        

Sony PCM-D100 High-Resolution Portable Audio Recorder: 2 x condenser mics, up to 24-bit/192 kHz linear PCM, DSD at 2.8 MHz, noise limiter, WAV, MP3 and more. Backlit LED display and 32GB internal memory. No AC adaptor included.

Portable high-resolution audio recorder: high performance, high-resolution portable audio recording. Sony's PCM-D100 audio recorder is designed to deliver the highest sound quality in professional audio applications including live music events, theatrical performances and news gathering. The recorder supports the latest high-resolution codecs and formats, including 192kHz/24-bit PCM and DSD. Compatibility with the DSD format enables recording of source sounds using digital signals, but in a format that closely resembles analogue waveforms. Compatible with recording and playback in 192 kHz/24-bit linear PCM, the unit can reproduce ultra-high-range, delicate musical components with excellent audio quality from low to high range. Its broad playback frequency band easily exceeds the audible band of 20 Hz to 20 kHz.

A highly sensitive directional microphone uses a new 15 mm unidirectional mic unit. The mic's sound-collection range adjusts to suit various sources, from performances with a small number of people to concert halls with a large group of performers. The highly sensitive, broadband recording functionality captures frequencies up to 40 kHz, maximizing the advantages of DSD recording.

The PCM-D100 has 32 GB of built-in flash memory and a combination SD Card/Memory Stick slot for expandable storage. Its lightweight aluminium body is built to withstand the demands of professional applications and offers long battery life from four AA batteries (up to approximately 11 hours in DSD 2.8 MHz/1-bit).

The PCM-D100 recorder is part of Sony's newly announced High-Resolution Audio initiative, a complete series of products designed to help music lovers conveniently access and enjoy the digital music they love in the best playback quality.

Features:
High-resolution recording: supported formats include DSD 2.8 MHz, LPCM up to 192kHz/24-bit, and MP3.
Built-in electret condenser microphones: exceptionally high sound quality. The X/Y or wide-position stereo microphones are unidirectional with a flat, wide frequency response and natural sound characteristics.
Flexible playback features: both digital pitch control and key control for LPCM and MP3 recordings. Digital pitch control maintains pitch while slowing down or speeding up playback; key control changes the pitch while maintaining playback speed.
Simple uploading to computers: a USB 2.0 high-speed port for simple uploading/downloading to and from Windows PC or Macintosh computers.
Versatile recording functions: comprehensive signal processing for location recording, including a limiter and low-cut filter, plus a 5-second pre-record buffer and a cross-memory recording function.
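As a rough sanity check on those formats, the raw bitrate of uncompressed stereo PCM, and the record time it implies on the 32 GB internal memory, is simple arithmetic. This sketch ignores file-format overhead:

```javascript
// Raw bitrate of uncompressed multichannel audio, in Mb/s.
function audioMbps(sampleRateHz, bitsPerSample, channels) {
  return sampleRateHz * bitsPerSample * channels / 1e6;
}

// Hours of recording that fit on a card of the given size (decimal GB).
function hoursOnCard(cardGB, mbps) {
  return cardGB * 8e9 / (mbps * 1e6) / 3600;
}

const hiRes = audioMbps(192000, 24, 2); // 9.216 Mb/s for 192 kHz / 24-bit stereo
console.log(hoursOnCard(32, hiRes));    // ≈ 7.7 hours on the 32 GB internal memory
```

The quoted 11 hours in DSD is a battery-life figure, not a storage figure; DSD's 5.6 Mb/s stereo stream actually fits more hours per gigabyte than 192 kHz PCM.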


          R8 recorder, interface, controller, sampler        

Zoom R8: Recorder, Interface, Controller, Sampler. Zoom takes the turbocharged design of the R24 and scales it down into an ultra-portable music-production solution. Like its predecessor, the R8 combines four production tools in one versatile device: in addition to being an 8-track recorder that records to SD memory, the R8 is an audio interface, a DAW control surface and a sampler complete with drum pads and a rhythm machine.

Recorder: simultaneous recording of 2 tracks and playback of 8 tracks
The R8 is the perfect tool for capturing audio on the go. Record live music performances, rehearsals, songwriting sessions or even audio for film and video. Play back up to eight tracks of audio at up to 24-bit/48kHz resolution as WAV files. If you make a mistake, the UNDO/REDO function cancels the last recording operation and restores the previous state. You can even mix down completed songs inside the R8 and save a mix for each project.

Interface: 2 inputs and 2 outputs
Combined with your computer, the R8 becomes a powerful audio interface. Connect the R8 to your computer via USB, launch your favorite DAW or the included Cubase LE software, and start laying down tracks. Simultaneous 2-in/2-out capability allows you to record up to 24-bit/96kHz high-definition audio. At the 44.1kHz sampling rate, the R8's internal DSP effects are also available for your computer tracks, and a dedicated control adjusts the monitoring balance between the DAW playback sound and the direct sound.

Controller: control-surface functions for most DAW software
The R8 can be used as a control surface for DAW transport functions (play, record, stop) and mixing operations. Through a USB connection you can control the transport and mixing functions of major DAWs such as Cubase, Logic and Sonar, and you can easily move multiple faders at the same time. No more mixing with a mouse! The R8 makes mixing a pleasure.

Sampler: 8-voice sampler with 8 pads
The built-in sampler functions allow you to loop audio data on any track. Play the pads in real time and combine loops to create a performance for an entire song. When setting loop intervals, you can see the waveforms for visual confirmation. Time-stretching, which changes the tempo without changing the pitch, and trimming the unneeded parts of loops are also possible. The sampler and recorder functions work together seamlessly, playing back loop tracks while you record instrumental performances on other tracks.

500 MB of drum loops inside
The included 2GB SD card contains 500 MB of drum loops from Big Fish Audio, one of the top producers of sample libraries. With an emphasis on rock, this collection of standard drum phrases can be used to create professional-sounding rhythm tracks just by lining them up.

Create your own beats with the powerful rhythm machine
The R8's rhythm machine includes 10 types of drum kits built from linear PCM audio samples. Use the touch-sensitive drum pads to program up to 511 of your own beats, and use the rhythm patterns you create just like audio loops: start playback of patterns from the R8's pads and arrange them into songs with the sequencer. You can start fast with the 472 preset patterns, which include intros, fills, endings and other phrase variations. Soundside provides a "Pattern Editor" for the Zoom R8/R24.

Built-in stereo microphone
The built-in high-sensitivity stereo microphones are convenient for recording sketches of musical phrases and melodies as they come to mind. Use these mics for clear recordings of vocals and acoustic instruments.

Over 140 DSP effects, including guitar-amp modeling
The DSP effects, comprising 146 types and 370 patches, can be used for recording, mixing and mastering. Insert effects span seven modules and include algorithms for guitars, basses, vocals and other sources; one insert effect can be used simultaneously with the two types of send-return effects. 18 guitar-amplifier models from the best-selling G2Nu and 6 bass-amp models are included, so you can recreate a realistic amp sound simply by connecting a guitar or bass directly to the R8.

Supports SDHC cards with capacities up to 32GB
The R8 uses compact, readily available SD and SDHC memory cards as recording media. Unlike tape and disk recorders, this unit has no motor, making it more resistant to vibration and free of mechanical-noise concerns.

Locate functions make editing easier
Set up to 100 markers and move directly to them whenever you want. Use the A-B repeat function to play or re-record a designated interval, and use auto punch-in and punch-out for efficient editing.


           Information-theoretic algorithm for waveform optimization within ultra wideband cognitive radar network         
Nijsure, Y., Chen, Y., Rapajic, P., Yuen, C., Chew, Y. H. and Qin, T. F. (2010) "Information-theoretic algorithm for waveform optimization within ultra wideband cognitive radar network." In: IEEE International Conference on Ultra-Wideband (ICUWB), Nanjing, pp. 1-4. ISBN 978-1-4244-5305-4 (print), 978-1-4244-5306-1 (online). doi:10.1109/ICUWB.2010.5616308
          7inch 3G Broadcast Monitor (1024x600) Class A 3Gb/s Ready        

The PBM-3G Series offers an elegant slim design with fast response times for smooth video streaming, in 7", 17", 20", 24", 32", 40", 46" and 55" display sizes, with native full-HD resolution, high contrast ratio, wide viewing angles, accurate color reproduction and quality picture consistency that meets your HD/SD monitoring application. It features an intelligent connection for calibration alignment, adjustable colorimetry and gamma correction. Multiple monitors can be controlled by a centralized wall-control system, which can connect different monitor sizes in any combination. The PBM-3G Series also provides many powerful display functions, such as dual 3Gb/s input display, auto calibration, advanced waveform, vectorscope, Closed Caption (708/608) or Teletext 801 and Subtitle OP-47 for the North American and Australian markets respectively, VPID, IMD, underscan/zeroscan/overscan/zoom, 1:1 pixel mode, PiP/PaP, various digital audio metering scales, digital audio decoding, built-in speaker, time-code display, tally lamp and a wall-control system.

Features:
3Gb/s ready (1080/60p)
2 x auto-detect HD-SDI and SDI inputs with active loops (3Gb/s / 1.485Gb/s / 270Mb/s)
Dual HD-SDI YPbPr 4:2:2; dual HD-SDI YPbPr/RGB 4:4:4 and 2K
RGB, YUV, Y/C, Composite, UXGA, DVI (HDCP) and HDMI (HDCP) inputs
Complies with EBU Tech 3320, SMPTE-C and ITU-R BT.709 standards
Plura Intelligent Connection for Alignment and Calibration (ICAC)
Closed Caption (608/708) for the North American market
Teletext 801 and Subtitle OP-47 for the Australia/NZ market
Cutting-edge de-interlacing and scaling technology
Fast response time for high-motion video
RGB 10-bit digital signal processing
178° viewing-angle display
Internal Monitor Display (IMD)
Underscan / overscan / normal / zoom
Pixel-to-pixel mode, tally and DC operation
False color and peaking filter / focus assist
Video range test with adjustable Y and C values
Displays LTC and DVITC time code with line select
Internal pattern generator and wall-control system
Built-in stereo speaker and 16-channel audio metering display
Analog and embedded audio input, digital audio decoding
Color temperature: user, variable and adjustable (11000K to 3200K)
Various user-defined marker displays and safe areas in HD and SD
Picture-and-picture (PaP) and picture-in-picture (PiP) display / blend
Intuitive graphics-based OSD in 6 languages (Unicode)
Advanced waveform and vectorscope displayed simultaneously with line select
Programmable front pushbutton controls, GPI and RS-232 remote control
Options: battery mount, carry case, rack mount, sunvisor


          Korg DW-6000 Vintage Digital Analog Waveform Synthesizer AS IS 8000 poly 800 61        
$159.00
End Date: Monday Sep-4-2017 5:33:26 PDT

          Warnings when synthesis using RTL compiler        

 Hi All,

 

When I use RTL Compiler to synthesize my design, I see many warnings during elaboration, such as:

"The following sequential clock pins have no clock waveform driving them" and

"Referenced signal not in sensitivity list. This may cause simulation mismatches between the original and synthesized designs."

I am wondering whether these warnings are serious, or whether I can ignore them.

 Thanks in advance

 

 

 


          RE: Warnings when synthesis using RTL compiler        

Hello Greatrebel,

I am facing the same problem while synthesizing my RTL code:

"The following sequential clock pins have no clock waveform driving them. No timing constraints will be derived for paths leading to or from these pins."

Did you ever get around to fixing it?

Thanks
Azhar


          Tracktion Software – Waveform 8 8.0.20 [Win x86 x64]        

Unleash your creativity! Waveform is a fast-moving application developed specifically for modern music producers. By specializing in creative and inspiring workflows and avoiding unnecessary bloat, the application remains surprisingly intuitive. While many other applications try to appeal to a wide range of users (for example, music for film, live sound, performance), we focus on music production. All… Read More »

The post Tracktion Software – Waveform 8 8.0.20 [Win x86 x64] appeared first on VSTI Torrent.


          Download Song Tracks From Soundcloud For Free        
SoundCloud is a great platform for audio professionals and for anyone who loves to listen to and discover new audio and songs. It's really easy to upload and share audio files on SoundCloud, and it allows users to embed any audio track anywhere on the web, just like embedding a YouTube video. SoundCloud also allows users to sell their work, and the download option is disabled for such files. In this article, I'll show you the trick to download such files with the help of a simple bookmarklet.

Bookmarklet For Downloading Soundcloud Tracks



Just create a new bookmark with the following JavaScript as the link/address.
javascript:(function(b){var a=b.createElement("a");a.innerText="Download MP3";a.href="http://media.soundcloud.com/stream/"+b.querySelector("#main-content-inner img[class=waveform]").src.match(/\.com\/(.+)\_/)[1];a.download=b.querySelector("em").innerText+".mp3";b.querySelector(".primary").appendChild(a);a.style.marginLeft="10px";a.style.color="red";a.style.fontWeight=700})(document);


How To Use This Bookmarklet?

* Visit any download-disabled SoundCloud track (click here for a sample page).

* Click on the bookmarklet 'Download Sound!'

* You will see a button 'Download Mp3' next to 'Save to favorites' or 'Buy it'

* Just click on 'Download Mp3' and you can get the Mp3 version of the track that you are listening.

Try Our Other Bookmarklets


          Audjoo updates Helix synthesizer plug-in (incl. 64-bit)        
Audjoo has released an updated version of Helix, a virtual synthesizer instrument for Windows and Mac. Helix is a unique synthesizer-plugin with a sonic clarity beyond the competition. Soaring leads, solid basses, glimmering pads… Helix does it all. The main oscillators of Helix allow you to pick from hundreds of included waveforms, or load your […]
          Monolith Synth        
Monolith Synth paul Tue, 2017-05-23 12:06

Over the last several weeks I collaborated with Ben Davis, Darcy Neal and Ross Fish on this Monolith Synth interactive sculpture we took to Tested and Maker Faire.

This was a pretty typical usage scene at Maker Faire:

A post shared by Darcy Neal (@drc3p0)

This crazy adventure started when Kickstarter reached out to me, only 6 weeks before Maker Faire, looking to showcase 4 successful projects in their booth. They wanted to show "creative tools" and how people use them. So I reached out to a few synthesizer folks I've met who've used Teensy. Kickstarter also suggested bringing it to Tested to make a video. So it began...

From the beginning I had a step sequencer using illuminated arcade buttons in mind. So I quickly designed this little I/O expander board and sent it off to OSH Park's Super-Swift service.

The whole project came together over just 4 weeks. Our first meetup was just to discuss what to build, followed a week later by our first build night. By then the I/O expander boards had arrived. We built not the final Monolith but 3 breadboard prototypes, so the software development side could begin!

Another meetup focused only on software. Almost all the software was developed on these prototype panels.

In this picture you can also see the panel layout sketches on the notepad on the right side, and a blue tape model underneath on the table, which we made to get an idea of the overall size.

Ross and Darcy had synthesis plans that needed a signal-controlled PWM waveform and improvements to the envelope feature, so I worked on improvements to the Teensy Audio Library while they wrote the Arduino sketch code.

The day before our next meetup, I started turning those sketches into a design for the laser cutting. I made this 1/4 scale model of the front and side pieces. At this point, none of the back side or interior ribs (for strength) had been designed, and you can see the model lacks the many holes for screws & brackets which joined everything.

Only 2 weeks before Maker Faire we had an epic 13-hour build day where all the final parts were laser cut and assembled. Here's a photo of Darcy & Ben putting the panels together on my kitchen counter!

All the clear acrylic plastic parts were completely drawn, with all mounting holes, and made that day.

Here's the complete layout of all parts (mk2017_design):

Here's a large high-res copy of this image, and a big ZIP file with all the original Corel Draw files for anyone who wishes to try making their own.

While the laser did most of the fabrication work, other steps like countersinking for the potentiometers were needed. It was indeed an epic 13-hour day of making.

A couple days later, I spent a whole day completing the wiring we couldn't get done in those 13 hours. Erin Murphy (the "Soldering Goddess" at PJRC) put in a few hours on aesthetic improvements to the messy tangle of wires from so many buttons.

Just a few days later we had our last "build" session, to get the 3 separately written Arduino sketches merged and working together as one integrated project. Even though everything has been designed to go together, this session went very late. Ben did much of the heavy lifting to merge the 3 programs.

This is the final audio DSP system settled upon that late night.

Here's a large high-res copy of this image.

This was the first actual usage of the Monolith, well past 1am when we finally had it all up and running.

The next day I took it all apart and packed all the pieces and spare parts into these 2 big boxes, weighing in at 55 and 40 pounds!

This is the first time I've ever shipped a project to Maker Faire, rather than driving a truck or hauling cases of checked baggage on a plane. So much easier, and it allowed time to work on a nice handout card. After some back and forth with the others and last-minute proofreading by Robin, who caught what would have been embarrassing typos and grammatical errors, we sent this card off to be fast-turn printed.


Here is a printable PDF file for the front side.


Here is a printable PDF file for the back side.

Darcy and I flew to San Francisco early and spent the day with Tested, putting it back together while they shot that awesome video. Someday I hope to have even 1/10th that sort of video production skill.

Since it was already put together, we had little to do setup-wise. Friday morning Ben, Ross and Darcy did some adjustments of the sound levels which really made it come to life in the space. For anyone who wishes to dig deeper into the technical details, the complete source code is available on GitHub.

All weekend long people really enjoyed playing with it. There were many really awesome moments, like this one:

Here is Kickstarter's coverage of the event. Scroll down a bit to the part about Teensy. :)

During the 3 days of Maker Faire, things went very well. We did experience a couple of minor issues. Massive electrical noise from so many other projects played havoc with the capacitive touch sensing. Saturday evening I rewrote the code to look for changes from a running average rather than just an increase over a fixed threshold, which allowed it to usually work well enough. The other tech issue was bass. When turned up louder, the bass notes would shake all the plastic panels, rattling screws and even knocking some of the connectors loose at times. Easy to fix.

Towards the end of Sunday, the Maker Faire folks came around and gave us an award. At first I shrugged it off, since they've done the same for other stuff I've brought in prior years. But those were the blue ribbons. Apparently they only hand out one of these red ones in each "zone". They said it's a big deal...

Really, the best thing about this year was working with a great team. Ross, Darcy and Ben really stepped up and did a great job on so many parts.


          Experiencing Digital        

Last night I went to the new Digital Revolution exhibition at the Barbican, and a few weeks before that I went to the Digital City event at the Museum of London. Both events had ‘digital’ as a theme, and had a variety of different exhibits to see or take part in.

At Digital City there was an art installation using receipt printers to print out machine-transcribed recordings, an LED screen which displayed waveforms of audio recordings of tweets collected during the Olympic Games, a life drawing session which invited people to either sketch a human model or a video projection of the same model, and a silent disco.

At Digital Revolution there’s a room of early computers and early computer games/software (many of which you can play), a section looking at the CGI in the films Inception and Gravity, several new art installations using projection, sensors and computer graphics, a 3D printer, an installation/song by will.i.am, a section where you can play some contemporary computer games by indie producers, and, downstairs in The Pit, an impressive installation using lasers and smoke machines to create lightforms you can interact with.

In short: both were a collection of disparate exhibits, some more engaging than others. Both times though, the overall experience was less than the sum of its parts.

I think this is because digital, on its own, simply isn’t coherent or meaningful as an exhibition concept. I can see why both organisations might feel the need to ‘do digital’, but in the absence of anything else, it makes about as much sense today as doing an event about ‘canvas’.

The Barbican event describes itself as a “the most comprehensive presentation of digital creativity ever to be staged in the UK” - a noble ambition, but one that it’s bound to fall short on (there's only so much space, after all). It would have been more interesting, I think, to focus on a particular form (video games perhaps, or digital film, or art apps) or, to maintain a multidiscipline focus, a particular subject (conflict, or privacy, or time travel).

The vague notion of digital is particularly highlighted by exhibits where the digital-ness isn’t really the point. A silent disco might use digital codecs to encode and transmit the audio, but it could equally have used an analogue FM-style signal. Similarly, the lasers-and-smoke installation at the Barbican may use computer software to interpret the sensors and drive the laser projection, but this is invisible to the audience, and it’s at least conceivable that the effect could have been achieved with non-digital technology.

There’s plenty to be excited about within the realm of digital technology, but I’d like to see arts organisations treat ‘digital’ less as a novelty, and more as a regular part of their programming.


          Bottom Ten Android Apps        
Everyone knows what the top 10 are, so that's boring. Also, this isn't the top 10 worst apps on the market, because the muck at the bottom is too mucky to be distilled to 10. It's the bottom 10 of the handful of apps that made it from the millions on the market onto my Android and are still installed. This is starting to sound like the birds and the bees, so I'll get to it.

(I recommend QRDroid App to turn the barcodes into links if you are reading this on your phone)

Bottomest 10:

GStrings: A chromatic tuner for any musical instrument. I use it to keep my whistle in perfect pitch.

GasApp: Tracks your fuel consumption and expenses on a monthly basis, derived from info entered at each fill-up at the pumps.

Chroma Doze: Generates continuous colored/white noise by sketching a spectrum on the screen. I use it for waking up animals at the zoo with annoying high-pitched sounds; makes for better pictures.

Audalyzer: Displays sound readings from the microphone as a waveform, as a frequency spectrum, and as a dB meter. I use it to identify the frequency of annoying buzzes, or to see how high a pitch I can whistle.
Antennas: Plots the cell towers in your area on a Google Map. I use it for debugging my cell connection issues.

Caffeine Tracker Lite: Tracks your current level of caffeine. A great way to make sure your body has metabolized enough caffeine by your bedtime.

Police Radio: Lets you listen to police, EMT, and fire radio broadcasts. It's really good; I'd say the delay is less than 10 seconds.
RingDroid: Records and edits sounds and creates ringtones, directly on the handset. I use it to cut clips from MP3s and make them into ringtones.


That's only 7, deal with it.




           Numerical relativity waveform surrogate model for generically precessing binary black hole mergers         
Blackman, Jonathan and Field, Scott E. and Scheel, Mark A. and Galley, Chad R. and Ott, Christian D. and Boyle, Michael and Kidder, Lawrence E. and Pfeiffer, Harald P. and Szilágyi, Béla (2017) Numerical relativity waveform surrogate model for generically precessing binary black hole mergers. Physical Review D, 96 (2). Art. No. 024058. ISSN 2470-0010. http://resolver.caltech.edu/CaltechAUTHORS:20170801-103324737
           A Surrogate model of gravitational waveforms from numerical relativity simulations of precessing binary black hole mergers         
Blackman, Jonathan and Field, Scott E. and Scheel, Mark A. and Galley, Chad R. and Hemberger, Daniel A. and Schmidt, Patricia and Smith, Rory (2017) A Surrogate model of gravitational waveforms from numerical relativity simulations of precessing binary black hole mergers. Physical Review D, 95 (10). Art. No. 104023. ISSN 2470-0010. http://resolver.caltech.edu/CaltechAUTHORS:20170517-110443706
           Gravitational waveforms for neutron star binaries from binary black hole simulations         
Barkett, Kevin and Scheel, Mark A. and Haas, Roland and Ott, Christian D. and Bernuzzi, Sebastiano and Brown, Duncan A. and Szilágyi, Béla and Kaplan, Jeffrey D. and Lippuner, Jonas and Muhlberger, Curran D. and Foucart, Francois and Duez, Matthew D. (2016) Gravitational waveforms for neutron star binaries from binary black hole simulations. Physical Review D, 93 (4). Art. No. 044064. ISSN 2470-0010. http://resolver.caltech.edu/CaltechAUTHORS:20160119-152925358
           Inspiral-merger-ringdown waveforms of spinning, precessing black-hole binaries in the effective-one-body formalism         
Pan, Yi and Buonanno, Alessandra and Taracchini, Andrea and Kidder, Lawrence E. and Mroué, Abdul H. and Pfeiffer, Harald P. and Scheel, Mark A. and Szilágyi, Béla (2014) Inspiral-merger-ringdown waveforms of spinning, precessing black-hole binaries in the effective-one-body formalism. Physical Review D, 89 (8). Art. No. 084006. ISSN 0556-2821. http://resolver.caltech.edu/CaltechAUTHORS:20140611-135106521
           Template banks for binary black hole searches with numerical relativity waveforms         
Kumar, Prayush and MacDonald, Ilana and Brown, Duncan A. and Pfeiffer, Harald P. and Cannon, Kipp and Boyle, Michael and Kidder, Lawrence E. and Mroué, Abdul H. and Scheel, Mark A. and Szilágyi, Béla and Zenginoğlu, Anıl (2014) Template banks for binary black hole searches with numerical relativity waveforms. Physical Review D, 89 (4). Art. No. 042002. ISSN 0556-2821. http://resolver.caltech.edu/CaltechAUTHORS:20140402-111412386
           Suitability of hybrid gravitational waveforms for unequal-mass binaries         
MacDonald, Ilana and Mroué, Abdul H. and Pfeiffer, Harald P. and Boyle, Michael and Kidder, Lawrence E. and Scheel, Mark A. and Szilágyi, Béla and Taylor, Nicholas W. (2013) Suitability of hybrid gravitational waveforms for unequal-mass binaries. Physical Review D, 87 (2). Art. No. 024009. ISSN 0556-2821. http://resolver.caltech.edu/CaltechAUTHORS:20130131-143120705
           Prototype effective-one-body model for nonprecessing spinning inspiral-merger-ringdown waveforms         
Taracchini, Andrea and Pan, Yi and Buonanno, Alessandra and Barausse, Enrico and Boyle, Michael and Chu, Tony and Lovelace, Geoffrey and Pfeiffer, Harald P. and Scheel, Mark A. (2012) Prototype effective-one-body model for nonprecessing spinning inspiral-merger-ringdown waveforms. Physical Review D, 86 (2). Art. No. 024011. ISSN 0556-2821. http://resolver.caltech.edu/CaltechAUTHORS:20120807-071339043
           The NINJA-2 catalog of hybrid post-Newtonian/numerical-relativity waveforms for non-precessing black-hole binaries         
Ajith, P. and Buchman, Luisa T. and Chu, Tony and Reisswig, Christian and Santamaría, Lucía and Scheel, Mark A. and Sperhake, Ulrich and Szilágyi, Béla and Taylor, Nicholas W. (2012) The NINJA-2 catalog of hybrid post-Newtonian/numerical-relativity waveforms for non-precessing black-hole binaries. Classical and Quantum Gravity, 29 (12). Art. No. 124001. ISSN 0264-9381. http://resolver.caltech.edu/CaltechAUTHORS:20120803-111743471
           High-accuracy gravitational waveforms for binary black hole mergers with nearly extremal spins         
Lovelace, Geoffrey and Boyle, Michael and Scheel, Mark A. and Szilágyi, Béla (2012) High-accuracy gravitational waveforms for binary black hole mergers with nearly extremal spins. Classical and Quantum Gravity, 29 (4). 045003. ISSN 0264-9381. http://resolver.caltech.edu/CaltechAUTHORS:20120313-115953201
           Inspiral-merger-ringdown multipolar waveforms of nonspinning black-hole binaries using the effective-one-body formalism         
Pan, Yi and Buonanno, Alessandra and Boyle, Michael and Buchman, Luisa T. and Kidder, Lawrence E. and Pfeiffer, Harald P. and Scheel, Mark A. (2011) Inspiral-merger-ringdown multipolar waveforms of nonspinning black-hole binaries using the effective-one-body formalism. Physical Review D, 84 (12). p. 124052. ISSN 0556-2821. http://resolver.caltech.edu/CaltechAUTHORS:20120131-112645220
           Addressing the spin question in gravitational-wave searches: Waveform templates for inspiralling compact binaries with nonprecessing spins         
Ajith, P. (2011) Addressing the spin question in gravitational-wave searches: Waveform templates for inspiralling compact binaries with nonprecessing spins. Physical Review D, 84 (8). Art. No. 084037. ISSN 0556-2821. http://resolver.caltech.edu/CaltechAUTHORS:20111205-111513859
           Notes on the integration of numerical relativity waveforms         
Reisswig, Christian and Pollney, Denis (2011) Notes on the integration of numerical relativity waveforms. Classical and Quantum Gravity, 28 (19). p. 195015. ISSN 0264-9381. http://resolver.caltech.edu/CaltechAUTHORS:20111018-095759444
           Characteristic extraction tool for gravitational waveforms         
Babiuc, M. C. and Szilágyi, B. and Winicour, J. and Zlochower, Y. (2011) Characteristic extraction tool for gravitational waveforms. Physical Review D, 84 (4). Art. No. 044057. ISSN 0556-2821. http://resolver.caltech.edu/CaltechAUTHORS:20110920-133546346
           Suitability of post-Newtonian/numerical-relativity hybrid waveforms for gravitational wave detectors         
MacDonald, Ilana and Nissanke, Samaya and Pfeiffer, Harald P. (2011) Suitability of post-Newtonian/numerical-relativity hybrid waveforms for gravitational wave detectors. Classical and Quantum Gravity, 28 (13). Art. No. 134002. ISSN 0264-9381. http://resolver.caltech.edu/CaltechAUTHORS:20110705-113932521
           Inspiral-Merger-Ringdown Waveforms for Black-Hole Binaries with Nonprecessing Spins         
Ajith, P. and Hannam, M. and Husa, S. and Chen, Y. and Brügmann, B. and Dorband, N. and Müller, D. and Ohme, F. and Pollney, D. and Reisswig, C. and Santamaría, L. and Seiler, J. (2011) Inspiral-Merger-Ringdown Waveforms for Black-Hole Binaries with Nonprecessing Spins. Physical Review Letters, 106 (24). Art. No. 241101. ISSN 0031-9007. http://resolver.caltech.edu/CaltechAUTHORS:20110705-133359175
           Length requirements for numerical-relativity waveforms         
Hannam, Mark and Husa, Sascha and Ohme, Frank and Ajith, P. (2010) Length requirements for numerical-relativity waveforms. Physical Review D, 82 (12). Art. No. 124052. ISSN 0556-2821. http://resolver.caltech.edu/CaltechAUTHORS:20110314-113342584
           Improved time-domain accuracy standards for model gravitational waveforms         
Lindblom, Lee and Baker, John G. and Owen, Benjamin J. (2010) Improved time-domain accuracy standards for model gravitational waveforms. Physical Review D, 82 (8). Art. No. 084020 . ISSN 0556-2821. http://resolver.caltech.edu/CaltechAUTHORS:20101102-093727097
           Matching post-Newtonian and numerical relativity waveforms: Systematic errors and a new phenomenological model for nonprecessing black hole binaries         
Santamaría, L. and Ohme, F. and Ajith, P. and Brügmann, B. and Dorband, N. and Hannam, M. and Husa, S. and Mösta, P. and Pollney, D. and Reisswig, C. and Robinson, E. L. and Seiler, J. and Krishnan, B. (2010) Matching post-Newtonian and numerical relativity waveforms: Systematic errors and a new phenomenological model for nonprecessing black hole binaries. Physical Review D, 82 (6). Art. No. 064016. ISSN 0556-2821. http://resolver.caltech.edu/CaltechAUTHORS:20101004-085415232
           Effective-one-body waveforms calibrated to numerical relativity simulations: Coalescence of nonprecessing, spinning, equal-mass black holes         
Pan, Yi and Buonanno, Alessandra and Buchman, Luisa T. and Chu, Tony and Kidder, Lawrence E. and Pfeiffer, Harald P. and Scheel, Mark A. (2010) Effective-one-body waveforms calibrated to numerical relativity simulations: Coalescence of nonprecessing, spinning, equal-mass black holes. Physical Review D, 81 (8). Art. No. 084041 . ISSN 0556-2821. http://resolver.caltech.edu/CaltechAUTHORS:20100528-085628584
           Characteristic extraction in numerical relativity: binary black hole merger waveforms at null infinity         
Reisswig, C. and Bishop, N. T. and Pollney, D. and Szilágyi, B. (2010) Characteristic extraction in numerical relativity: binary black hole merger waveforms at null infinity. Classical and Quantum Gravity, 27 (7). 075014 . ISSN 0264-9381. http://resolver.caltech.edu/CaltechAUTHORS:20100406-092950339
           Unambiguous Determination of Gravitational Waveforms from Binary Black Hole Mergers         
Reisswig, C. and Bishop, N. T. and Pollney, D. and Szilágyi, B. (2009) Unambiguous Determination of Gravitational Waveforms from Binary Black Hole Mergers. Physical Review Letters, 103 (22). p. 221101. ISSN 0031-9007. http://resolver.caltech.edu/CaltechAUTHORS:20091221-112844960
           Use and abuse of the model waveform accuracy standards         
Lindblom, Lee (2009) Use and abuse of the model waveform accuracy standards. Physical Review D, 80 (6). Art. No. 064019. ISSN 0556-2821. http://resolver.caltech.edu/CaltechAUTHORS:20091020-152758901
           Testing gravitational-wave searches with numerical relativity waveforms: results from the first Numerical INJection Analysis (NINJA) project         
Aylott, Benjamin and Baker, John G. and Boggs, William D. and Boyle, Michael and Brady, Patrick R. and Brown, Duncan A. and Brügmann, Bernd and Buchman, Luisa T. and Buonanno, Alessandra and Cadonati, Laura and Camp, Jordan and Campanelli, Manuela and Centrella, Joan and Chatterji, Shourov and Christensen, Nelson and Chu, Tony and Diener, Peter and Dorband, Nils and Etienne, Zachariah B. and Faber, Joshua and Fairhurst, Stephen and Farr, Benjamin and Fischetti, Sebastian and Guidi, Gianluca and Goggin, Lisa M. and Hannam, Mark and Herrmann, Frank and Hinder, Ian and Husa, Sascha and Kalogera, Vicky and Keppel, Drew and Kidder, Lawrence E. and Kelly, Bernard J. and Krishnan, Badri and Laguna, Pablo and Lousto, Carlos O. and Mandel, Ilya and Marronetti, Pedro and Matzner, Richard and McWilliams, Sean T. and Matthews, Keith D. and Mercer, R. Adam and Mohapatra, Satyanarayan R. P. and Mroué, Abdul H. and Nakano, Hiroyuki and Ochsner, Evan and Pan, Yi and Pekowsky, Larne and Pfeiffer, Harald P. and Pollney, Denis and Pretorius, Frans and Raymond, Vivien and Reisswig, Christian and Rezzolla, Luciano and Rinne, Oliver and Robinson, Craig and Röver, Christian and Santamaría, Lucía and Sathyaprakash, Bangalore and Scheel, Mark A. and Schnetter, Erik and Seiler, Jennifer and Shapiro, Stuart L. and Shoemaker, Deirdre and Sperhake, Ulrich and Stroeer, Alexander and Sturani, Riccardo and Tichy, Wolfgang and Liu, Yuk Tung and van der Sluys, Marc and van Meter, James R. and Vaulin, Ruslan and Vecchio, Alberto and Veitch, John and Viceré, Andrea and Whelan, John T. and Zlochower, Yosef (2009) Testing gravitational-wave searches with numerical relativity waveforms: results from the first Numerical INJection Analysis (NINJA) project. Classical and Quantum Gravity, 26 (16). Art. No. 165008. ISSN 0264-9381. http://resolver.caltech.edu/CaltechAUTHORS:20090817-144819295
           Effective-one-body waveforms calibrated to numerical relativity simulations: coalescence of nonspinning, equal-mass black holes         
Buonanno, Alessandra and Pan, Yi and Pfeiffer, Harald P. and Scheel, Mark A. and Buchman, Luisa T. and Kidder, Lawrence E. (2009) Effective-one-body waveforms calibrated to numerical relativity simulations: coalescence of nonspinning, equal-mass black holes. Physical Review D, 79 (12). p. 124028. ISSN 0556-2821. http://resolver.caltech.edu/CaltechAUTHORS:20090730-104233665
           Samurai project: Verifying the consistency of black-hole-binary waveforms for gravitational-wave detection         
Hannam, Mark and Husa, Sascha and Baker, John G. and Boyle, Michael and Brügmann, Bernd and Chu, Tony and Dorband, Nils and Herrmann, Frank and Hinder, Ian and Kelly, Bernard J. and Kidder, Lawrence E. and Laguna, Pablo and Matthews, Keith D. and van-Meter, James R. and Pfeiffer, Harald P. and Pollney, Denis and Reisswig, Christian and Scheel, Mark A. and Shoemaker, Dierdre (2009) Samurai project: Verifying the consistency of black-hole-binary waveforms for gravitational-wave detection. Physical Review D, 79 (8). 084025. ISSN 0556-2821. http://resolver.caltech.edu/CaltechAUTHORS:20090828-110556553
           Strategies for the characteristic extraction of gravitational waveforms         
Babiuc, M. C. and Bishop, N. T. and Szilágyi, B. and Winicour, J. (2009) Strategies for the characteristic extraction of gravitational waveforms. Physical Review D, 79 (8). 084011. ISSN 0556-2821. http://resolver.caltech.edu/CaltechAUTHORS:20090701-151146004
           High-accuracy waveforms for binary black hole inspiral, merger, and ringdown         
Scheel, Mark A. and Boyle, Michael and Chu, Tony and Kidder, Lawrence E. and Matthews, Keith D. and Pfeiffer, Harald P. (2009) High-accuracy waveforms for binary black hole inspiral, merger, and ringdown. Physical Review D, 79 (2). Art. No. 024003. ISSN 0556-2821. http://resolver.caltech.edu/CaltechAUTHORS:SCHEprd09
           Model waveform accuracy standards for gravitational wave data analysis         
Lindblom, Lee and Owen, Benjamin J. and Brown, Duncan A. (2008) Model waveform accuracy standards for gravitational wave data analysis. Physical Review D, 78 (12). Art. No. 124020. ISSN 0556-2821. http://resolver.caltech.edu/CaltechAUTHORS:LINprd08b
           Spin-induced orbital precession and its modulation of the gravitational waveforms from merging binaries         
Apostolatos, Theocharis A. and Cutler, Curt and Sussman, Gerald J. and Thorne, Kip S. (1994) Spin-induced orbital precession and its modulation of the gravitational waveforms from merging binaries. Physical Review D, 49 (12). pp. 6274-6308. ISSN 0556-2821. http://resolver.caltech.edu/CaltechAUTHORS:APOprd94
          Systems and methods for radio frequency hopping communications jamming utilizing software defined radio platforms        
A dynamically-reconfigurable multiband multiprotocol communications jamming system and method is provided that are particularly suited for the generation of effective radio-frequency waveforms/noise output that successively translates up and down the RF spectrum. The system and method are particularly suited for strategically targeting specific frequencies in order to disrupt a communications...
           My Generic PSF (Portable Sound Format: PSF1, GSF) Ripping Strategy        

Here I introduce my PSF ripping method.

日本語に翻訳する (Yahoo) 日本語に翻訳する (Google)

Introduction

I assume you already know about the following things:

  • What is PSF? - A format for video game music rips. In most cases, a PSF is a compressed game ROM whose non-sound stuff has been removed by the ripper.
  • "Streamed?" "Sequenced?" What's the difference? etc. - Read PSF Frequently Asked Questions - Neill Corlett's Home Page
  • What is a pointer? - To understand this, I recommend you learn C.
  • Do I need to learn assembly? - You need to know at least the basics (for example: "What is a register?" "What is a stack?"). If you already know one assembly language, I think you can solve the problems you hit while ripping with some pinpoint googling.

Prerequisites

  • Emulator (Debugger) - such as the no$psx or no$gba debugger
    • It must have a step-execution function (Step Into, Step Over, Run To Return).
    • It should definitely have a function that can change instructions using assembly.
    • It should be able to change register values. (If not available, use an external tool such as MHS.)
  • IDA Pro - it will help you a lot in reading assembly. See also: IDA Pro - How To Load Game ROM
  • Memory Scanner - such as Memory Hacking Software (MHS) or Cheat Engine.
    • This tool is optional. It is handy when the emulator does not have a built-in RAM search function.
    • It can be used to set a memory breakpoint when the emulator does not have a memory breakpoint feature. (Thankfully, the nocash debuggers have this feature.)

Typical structure of unaltered game

A typical game has routines like the following:

register int a0, a1, a2, a3;
register int sp;

void start(void)
{
    // Minimal Initialization
    sp = 0xXXXXXXXX;
    memset(BSS_START, 0, BSS_SIZE);

    main();         // will never return
}

void main(void)
{
    // Initialization Stage
    InitIRQ();      // set interrupt callback(s), an operation like this should exist in upper part of startup code
    a0 = 0xXXXX;
    InitFoo(a0);
    a0 = 0xXXXX;
    a1 = 0xXXXX;
    InitBaa(a0, a1);
    a0 = 0xXXXX;
    InitSound(a0);  // init sound registers
    InitMoo();

    // Main Loop Stage
    main_loop:
        MainFunc1();
        a0 = 0xXXXX;
        a1 = 0xXXXX;
        MainFunc2(a0, a1);
        a0 = 0xXXXX;
        MainFunc3(a0);
        MainFunc4();
        WaitForVSync();
    goto main_loop;

    // never return
}

// Callbacks for interrupts are registered at the initialization stage.
// User callbacks are usually called by the BIOS or the CPU.
// Those callbacks are used for frequent and synchronous operations,
// so the music playback routine is called from such a callback in most cases.
// Some drivers may call the routine from a timer callback instead of a vsync callback.
void VSyncCallback()
{
    UpdateSound();
    UpdateVideo();
    UpdateBlah();
}

// Start playback. This function is usually called from subroutines in main loop.
// Sound system needs to be initialized beforehand.
// "Load" functions may need to be called beforehand.
void PlayNewSong(int songId, ...)
{
    // initialize score pointer of each tracks, etc.
    ...
}

// About Load Functions
// --------------------
// The music data may need to be loaded from ROM to RAM before the playback,
// if the sound unit cannot read the data directly from ROM for some reasons.
// For example, PlayStation games need to load the music data from CD.
// They also need to load waveforms (PSX ADPCM) to the sound buffer.
// Most GBA games probably do not need it, because the processor can read ROM.
// For such systems, you may also need to find such load functions.

Details depend on the console and game.

Strategy summary

  1. Analysis before ripping
    1. Search score pointers
    2. Search "play new song" function
    3. Analyze loading routines
  2. Ripping
    1. Minimize main loop
    2. Insert driver code (call the song select function)
    3. Minimize initialization
    4. Minimize callback
    5. Code refactoring

Search score pointers

I explained how, in Example Of Sequenced VGM Analysis.

Search "play new song" function

// Start playback. This function is usually called from subroutines in main loop.
// Sound system needs to be initialized beforehand.
// "Load" functions may need to be called beforehand.
void PlayNewSong(int songId, ...)
{
    // initialize score pointer of each tracks, etc.
    ...
}

Score pointers must be initialized when a new song starts playback, so I always set a write-breakpoint on one of those pointers. (In emulators that cannot do this, use MHS. In the nocash debuggers, you can set a memory breakpoint via the "Define Break/Condition" menu; the syntax for a write-breakpoint is `[address]!!`. Read the help for details.)

Let me show the case of PS1 "Hokuto no Ken" as an example.

Set write-breakpoint and find where the score pointer gets changed
  1. Run no$psx
  2. Play the game until just before a song starts playback, then pause the game
  3. Make a savestate (for quick redo, and prevent to change the song data address)

Here I need to set a write-breakpoint to one of the pointers, 0x80079B60.

  1. Open "Debug → Define Break/Condition" from the menu
  2. Enter `[80079B60]!!` (without quotes) and press OK

Now a breakpoint is set. The emulator will hit the breakpoint immediately after you unpause the game. In my case, it stopped at 0x8003C00C. The pointer value gets updated by the instruction there, so I assume we can find the song load function by tracing back.

Note: Do not forget to remove the memory breakpoint before you start tracing back!

Backtrace from the write instruction

Repeat the following steps 3-5 times:

  1. Select "Run → Run to Sub-return" from the menu (or press F8)
  2. If possible, determine the start address of the function containing the previously noted instruction by reading the instructions just before it. (If that is not possible, use IDA Pro instead.)
  3. Write down the instruction address

After that, obtain the function start address for each instruction address, by reading the preceding instructions or using IDA Pro. Your notes will become something like the following.

Sub-return Function Note
0x8003C00C 0x8003BF0C Writes to score pointer
0x8003C1FC 0x8003C060
0x8003C3BC 0x8003C374
0x8003B84C 0x8003B714
0x80011F58 - I guess this is not the playback function because its address is far away from others
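Finding a function's start from an instruction address can be partly automated. Here is a minimal sketch (the prologue heuristic and the instruction words are illustrative, not from the original trace): MIPS functions usually begin with a stack adjustment `addiu $sp, $sp, -N`, whose upper halfword encodes as 0x27BD with a negative 16-bit immediate.

```python
def find_function_start(words, idx):
    """Scan backwards from words[idx] for a MIPS prologue
    `addiu $sp, $sp, -N` (upper halfword 0x27BD, negative immediate)."""
    for i in range(idx, -1, -1):
        w = words[i]
        if (w >> 16) == 0x27BD and (w & 0x8000):  # negative 16-bit immediate
            return i
    return None  # no prologue found; fall back to IDA Pro

# Hypothetical instruction words: a prologue followed by filler and a call.
code = [0x27BDFFE8, 0x00000000, 0x00000000, 0x0C00EDC5]
start = find_function_start(code, 3)
```

Leaf functions that never touch the stack will defeat this heuristic, which is why IDA Pro remains the more reliable tool.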

I think one of these function addresses is the top of the playback function. Load the savestate and set a breakpoint at the top-level function (0x8003B714), then resume the game.

Execution will stop when a song starts playing. Edit one of the arguments to the function, then unpause the emulation. *1 If the game plays a different song, the function is apparently the song select function. That is exactly what we are searching for.

In my case, the game has played a different song by manipulating r2 register.

In actual research, you may fail and need to try other arguments and functions. Be patient; you will probably find the right answer eventually (hopefully!).

After finding the playback function, try to understand what each argument means.

Analyze load function

Games may need to load music data from somewhere.

  • If it's a PSX game, it needs to load music data from CD
  • If the music data is compressed or archived, the game needs to unpack it to another RAM area
  • Some games may not have a load function (for example, GBA games, which can access every ROM address directly)

Here we just need to do the same things: determine the data location, and use memory breakpoints.

In PS1 Hokuto no Ken, I learned two facts:

  • The music archive seems to always be loaded to 0x800FE000 (I decided to import the music archive there with PSFLib)
  • The music archive is unpacked by the function at 0x80012808

Minimize main loop

Most (or all) of the functions in the main loop are not necessary for sound playback, since playback is usually done by a callback.

Below is an example of a main loop.

    // Main Loop Stage
    main_loop:
        MainFunc1();
        a0 = 0xXXXX;
        a1 = 0xXXXX;
        MainFunc2(a0, a1);
        a0 = 0xXXXX;
        MainFunc3(a0);
        MainFunc4();
        WaitForVSync();
    goto main_loop;

Try removing unwanted code. Follow these steps:

  1. Play the game until a song starts playing, then pause it
  2. Set a breakpoint at a line like "call MainFunc1"
  3. Run the game and it will stop immediately
  4. Unset the breakpoint and change the instruction to NOP
  5. Unpause the game and see if the music still works (if it does, the function call can probably be removed; if it does not, the function may be necessary)
  6. Repeat these steps for every function call in the main loop
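These NOP patches can also be scripted against a copy of the ROM/executable image. A minimal sketch (the file offset and image size are made up for illustration); the canonical MIPS NOP encodes as the all-zero word:

```python
import struct

MIPS_NOP = 0x00000000  # `sll $zero, $zero, 0`, the canonical MIPS NOP

def nop_out(image: bytearray, offset: int) -> None:
    """Overwrite the 4-byte little-endian instruction at `offset` with NOP."""
    struct.pack_into("<I", image, offset, MIPS_NOP)

# Hypothetical image: NOP the unwanted call at file offset 0x1C00.
image = bytearray(b"\xff" * 0x2000)
nop_out(image, 0x1C00)
```

Compute the file offset as (code address - load address); scripting the patch makes it easy to re-apply after reloading an unaltered ROM.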

In my case, it has become an empty loop:

    // Main Loop Stage
    main_loop:
        //MainFunc1();
        //a0 = 0xXXXX;
        //a1 = 0xXXXX;
        //MainFunc2(a0, a1);
        //a0 = 0xXXXX;
        //MainFunc3(a0);
        //MainFunc4();
        WaitForVSync();   // this call can also be removed
    goto main_loop;

Insert driver code (call the song select function)

You should now have a small free code block in the main loop. Use it to patch the game to play a song immediately.

    // Main Loop Stage
    //main_loop:
        //MainFunc1();
        //a0 = 0xXXXX;
        //a1 = 0xXXXX;
        //MainFunc2(a0, a1);
        //a0 = 0xXXXX;
        //MainFunc3(a0);
        //MainFunc4();

    // Song Starter Example
    UnpackMusicArchive(MUSIC_ARCHIVE_ADDRESS);  // call loading functions before the playback function, if available
    PlayNewSong(SONG_INDEX);

    main_loop_hacked:
        WaitForVSync();   // this call can also be removed
    goto main_loop_hacked;

Reset & Run the game after inserting the code. Does a song start playing? Can you change the song by editing the arguments? If so, your work is almost done!

For the next step, make a copy of the ROM file and apply the patch to it. Then open the copy in the emulator instead of the unaltered ROM.
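If you prefer to hand-assemble the inserted call rather than use an assembler, the MIPS `jal` encoding is simple: opcode 000011b in the top 6 bits and the target word index in the low 26 bits. A sketch, using the playback function address found above as the target:

```python
def make_jal(target: int) -> int:
    """Encode `jal target`: 0x0C000000 | ((target >> 2) & 0x03FFFFFF).
    Bits 31-28 of the destination come from the PC, which works in KSEG0."""
    assert target % 4 == 0, "jump targets must be word-aligned"
    return 0x0C000000 | ((target >> 2) & 0x03FFFFFF)

# Call the playback function at 0x8003B714; remember that the delay slot
# instruction after this word executes before the jump takes effect.
patch_word = make_jal(0x8003B714)
```

Write the resulting word into the patch little-endian for PS1/GBA.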

Minimize initialization

    // Initialization Stage
    InitIRQ();      // set interrupt callback(s), an operation like this should exist in upper part of startup code
    a0 = 0xXXXX;
    InitFoo(a0);
    a0 = 0xXXXX;
    a1 = 0xXXXX;
    InitBaa(a0, a1);
    a0 = 0xXXXX;
    InitSound(a0);  // init sound registers
    InitMoo();

Remove unnecessary calls, as we did in the main loop:

  1. Change an instruction like "call InitFoo" to NOP
  2. Reset & Run the game and see if the music still works (if it does not, reload the ROM and try removing unnecessary calls inside the subroutine instead)
  3. Repeat these steps for every function call

For example, the final result may become like the following:

    // Initialization Stage
    InitIRQ();      // set interrupt callback(s), an operation like this should exist in upper part of startup code
    a0 = 0xXXXX;
    InitFoo(a0);
    //a0 = 0xXXXX;
    //a1 = 0xXXXX;
    //InitBaa(a0, a1);
    a0 = 0xXXXX;
    InitSound(a0);  // init sound registers
    //InitMoo();
    ...

void InitFoo(int a0)
{
    InitSoundRegion();
    //InitJoypad();
}

Minimize callback

Do the same thing to the callback function.

void VSyncCallback()
{
    UpdateSound();
    //UpdateVideo();
    //UpdateBlah();
}

Code refactoring

The patched main function is probably filled with a lot of NOPs. If you find that ugly, you may want to create a new main routine.

Note: For PlayStation games, you can write the new driver code in C, thanks to PSF-o-Cycle.

// commented-out instructions are replaced with NOPs

void start(void)
{
    // Minimal Initialization
    sp = 0xXXXXXXXX;
    memset(BSS_START, 0, BSS_SIZE);

    main_PSF();     // will never return
}

void main(void)     // no longer used
{
    ...
}

void main_PSF(void) // inserted to an unused code block
{
    // Initialization Stage
    InitIRQ();      // set interrupt callback(s), an operation like this should exist in upper part of startup code
    a0 = 0xXXXX;
    InitFoo(a0);
    a0 = 0xXXXX;
    InitSound(a0);  // init sound registers

    // Song Starter
    UnpackMusicArchive(MUSIC_ARCHIVE_ADDRESS);
    PlayNewSong(SONG_INDEX);

    // Main Loop Stage
    main_loop:
        WaitForVSync();
    goto main_loop;
}

void InitFoo(int a0)
{
    InitSoundRegion();
    //InitJoypad();
}

// Callbacks for interrupts are registered at the initialization stage.
// User callbacks are usually called by the BIOS or CPU.
// Those callbacks are used for frequent and synchronous operations,
// so the music playback routine is called from such a callback in most cases.
// Some drivers may call the routine from a timer callback instead of a vsync callback.
void VSyncCallback()
{
    UpdateSound();
    //UpdateVideo();
    //UpdateBlah();
}

Finalize

You need:

Notes

Assembly

General

  • Depending on the architecture, there are pseudo instructions (macros) like the following (MIPS). IDA Pro will recover these pseudo instructions; however, many other disassemblers do not. (PSFLab and the nocash debuggers do not, at least.)
li $t0, 0x1234ABCD
  ↓
lui $t0, 0x1234
ori $t0, $t0, 0xABCD
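The expansion is easy to verify: `lui` puts the upper 16 bits into the high halfword, and `ori` merges in the lower 16 bits. A quick sketch:

```python
def li_expansion(value: int):
    """Split a 32-bit immediate as the `li` macro does (lui + ori)."""
    return (value >> 16) & 0xFFFF, value & 0xFFFF

hi, lo = li_expansion(0x1234ABCD)
assert (hi << 16) | lo == 0x1234ABCD  # recombining restores the constant
```

When reading raw disassembly, recombining the halves this way recovers the pointer or constant the macro was hiding.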

MIPS

  • The immediate value of the addi/addiu instruction is always signed.
addiu $t1, $t0, 0xFFFF
  ↓ means
addiu $t1, $t0, -1
  • Jump and branch instructions have a "delay slot": the instruction immediately after the jump or branch is executed before the jump or branch takes effect.
addiu $a0, $zero, 1
jal $8001A000  ; a0 = 2
addiu $a0, $zero, 2
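The sign-extension rule above can be checked with a few lines (a sketch, not tied to any particular tool):

```python
def sign_extend16(imm: int) -> int:
    """Interpret a 16-bit addi/addiu immediate as signed, as MIPS does."""
    return imm - 0x10000 if imm & 0x8000 else imm

# 0xFFFF therefore means -1, exactly as in the addiu example above.
value = sign_extend16(0xFFFF)
```

Keep this in mind when reading disassembly that shows immediates as raw hex.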
How to know vsync callback address
  • PS1: Search xrefs to VSyncCallback with IDA Pro (it can be identified by its PsyQ signature).
  • GBA: The interrupt handler function address must be written at 0x03007FFC. The handler function must read 0x04000202 to see which interrupt was raised. See GBATEK for details.
  • General: Repeat "Run to Sub-return" from a subfunction of the vsync callback
How to wait for vsync
  • PS1: The VSync runtime function (it can be identified by its PsyQ signature).
  • GBA: SWI 05h (IDA Pro will display it as "SVC 5"), the VBlankIntrWait BIOS function; however, this BIOS function will never return if the callback does not update 0x03007FF8 (see GBATEK for details). If the game does not use any other interrupts, you can also use SWI 02h (Halt), which does not require the flag update.
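On GBA, the user handler's dispatch logic boils down to testing bits of the IF register (0x04000202, per GBATEK). A sketch of that test, with the register value passed in rather than read from hardware:

```python
REG_IF = 0x04000202  # GBA interrupt request flags (see GBATEK)
IRQ_VBLANK = 0x0001  # bit 0: V-Blank

def vblank_pending(if_value: int) -> bool:
    """True if the V-Blank bit is set in the given IF register value."""
    return bool(if_value & IRQ_VBLANK)
```

When minimizing a GBA callback, this is the branch you want to keep: the path taken when the V-Blank bit is set usually leads to the sound update.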

Graveyard

An old method. It is unnecessary in most cases, but it might be required in some limited cases.

Removed Section: Set write-breakpoint by MHS and get program counter (PC) value

This section is removed since nocash debugger has a built-in memory breakpoint feature.

First of all, run emulator and MHS.

  1. Run no$psx.
  2. Run MHS and open the no$psx process.
  3. Play the game until just before a song starts playback, then pause the game.
  4. Make a savestate, as usual.

I need to set a write-breakpoint to one of those pointers, PSX-RAM:$79B60. I assume it is already in the main address list. (See Example Of Sequenced VGM Analysis!)

Right-click on the address and click "Find What Writes This Address".

It attaches a debugger to the emulator and opens the Disassembler and Helper windows. Then delete the added address from the Helper window.

Activate Disassembler window, then right-click on a random instruction and click "Breakpoints → Add Software Write Breakpoint Here".

An address will be added to the Breakpoints tab in the Helper window. Double-click the item to edit it.

Edit the address to watch the PC address of score pointer, then click OK.

Finally, a breakpoint is set! Go back to the emulator and continue the game. The emulator will probably run very slowly, but wait until something happens.

Hopefully, the emulator will stop running, and MHS's Disassembler will pop up and highlight the current execution address.

Now I need to read the program counter (PC); however, the emulator is frozen because of the breakpoint. Therefore, I have to read it from MHS. How? The answer is written in Memory Addresses Of Emulators. Since NO$PSX always keeps the current execution address in the ESI register, I can view the value through the Registers tab in the Helper window.

Of course it's a PC address, so convert it to a game address.

Finally, I get the code address 0x3C92C (0x8003C92C). We need to resume the emulator before using the address though.

  1. Remove your breakpoint from the Breakpoints tab
  2. Detach the debugger from the Disassembler window
  3. Close the Disassembler window

Correction: In the screenshot above, I tried to search for an instruction that writes to track 1. However, I actually needed to search track 3, because track 1 is used by SFX. I got 0x3C010 (0x8003C010) after the corrected process.

*1: no$psx can change register values. Right-click the target register column and select the "Change Value" menu item.


          Creation of radio waveforms according to a probability distribution using weighted parameters        
Described herein are methods and systems capable of generating weighted parameter sets, which can be randomly addressed for dictating a waveform of each pulse to be generated by using a probability distribution function loader to load a memory table with waveform parameter values, wherein the values are loaded according to...
           Anritsu MS9710C Sale price: $15,098.00         

Anritsu MS9710C - Optical Spectrum Analyzer. Sale price $15,098.00 (list $16,992.00). The Anritsu MS9710C provides excellent wavelength accuracy, waveform shape, and new features. This OSA is an improved version of the popular MS9710B and features improved wavelength accuracy, resolution bandwidth, and s...

           Tektronix AFG3251 Sale price: $5,398.00         

Tektronix AFG3251 - Arbitrary Function Generator. Sale price $5,398.00. Unmatched performance, versatility, intuitive operation and affordability make the AFG3000 Series of Function, Arbitrary Waveform and Pulse Generators the most useful instruments in the industry. Users can choose from ...

           National Instruments PXIe-6556 Sale price: $11,688.00         

National Instruments PXIe-6556 - Digital Waveform Generator/Analyzer with PPMU. Sale price $11,688.00 (list $12,992.00). The NI PXIe-6556 is the most comprehensive and flexible NI high-speed digital I/O module for validation or production test. It is a 200 MHz digital waveform generator/analyzer with 4-quadrant per-pin ...

           Elgar SW5250A-1-3-2 Sale price: $11,998.00         

Elgar SW5250A-1-3-2 - AC/DC Power Source, 5250 VA. Sale price $11,998.00. The Elgar SW5250A Power Source is part of the SmartWave Series SW of AC power sources. The SW5250A offers powerful waveform creation for ATE and power line disturbance simulation testing. Three sepa...

           Ando AQ6317B Sale price: $16,898.00         

Ando AQ6317B - Optical Spectrum Analyzer. Sale price $16,898.00 (list $22,772.00). The AQ6317B is an advanced optical spectrum analyzer for a wide range of applications, including light source evaluation, measurement of loss wavelength characteristics in optical devices, and waveform analysis of WDM ...

           Agilent 83496A Sale price: $6,298.00         

Agilent-Keysight 83496A - Clock Recovery Module. Sale price $6,298.00 (list $8,992.00). The 83496A multi-rate electrical Clock Recovery (CR) module performs clock extraction for waveform analysis with continuous, unbanded tuning from 50 Mb/s to 13.5 Gb/s, ultra-low residual jitter and Golden Phase Locked Loop P...

           Tektronix WFM7120 Sale price: $12,897.00         

Tektronix WFM7120 - Waveform Monitor. Sale price $12,897.00 (list $17,946.00). The WFM6120 / WFM7020 / WFM7120 family provides Tektronix' superior video waveform monitoring and analysis capabilities required in Content Creation, Content Delivery, Research & Development, and Manufacturing applications. Precision and leading-...

           EM Test LD200B1 Sale price: $7,998.00         

EM Test LD200B1 - Load Dump, double-exponential waveform generator. Sale price $7,998.00. Used on Ford, Chrysler and ISO 7637-2:2004 tests...

          90/99 minute audio CD writing with Linux        

[Short link to this article if you need it: http://goo.gl/D9n7tX - or retweet me!]

There is an executive summary/how-to at the end of the main article if you're just looking to get on and do it! This has been quite a popular blog entry, so if you've found it useful, please get in touch and let me know!

INTRODUCTION

It seemed like such an easy thing to do. Use Linux to write a continuous mix audio CD of some tracks of 2013 to a 90-minute blank CD-R with track splits and CD-Text information. Bear in mind that some things I'm covering here are NOT specific to 90-minute discs, but are true for audio CD writing on Linux in general. I'm writing this to document my experiences since I struggled to find comprehensive information anywhere else. For reference, in this article my use of "wodim" and "cdrecord" is interchangeable - the machine I was using had wodim 1.1.11, with "cdrecord" symlinked to it.

For the uninitiated, most CD-Rs on the market are 80 minutes long, and can sometimes be "overburned" by around 88 seconds. Longer CD-Rs of 90 and 99 minutes are sometimes available at higher cost, although these are technically in violation of Philips and Sony's Orange Book specification, so they cannot be guaranteed to work either in your writer or in anything that's reading them, and most software doesn't know how to recognise them, hence the need to use overburning to write anything more than 80 minutes long. The ability to overburn effectively is dependent on your drive.

TRIAL AND ERROR - THE FULL STORY

Firstly I fired up the Brasero disc burner (as an aside, it's not helpful when typing that name into Unity's Dash that Ubuntu dynamically searches for products on Amazon now - especially when you're 4 characters into that particular program name...), set up the track breaks, added the track info for CD-Text, inserted the 90-minute blank and ... it wasn't interested. No, for the purposes of this I really did not want to burn over several discs:


So a bit of searching around got me nowhere fast, other than speculation (and a suggestion that k3b might be a better choice). Either way, Brasero wasn't looking like the universal tool I needed. A friend suggested he'd managed to write 90-minute CD-Rs using cdrecord directly. But my requirements were a little harder than writing a data disc, so I tried a couple of things. First I tested with a smaller audio CD, having Brasero write the image to a file (plus a cue sheet for the track break/CD-Text info) and then trying to burn the image manually to a CD-RW with cdrecord (if you're not familiar with cue sheet use, I'll cover that at the end):

    cdrecord dev=/dev/sr0 -text -dao cuefile=MyAudio1.cue

This seemed to work ok until I went to play it. Random tracks (there didn't seem to be a pattern to which ones, and it varied depending on which tracks I'd included, but it was always the same ones if the same image was burned again) seemed to have a low-volume buzz in the background. Quite odd, since it was a continuous input file. I never managed to figure out why, so ultimately that was useless.

While playing around with cue sheets it became apparent that I didn't actually need Brasero's image file - just the cue sheet. (For the record, Brasero's track splitting was ok, other than the fact that it always seemed to end up splitting with the tracks starting at 2 and put track 1 at the end - bear that in mind when typing in the track names!) I could edit the cue sheet myself, change the type in the first line from "MOTOROLA" to "WAVE" and change the image file name to the source .wav file.


Good plan, but that didn't quite work either. I got this:


   Inappropriate audio coding in 'MyAudio1.wav' on line 1 in 'MyAudio1.cue'.

So cdrecord claimed the wav file was invalid - did I need Brasero's image after all? Was I back to square one? Not quite ... it turns out (and this is not unreasonable given the format of CDs) that cdrecord won't accept a .wav file that isn't 44.1kHz, and my source file was 48kHz. The error message above from cdrecord is a little ambiguous in that respect and I may raise that with them.

So the next step: fire up Audacity, load the file, use the drop-down in the bottom left of the screen (as per the screenshot on the right) to switch it to 44.1kHz (44100Hz) and re-export. NOTE: It's worth pointing out here that Brasero didn't care about the source file being 48kHz and would auto-convert behind the scenes when writing to a normal-sized CD:

   sxa@sainz:/dev/shm$ file *.wav
   MyAudio1.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, stereo 48000 Hz
   MyAudio2.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, stereo 44100 Hz

Much better - it got rid of that error message but once again it still didn't work. This time cdrecord objected that the amount of data in the final track didn't end on a CD-frame (2352-byte) boundary:

   wodim: Bad audio track size 42302672 for track 25.
   wodim: Audio tracks must be at least 705600 bytes and a multiple of 2352.
   wodim: See -pad option.

Sadly its suggestion to use "-pad" didn't seem to do anything useful and I still got the same error message - I'm guessing that probably doesn't work when the track info comes from a cue sheet. So back into Audacity, set the time counters at the bottom to work in CD-DA frames instead of seconds (as per the screenshot above left), highlight the whole thing, manually drop the "end" frame number by 1 from whatever it was (you'll lose up to 1/75 of a second but I think you can probably live with that!) and export the selection to a .wav again.
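The frame arithmetic behind that fix is straightforward: 44.1kHz 16-bit stereo audio comes to 176,400 bytes per second, i.e. exactly 75 frames of 2352 bytes each. A quick sketch for checking and truncating a track size (the byte count is the one from the wodim error above):

```python
BYTES_PER_FRAME = 2352            # one CD-DA frame = 1/75 s of audio
BYTES_PER_SECOND = 44100 * 2 * 2  # sample rate * channels * bytes per sample

def frame_aligned(nbytes: int) -> int:
    """Largest byte count <= nbytes that is a whole number of frames."""
    return nbytes - (nbytes % BYTES_PER_FRAME)

frames_per_second = BYTES_PER_SECOND // BYTES_PER_FRAME  # 75
remainder = 42302672 % BYTES_PER_FRAME  # non-zero, hence wodim's complaint
```

Trimming to `frame_aligned(size)` is exactly what dropping the end frame by 1 in Audacity achieves.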

So now I have a 44.1kHz, CD-DA-framed .wav file and a Brasero-generated (slightly edited to point at a .wav) cue file. For the record, I tested with a smaller .wav file first on my CD-RW and it did solve the "buzz" problem from earlier - I figured it was better to try that before potentially wasting a 90-minute blank!

I'd seen reports that overburning (required for anything over 80 minutes) was much more reliable at slower speeds, so I gave cdrecord the parameter to reduce the write speed to 2x:

   cdrecord dev=/dev/sr0 -text -dao cuefile=MyAudio1.cue -v -overburn speed=2

Now, the drive (an LG GSA-4120B) seemingly wasn't interested in hanging around, so the speed was increased to 4x (automatically - this was still using the above command). It completed and - I'm delighted to say - seemed to work. It played in the three players I tried, and the CD-Text information was there!

I also have 99 minute blanks but haven't needed them yet - I assume they'd work as well as the 90s as long as your drive is happy with them. The audio I had was only about 89 minutes long so it wasn't worth using the longer ones.

I remember playing with, and running a very early website dedicated to digital audio extraction (DAE - now more commonly referred to as "ripping") and MP3 files before most people knew about them, and here I am looking at lower level CD-DA stuff again. It's all coming full circle...



EXECUTIVE SUMMARY

I'm adding this section as a summary checklist for the next time I need to do this - it just summarises the rest of the article. Here's how to burn a continuous audio file with CD-Text to a 90-minute CD-R under Linux (assuming your drive supports it)
  1. Ensure your source audio file is in a CD-native 44.1kHz sample rate
  2. Make sure your file size is a whole number of CD-DA frames, i.e. a multiple of 2352 bytes
  3. Export to a standard uncompressed PCM .wav file
  4. Create a cue sheet with the CD-Text details (Brasero makes this fairly easy)
  5. Write the disc using the lowest possible speed for increased reliability:
    cdrecord dev=/dev/sr0 -text -dao cuefile=MyFile.cue -v -overburn speed=2
For steps 1 and 2 you can use the "Audacity" editor - make sure your screen looks something like this before saving with File->Export Selection - ensure it's an exact number of total frames (I suggest setting it to one less than whatever it was originally when you highlight the whole waveform with CTRL-A):




If you've never used a cue sheet before, the Wikipedia article is a good reference, but here's a quick example of the start of one, showing the first three tracks, in case you want to bypass creating it with Brasero. Just add as many as you need (although, as mentioned, the track PERFORMER entries don't get picked up by cdrecord):


FILE "/mnt/scd13/2013top34-90minues.wav" WAVE
TITLE "sxa's 2013 top tracks mix"
  PERFORMER "@sxa555"
TRACK 01 AUDIO
TITLE "Mozart's House"
PERFORMER "Clean Bandit"
INDEX 01 00:00:00
TRACK 02 AUDIO
TITLE "Royals (Zoo Station Remix)"
PERFORMER "Lorde"
INDEX 01 03:32:24
TRACK 03 AUDIO
TITLE "All I Want Is You"
PERFORMER "Agnes"
INDEX 01 06:56:24

[etc.]



REFERENCES:
  1. Cue file format
  2. CD-DA frames tip from billw58 in audacity forums
  3. cdrecord man page with the "88 second" overburn reference (search "-overburn")
  4. Ubuntu's inclusion of amazon search results into Dash
  5. Article which gave me the hint about the missing PERFORMER entry

          A Fast Template Periodogram for Finding Periodic (Non-Sinusoidal) Waveforms in Noisy, Irregularly-Sampled Time Series Data        

Description

Astronomers are often interested in detecting periodic signals in noisy time-series data. The Lomb-Scargle periodogram was designed for this purpose, and can efficiently handle complications like irregular sampling and heteroskedastic measurement errors. However, many signals in astronomy are non-sinusoidal, and while extensions to the Lomb-Scargle periodogram are able to handle a variety of waveform shapes, this comes at the cost of decreased sensitivity. Template fitting algorithms provide better sensitivity by explicitly fitting a fixed-shape waveform to the data, but this is too computationally demanding to be practical for large surveys. We present a new algorithm, the Fast Template Periodogram, that combines the speed advantage of Lomb-Scargle with the sensitivity of template fitting. The Fast Template Periodogram provides up to 4 orders of magnitude of speedup over more naive template fitting methods for large surveys with greater than 1,000 datapoints per object.


          IT-Forum        

Consider the task of analog to digital conversion in which a continuous time random process is mapped into a stream of bits. The optimal trade-off between the bitrate and the minimal average distortion in recovering the waveform from its bit representation is described by the Shannon rate-distortion function of the continuous-time source. Traditionally, in solving for the optimal mapping and the rate-distortion function we assume that the analog waveform has a discrete time version, as in the case of a band-limited signal sampled above its Nyquist frequency. Such an assumption, however, may not hold in many scenarios due to wideband signaling and A/D technology limitations. A more relevant assumption in such scenarios is that only a sub-Nyquist sampled version of the source can be observed, and that the error in analog to digital conversion is due to both sub-sampling and finite bit representation. This assumption gives rise to a combined sampling and source coding problem, in which the quantities of merit are the sampling frequency, the bitrate and the average distortion.

In this talk we will characterize the optimal trade-off among these three parameters. The resulting rate-distortion-samplingFrequency function can be seen as a generalization of the classical Shannon-Kotelnikov-Whittaker sampling theorem to the case where finite bitrate representation is required. This characterization also provides us with a new critical sampling rate: the minimal sampling rate required to achieve the rate-distortion function of a Gaussian stationary process for a given rate-distortion pair. That is, although the Nyquist rate is the minimal sampling frequency that allows perfect reconstruction of a bandlimited signal from its samples, relaxing perfect reconstruction to a prescribed distortion allows sampling below the Nyquist rate while achieving the same rate-distortion trade-off.


          Dynasty 400 with 28 different Waveforms        
Dynasty with 28 different waveforms; to see how they work, you have to test them out. I did a sample of just one on the negative side of Advanced Squarewave: 1- Advanced Squarewave 2- Advanced Squarewave with Soft Squarewave (-) 3- Advanced Squarewave with Soft Squarewave (+) 4- Advanced Squarewave...
          give me skin        
you have
beautiful
skin

my deft mute
chanteuse
hummingbird

thrumming
on mileless
waveform

you must not smoke
anymore
to have such beautiful
skin

so many miles gone
yet all for song

my songbird in blood
my dopamine flood

you must not smoke
anymore

if you would exert
proper deft scansion
over mileless
waveform

This conversation
I have
with whomever is listening

in your head

or in
mine


          Charging and discharging a 220uF capacitor with the 6008        

In the following entry we will carry out a practical exercise in which we charge and discharge a 220uF capacitor using the DAQ 6008.
Using LabVIEW we will create a VI that charges and discharges the capacitor while taking a series of samples or measurements; once sampling is complete, we can view the results on a graph.

As mentioned above, the capacitor used has a capacitance of 220uF, and we will charge and discharge it. Data acquisition will be in continuous mode.
The capacitor is charged through digital channel P02, controlled from the VI and the DAQ: the moment we set channel P02 to True (5 volts), the capacitor charges.
The moment we deactivate output P02 by setting it to False, no voltage flows and the capacitor discharges.
To sample the analog reading we use analog channel AI0, which takes the 10000 samples over the time we have configured.
The resistor used is 2.2 kohm, which acts as a voltage divider with the DAQ's internal resistance.
The external wiring and DAQ communication diagram is as follows.




Now that we have explained the wiring and the exercise, let us focus a little on the theory.
First of all, bear in mind that we give the capacitor a minimum charge time of 5 seconds, that is, it charges in a minimum time of 5 tau, Z=5. After 5 tau the capacitor is at least 75% charged.
We use the following formula to solve for the capacitor's charge time.

Time = Z*R*C
T = 5*2200*0.000220 = 2.42 sec

The capacitor would be fully charged in 10 tau:
4.84 sec.
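The same numbers can be checked numerically; a sketch of the charging curve v(t) = V(1 - e^(-t/RC)) with the component values used above:

```python
import math

R = 2200.0    # ohms
C = 220e-6    # farads
TAU = R * C   # ~0.484 s, so 5 tau ~ 2.42 s and 10 tau ~ 4.84 s

def v_capacitor(t: float, v_supply: float = 5.0) -> float:
    """Voltage across the capacitor after charging for t seconds."""
    return v_supply * (1.0 - math.exp(-t / TAU))
```

At t = 2.42 s (5 tau) the capacitor is over 99% charged, comfortably above the 75% minimum mentioned above.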
Como ya hemos realizado la parte Hardware de comunicación con el DAQ y hemos hablado un poco de teoría para justificar la carga y descarga del condensador, pasaremos a crear el VI.

VI Carga y descarga condensador

Para realizar la creación del VI lo haremos de la siguiente forma:
Dentro del Diagrama de bloques tenemos dos líneas la de arriba gobernará la lectura analógica mediante al canal Dev/Ai0, y la linea de abajo que gobernará el canal de salida P02, el cual nos cargará y nos permitirá la descarga del condensador, a su vez lo vamos a dividir en 4 partes.

  • En la primera parte,Insertamos la función Create Channel,  la cual configuramos como Analog Input Voltaje. Establecemos los valores de voltaje minimo 0v y maximo 5v, por ultimo lo referenciamos a Gnd (RSE). Canal Dev/Ai0 el cual empleamos como entrada analógica. A continuación de la línea insertamos la función Timing con una toma máxima de 10000 samples, y lo configuramos en modo contínuo.


  • La segunda parte será la encargada del control del While mediante una temporización y además nos mostrará su estado mediante un indicador. Hemos insertado una función de Tiempo para el control del While, en el cual podemos configurar el retardo con un control y a la vez nos mostrará en un indicador el tiempo transcurrido. También hemos insertado la función  Read asociada a un Waveform Graph para poder ver los valores obtenidos de la muestra(vector de datos). Muestras por canal 10000.


  • La tercera parte Crearemos un canal con la función Create Channel. El canal creado será Dev1/Port0/line2.El cual lo configuramos como Digital Output. Una vez creado el canal, Insertamos la función write con una constante True y el tiempo infinito. Para que active el canal de salida a 5v. Lo configuramos como Digital->single channel->single sample-> Booleano de 1 linea.



  • La última parte es la encargada de la desactivación del canal para que se pueda descargar el condensador. Insertamos una función de escritura de nuevo, pero en este caso con un valor False, para que en este caso desactive  el canal. Ademas hemos establecido una temporización de 6.5 segundos de espera que controla el While.


En las siguientes imágenes podemos observar el VI, con sus explicaciones correspondientes para que comprendamos su funcionamiento.




This is the result on our Front Panel.


If we run the program, we can see it working.

We used a Waveform Graph so that it displays a complete vector. The moment another vector is acquired, the previous one is cleared and the new one is plotted.

In this other example we have reduced the While loop delay before deactivating the P02 channel, so we can see a larger portion of the discharge: sampling starts earlier, and therefore more data is shown in the Waveform Graph.
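The discharge curve the graph captures follows the usual RC exponential. As a rough sketch (the R and C values below are hypothetical placeholders, not taken from the tutorial's circuit), the voltage seen on the analogue input during discharge can be simulated like this:

```python
import math

def rc_discharge(v0, r, c, t):
    """Capacitor voltage t seconds after discharge starts: V(t) = V0 * exp(-t / (R*C))."""
    return v0 * math.exp(-t / (r * c))

# Hypothetical values: 5 V initial charge, R = 10 kOhm, C = 100 uF -> tau = 1 s
v0, r, c = 5.0, 10e3, 100e-6
tau = r * c
samples = [rc_discharge(v0, r, c, t) for t in (0.0, tau, 5 * tau)]
print(samples)  # 5 V at t=0, ~1.84 V after one tau, ~0.03 V after five
```

Starting the acquisition earlier relative to the False write simply captures more of this curve, which is what the wider trace in the Waveform Graph shows.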




Download the VI here




          Sea Ice Cover, Global Temperature, Fresh Water Supply and Solar Activity – July 2015        
Without commentary, here are the most recent waveforms, charts and graphs related to these indicators.
          Launch X431 GDS Scopebox        
Automotive specialized oscilloscope function and ignition waveform analysis
          The Chills: Silver Bullets         
chills_silver_bullets.jpg
It’s been nearly 20 years since their last new album Sunburst and 35 years since they first formed (I say “they” but I mean Martin Phillipps and his revolving posse of bandmates) yet the brand new Chills material sounds as good as anything from their past – the same melodic gifts, the same off-kilter catchiness that defined “Dunedin-pop” for the world. Although the last reference point is 1996's Sunburst, there are echoes of the Chills from 30-something years ago, though much better recorded. 'Warm Waveform’, for example, has the same reverby guitar and swirly keyboards combo, the same floaty feel, plus Phillipps’ Kiwi-accented caress of a voice. Wonderful!

There are some heavy social and ecological themes but they’re delivered in a digestible way. ‘Underwater Wasteland’ is a warning about what we’re doing to the seas while ‘Aurora Corona’ (the Southern Lights) is a prayer for mercy from the earth goddess, set in a whirl of upbeat pop. ‘Pyramid/ When The Poor Can Reach The Moon’ lasts 8 minutes and is a game of two halves: a complex web of geopolitical themes that resolves itself into pretty and optimistic baroque pop. ‘America Says Hello’ sugars its message about the state of the world and running out of time with some glorious bouncy pop moments while ‘Molten Gold’ ends the record in a slightly naïve but ultra-optimistic manner, the singer happy for the gifts he has.

It’s a brilliant return, treating us to new songs when the other new material in the last year or two has been a live reworking of old favourites and fantastic Peel Session versions of classic Chills tunes. Phillipps is on fire, and he even gets away with using a children’s choir on ‘Tomboy’, displaying a pop-sureness that makes every second of Silver Bullets a joy. The legends never lost it.
          Versadial Solutions Releases 4.7.3 Call Recording – visual audio wave, direct recorder access, Avaya and Panasonic signal capture        

Versadial Call Recording Software

Versadial Solutions releases version 4.7.3 of their call recording solution. What comes with 4.7.3?

• Audio waveform graph in the player - visually see active and quiet audio waves during playback
• Avaya H.323 signaling capture and decoding
• Panasonic MGCP signaling capture
• Direct access to the VSLogger recorder user interface
• Sort active calls to the top of live monitoring



          Launch X431 GX3 Multi-language diagnostic tool with 110 Softwares        
Launch X431 GX3 can read DTCs and datastreams, run actuation tests, display sensor waveforms and perform ECU coding. Its integrated structure lets it communicate with cars faster than the X-431, saving time at work.
          David Rowe: LilacSat-1 Codec 2 in Space!        

On May 25th LilacSat-1 was launched from the ISS. The exciting news is that it contains an analog FM to Codec 2 repeater. I’ve been in touch with Wei Mingchuan, BG2BHC during the development phase, and it’s wonderful to see the satellite in orbit. He reports that some Hams have had preliminary contacts.

The LilacSat-1 team have developed their own waveform, that uses a convolutional code running over BPSK at 9600 bit/s. Wei reports a MDS of about -127 dBm on a USRP B210 SDR which is quite respectable and much better than analog FM. GNU radio modules are available to support reception. I think it’s great that Wei and team have used open source (including Codec 2) to develop their own novel systems, in this case a hybrid FM/digital system with custom FEC and modulation.

Now I need to get organised with some local hams and find out how to work this satellite myself!

Part 2 – Making a LilacSat-1 Contact

On Saturday 3 June 2017 Mark VK5QI, Andy VK5AKH and I just made our first LilacSat-1 contact at 12:36 local time on a lovely sunny winter day here in Adelaide! Mark did a fine job setting up a receive station in his car, and Andy put together the video below showing both ends of the conversation:

The VHF tx and UHF rx stations were only 20m apart but the path to LilacSat-1 was about 400km each way. Plenty of signal as you can see from the error free scatter diagram.

I’m fairly sure there is something wrong with the audio (perhaps the levels into the codec), as the decoded Codec 2 1300 bit/s signal is quite distorted. I can also hear similar distortion on other LilacSat-1 contacts I have listened to.

Let me show you what I mean. Here is a sample of my voice from LilacSat-1, and another sample of my voice that I encoded locally using the Codec 2 c2enc/c2dec command line tools.

There is a clue in this QSO – one end of the contact is much clearer than the other:

I’ll take a closer look at the Codec 2 bit stream from the satellite over the next few days to see if I can spot any issues.

Well done to LilacSat-1 team – quite a thrill for me to send my own voice through my own codec into space and back!

Part 3 – Level Analysis

Sunday morning 4 June after a cup of coffee! I added a little bit of code to codec2.c:codec2_decode_1300() to dump the energy quantiser levels:

    e_index = unpack_natural_or_gray(bits, &nbit, E_BITS, c2->gray);
    e[3] = decode_energy(e_index, E_BITS);
    fprintf(stderr, "%d %f\n", e_index, e[3]);

The energy of the current frame is encoded as a 5 bit binary number. It’s effectively the “AF gain” or “volume” of the current 40ms frame of speech. We unpack the bits and use a look up table to get the actual energy.
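To illustrate that lookup (the dB range below is a made-up placeholder, not Codec 2's actual energy table), a 5-bit index spaced uniformly in dB can be decoded like this:

```python
E_BITS = 5  # energy quantiser width used by the 1300 bit/s mode

def decode_energy_sketch(e_index, min_db=-10.0, max_db=40.0):
    """Map a 5-bit quantiser index to a linear energy value.

    Indices are spaced uniformly in dB between min_db and max_db;
    these bounds are illustrative, not Codec 2's real table.
    """
    levels = (1 << E_BITS) - 1               # 31 steps for indices 0..31
    e_db = min_db + (max_db - min_db) * e_index / levels
    return 10.0 ** (e_db / 10.0)

# Index 31 "red lines" the quantiser, as seen in the LilacSat-1 sample
print(decode_energy_sketch(0), decode_energy_sketch(31))
```

With a scale like this, a stream of frames pinned at the top index means the uplink level is well beyond the quantiser's design range, which matches the distortion heard.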

We can then run the Codec 2 command line decoder with the LilacSat-1 Codec 2 data Mark captured yesterday to extract a file of energies:

./c2dec 1300 ~/Desktop/LilacSat-1/lilacsat_dgr.c2 - 2>lilacsat1_energy.txt | play -t raw -r 8000 -s -2 - trim 30 6

The lilacsat1_energy.txt file contains the energy quantiser index and decoded energy in a table (matrix) that I can load into Octave and plot. I also ran the same test on the reference cq_freedv file used in Part 2 above:

So the top plot is the input speech “cq freedv ….”, and the middle plot the resulting energy quantiser index values. The energy bounces about with the level of the input speech. Now the bottom plot is from the LilacSat-1 sample. It is “red lined” – hard up against the upper limits of the quantiser. This could explain the audio distortion we are hearing.

Wei emailed me overnight and other Hams (e.g. Bob N6RFM) have discovered that reducing the Mic gain on the uplink FM radios indeed improves the audio quality. Wei is looking into in-flight adjustments of the gain between the FM rx and Codec 2 tx on LilacSat-1.

Note to self – I should look into quantiser ranges to make Codec 2 robust to people driving it with different levels.

Part 4 – Some Improvements

Sunday morning 4 June 11:36am pass: Mark set up his VHF tx in my car, and we played the cq_freedv canned wave file using a laptop and signalink so we could easily vary the tx drive:

Fortunately I have plenty of power available in my Electric Vehicle – we just tapped across 13.2V worth of Lithium cells in the rear pack:

We achieved better results, but not quite as good as using the source file directly without a journey through the VHF FM uplink:

LilacSat-1 3 June high mic gain

LilacSat-1 4 June low mic gain

encoded locally (no VHF FM uplink)

There is still quite a lot of noise on the decoded audio, probably from the VHF uplink. Codec 2 performs poorly in the presence of high levels of background noise. As we are under-deviating, the SNR of the FM uplink will be reduced, further increasing noise. However Wei has just emailed me that his team is reducing the “AF gain” between the VHF rx and Codec 2 on LilacSat-1 so we should hear some improvements on the next few passes.

Note to self #2 – add some noise reduction inside of Codec 2 to make it more robust to different input signal conditions.

Links

The LilacSat-1 page has links to GNU Radio modules that can be used to receive signals from the satellite.

Mark, VK5QI, describes his car’s exotic antenna system and how it was used on today’s LilacSat-1 contact.

LilacSat-1 HowTo, Mark and I have documented the set up procedure for LilacSat-1, and written some scripts to help automate the process.


          David Rowe: Towards FreeDV 700D        

For the last two months I have been beavering away at FreeDV 700D, as part of my eternal quest to show SSB whose house it is.

This work was inspired by Bill, VK5DSP, who kindly developed some short LDPC codes for me, and suggested I could improve on the synchronisation overhead of the cohpsk modem. As an aside – Bill is part of the communications payload team for the QB50 SUSat Cubesat – currently parked at the ISS awaiting launch! Very Kerbal.

Anyhoo – I’ve developed a new OFDM modem that has less synchronisation overhead, works better, and occupies less RF bandwidth (1000 Hz) than the cohpsk modem used for 700C. I have wrapped my head around such arcane mysteries as coding gain and now have LDPC codes playing nicely over that nasty old HF channel.

It looks like FreeDV 700D has a gain of 4dB over 700C. This means error free operation at -2dB SNR for AWGN, and 2dB SNR over a challenging fast fading HF channel (two paths, 1Hz Doppler, 1ms delay).

Major Innovations:

  1. An OFDM modem with low overhead (small Eb/No penalty) synchronisation, even on fading channels.
  2. Use of LDPC codes.
  3. Long (several seconds) interleaver.
  4. Ruthlessly hunting down any dB’s leaking out of my performance curves.

One nasty surprise was that after a closer look at the short (224,112) LDPC codes, I discovered they don’t give any real improvement over the simple diversity scheme used for FreeDV 700C. However with long interleaving (several seconds) of the short codes, or a long (few thousand bit/several seconds) LDPC code we get an additional 3dB gain. The interleaver allows us to ride over the ups and downs of the fast fading channel.

Interleaving has a few downsides. One is delay, the other is when they fail you lose a big chunk of data.

I’ve avoided delay until now, using the argument that low delay is essential for PTT radio. However I’d like to test long delays and see what the trade off/end user experience is. Once someone is speaking – i.e in the middle of an “over” – I suspect we won’t notice the delay. However it could get confusing in fast handovers. This is experimental radio, designed for very low SNRs, so lets give it a try.

We could send the uncoded data without interleaving – allowing low delay decoding when the SNR is high. A switch could control LDPC decoding, allowing a user selection of coded-high-delay or uncoded-low-delay, like a noise blanker. Mark, VK5QI, has suggested interleaver depth also be adjustable, which I think is a good idea. The decoder could automagically determine interleaver depth by attempting decoding over a range of depths (1,2,4,8,16 frames etc) and noting when the LDPC code converges.
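A depth-N block interleaver of the kind discussed here can be sketched as follows (a generic row/column interleaver, not the actual codec2-dev implementation):

```python
def interleave(bits, depth):
    """Write depth frames row-by-row, read column-by-column.

    bits must hold depth * frame_len symbols; adjacent symbols end up
    one frame apart on air, so a fade damages fewer bits per codeword.
    """
    frame_len = len(bits) // depth
    assert len(bits) == depth * frame_len
    return [bits[row * frame_len + col]
            for col in range(frame_len)
            for row in range(depth)]

def deinterleave(bits, depth):
    """Inverse of interleave at the same depth."""
    frame_len = len(bits) // depth
    out = [0] * len(bits)
    i = 0
    for col in range(frame_len):
        for row in range(depth):
            out[row * frame_len + col] = bits[i]
            i += 1
    return out

data = list(range(12))                       # 4 frames of 3 symbols
assert deinterleave(interleave(data, 4), 4) == data
```

The delay cost is visible directly: the receiver cannot emit the first frame until all `depth` frames have arrived, which is where the several-seconds latency comes from.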

Or maybe we could use a small, low delay, interleaver, and just live with the fades (like we do on SSB) and get the vocoder to mute or interpolate over them, and enjoy low or modest latency.

I’m also interested to see how the LDPC code mops up errors like static bursts and other real-world HF rubbish that SSB subjects us to even on high SNR channels.

So, lots of room for experimentation. At this stage it’s all in GNU Octave simulation form, no C implementation or FreeDV GUI mode exists yet.

Lots more I could write about the engineering behind the modem, but lets leave it there for now and take a look at some results.

Results

Here is a rather busy set of BER versus SNR curves (click for larger version, and here is an EPS file version):

The 10^-2 line is where the codec gets easy to listen to.

Observe far-right green (700C) to black (700D candidate with lots of interleaving) HF curves, which are about 4dB apart. Also the far-left cyan shows 700D working at -3dB SNR on AWGN channels. One dB later (-2dB) LDPC magic stomps all errors.

Here are some speech/modem tone samples on simulated channels:

AWGN -2dB SNR Analog SSB 700D modem 700D DV
HF +0.8dB SNR Analog SSB 700D modem 700D DV

The analog samples have a 300 to 2600 Hz BPF applied at the tx and rx side, to model an analog SSB radio. The analog SSB and 700D modem signals have exactly the same RMS power and channel models applied to them. In the AWGN channel, it’s difficult to hear the 700D modem signal, however the SSB is audible as it has peaks 9dB above the average.
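The peak-to-average point is easy to check numerically. Here is a minimal sketch (using synthetic signals, not the actual SSB or 700D waveforms) comparing the PAPR of a single sine against a sum of aligned carriers, which peaks far above its average the way speech does:

```python
import math

def papr_db(samples):
    """Peak-to-average power ratio in dB."""
    peak = max(s * s for s in samples)
    mean = sum(s * s for s in samples) / len(samples)
    return 10.0 * math.log10(peak / mean)

n = 1000
sine = [math.sin(2 * math.pi * 3 * k / n) for k in range(n)]
print(round(papr_db(sine), 2))       # a pure sine: ~3.01 dB

# 16 equal carriers with aligned phases all peak together at k=0
carriers = [sum(math.cos(2 * math.pi * m * k / n) for m in range(1, 17))
            for k in range(n)]
print(round(papr_db(carriers), 2))   # ~15.05 dB = 10*log10(2*16)
```

At equal RMS power, the high-PAPR signal spends most of its time well below its peaks, which is why SSB pokes above the noise while the near-constant-envelope modem signal does not.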

OK so the 700 bit/s vocoder (Codec 2 700C) speech quality is not great even with no errors, but we have found it supports conversations just fine, and there is plenty of room for improvement. The same techniques (OFDM modem, LDPC interleaving) can also be applied to high quality/high bit rate/high SNR voice modes. But first – I want to push this low SNR DV work through to completion.

Simulation Code

This list summarises the GNU Octave code I’ve developed, as I’ll probably forget the details when I move onto the next project. Feel free to try any of these scripts and let me know what I’ve forgotten to check in. It’s all checked into codec2-dev/octave.

ldpc.m Wrapper functions for using the CML library LDPC functions with Octave
ldpcut.m Unit test/demo for ldpc.m
ldpc_qpsk.m Runs simulations for a bunch of codes for AWGN and HF channels using a simulated QPSK OFDM modem. Runs at the Rs (the symbol rate), assumes ideal modem
ldpc_short.m Simulation used for initial short LDPC code investigation using an ideal rate Rs BPSK modem. Bunch of codes and interleaving schemes tested
ofdm_lib.m Library of OFDM modem functions
ofdm_rs.m Rate Rs OFDM modem simulation used to develop low overhead pilot symbol phase estimation scheme
ofdm_dev.m Rate Fs OFDM modem simulation. This is the real deal, with timing and frequency offset estimation, LDPC integration, and tests for coarse timing and frequency offset estimation
ofdm_tx.m Generates test frames of OFDM raw file samples to play over your HF radio
ofdm_rx.m Receives raw file samples from your HF radio and 700D-demodulates-decodes, and measures BER and PER

Sing Along

Just this morning I tried to radiate some FreeDV 700D from my home to some interstate SDRs on 40M, but alas conditions were against me. I did manage to radiate across my bench so I know the waveform does make it through real HF radios OK.

Please try sending these files through your radio:

ssb_otx_224_32.wav 32 frame (5.12 second) interleaver
ssb_otx_224_4.wav 4 frame (0.64 second) interleaver

Get someone (or a websdr) to sample the received signal (8000Hz sample rate, 16 bit mono), and email me the received file.

Or you can decode it yourself using:

octave:10> ofdm_rx('~/Desktop/otx_224_32_mysample.wav',32);

or:

octave:10> ofdm_rx('~/Desktop/otx_224_4_mysample.wav',4);

The rx side is still a bit rough, I’ll refine it as I try the system with real off-air signals and flush out the bugs.

Update: FreeDV 700D – First Over The Air Tests.

Links

QB50 SUSat cubesat – Bill and team’s Cubesat currently parked at the ISS!
Codec 2 700C and Short LDPC Codes
Testing FreeDV 700C
Modems for HF Digital Voice Part 1
Modems for HF Digital Voice Part 2
FreeDV 700D – First Over The Air Tests


          Simvision doesn't always save waveform formatting        

Hello,


 I'm running into a very frustrating issue/bug w/ the waveform viewer in Simvision where the waveform display formatting is not always saved to the svcf file when the "save command script" command is used.


Under the advanced option box I make sure to check the preferences box to save waveform formatting but when I open the .svcf file and view it, the -color and -namecolor fields are not always saved.


The problem does not happen consistently, and it does not do it for all the waveforms in the viewer, but when it does it is very difficult to get the tool to stop doing it.  Shutting the tool down and restarting does not help.  I feel like there is some other option I have enabled that prevents the formatting save for certain signals, but I can't figure it out, and I'm tired of formatting my signals the way I need them only to have that work lost when the script is saved.


Any help would be greatly appreciated.

thanks


          MST-12000 Universal Automotive Test Platform And ECU Signal Simulation        
Twelve-channel arbitrary waveform output can reproduce the crankshaft and camshaft signals (Hall, magnetoelectric and photoelectric) of today's car engines; waveform data can be saved by the computer for long-term use.
          Launch X431 Master Original Update via Internet        
Launch X431 Master can read DTCs and datastreams, run actuation tests, display sensor waveforms and perform ECU coding. Its integrated structure lets it communicate with cars faster than the X-431, saving time at work.
          Original Launch X431 Master International Version Update via Internet        
Original Launch X431 Master can read DTCs and datastreams, run actuation tests, display sensor waveforms and perform ECU coding. Its integrated structure lets it communicate with cars faster than the X-431, saving time at work.
          Digitech Nautila        
Price slashed on 10.08.2017: Digitech Nautila, combined Flanger and Chorus, up to 8 Chorus and 4 Flanger voices, 'Drift' lets you blend seamlessly between 3 different waveforms, controls for Speed, Depth, Emphasis, Voices, Mix...
145 € previously 149 € - item no. 393364

          Biomorphic: 128 Harmor Presets        

The future is here with sounds of ‘Biomorphic’, the Harmor preset pack designed by Toby Emerson. These 128 Harmor presets have been carefully crafted and sculpted with the utmost care, making sure each preset has both an impressive sound and an ease of use. Many custom waveforms have been created for Biomorphic and each sound has fully mapped XYZ controls for maximum control. Beginners should feel right at home diving into these sounds and advanced users will appreciate the quickness of creating new sounds tailored to their tracks with just the turn of a few controls.

Many of the neuro sounds have been designed with modulation in mind and can be chained together to create more complex and interesting movements. The bass sounds come in many forms, from smooth and deep to aggressive and cutting, and will fit many styles including electro, dubstep, and funk. Breathy and spacey pads, cutting leads, plucked synths and chillout sounds fill out this pack's diverse selection of sounds. To top things off, the FLP project file has been included for the user to study and to get ideas on how to use these sounds in productions in FL Studio.

These presets are provided in .fst format and can be loaded in the FL Native & VSTi version of Harmor.

Important Note: Please keep your software legal and up-to-date to avoid any issues when loading your new Harmor presets.

Royalty-Free: All of the content in this download is 100% royalty-free. Once purchased, you can use these sounds in your own commercial music releases with no restrictions.

Works with All DAW Software including Cubase, Logic, Ableton Live, Reason, Sonar, FL Studio, Garageband, and many more...

PC & Mac Compatible

• Format: Synth Presets
• 128 Presets Total
• Requires Image-Line Harmor

• Sounds & Categories:

• 37 Neuro
• 32 Synth
• 26 Bass
• 20 Pads
• 13 Leads

• Bonus Material:

• 1 FL Studio Project (FLP)

• 100% Royalty Free
• Instant Download
• Download Size: 151 MB

• More Packs From Black Octopus Sound

£18.95

          Artisan EDM for Dune 2        

Download the latest sounds for Synapse Audio’s powerhouse Dune2, this sound bank features 100 presets plus bonus MIDI sequences and custom waveforms. Electronisounds presents ‘ARTISAN EDM for DUNE 2’. This soundset of skilfully produced patches will bring you instant satisfaction from this highly rated synth. The Presets are based around a new custom wavetable with 64 all-new custom waveforms. These brand new waveforms allow for new and unique sounds and timbres that were not possible with DUNE 2 until now!

Using this bank of patches, you will get the sound and feeling of EDM straight away! These sounds will appeal to producers of many different genres and the all-new waveforms are perfect building blocks for those of you who like to experiment and design your own sounds!

The Dune 2 sound bank features 100 presets including electro basses, inspiring leads, side-chain "pumping" sounds, driving plucks, chilled and evolving pads, tempo synced arps and sequences, FX sounds and drum sounds for ALL your EDM production needs. These sounds are a perfect complement to our other sound banks for DUNE2, ‘Renegade EDM’ and ‘Filthy Basses’.

All the sounds you need are here for making genres such as: EDM, Electro House, Big Room, Festival House, Trance, Psy Trance, Minimal, IDM & more.

Mod wheel programming is featured in most patches for extreme sonic flexibility and sound variation(s). Always use the Modulation Wheel on your keyboard while playing a sound, interesting things will happen (morphing). In fact in many cases you will have two sounds in one single preset, just by opening the modulation wheel.

09 Arpeggios
13 Basses
02 Chords
12 Drums
05 FX
26 Leads
06 Pads
04 Plucks
14 Sequences
09 Synths

All of the MIDI files for the "Sequence" patches are included so you can use the melodies with other sounds or edit and customise them to your own taste.

The audio demo example showcases 23 of the 100 patches. Everything in the audio demo is coming from DUNE 2, except for some of the drums. The analog-rhythm-box sounding IDM-style drums from :46 - 1:16 are from patches in this bank. The risers, downlifter FX, analogue style snare build-up and bass drop are also from this bank! There are no side-chain plug-ins used in the demo - all "pumping" sounds are coming straight from DUNE 2 with the side-chain "pump" effect built right in!

Please Note: This soundset requires the DUNE 2 VSTi from Synapse Audio. These presets are not compatible with Dune1. Keep your software legal and up-to-date to avoid any issues when loading your new presets.

Royalty-Free: All of the content in this download is 100% royalty-free. Once purchased, you can use these sounds in your own commercial music releases with no restrictions.

• Format: Synth Presets
• 100 Presets Total
• Requires Synapse Audio Dune2

• Sounds & Categories:

• 26 Leads
• 14 Sequences
• 13 Basses
• 12 Drums
• 09 Arpeggios
• 06 Pads
• 05 FX
• 04 Plucks
• 02 Chords

• Bonus Material:

• 14 MIDI Sequences
• 64 Custom Waveforms

• 100% Royalty Free
• Instant Download
• Download Size: 6 MB

• More Packs Electronisounds

Requires Synapse Audio Dune2

PC & Mac Compatible

£12.98

          Event Horizon for Dune 2        

Get the latest Synapse Audio Dune 2 Presets from Electronisounds! This bank of skilfully crafted patches will bring you instant inspiration. The sounds in ‘Event Horizon for Dune 2’ are based around a new custom wavetable with 64 all-new custom waveforms. These brand new waveforms allow for new and unique sounds and timbres that were not possible with Dune 2 until now! We've also included the custom wavetable from our "Artisan EDM" Dune 2 sound bank with an additional 64 custom waveforms!

Check out the audio demo to see how versatile these sounds are. The demo features 12 of the 100 patches. All the sounds you need are here for making genres such as: EDM, Progressive House, Electro House, Big Room, Festival House, Trance, Psy Trance, Nu-Disco, Dubstep and Bass Music, Minimal Techno, Tech House and many other electronic dance music genres.

Mod wheel programming is featured in all patches for extreme sonic flexibility and sound variation(s). Always use the Modulation Wheel on your keyboard while playing a sound, interesting things will happen (morphing). In fact in many cases you will have two sounds in one single preset, just by opening the modulation wheel. Aftertouch modulations are also featured in many of the Dune 2 patches.

Please Note: This soundset requires the Synapse Audio Dune 2, version 2.5 VSTi/ AU. Please ensure that your software is legal and up-to-date. These presets are not compatible with Dune 1.

Royalty-Free: All of the content in this download is 100% royalty-free. Once purchased, you can use these sounds in your own commercial music releases with no restrictions.

• Format: Synth Presets
• Requires Synapse Audio Dune 2
• 100 Dune 2 Patches Total

• Sounds & Categories:

• 20 Lead Presets
• 27 Bass Presets
• 15 Sequence Presets
• 12 Pad Presets
• 07 Pluck Presets
• 07 Synth Sound Presets
• 05 FX Presets
• 05 Arp Presets
• 01 Keys Preset
• 01 Chord Preset

• 100% Royalty Free
• Instant Download
• Download Size: 12 MB

Download Synapse Dune 2 Sound Banks
£13.99

          THE ONE: Mainstage EDM Volume 2        

Are you looking for cutting edge EDM presets for Xfer Serum? Search no more, these presets rock! The sound bank includes 64 pro standard EDM presets for Xfer Serum, just a click away. Not only does this download contain top notch synth presets, but also custom Serum wavetables and waveforms, MIDI loops, and WAV samples too.

All presets feature full usage of the synth’s powerful engine, involving innovative sound design using many different techniques that are used by the pros including creative usage of the FX rack, and full usage of all four Macro Controls and the Mod Wheel.

On top of these great features, many of the presets include velocity linking, and most presets include randomizing LFO's - all to make the presets respond as professionally and naturally as possible.

These sounds were made for the Mainstage, these pro quality presets will propel your tracks straight to the top! Take a listen to the audio demo track, which showcases a large number of the presets and what they're capable of.

Full Sound Bank Specifications:

• 64 Xfer Serum Presets

• 6 Arps
• 6 FX
• 23 Leads
• 7 Pads
• 6 Risers

• 76 Custom Wavetables & Waves
• 11 MIDI loops from the demo track.
• 32 WAV Samples from the demo track.
• List of which presets were used in the demo track.
• The demo as an MP3

Suitable genres for this Serum Sound Bank include: EDM, Electro House, Electro, House, Complextro, Bigroom House & Hardstyle.

Please Note: Serum version 1.068 and higher required. This product does not contain a copy of the synth Serum.

Royalty-Free: All of the content in this download is 100% royalty-free. Once purchased, you can use these sounds in your own commercial music releases with no restrictions.

Quick Links: Browse By Genre | WAV Sample Packs | MIDI Packs | Construction Kits | REX Loops | Combi Packs | Reason Refills | Apple Loops

• Format: Synth Presets
• 64 Xfer Serum Presets

• Sounds & Categories:

• 6 Arps
• 6 FX
• 23 Leads
• 7 Pads
• 6 Risers

• 76 Custom Wavetables
• 11 MIDI Loops From Demo
• 32 WAV Samples From Demo

• 100% Royalty Free
• Instant Download
• Download Size: 62 MB

• More Xfer Serum Presets

£11.95

          THE ONE Deep Future House For Serum        

THE ONE eagerly presents to you THE ONE: Deep Future House, featuring 70 Xfer Serum presets with a sweet Deep House & Future House theme. Not only does this pack include Serum presets, it also includes custom Serum wavetables and waveforms, custom Serum warp shapes, as well as MIDI loops and WAV samples from the demo song.

In these presets you'll find finely tuned and advanced synthesis, creating sounds that deliver nothing but top quality. You'll hear rich timbres and characters, as well as good sonic quality with good transients. This package is precisely what you need for your music to reach the next level and cut through the dense masses of producers.

In the package you'll find steady basses with those unique timbres that resemble Future House, combined with the deliciously lush and deep pad sounds you hear in Deep House, not to mention all the leads, stab synths, and pluck synths that all have unique characters to them.

All 4 Macros and the Mod Wheel have been assigned so you may get a good overview of the sound and have good control over both the artistic aspect and the sonic quality of the sound. All sounds feature strong usage of the FX rack to fine tune the presets, and the presets use Serum at its full potential so you get the most out of this software synth.

The presets feature strong usage of Velocity linking, as well as innovative usage of the LFOs to resemble randomizing - all to make the patches respond as natural and professional as possible.

Not to mention that all sounds have been crafted to be as punchy, snappy, fat and full as possible.

Listen to the demo song for this package and you'll hear a lot of the sounds being showcased, showing you the potential your music has by getting this package. This sound bank is a complete must-have House preset solution!

Full Specifications:

• 70 Serum presets
• 20 Basses
• 4 FX
• 10 Pads
• 8 Plucks
• 4 Risers
• 10 Stabs

• 71 Custom wavetables & waveforms
• 04 Custom warp shapes

• 22 MIDI Loops from the demo song
• 42 WAV samples from the demo song

Suitable genres: Future House, Deep House, House, Garage, Tech House, EDM

Please Note: This bank requires the latest version of Xfer Serum, available as a free download from the Xfer forum.

Royalty-Free: All of the content in this download is 100% royalty-free. Once purchased, you can use these sounds in your own commercial music releases with no restrictions.

Quick Links: Browse By Genre | WAV Sample Packs | MIDI Packs | Construction Kits | REX Loops | Combi Packs | Reason Refills | Apple Loops

• Format: Synth Patches
• Requires Xfer Serum
• 70 Presets Total

• Sounds & Categories:

• 20 Bass Sounds
• 14 Lead Sounds
• 10 Pad Sounds
• 10 Stab Sounds
• 08 Pluck Sounds
• 04 Risers Sounds
• 04 FX Sounds

• 71 Waves & Tables
• 04 Warp Shapes

• Macro Controls Assigned
• Mod Wheel Assigned

• 22 MIDI Loops
• 42 WAV Samples

• 100% Royalty Free
• Instant Download
• Download Size: 64 MB

• More Packs From THE ONE-Series

PC & Mac Compatible
£12.95

          THE ONE: Supersaw Antidotes        

What you have been waiting for is finally here - a collection of intensive Supersaw patches for Serum! THE ONE: Supersaw Antidotes features 50 handcrafted supersaw Serum presets for your electronic music production, as well as custom Serum wavetables and waveforms, MIDI, and Wav loops. This soundset is a must have, as supersaws are such important elements in todays electronic music. With this pack you'll instantly take your music to the next level with these high quality sounds.

All presets feature powerful synthesis techniques, and have been crafted to use Serum at its full potential to deliver sounds of top-notch quality. This involves innovative usage of the FX rack, full usage of the 4 Macro controls and the Mod Wheel, as well as custom wavetables and custom waveforms to give the sounds a rich character.

With the four perfectly assigned Macro controls in this package you'll have a good overview of each sound and be able to quickly adjust it to suit your song project. As many of the presets come with custom wavetables and waveforms, the supersaw patches have a unique sound to them - precisely what you need to cut through the masses in the music industry.

Not only do they have a unique sound, they have also been crafted to sound as intensive, wide, fat and full as possible, with good transients. And let's not forget the strong use of velocity linking, which makes the sounds behave as naturally as possible. The creator of these presets, Steve Hilo, has made hundreds of supersaw patches in the past and knows how a supersaw needs to be built to be as useful as possible in music production.

Have a listen to the MP3 demo song showcasing a large number of these sounds, and you'll hear the wide spectrum of characteristics this preset pack covers.

Full specifications:

• 50 Serum supersaw presets
• 10 dark supersaws
• 10 sustained supersaws
• 10 supersaw leads
• 10 stabby supersaws
• 05 arps
• 05 sequenced supersaws

• 47 Custom Serum wavetables and waveforms
• 33 Wav loops from the MP3 demo
• 26 MIDI loops from the MP3 demo

NOTE: Serum version 1.068 required. This product does not contain a copy of the synth Serum.

Royalty-Free: All of the content in this download is 100% royalty-free. Once purchased, you can use these sounds in your own commercial music releases with no restrictions.

Quick Links: Browse By Genre | WAV Sample Packs | MIDI Packs | Construction Kits | REX Loops | Combi Packs | Reason Refills | Apple Loops

• Synth Presets
• Requires Xfer Serum
• 50 Serum Presets

• 10 Dark Supersaws
• 10 Sustained Supersaws
• 10 Supersaw Leads
• 10 Supersaw Stabs
• 05 Supersaw Arps
• 05 Sequenced Supersaw

• 47 Custom Waves
• 36 Demo Track WAV Loops
• 26 Demo Track MIDI Loops

• 100% Royalty Free
• Instant Download
• Download Size: 50.7 MB

PC & Mac Compatible
£11.95

          Biomorphic: 128 Serum Presets        

The future is here with sounds of Biomorphic. This pack contains 128 presets for Xfer Serum programmed by Toby Emerson. These presets have been carefully crafted and sculpted with the utmost care, making sure each preset has both an impressive sound and an ease of use.

Beginners should feel right at home diving into these sounds and advanced users will appreciate the quickness of creating new sounds tailored to their tracks with just the turn of a few controls. Many of the neuro sounds have been designed with modulation in mind and can be chained together to create more complex and interesting movements. The bass sounds come in many forms from smooth and deep to aggressive and cutting and will fit in many styles including electro, dubstep, and funk.

Breathy and spacey pads, cutting leads, plucked synths, and chillout sounds fill out this pack's diverse selection of sounds. To top things off, the FL Studio project file (FLP) has been included for the user to study and to get ideas on how to use these sounds in productions.

Please Note: This bank requires the latest version of Xfer Serum, available as a free download from the Xfer forum.

Royalty-Free: All of the content in this download is 100% royalty-free. Once purchased, you can use these sounds in your own commercial music releases with no restrictions.

• Format: Synth Soundset
• Requires Xfer Records Serum 1.053 or Higher
• 128 Presets Total

• Sounds & Categories:

• 26 Bass
• 13 Leads
• 37 Neuro
• 20 Pads
• 32 Synth

• Bonus:

• 110 Single Waveforms/ Tables
• 1 FL Studio 12 Project (FLP)

• 100% Royalty Free
• Instant Download
• Download Size: 76.9 MB

PC & Mac Compatible
£18.95

          Shocking Future House For Serum        

'Shocking Future House For Serum' is a collection of edgy presets programmed for this game-changing synthesizer. These amazing modern House patches bring you more than ever. Inside you will find the highest quality patches created with maximum precision and using the innovative features of Serum.

This superb Serum soundset brings you four assigned macro controls and even custom waveforms from Vandalism. All of this adds up to an astounding sound bank full of up-to-date, impressive presets, which are truly the future!

There are thousands of typical EDM oriented banks, but when it comes to original sounds, these are truly different, designed with maximum precision and definitely beyond the trends! All of them are painstakingly designed so you should really add them to your own library. Another soundset that you can't pass by!

Please Note: This bank requires Xfer Records Serum v1.044 or higher; please ensure that you are using the latest version. Updates are available as a free download from the Xfer forum.

Royalty-Free: All of the content in this download is 100% royalty-free. Once purchased, you can use these sounds in your own commercial music releases with no restrictions.

Quick Links: Browse By Genre | WAV Sample Packs | MIDI Packs | Construction Kits | REX Loops | Combi Packs | Reason Refills | Apple Loops

• Format: Synth Soundset
• 64 Serum Presets Total
• Requires Xfer Records Serum 1.044

• Sounds & Categories:

• 23 Basses
• 20 Leads
• 19 Synths
• 01 Pad
• 01 Pluck

• 5 Custom Wave Tables
• 4 Macro Controls Assigned
• ModWheel Assigned

• Royalty Free
• Instant Download
• Download Size: 25 MB

• More Packs From Vandalism

PC & Mac Compatible
£12.95

          Shocking Electro House For Serum        

'Shocking Electro House For Serum' features 64 all new sounds for Xfer Records Serum advanced wavetable synthesizer. This amazing palette of modern Electro patches brings you more than ever before from Vandalism’s expert synth programmers. Inside the bank you’ll find the highest quality presets created with maximum precision taking full advantage of the innovative features of Serum.

This soundset brings you 4 assigned macro controls and even custom waveforms from Vandalism. All of this adds up to an astounding soundbank full of up-to-date, impressive presets ready to electrify the crowd!

There are thousands of typical EDM oriented banks, but when it comes to original sounds, these are truly different: each designed with maximum precision and definitely well beyond current trends!

Each of these presets has been painstakingly designed to provide absolute top quality sounds for your own library - a must have bank for all Xfer Serum users!

Download Free Xfer Serum Wavetables

Royalty-Free: All of the content in this download is 100% royalty-free. Once purchased, you can use these sounds in your own commercial music releases with no restrictions.

Quick Links: Browse By Genre | WAV Sample Packs | MIDI Packs | Construction Kits | REX Loops | Combi Packs | Reason Refills | Apple Loops

• Format: Synth Soundset
• Requires Xfer Records Serum
• 64 Presets Total
• 8 Custom Waveforms Included

• Sounds & Categories:

• 27 Bass Sounds
• 17 Lead Sounds
• 10 Synth Sounds
• 08 FX Sounds
• 02 Pluck Sounds

• PC & Mac Compatible
• Royalty Free
• Instant Download
• Download Size: 42 MB

• More Packs From Vandalism

PC & Mac Compatible
£12.95

          Hard Funk        

Hard Funk combines the wobbly robotic sound of Dubstep with the tempos and elements of Electro Funk, inspired by artists like Adapted Records, Au5, Khurt, Staunch, Stickybuds, Noisia, Skrillex, Krafty Kuts, Dodge & Fuski, and Skream. Use these sounds to simply add growling basses or high laser-zapping leads to your tunes.

Hard Funk comes with enough material to compose an entire EP (3 construction kits and 90+ add-on loops), making it a solid pack for those on a budget. There are 155 loops and 65 drum one-shot samples in total. Sounds include Bass Guitars, Pitch Risers, Growling Basses, Wobble Basses, Laser Synths, 303s, Woops, LFO synths, Pads, Kicks, Snares, SFX, Hi Hats, Crashes, and much more.

All loops are tempo-synced and formatted with proper key and tempo information, at 24 bit / 44.1 kHz. Formats include Apple Loop .AIFF and Acidized .WAV, plus an Ableton Live Pack whose Drum Racks and Filter Mod Rack come pre-loaded with growling bass waveforms. Drum Sampler Kits are also included for Logic Pro .exs, NI Kontakt, NI Battery, and Reason .sxt. Produced by Jason Donnelly.

Royalty-Free: All of the content in this download is 100% royalty-free. Once purchased, you can use these sounds in your own commercial music releases with no restrictions.

Quick Links: Browse By Genre | WAV Sample Packs | MIDI Packs | Construction Kits | REX Loops | Combi Packs | Reason Refills | Apple Loops


• Formats:
• WAV
• Apple Loops
• Ableton Live Racks

• 3 Construction Kits
• 155 Loops
• 65 One-Shot Samples
• Drum Sampler Kits for EXS, Kontakt, Battery & Reason
• Royalty Free
• Instant Download

• More Packs From Soundtrack Loops

Works with Cubase, Logic, Ableton Live, Pro Tools, Reason, Reaper, Sonar, FL Studio, Garageband and many more...

PC & Mac Compatible
£9.16

          Fabric Tote Bag ° Waveforms ° Cloth Bag ° One of a Kind        
12.95 EUR
Cloth bag "Waveforms". Bag colour: green. Print colour: pink. Printing technique: hand screen printing with water-based inks. Material: 100% cotton. Handles: short. Size: approx. 37 x 41 cm.

          Edgecam’s Waveform “The Only Way To Successfully Cut For Certain Complex Aerospace Applications”        
A cutting tool developer partners with Edgecam software to ensure its philosophy of taking customers' machining challenges off their hands is always successful. "Edgecam is a vital part of optimising the process for our customers' machining issues." ...
          "Capture 3D data with LiDAR": Webinar on Dec 14        
Registration now is open for the u_Lecture webinar presentation by Christian Sevcik on Dec 14 at 5pm CET  >>> https://attendee.gotowebinar.com/register/1918432664800614660

This lecture is about current LiDAR technology and its various applications: gain insights into how point cloud data is acquired and derived with waveform LiDAR sensors. We will shed some light on what it takes to get from sending out a laser pulse to a point with 3D coordinates. Learn about the latest trends in the LiDAR industry, from UAV operation to multichannel and multispectral LiDAR, and see some of the industries where LiDAR systems are operational today.
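The core of that journey from pulse to point is simple geometry: half the round-trip time of the pulse gives the range, and the scan angles place the point in 3D. A minimal sketch in Python (sensor coordinate frame only; real systems add GNSS/IMU georeferencing, and the function name is illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def pulse_to_point(t_seconds, azimuth_deg, elevation_deg):
    """Convert a pulse's round-trip time and the scanner's angles
    into a 3D point in the sensor's own coordinate frame."""
    r = C * t_seconds / 2.0  # halve: the pulse travels out and back
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return (x, y, z)

# A return detected ~666.7 ns after emission lies roughly 100 m away.
x, y, z = pulse_to_point(666.7e-9, azimuth_deg=30.0, elevation_deg=10.0)
```

Waveform LiDAR refines exactly the `t_seconds` input above: instead of a single trigger threshold, the full returned waveform is digitized so multiple echoes per pulse can be resolved.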



The speaker Mr Sevcik is Manager, Strategic Software Alliances at RIEGL Laser Measurement Systems GmbH, Austria. He graduated with an MSc in Surveying and Geoinformation from Graz University of Technology and held various positions in the geospatial industries before joining Riegl in 2011.
          TimS on Stupid Questions Open Thread Round 2        

Hmm. I'm also pretty sure that the double-slit experiments are not evidence of MWI vs. waveform collapse.


          DanielLC on Stupid Questions Open Thread Round 2        

They are evidence against wave-form collapse, in that they give a lower bound as to when it must occur. Since, if it does exist, it's fairly likely that waveform collapse happens at a really extreme point, there's really only a fairly small amount of evidence you can get against waveform collapse without something that disproves MWI too. The reason MWI is more likely is Occam's razor, not evidence.


          TimS on Stupid Questions Open Thread Round 2        

Pretty sure double slit stuff is an effect of wave-particle duality, which is just as consistent with MWI as with waveform collapse.


          falenas108 on Stupid Questions Open Thread Round 2        

The first comment says that the double slit experiment is feasible under both hypotheses, but the second adds that it is just as likely with MWI as with waveform collapse.

Analogy: there are two possible bag arrangements, one filled with 5 green balls and 5 red balls, and the other with 4 green balls and 6 red balls. It's true that drawing a green ball is consistent with both, but it's more likely with the first bag than with the second.
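The arithmetic behind the analogy is a one-line Bayes update. A minimal sketch (the equal-priors assumption and variable names are mine, not the commenter's):

```python
# Likelihood of drawing a green ball under each bag hypothesis.
p_green_bag1 = 5 / 10  # bag 1: 5 green, 5 red
p_green_bag2 = 4 / 10  # bag 2: 4 green, 6 red

# Bayes' rule with equal priors: the posterior is proportional
# to the likelihood of the observed draw.
prior = 0.5
unnorm1 = prior * p_green_bag1
unnorm2 = prior * p_green_bag2
posterior_bag1 = unnorm1 / (unnorm1 + unnorm2)  # 5/9 ≈ 0.556
```

So a green draw is weak evidence for bag 1, nudging it from 0.5 to about 0.56 - consistent with both hypotheses, but not equally likely under each.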


          You can't dance and climb in the same shoes.        

This headline encapsulates the importance of waveforms for tactical radios. Each task that a radio executes must be performed in a certain way. This article aims to demystify the oft-baffling domain of the tactical radio waveform.

The late tactical communications expert and defence journalist Adam Baddeley once told your correspondent that tactical radios were interesting, but their...

          AW2DA - Ian Boddy - Analogue Workshop Volume 2: Dark Ambient        

AW2DA | download sample pack
buy as download

Analogue Workshop Volume 2: Dark Ambient

Dark Ambient is the second in the series of Analogue Workshop sample libraries from renowned sound designer Ian Boddy. Featuring 300 samples & over 500 Kontakt patches culled from his collection of vintage & analogue modular synths, these sounds dwell very much in the darker realms of ambience & electronic music.

The patches run the full gamut from subsonic drones, evolving pads and haunting modulations to weird, surreal atmospheres. The one-shot section contains a range of crunchy, deep, weird short percussive sounds and FX that can be used to spice up any drum kit or add impact to key moments in your musical compositions.

A new custom designed graphical interface has been scripted in Kontakt to provide even more variation & programming opportunities in the way the sounds are presented. This is rounded out by a set of impulse responses for the convolution reverb taken directly from analogue modules such as spring reverbs & bucket-brigade delays to further enhance the overall sound character.

 

Video demo @ https://www.youtube.com/watch?v=2EIakRIsS-U

Technical Data

300 samples at 44.1 kHz, 24 bit in WAV format.
Mixture of stereo & mono samples.

Raw audio recorded from the following analogue synthesisers:

Roland System 100-M
Serge Animal, Audio Interface, Dual Oscillator, Klangzeit, CV Matrix Mixer,
Dual ADSR, Stereo Mixer & custom panels
Analogue Systems
Analogue Solutions
Doepfer A-100
Cwejman SPH2, FSH1 & MX-4S
Make Noise DPO, Wogglebug, René, Maths & QMMG
Livewire, Metasonix, Harvestman & Synthesis Technology modules
VCS3
Minimoog & Moog Voyager
Metasonix TM-2 tube BP filter
TLA Ebony A2Stereo Processor

Recorded directly into Apple Mac Pro using UAD Apollo audio interface.
Samples edited in Bias Peak Pro 5 & Redmatica KeymapPro.

Native Instruments Kontakt 4 support provided.

Recorded & produced by Ian Boddy January - March 2014.

Audio Demos by Ian Boddy & Andrew Stokes.
Additional Kontakt instrument programming by Andrew Stokes.

Thanks to: Steve Howell @ www.hollowsun.com for GUI graphic design & licensing of "RE-201" impulse response. Mario Krušelj for Kontakt scripting.


          097 The MP3 Is Free and W.H.A.C. Is Quaint        

GROUP CHAT FOR EPISODE 97 ON VOXER! **IMPORTANT** In order to get into the chat, you need to copy the LINK and PASTE it in your mobile browser, and it will open the app. IF YOU HAVE TROUBLE: ADD ELSIE AND SHE WILL ADD YOU TO THE CHAT: search for Elsie Escobar or yogeek on Voxer! This will only be available until June 3, 2017.

Fill out our survey!

Quick Episode Summary:

  • Intro :11
  • Promo 1: The Art of Manufacturing 1:48
  • Let’s try Voxer! 2:23
  • Promo 2: Paranormal Now 3:59
  • How we feature you! 4:17
  • Audio Rockin’ Libsyn Podcast: Two Minute Talk Tips 5:00
  • Promo 3: Wise Traditions 10:49
  • Rob & Elsie Conversation 11:24
    • We got the coolest t-shirts!
    • We still have our #PM17fun giveaway going on!
    • Fun tweets from #PM17fun voice by Kris 14:18
    • Oh man, the horrible headlines and stories about the death of the MP3, the MP3 is finally free!
    • Elsie has an idea, and she wants you to participate, let’s try out this Voxer thing
    • What have we done to optimize our recording space?
    • In case Apple comes a knockin’ and they want to feature you, here’s what you need to do to get ready
    • Tip for those adding custom tags.
    • Best use for Clammr and the new Auphonic Audiograms
    • Starting in a network and keeping your branding
    • Playing music in your podcast is like being pregnant
    • How to bulk edit your categories
    • What are the pros and cons of posting old episodes onto Blogger or Tumblr?
    • Awesome feedback for Libsyn from a high-schooler!
    • The status of the Downcast App
    • Going over stats of a podcast that was featured on the front banner of iTunes
    • Core stat trends that Rob sees month after month
    • Stats stats staaats! - A Poem
    • And…well…STATS!

Featured Podcast Promos + Audio


Podcasting Articles and Links mentioned by Rob and Elsie

Where is Libsyn Going? (In Real Life)


HELP US SPREAD THE WORD!

We’d love it if you could please share #TheFeed with your twitter followers. Click here to post a tweet!

If you dug this episode head on over to iTunes and kindly leave us a rating, a review and subscribe!

Ways to subscribe to The Feed: The Official Libsyn Podcast

FEEDBACK + PROMOTION

You can ask your questions, make comments and create a segment about podcasting for podcasters! Let your voice be heard.

  • Download the FREE The Feed App for iOS and Android (you can send feedback straight from within the app)
  • Call 412 573 1934
  • Email thefeed@libsyn.com
  • Use our SpeakPipe Page!

 


          Rockwell Collins and NASA complete 1st communication tests aimed at safely integrating UASs in national airspace        
CEDAR RAPIDS, Iowa - Rockwell Collins and the National Aeronautics and Space Administration (NASA) recently completed the first in a series of risk reduction tests that will eventually help enable unmanned aircraft systems (UAS) to safely operate in the national airspace. The data link waveform tests, performed as part of the first of three research phases on the program, simulated communication between one aircraft and one ground-based pilot station. The objective of the test was to verify the waveform's...
          Novation Bass-Station v1.6.0 AU VST PC + Mac OSX        
Novation Bass-Station v1.6.0 AU VST 
PC + Mac OSX 




The classic sound of analogue. Two oscillators that have been carefully modelled to preserve the precise tonal character of the classic sawtooth and square waveforms of the original Bass Station.

Enhanced Classic Panel for Greater Tweakability
The original front panel has been 3D modelled and additional controls have been added for ease of use.
Parameter editing screen
Wheel, Aftertouch and Breath control along with a variety of system-wide settings are just a click away.
Multiple Instances
The ability to use multiple “instances” allows the Bass Station to run as many multi-timbral parts as CPU will allow.
100 Stunning Sounds
The Bass Station plug-in ships with the classic originals along with many more, from screaming leads to phatt funky basses. 
FREE DOWNLOAD 
Novation Bass-Station v1.6.0 AU VST / PC + Mac OSX + Crack

          Steinberg Voice Machine v1.0 VST        
 Steinberg Voice Machine v1.0 VST


VoiceMachine - Real-time vocal pitch transformer units
  VoiceMachine consists of two new real-time voice effect tools for the
  VST PC and Mac platform.
  The VM Generator allows you to work with your voice in the same way that
  you would use an instrument. This way a layout for a backing choir can be
  created in minutes, reducing recording time to a minimum. The VM Generator
  creates up to 4 additional voices by simply triggering them via MIDI Note
  On/Off events in real-time. Your vocal arrangements can be easily played
  alongside your lead vocal. Just use your keyboard or draw in your MIDI
  note events in your favorite VST sequencer program. The VM Processor lets
  you either change the melody or simply correct the intonation by changing
  the pitch of a voice while maintaining its natural character.
  Features:
  =========
  * Real-time natural pitch shifting (no 'singing rodent' effect)
  * Independent control of pitch change and voice character
  * Up to 4 additional voices out of a monophonic voice track (VM Generator)
  * Triggered via MIDI Note On/Off
  * LFO with different waveforms and delay for vibrato simulation
  * All parameters can be addressed via definable MIDI controllers

FREE DOWNLOAD 
 Steinberg Voice Machine v1.0 VST + Keygen
Download

          Sinevibes updates Multitude and Cluster plugins        
Sinevibes has updated its Multitude delay and Cluster animated filter effect plug-ins for Mac. Multitude uses gate sequencers to control sends into four individual delay lines – allowing you to activate them at precise moments in time. Changes in Multitude v1.0.5 Five new modulator waveforms: pulse, trapezoid, notch, 3x and 4x staircase. Reduced overall processor […]
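To illustrate what the new "staircase" modulator shapes mean in practice (a sketch of the general idea, not Sinevibes' actual implementation):

```python
def staircase(phase, steps):
    """Quantize a rising ramp (phase in [0, 1)) into `steps` equal
    levels, normalized so the output runs from 0.0 to 1.0 - the
    stepped modulator shape."""
    level = min(int(phase * steps), steps - 1)
    return level / (steps - 1)

# A "3x staircase" holds three discrete levels across one cycle:
values = [staircase(p / 6, 3) for p in range(6)]
# → [0.0, 0.0, 0.5, 0.5, 1.0, 1.0]
```

Fed into a delay send, such a shape jumps between fixed levels rather than sweeping smoothly, which is what makes stepped modulators useful for rhythmic effects.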
          Blue Cat Audio releases Oscilloscope Multi 2.0 effect plugin        
Blue Cat Audio has introduced its Oscilloscope Multi 2.0, a brand new version of the multiple tracks oscilloscope effect plug-in for Windows and Mac. Blue Cat’s Oscilloscope Multi is a unique tool that enables the visualization of the waveform of multiple instances on a single screen in order to compare them or detect phase, synchronization […]
          Blue Cat Audio updates Oscilloscope Multi to v1.1        
Blue Cat Audio has released version 1.1 of Oscilloscope Multi, a multiple tracks audio oscilloscope plugin that offers special features to compare several audio waveforms in real time. Changes in Oscilloscope Multi v1.1 XY view improvements: New density and time zoom controls for a better analysis experience. Added an option to connect the XY curve […]
          Blue Cat Audio releases Oscilloscope Multi        
Blue Cat Audio has released Oscilloscope Multi, a unique multiple tracks real time waveform renderer and comparator: it lets you visualize the content of several audio tracks on the same screen and compare them thanks to its X-Y view. Oscilloscope Multi features Multiple tracks real time oscilloscope: visualize and compare the waveform of several audio […]
          Dreadbox Medusa Analogue Synthesizer        

The Dreadbox Medusa is a three oscillator semi modular Analogue Synthesizer developed in conjunction with Polyend.

Medusa is based around a tried and tested analogue synth architecture, three oscillators being fed into a beefy low pass filter, but that's where the "classic" stops. Unlike other synths from Dreadbox, the Medusa includes a unique third oscillator and a brand new 12dB low pass filter design:

VCO 3 can be freely moved through seven different waveforms, and includes morphing for the pulse wave. Ranging from complex harmonics to classic subtractive waves, this makes for very interesting tonal variation and deep sound design applications. The new 12dB low pass filter provides a new Thick control that lets you enhance and tune frequencies of 110 Hz and below, providing fat, rich low end even at high resonance settings.

The filter is multimode, with low pass, high pass and notch settings, it also includes variable key tracking, dedicated ADSR envelope and a resonance control.

The Polyend side of the Medusa is where things get really interesting. Polyend have developed a 64-step sequencer which can store filter cutoff position, mod wheel placement and velocity settings within an onboard memory of 7 banks. That's 64 steps across seven banks, including modulation settings. Polyend also developed the arpeggiator on the Medusa; it includes 6 different playing modes and a variable clock for versatile and interesting play modes.
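As a rough mental model of that per-step memory (all names hypothetical; this is a sketch of the data layout described, not Polyend's firmware):

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    # Each step snapshots filter cutoff, mod wheel position and
    # velocity, as described for the Medusa sequencer.
    cutoff: float = 0.5     # normalized 0..1
    mod_wheel: float = 0.0  # normalized 0..1
    velocity: int = 100     # MIDI-style 0..127

@dataclass
class Sequencer:
    # 7 banks of 64 steps each.
    banks: list = field(
        default_factory=lambda: [[Step() for _ in range(64)]
                                 for _ in range(7)])

seq = Sequencer()
total_steps = sum(len(bank) for bank in seq.banks)  # 7 * 64 = 448
```

Storing modulation alongside pitch per step is what lets a pattern recall its filter and expression moves exactly, not just its notes.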

As you'd expect with any Dreadbox desktop synth, there are multiple patch points for Eurorack compatibility and internal patching fun. There are 8 patch points in total that allow you to manage and control the internal sequencer and voice architecture for even more sound tweaking fun.

When it comes to modulation, there's plenty on hand: a mod wheel with two attenuators for VCO and VCF, velocity with attenuators for VCO and VCF, and VCO 3 can be used to frequency modulate the VCF or VCO 1 and 2, with an attenuator for controlling the depth of that modulation.

There's also an LFO section, with two waveforms, variable rate and fixed destinations: VCO 3 pitch, VCO 3 morph state, VCO 1+2 pitch, pulse width of VCO 1+2, pulse width of VCO 3 and VCF cutoff modulation. So yeah, you can do quite a lot.

Medusa is quite possibly the ultimate monosynth: it combines a classic architecture with some modern twists and oscillator ideas. Its semi-modular design lends itself well to modular users, and the onboard sequencer is really intelligent and well implemented.

The main features of the Dreadbox Medusa include

Three oscillator monophonic analogue synthesizer

12dB multi mode filter with thick control

Sequencer and arp developed by polyend

Semi modular architecture

Dense modulation structure, a joy to programme

Price:£0.00


          MeeBlip Triode Hybrid Synthesizer        

The MeeBlip Triode is a three oscillator hybrid synthesizer that features two distinct MeeBlip oscillators, a sawtooth sub oscillator and a unique twin-t analogue filter.

MeeBlip have always been known for synthesizers that defy their small format, and the Triode is no exception. Its architecture is all about producing masses of low end and rasping, cutting tones.

Its two oscillators can produce square, saw and PWM waveforms, and they also have access to 24 different wavetable tones for impressive versatility. The onboard filter is based around a twin-T notch style architecture with a very focused and smooth roll-off, and very aggressive resonance tones if desired.

When it comes to modulation, there's an onboard LFO which can be routed to either the oscillator or the filter, plus an envelope generator which is hardwired to both the filter and the amplifier; it includes a switchable sustain control for quickly dialing in and out the sounds you're after.

Connectivity wise, there's a standard 5-pin MIDI DIN, stereo output on 3.5mm jack and a DC power input.

Limited to just 1000 pieces worldwide, the MeeBlip Triode isn't just a killer synth, it's also very limited edition... and VERY red.

The main features of the MeeBlip Triode Synthesizer Include

Three oscillator synthesizer

Analogue filter with twin-t topology

Single LFO assignable to oscillator or filter

Limited to 1000 pieces worldwide

Price:£115.83


          Critical Code Studies Conference- Week Five Discussion        
by
David Shepard
2012-03-19

In week five, Stephen Ramsay performed a live reading of a livecoding performance: in a video, he presented a spontaneous commentary over a screencast of Andrew Sorensen’s “Strange Places,” a piece Ramsay had never seen before. The screencast showed Sorensen using Impromptu, a LISP-based environment for musical performance that he had himself developed, to improvise a piece of music; Sorensen developed the piece’s musical themes by composing and editing code. The video allowed the audience to watch Sorensen write and edit his code in the Impromptu editor window. This presentation inspired a discussion that broke livecoding down into two overlapping issues: is it “live,” and is it “coding”?

The first half of “livecoding”- liveness - makes coding interesting by making the programmer and his or her process apparently accessible, compared to the anonymous distribution of most software via download or disc after thorough design and bug testing. The TOPLAP Manifesto states, “Give us access to the performer’s mind … the whole human instrument.” We never meet the programmers who develop the majority of the software we use, let alone see them at their desks typing; a programmer on a stage seems comparatively accessible, as is (presumably) his or her intent and mistakes.

But what does watching a programmer show? Though the discussion yielded quite a bit of access to the performer’s mind, as Sorensen himself joined in, most of the participants agreed with Sorensen’s own statement that he does not believe livecoding yields such insight, and focused on the performative quality of the coding instead. John Bell described the “miraculous” feeling of watching Sorensen’s act of coding, contrasting it with his own experience of the labor involved in building each component of a program. Livecoding is “live” by virtue of putting a programmer before an audience, who is engaged in a composition that is improvised.

What Bell’s miraculous feeling (and livecoding’s supposed spontaneity) also highlight is how livecoding requires complex layers of abstraction that simplify much of the development process: Sorensen’s custom-built programming environment, the Apple AudioUnits library, and preset routines such as the function “cosr” that Ramsay found opaque (“a macro that provides a wrapper for the standard cos function with some scaling and time stuff built-in,” according to Sorensen). Livecoding - done in minutes on a screen - differs from the analytical, iterative development process of most professional programming, relying heavily on prewritten code libraries to reduce the complexity of the coding process enough to perform before an audience. Livecoding (usually) involves no debugging by virtue of the fact that it depends on well-defined environments and well-tested libraries of routines, not to mention precise typing and a forgiving environment; attempting to compile Impromptu code with syntax errors results in silent errors that appear in a portion of the window hidden from view in Sorensen’s video.
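A rough Python analogue of such a wrapper makes the abstraction concrete (the real cosr is an Impromptu/Scheme macro; the argument order and scaling here are illustrative assumptions, not Sorensen's definition):

```python
import math

def cosr(center, amplitude, rate, beat):
    """Oscillate around `center` by ±`amplitude`, completing `rate`
    cycles per beat - a cos() wrapper with the scaling and time
    handling built in, in the spirit of the macro discussed above."""
    return center + amplitude * math.cos(2 * math.pi * rate * beat)

# e.g. a pitch parameter that sweeps between 55 and 65 over time;
# at beat 0 the cosine is at its peak:
pitch = cosr(60, 5, 0.25, beat=0.0)  # → 65.0
```

One short call replaces several lines of scaling arithmetic, which is exactly the kind of prebuilt abstraction that makes composing under performance pressure feasible.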

The necessity of these foundations inspired Jeremy Douglass to ask, “At what point do we exit liveness? When we draw on scripted elements or libraries of things that previously ‘worked’?” Daren Chapin responded that “This kind of heuristic simplification is one of the primary goals of programming … Libraries, extensible classes, polymorphism, macros, code generators … all of these mechanisms aid us in transforming what once seemed like complex maps between domains into ones that are easy for us to reason about, so we can in turn build larger and more complex ones. It’s this constant interplay of building-out-and-abstracting-away that makes coding such a lively activity, and perhaps so simultaneously sundry and magical.” All programming employs abstractions that simplify complex problems; they make both livecoding and most serious application development possible; most programmers use at least their operating system’s API to build windowed applications, if not other code libraries.

For similar reasons, John Nyhoff bolstered livecoding’s claim to liveness by arguing against conflating spontaneity with liveness, highlighting yet one more understanding of “live.” Just as not all jazz is improvised, most theater is live while following a preset script. Liveness in traditional drama is the actor’s ability to make a preexisting script seem naturalistic, as if he or she were the character speaking the words and feeling the emotions. Nyhoff consequently stated that “in much theatre and most programming, the script/code is authored through execution/performance. … Sorensen’s work … constitute[s] a kind of temporally compressed re-staging of the process of the composition, the programming.” Nyhoff emphasized that the liveness of livecoding was the performative and demonstrative quality of the coding, rather than whether or not the text produced was composed entirely on the spot.

While something of a special case, then, livecoding raises broader issues for Critical Code Studies, especially questions related to the definition of programming and the visibility of code. As John Bell pointed out, livecoding further applies pressure to the valuation of “scripting” over “coding”: the former is not considered “real” programming because of the use of high-level languages for small tasks. The Ruby on Rails framework was promoted using an “amazing and carefully scripted series of demo magic tricks,” in Jeremy Douglass’ words: on a terminal window projected for an audience, a few scripts generated a basic but complete blogging application. This demonstration blurred the lines between live and preplanned coding, and programming using highly-specialized environments like Impromptu and simply using a framework. What makes livecoding interesting, then, is the question of what coding and access to code really allows the reader―and thus, livecoding gets at the heart of Critical Code Studies’ investigation of what code means and does.


Reply by Amanda French on March 1, 2010 at 11:37am

I did notice that you made one or two comments about Sorensen changing the code along the lines of “Was that note not right?” – comments that sounded a bit odd to me given that the performance was so clearly improvisational. His changes, in other words, always looked to me to be changes for the heck of it, changes that constituted the performance, not changes striving for some ideal. The notion of “sheet music” doesn’t apply here, as it wouldn’t apply to a jazz musician or a bluegrass picker. Even the name of his environment, Impromptu, makes that point. Raises the question for me precisely of whether a livecoding session that did consist of simply typing in an existing program would be as compelling – I think it would definitely have its points of interest, actually. Or what would the livecoding analog be to a non-improvisational live performance of music?


Reply by Mark Marino on March 1, 2010 at 2:49pm

Although, I’m tempted to speak just about Sorensen’s code and your play-by-play, I find myself returning to an earlier moment in the video. As you set up live coding, you move from Holden Caulfield’s awe at the tympani player (whose music flies off the stand in your version) to this meditation on live instruments. Thinking of Amanda’s intervention, I can’t help but wonder what the sheet music may be. Could it be the specifications of the coding environment? Could it be some starter loops?

I like, too, this distinction Amanda is trying to flesh out about exactly what kind of “live” performance this is. (Any good articles on “live coding” for the bib?) During one cut, you say that performances seem more real, while the screen shows a reel-to-reel recorder (1:52). This Glenn Gould move establishes a central ironic tension in the video (with all its spliced-together, remediated film) within the “live” of the live coding and perhaps also affects the executed code we’ve been discussing in Week 4.

In fact, this entire discussion seems to speak to Wendy’s chapter, as we see Sorensen’s magical code that executes as he changes it, when we marvel at his mysterious cosr command. Is this “the erasure of execution”? Is this the fetishization of coding? Is this sourcery? How is the code (un)like the tubes of the pan flute? Are there moments in this code where Sorensen seems more slave than wizard? What does it mean to be a slave of the coding environment you yourself built (I know, asking any code monkey, ask Microserf)?

Also, I wanted to say how iconic this video is for CCS - the way we watch you build your reading live and unedited, observing aspects of the code, its effects on the output, and then developing your reading on how Sorensen is “playing the comments and parameters” or how he is ending his song by commenting it out. You engage with the programming language and environment, the programmer, the output and processes, as well as the code itself. You contextualize your examination in electronic music and live coding, while gesturing toward larger issues, such as the real and authenticity as well as programmer as performer/composer/musician. Maybe for some, our comments are taking the music out of the code, or hearing music in the windmill. I think, too, about all the on-the-fly interpretations we’ve been working to produce, as in Week 3. Here are the postcards we are sending even if we are not too sure how to address them.
 


Reply by Stephen Ramsay on March 1, 2010 at 3:20pm

I see what you’re saying, Amanda. Honestly, though, I think those moments in the film are more about my own conditioned response as a programmer. If I saw that first comment in a source file, I’d assume there was something wrong and the programmer was “commenting it out” so that it didn’t interfere – cauterizing the wound, as it were. I’ve done it a thousand times. It was really my first thought.

But of course, as you say, that’s not what Sorensen is doing at all (as became clear to me later on, when the comments become almost like keys on a flute or stops on an organ).

As far as improvisation goes, yes. But can’t you hit a “wrong note” while improvising? That is, a note that is “wrong,” not because it fails to conform to a preset pattern, but because you didn’t like it, or it didn’t work, or you changed your mind? I really had something like that in mind.

Here’s one thing, though: If I’m improvising on the harp, say, and I hit such a note, the moment is gone. But if I set up an oscillator to start generating some sound wave, I can change it and have it start doing something else. This seems to me a difference between computer music as it is usually conceived and what a bluegrass picker does (unless you really are “playing your laptop” like the gentleman demonstrating GarageBand’s “musical typing” feature in the video).

 

Reply by Amanda French on March 1, 2010 at 6:18pm

Sure, you can hit a wrong note in improvising, as I know you know, though any skilled improviser will leave a listener unsure as to whether that really was a “wrong” note. (I, by the way, am totally unable to improvise.) But I don’t quite get the distinction you’re making in the last paragraph between instrumental improvisation and computer music, if you’d care to elaborate.

Actually what I found myself thinking about was degrees of improvisation in music: at the symphony everyone’s usually reading sheet music, which they’d almost have to, because the music is so complex. But your basic rock band isn’t being any more improvisational than a symphony, usually, because they’ve just memorized what they’re playing. Some, of course, do, especially jam bands. But jam bands and jazz bands and blues bands have really very simple structures (logic) that soloists improvise their complexities over: the musical foundation is simpler, and that’s what enables the improv.

But again, it came back to livecoding as an art, for me: I couldn’t, off the top of my head, think of a form of livecoding that would be analogous to a symphony. Probably it would have to be a massive group endeavor, like that of a symphony, where you take all the code for an existing program and get a hundred coders to type it in, live. With live unit tests, which I know you love. :)


Reply by Max Feinstein on March 2, 2010 at 4:50pm

I’ve isolated some Impromptu code and posted it here for a further examination of improvisation via the random function. I have to admit that posting this code is slightly uncomfortable because the snippet is so lifeless in this form. Compared to its traditional context, which involves continuous code execution, sounds, and constant modification from the author, this snippet is just a static chunk that doesn’t do anything. I suppose this critique is much like a photograph in that it captures a brief glimpse of something “alive” and allows viewers to experience the object in a different context. That said, here is an Impromptu example, complete with commentary, from our favorite free encyclopedia:

;; first define an instrument
(define dls (au:make-node "aumu" "dls " "appl"))
;; next connect dls to the default output node
(au:connect-node dls 0 *au:output-node* 0)
;; lastly update the audio graph to reflect the connection
(au:update-graph)

;; play one note by itself
(play-note (now) dls 60 80 (* *second* 1.0))

;; play three notes together
(dotimes (i 3)
  (play-note (now) dls (random 60 80) 80 (* *second* 1.0)))

;; play a looping sequence of random notes
(define loop
  (lambda (time)
    (play-note time dls (random 40 80) 80 1000)
    (callback (+ time 8000) 'loop (+ time 10000))))

;; start the loop
(loop (now))

;; stop the loop by defining loop to be null
(define loop '())

;; define a new loop to play a repeating sequence of notes
;; with a small random twist
(define loop
  (lambda (time pitches)
    (play-note time dls (car pitches) (random 40 80) 8000)
    (callback (+ time 4000) 'loop (+ time 5000)
              (if (null? (cdr pitches))
                  (list 60 63 62 (random 65 68) 68 59)
                  (cdr pitches)))))

(loop (now) '(60 63 62 67 68 59))

;; stop the loop by defining loop to be null
(define loop '())

My interest in this code is the prominent role that “random” plays in the piece. There’s an interesting tension between the rest of the code, which seems precise and systematic, and this intentional randomness generated by the computer. For me, music isn’t ordinarily composed “randomly,” nor are any sounds produced randomly. For example, every time a timpani player strikes his drum, the resulting sound is precisely what was expected. Perhaps the sound wasn’t what the musician intended, say if he missed hitting the desired spot on the drum, or if the head has steadily de-tuned from use, but these variables can all be predicted. On the other hand, when a livecoder calls for a random X or Y or Z, nobody can really predict what the computer will generate.

Of course, improvisational pieces can be random, but probably not in the same sense as random in the above code, for even improv pieces typically follow certain guidelines (chord progressions, rhythms, etc), as Amanda notes above. I’m curious if anyone else is intrigued by the implementation of “random” and what it introduces to livecoding that is absent from all other musical performances. Or if anyone would make the argument that instruments other than the computer (e.g., Impromptu) are also random?


Reply by Andrew Sorensen on March 3, 2010 at 4:02pm

Obviously randomness is, by definition, indeterminate. However, it can also be thought of as an abstraction layer. “Random” allows you to abstract away detail without requiring a complete model of the underlying process. This is hugely important in livecoding where your ability to implement complex processes is hindered by task domain (i.e. musical) temporal constraints. Of course “random” when used in my work is usually highly constrained - to a particular pitch class, rhythm set, etc.

Of course I often do build a model of the process under investigation which is stored as library code and called instead of “random”. However, in practice I have found that “random” is often sufficient - and has the added advantage of being generally applicable and easy for an audience to comprehend.

It’s worth pointing out that while nobody can predict precisely what the computer will generate, the probabilistic constraints imposed give me a very good understanding of the approximate result that I will get. When I absolutely need specifics I use a determinate process.

The constrained indeterminacy is part of the fun of working with generative systems, you never know exactly what you’re going to get. Part of the skill is in massaging an indeterminate system towards an aesthetically appealing outcome.
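Sorensen’s point that “random” is usually constrained to a pitch class or rhythm set can be sketched outside Impromptu. Below is a minimal Python analogue; the helper name `pc_random` is illustrative (it loosely echoes the idea behind Impromptu’s pitch-class library, not its actual API):

```python
import random

# Hedged sketch of "constrained randomness": instead of any MIDI pitch,
# choose only from pitches whose pitch class belongs to an allowed set.
def pc_random(lo, hi, pitch_classes):
    """Pick a random pitch in [lo, hi) whose pitch class is allowed."""
    candidates = [p for p in range(lo, hi) if p % 12 in pitch_classes]
    return random.choice(candidates)

# C minor pitch classes: C, D, Eb, F, G, Ab, Bb
c_minor = {0, 2, 3, 5, 7, 8, 10}
note = pc_random(48, 72, c_minor)  # indeterminate, but never "wrong"
```

The indeterminacy is real (nobody knows which note comes next), yet the constraint guarantees the approximate musical result, which is exactly the trade-off Sorensen describes.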


Reply by Mark Marino on March 4, 2010 at 9:18am

I was drawn to a particular line in your answer:

Of course I often do build a model of the process under investigation which is stored as library code and called instead of “random”. However, in practice I have found that “random” is often sufficient - and has the added advantage of being generally applicable and easy for an audience to comprehend.

I’m not entirely certain about the relationship between the library code and the random process. Can you elaborate on that?

More importantly, I’m interested in your attention to the audience’s comprehension, as that is an issue that seems to come up in CCS a lot, from the initial arguments that code is not meant for human readers (we’re past those, I think), to our more recent discussion of the display mechanisms through which we should analyze code (should it be more similar to the highlighting, color-coded interfaces programmers are using – or teletype and punch cards?).

As we saw in Stephen’s reading of your coding performance, audiences (without the benefit of an O’Reilly book at hand) might at times be lost in your code. I can see now that this is a central realm of play in “live coding,” that this is part of the delight, of watching the magician at work (Wendy Chun’s image of sourcery keeps returning!).

To what extent do you see this as a performance that you want to make accessible, and to what extent is the fun of live coding (both watching and performing) a game of catch-me-if-you-can?


Reply by Andrew Sorensen on March 4, 2010 at 10:40pm

Actually I’m not overly concerned with the audience’s ability to read the code. At least, it is always subservient to my ability to express my ideas as fluently as possible. That said, I do try to provide something that is reasonably transparent. It’s also important to keep in mind that audience understanding is multi-dimensional. In particular it’s worth bearing in mind that a programmer well-versed in Lisp but with no music theory knowledge may understand the syntax and semantics (program->process semantics) but fail to comprehend their relationship to the task domain (i.e. the musical outcome).

For me the projection of code is largely about building a trust relationship with the audience by displaying a level of engagement and human agency. Yes I am doing this live, and no I’m not just twiddling my thumbs–here’s the proof. Without the projection it becomes quite difficult to assess the level of human agency in laptop performance. After that initial trust is established, the code becomes less important. In fact, in a lot of my audiovisual performances (where visuals are drawn over the top of the code), it becomes harder to see the code as the performance progresses. Audiences don’t seem to mind because the early part of the performance has established the trust.

I agree with Stephen’s post that displaying the code doesn’t really give access to the performer’s mind - at least not in any deep sense.


Reply by Jeremy Douglass on March 8, 2010 at 11:37am

I very much appreciated the way that the coding environment functioned as “proof” in this video. For example, every time the screen briefly flashed orange, the running code was being updated, correct?

For me, those orange flashes were evidence, like seeing a percussionist’s stick rising high – I might not understand how the set of percussion instruments is played, or even their names, but I had visible events that I could use to tie together changes in what I heard with the actions of the performer.


Reply by Stephen Ramsay  on March 4, 2010 at 9:54am

;; first define an instrument
(define dls (au:make-node "aumu" "dls " "appl"))
;; next connect dls to the default output node
(au:connect-node dls 0 *au:output-node* 0)
;; lastly update the audio graph to reflect the connection
(au:update-graph)

;; play one note by itself
(play-note (now) dls 60 80 (* *second* 1.0))

This code is really imagining audio units and softsynth components exactly the way they are imagined in Max/MSP and Puredata – as nodes on a network that are connected in various ways. I gather that ChucK works the same way. So at some level, there isn’t that much difference between the way Impromptu/ChucK imagines a synthesizer and the way Max/Pd does. (I don’t mean to elide the differences completely; I’m just noting that the various environments tend to use the same metaphors for thinking about sound synthesis – metaphors that further reflect the way the hardware “looks” or “works” in the world.)

But watching livecoding with the textual interfaces seems to me very different from watching them with the “visual programming” interfaces. For me, the former seems more “miraculous” somehow (even though I’m fully aware that underneath they’re both doing the same thing). I would even go so far as to say that text->sound invokes ancestral memories of spell-craft, as well the western longing for the “word made flesh” invoked so well by Wendy Chun in Week 4.

Really, I think this is ultimately what I’m trying to get at with all of this. I’m not sure that “show us your code” gives us “access to the performer’s mind.” I am, however, quite sure that we regularly make this connection with code, because text/code holds such an important place in our culture.


Reply by Daren Chapin on March 2, 2010 at 7:13am

It should be noted that most of the interactive advantages in the environment above come from merely having a language that supports a REPL (read-eval-print loop), rather than from the specific use of Lisp’s (in this case, Scheme’s) rewritable S-expressions.

Having said that, macros are playing an active role in how Impromptu works. The 'cosr and 'sinr functions can be found in the Impromptu wiki. They are functions that oscillate a value around a central point, against the beat, with a defined amplitude and cycle. See, for instance, the wiki entry for cosr.

So:

(cosr 70 10 .5)

will produce an oscillation with center 70, range 10 and cycle 0.5.

But because that oscillation expression is being passed to a macro it is being rewritten rather than evaluated right away as a function, which allows setp (a macro; see the setp entry in the wiki) to actually perform that oscillation over time.

Hence a construction like this:

(setp zeb1 *smd2:comb1:damp* (cosr (cosr 70 10 .5) (cosr 10 10 .5) 2))

is setting up an oscillation whose center and range themselves also oscillate. The magic of macro rewriting allows Impromptu to rewrite these expressions and delay their evaluation, but note that the composition (in the function sense, not the music one) still has to be explicitly wired together by hand.
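Under the assumption that cosr’s oscillation is a plain cosine of the current beat, the behavior Daren describes can be modeled in a few lines of Python. This is an illustrative model only; Impromptu’s actual phase and beat conventions may differ:

```python
import math

# Illustrative model of (cosr center range cycle): the value swings
# around `center` with amplitude `rng`, completing `cycle` oscillations
# per beat. The beat is made an explicit argument here; in Impromptu it
# comes implicitly from the scheduler.
def cosr(center, rng, cycle, beat):
    return center + rng * math.cos(2 * math.pi * beat * cycle)

# (cosr 70 10 .5) at beat 0 sits at the top of its swing: 70 + 10 = 80
v = cosr(70, 10, 0.5, beat=0)

# Nested oscillation, as in the setp line above: the center and range
# of the outer cosr themselves oscillate over time.
nested = cosr(cosr(70, 10, 0.5, 0), cosr(10, 10, 0.5, 0), 2, beat=0)
```

The nesting is what the macro rewriting makes musical: because the expression is re-evaluated on every scheduler tick rather than once, the parameter drifts continuously instead of being frozen at its first value.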

Back to my original point about just needing a REPL, and to see another way in which this could operate, consider Haskell. Despite being a pure, statically-typed functional language, Haskell ends up being a good choice for tasks such as livecoding. The reasons for this are hard to explain, and I could attempt a whole separate posting on this, but they have to do with the ease with which one can build functions out of other functions by using combinators (a form of higher-order function), in a way somewhat-but-not-really analogous to the way Lisp macros work.

The advantage of programming through a combinator library is that the whole is built out of parts declaratively rather than imperatively. Most people’s experience with most programming languages (including Lisp, although Lisp is good at supporting multi-paradigm programming) is through use of imperative code, especially where IO is concerned, so this is hard for most people to envision. A good example in a slightly different domain (graphical animation) is something called Functional Reactive Programming, an idea created by Paul Hudak (Yale) and Conal Elliott (Microsoft Research). Their paper  is somewhat technical and requires a lot of Haskell depth, but Conal has a tutorial post here  that is much more instructive. What’s important here is the natural, declarative composability of the functions, which you can see in how easily each of the successive animations is defined from the prior one.
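The declarative composability Daren describes can be gestured at in Python by treating a “behavior” as a function of time and building new behaviors only through combinators. This is a loose sketch of the style, not Hudak and Elliott’s actual FRP semantics:

```python
import math

# A behavior is a function from time to a value. Combinators derive new
# behaviors from old ones declaratively, with no mutable state.
def lift(f, *behaviors):
    """Apply f pointwise to the values of several behaviors."""
    return lambda t: f(*(b(t) for b in behaviors))

def time_shift(b, dt):
    """The behavior b, delayed by dt."""
    return lambda t: b(t - dt)

wobble = lambda t: math.sin(t)             # a primitive behavior
louder = lift(lambda x: 2 * x, wobble)     # each built from the prior one,
echoed = lift(lambda a, b: a + b,          # as in Elliott's tutorial
              louder, time_shift(louder, 0.25))
```

Each successive behavior is defined from the previous one, which is the “whole built out of parts” quality that makes the combinator style attractive for livecoding.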

Another issue worth mentioning is that Haskell has strong, static typing, so all statements in Haskell are completely type safe. This means that it isn’t possible to write functions/programs with type errors and get them past the interpreter, something that I have to imagine is a serious concern in a livecoding environment. As far as I can see in the Sorensen video, we never get to see an outright runtime error get sent to the Scheme environment. If the livecoder mistypes something and it generates an error at runtime, what happens I wonder? Does the music just stop? Or is there some sort of exception handling and stack unwind-protection to prevent this?

Here is a presentation (including several embedded videos) about livecoding in Haskell. (You can kind of zoom through the kooky BBC segment on slide 5 if you want, although starting around 2:00 one of the livecoders is using Haskell and the other one is using SuperCollider, I think.)

Finally, one of the things I find interesting is that the Impromptu-style livecoders seem to prefer building a large, evolving program and continuously [re-]evaluating its pieces, where the approach with Haskell seems to be to build a domain-specific language (DSL) and then modify it from a command line. There’s no particular reason for this a priori except maybe that Haskell is a really good environment for building DSLs, at least in comparison to Lisp. You’ll notice how specific and compact the language to change the sound patterns is by the end of the presentation.


Reply by Andrew Sorensen on March 3, 2010 at 5:14pm

Actually, Impromptu isn’t REPL based, at least not in any standard sense. It is interactive, which as you say is a standard attribute of REPL environments, but the degree to which a REPL makes an environment “live” (as in live coding) is debatable. It is certainly possible to effect change in the runtime system through a REPL, but to do so with any temporal accuracy is a completely different question. In other words, you need a real-time environment with a suitable semantics for time, a determinate concurrency architecture, and real-time managed memory (i.e. incremental/concurrent GC).

I would argue that Impromptu’s primary “liveness” attributes are its “first class” semantics for time and its co-operative concurrency model - “temporal recursion”. If you’re interested you can read more here:

http://impromptu.moso.com.au/extras/imp_tech_notes.pdf
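The “temporal recursion” idiom, a function that does some work and then reschedules itself at a future time via callback, can be imitated with an ordinary event queue. This is a simplified simulation of the scheduling pattern only, not Impromptu’s real-time engine:

```python
import heapq
import itertools

# Simulated scheduler: events are (time, seq, fn, args) tuples on a heap.
# The seq counter breaks ties so the heap never compares function objects.
queue = []
played = []
_seq = itertools.count()

def callback(time, fn, *args):
    heapq.heappush(queue, (time, next(_seq), fn, args))

def loop(time, count):
    """Temporal recursion: do some work, then reschedule yourself."""
    played.append(time)          # stand-in for play-note
    if count > 0:
        callback(time + 1000, loop, time + 1000, count - 1)

callback(0, loop, 0, 3)
while queue:                     # drain the simulated clock
    t, _, fn, args = heapq.heappop(queue)
    fn(*args)
# played is now [0, 1000, 2000, 3000]
```

Redefining `loop` between callbacks is what makes the pattern livecodable: the next scheduled invocation picks up whatever definition is current, which is how the running music can be changed without stopping it.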

Functional reactive programming is certainly another interesting option for livecoding, although I’m not convinced that a synchronous approach to concurrency (which is basically what FRP is) is the best approach for livecoding. Then again, the ChucK language follows a synchronous approach and is popular amongst livecoders, so time will tell. Impromptu does include a variety of runtime safety checks, but I agree that static type checking would also be nice for livecoding. I have been making some moves in this direction. Since v2.0 Impromptu includes a basic JIT compiler to x86 with some basic type inference support. This is a new project for Impromptu but is coming along quite quickly.

I can’t say that I agree that Haskell’s DSL capabilities are superior to Lisp’s, but isn’t it great that people choose to work in such different ways? Let a thousand paradigms bloom, I say!!

FYI: It’s Alex McLean working with Haskell in the video; he’s been livecoding since 2000 (5 years before I started) and originally livecoded in Perl. Incidentally, Alex is also using SuperCollider for scheduling and for all audio processing. Haskell is being used to generate messages to send to the SuperCollider server.


Reply by Daren Chapin on March 6, 2010 at 8:32am

After reading through the implementation document I think I understand how this is architected much better. Indeed, what I was trying to articulate with REPLs was that this was possible in any language environment that supported interactivity, though what I imagined when I saw the video was that we were seeing an emacs buffer and that there was a REPL-like command line à la ghci attached to the session “off camera”. Because the whole architecture is based on passing serialized messages to the scheduling engine, it really doesn’t matter how you do it as long as you can. In other words, a different language environment could still contribute to a performance as long as it had its own concurrency model and could be made to serialize sexps and send them to the task scheduler, is that right?

I have to think a bit more about the implications of using FRP with concurrency, but one of the other things I was curious about in seeing the video, and which I don’t think I articulated well in the first post, is: what happens if you make a mistake? You said there was some basic syntax/type/soundness checking there, but certainly nothing along the lines of what you would be able to have in a strongly typed language. So concurrency issues aside (the possibility of creating a race condition by messing up one’s temporal reasoning), how much care does one need to take in practice to not create a syntax error or omit an argument to a macro, and what consequences are possible if you do? Are you likely to just halt a local process (which would stop sending whatever component of the performance that process was responsible for) or is it theoretically possible to send something that would break the task scheduler and–proverbially and literally–stop the music?

The above isn’t just a technical question, though. I think it speaks to other questions here about the performative nature of the exercise. Implicit in any live virtuosic performance is a kind of contract with an audience that what you are doing takes skill and there is always the possibility of a mistake, which contributes excitement to the performance: a juggler can drop a ball, an actor forget a line, a trapeze artist miss a catch. I think in one of the Haskell videos you do see someone submit something at the prompt that is ill-typed, and you see the interpreter complain. So how much is this a concern for the livecoder, and do you think the possibility of error is conveyed to the audience as part of their engagement, or is it accepted more as something magical?
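Daren’s question about what a runtime error does to the music can at least be illustrated in the abstract: a scheduler that wraps each task in a handler lets the rest of the performance continue. This is a sketch of the general fault-isolation technique, not a claim about how Impromptu actually handles errors:

```python
# Sketch of fault isolation in a task runner: one broken callback should
# not stop the whole performance. (Illustrative only; not Impromptu's
# actual error-handling implementation.)
log = []

def run_all(tasks):
    for fn in tasks:
        try:
            fn()                               # run each scheduled task
        except Exception:
            log.append("task failed, music continues")

def bad():
    raise ValueError("mistyped expression")    # a livecoding slip

def drums():
    log.append("boom")

run_all([bad, drums])                          # drums still fires after bad fails
```

Whether the real system does this, halts the one offending process, or unwinds further is exactly what Daren is asking; the design choice determines how risky a typo is on stage.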


Reply by John Bell on March 4, 2010 at 3:05pm

I’m a bit handicapped here because I don’t know lisp, but one point I’m curious about is the heavy use of macros. Really, I feel like it’s the macros that make this work as a performance. If Andrew was forced to write out the expansion of cosr every time he used cosr in the code, the viewer would be more apt to get lost in complexity and the timing of the performance would bog down a bit. Similarly, it’s also helped along by using Impromptu as an environment and what I assume were a lot of keyboard shortcuts and code completion tricks that made several lines of code appear on the screen instantly during the performance.

But the question that keeps coming to mind is: is it programming?

Ok, obviously it is programming, in all senses of the term. But does a livecoding performance convey the experience of programming? The vast majority of the programming that goes into producing the music we hear isn’t visible anywhere on the screen. Now, this would be true in any case given the layers of software between Impromptu and the hardware, but things like macros (or functions, objects, or libraries elsewhere) that are part of the language being typed on the screen but do not actually appear hide a lot of complexity from the audience that could plausibly be exposed.

I do want to make clear that this isn’t intended as a value judgment on what’s going on on the screen (the whole “programming vs. scripting” issue where some people seem to think that you’re not hardcore enough if you don’t write your AI in assembly). It’s more a question of perception. I think that a lot of the “miraculous” feeling that Stephen mentions–and I also feel, by the way–comes from all of the prep work that’s done on the language and interface long before the performance begins. Now, when I’m coding my own projects, I never get anything close to that miraculous feeling because I’m actually going in and building out all the little functions and widgets that actually make the code tick as I’m working. So I wonder what the relationship is between programming-as-performance and programming-as-making-things-go, and how finding that relationship can be used to inform other questions of reading code (including, maybe, programming vs. scripting).


Reply by Mark Marino on March 5, 2010 at 2:31pm

John,

I keep returning to this notion in Wendy’s chapter, the flattening out of time and space, or “substituting space/text for time/process” (6)? In her chapter, she identifies the ways in which code-as-performative (as language that does) comes to efface the execution of the code. As she “state[s] the obvious, one cannot run source code: it must be compiled or interpreted. This compilation or interpretation―this making executable of code―is not a trivial action; the compilation of code is not the same as translating a decimal number into a binary one” (6). At the same time she asks “where does one make the empirical observation?” Our “live coding” conversation seems to be hovering around this space/text-time/process axis and also questions of the magical temporal relationship between the text and the process.

When I first watched Stephen’s video, I thought, a-ha! here is the moment where that time in between code and execution has been erased. Sorensen’s changes to the code are being executed in real time.

However, after reading through these threads and becoming more familiar with the distinctions that Daren draws out, I see that the live-ness of this coding is a function of the delay between Sorensen’s changes in the code and their execution.

Your discussion of the much less magical experience of grinding out programs seems to speak to this as well.

Also, I can’t help but imagine a spectrum something like this:

analog physical instruments
digital physical instruments
digital software instruments
patch-based environments (Puredata, Max)
REPL/functional temporal recursion
imperative programming

(Can someone build out this spectrum with a bit more accuracy and detail?)

In other words, the spectrum runs between the person who makes music (or sound) on an instrument through the physics of their body interacting with another physical object to the person making music by writing programs in an imperative language. [No doubt, this is an over-simplification, too.] Within that spectrum are different foci of audience enchantment.

What I keep coming back to is that it is not amazing to see someone merely press a piano key or strum an electric guitar. Usually in that case, we are interested in accuracy, speed, dexterity, creativity of selection, et cetera. Nor is it impressive to see one drag a loop into a GarageBand composition. It does seem magical for someone to write: (cosr 70 10 .5) and then to change a parameter. And certainly there is a certain “And then the Creator made drums and they were good” aspect of:

(define drums
  (lambda (beat)

Here I’m thinking of that balance Sorensen is striking between legibility and uncertainty, between revealing his selections through clearly named functions to creating less recognizable changes by changing multiple sets of parameters or nesting elements. That “lambda” above reassures me of the mathematics of the process, of the computation, of the unknowable magic, not least of all because it is followed by:

(setp zeb1 *smd2:comb1:damp* (cosr (cosr 70 10 .5) (cosr 10 10 .5) 2))

which includes those tantalizing nested cosr calls that I would not be able to (easily) process – at least not in the pub atmosphere that shows up in a video that Daren pointed us to.

This is not to say that someone couldn’t know exactly what Sorensen is doing. That actually makes the experience more like listening to Jazz, which satisfies multiple levels of understanding of the complexities of the performance and the improvisation. It is to underscore the role of (the extent of) natural language structures within the performance of this livecoder.


Reply by Mark Marino on March 5, 2010 at 4:10pm

The more I think about this, the more I realize that these qualities (accuracy, speed, dexterity, creativity of selection) are also key to my expectations in live coding - since someone visibly fumbling at the keyboard, scrolling hopelessly through lines of code, adding lines that obviously had no effect, writing lines and then deleting them the instant their effects are experienced, or scripting inefficient or needlessly circular processes would not be half as entertaining.

In other words, Sorensen’s performance has a virtuosity, so it is instructive to imagine what bad live-coding might look like in order to more fully examine this art.


Reply by Jeremy Douglass on March 8, 2010 at 11:52am

Dear John and Amanda,

My first response to John’s question “is it programming?” is strongly in line with Amanda’s comment on “degrees of improvisation in music” – I think of them as in some way the same question, about what constitutes authentic liveness, authentic engagement, and what the degrees of preparation and problem-factoring are that we see, whether in Jazz or jam bands or livecoding.

I think the broader question is really interesting to apply to code. To what extent is any act of programming (taken from a huge range of examples of people writing code) “making-things-go”? There is a huge amount of performative coding out there. For example, the whole Ruby on Rails culture rallied around an amazing and carefully scripted series of demo magic tricks. (“Voila!”)

Of course, you can argue that code generation and frameworks are a useful and productive way of focusing on a problem space – but that makes Rails and Impromptu start to have a lot in common. Are pre-written macros and libraries ‘cheating’? At what point do we exit liveness? When we draw on scripted elements or libraries of things that previously ‘worked’? When we operate exclusively in a space without exploration, without the possibility of discovering something new? When we have trivial variability but are essentially prevented from having anything go wrong? When there is no variability at all? My sense is that different people will draw different lines in the sand – I’m most interested in tracing the continuum.


Reply by Daren Chapin on March 8, 2010 at 4:26pm

Jeremy,

This is a great question. As one way of thinking about this, I propose that (borrowing from Clarke) any sufficiently advanced macro library is indistinguishable from demo magic. One abstract way of thinking about programming is as a process of constructing maps between domains, for example:

human idea    <==>  code/algorithm  <==>  results/action/output
(what to do)        (how to do it)        (what is ‘done’)

I think we are accustomed to using the complexity of those maps as a proxy for our sense of the degree to which ‘coding’ vs. ‘cheating’ is going on. For instance, consider the following progression of actions and proposed interpretations:

(person moves volume lever up 20% on sound mixer board)

Not programming. Now change to a virtual mixer with an API (“Recording on Rails”):

mixer.master_volume.change(+20.percent)

Sort of looks like programming, but is there any qualitative difference between that and the physical version before? What if the roles were reversed and it was dragging the GUI lever that called this function?

mixer.mixers.each {|mxr| mxr.volume.change(+20.percent)}

Same result as just changing the master volume, but feels programming-like because there is iteration over the individual levels.

mixer.mixers.each_with_index { |mxr, i|
  mxr.volume.change(i.percent)
}

This definitely looks and feels like programming now, although I could maybe accomplish the same thing physically with the careful use of a straight edge. But it’s trivial to modify the code to the point where I couldn’t; the individual mixer levels could be some very complex function I’ve built out of splines called ‘splunge.’ But once that setting proved useful the code would almost certainly land in a library and look like this at the next session:

mixer.splunge

and we are back to something that doesn’t look or feel like programming again - equivalent to my striking a preset button on my electronic mixer board. The power of libraries and macroization brings us uncomfortably close to collapsing the distance between the idea and implementation domains, the Uncanny Valley of coding. When a Rails programmer types:

30.minutes.ago

and the code is literally isomorphic to the way the idea would be described in English, it is not hard to feel as if no coding is going on at all.

And yet this kind of heuristic simplification is one of the primary goals of programming, at least insofar as programming is a social exercise. Libraries, extensible classes, polymorphism, macros, code generators, metaprogramming, domain-specific languages, parser combinators: all of these mechanisms aid us in transforming what once seemed like complex maps between domains into ones that are easy for us to reason about, so we can in turn build larger and more complex ones. It’s this constant interplay of building-out-and-abstracting-away that makes coding such a lively activity, and perhaps so simultaneously sundry and magical.


Reply by Jeremy Douglass on March 9, 2010 at 2:27pm

Daren,

Your Clarke paraphrase and discussion of “demo magic” resonates for me with discussions of magic in Wendy Chun’s Week 4 presentation. It is also really interesting how effective macros or library code can be both mystical in an obscurantist way (“what is cosr?”) but alternately can seem like the most straightforward of elocution – sincere, or, as Stephen cites Holden Caulfield, “not phony.”

You use the phrase “Uncanny Valley of coding” to describe isomorphism to English – a provocative idea. The “uncanny” part resonates for me with the uncanny experience of reading the not-code of mezangelle in mez’s Week 6 discussion. More generally, I think the question of whether revulsion or some other aesthetic crisis awaits as code and natural language become isomorphic is an interesting one to explore – it leads us past syntactic sugar and macros, and straight to the strong form of that isomorphism: natural language programming (NLP). But in my experience, what happens to beginners and intermediate users of NLP languages is not revulsion – it is frustration, but also a kind of superstitious thinking that assumes extremely broad isomorphism from extremely narrow examples, and is then constantly frustrated when these expectations are not met. In other words, learning an NLP language is a Loebner Prize Turing Test in slow-motion, with all the joys and disappointments.


Reply by Noah Vawter on March 5, 2010 at 5:48am

> Give us access to the performer’s mind, to the whole human instrument.

His or her body is where? Is the mind the whole human instrument?

 

Reply by Mark Marino

> How is the code (un)like the tubes of the pan flute?

The flute’s tubes offer a carefully-contrived space where the denser material, bamboo, confines the movement of air in a mysteriously ordered way – it resonates with a mostly sinusoidal waveform. An electrical circuit is a similarly-contrived space where, in place of fiber and air, metal confines the electronic movement. However, the shape and size of the bamboo are sufficient to produce a perceptible event - an audible tone. Electrons move so fast (in this atmosphere…) that their natural oscillation is inaudible to us. So, instead of using electricity as a fluid (as analog synthesizers do), swishing through a space, we construct channels for it, preferring, to the oscillation of a single mass, the collaboration of hundreds and thousands of small canals/channels interacting with one another.

Code is a chance to architect the electrical canals, a way of controlling the flow of water from basin to basin, selecting which basins overfill and spill into their neighbors. Some types of code simulate the single channel of the flute, while others combine these basins into still more complex structures.

Reply by Max Feinstein

> On the other hand, when a livecoder calls for a random X or Y or Z, nobody can really predict what the computer will generate.

Musicians, such as friends of mine at Berklee, have learned to hear randomness as a result of their experience composing with random elements. Many random generators have distinct patterns. For example, the LFSR can be interpreted as continuously either a) doubling in value, or b) doubling in value, then subtracting a constant. It’s also difficult to obscure randomness - it must be ‘shaped’. This is the idea behind the various “colors” of noise.
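Max’s “doubling, or doubling then subtracting a constant” description can be made concrete. Below is a minimal sketch of a left-shifting Galois LFSR; the 4-bit width and the tap mask (the primitive polynomial x⁴ + x + 1) are illustrative choices of mine, and the “subtraction” is really an XOR:

```python
def lfsr_step(state, taps=0b0011, width=4):
    # Double the state by shifting left one bit.
    state <<= 1
    # If a bit overflowed the register, fold it back in by
    # XOR-ing with the tap mask -- the "subtracting a constant"
    # step, modulo the difference between XOR and subtraction.
    if state >> width:
        state = (state ^ taps) & ((1 << width) - 1)
    return state

# Starting from 1, the register visits every nonzero 4-bit value
# before repeating -- the kind of pattern a trained ear can learn.
seq, s = [], 1
for _ in range(15):
    seq.append(s)
    s = lfsr_step(s)
```

The short, fixed period and the strictly multiplicative structure are exactly why this kind of “randomness” has an audible fingerprint.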

> Or if anyone would make the argument that instruments other than the computer (e.g., Impromptu) are also random?

I see randomness used differently in computer and e.g. electric guitar music.

In computer music, randomness tends to be an economic measure, a way to “stretch out” a pattern. It also tends to be employed nearly continuously, compared with more traditional live instruments. For example, in a guitar solo, there is often a tension behind how much the performer is willing to risk to expand the sound, based on his/her ability to “reel it back in”. Random notes sometimes appear briefly, in bursts. This is randomness that accompanies brief moments of panic or ecstasy, etc. In comparison, I perceive randomness in computer music more like the random noise around us, which we encounter when we measure cloud cover, or count many other things which we rarely perceive (it is noise, after all), such as the deviations in the number of people riding a bus from day to day.

That might sound like a “con” of computer music and a “pro” of acoustic music, but it’s not that simple. While learning to play an instrument, the instrument is largely an unknown and random space. Yes, our intuition can tell us where, for example, the higher notes vs. lower notes are on a guitar, but the difference between a tritone and a perfect fifth is a small step with much greater harmonic implications. They’re located next to each other on the guitar, but cannot be easily substituted for one another. This is particularly evident when learning to play for the first time, or when figuring out melodies at any stage in musicianship. How many of us have not swept through a four-chord progression, having compiled the first three chords, and searched for the “right” fourth chord to complete the sequence? Along the way, we hit many random notes. Even with experience, one may learn a scale, and make a decision between two intervals to resolve a melody, but with a degree of uncertainty about which one to choose.

This also points to an important difference between computer.random() and acoustic randomness: the same critically-confining architecture/shape which enables a guitar to work also has “sweet spots” and “dead zones.” Generally speaking, these are places which are flaneuristically sought after or avoided. I mention this, because I’m not only talking about sonic sweet spots, but also locations on the instrument which are physically easier and harder to play than others. Computer code (typically…) has no such inhibitions in its random function. A simple random function, like a dutiful feline, will attempt to delight you with a minor second interval just as peppily(*) as it offers up a unison.


Reply by Max Feinstein on March 5, 2010 at 3:19pm

John Cage (1912-1992), an American music composer who greatly influenced electronic music, incorporated some of the very same notions of abstraction as Sorensen employs. The transcript from Cage’s speech to his audience before a 1957 performance reveals some of these fascinating parallels between his style and Sorensen’s approach:

Those involved with the composition of experimental music find ways and means to remove themselves from the activities of the sounds they make. Some employ chance operations, derived from sources as ancient as the Chinese Book of Changes, or as modern as the tables of random numbers used also by physicists in research. Or, analogous to the Rorschach tests of psychology, [...] the total field of possibilities may be roughly divided and the actual sounds within these divisions may be indicated as to number but left to the performer or to the splicer to choose. In this latter case, the composer resembles the maker of a camera who allows someone else to take the picture. (Emphasis mine)

The first highlighted bit, about composers who “remove themselves from the activities of the sounds they make,” reads to me just like the OED definition of abstract (v.):

To draw off or apart; to separate, withdraw, disengage from…

I find that Sorensen achieves exactly this with his music. As he stated above, “[randomness] can also be used to abstract away detail without requiring a complete model of the underlying [...].” An example, and most striking to me, is the irony of a composer abstracting portions [...]: what I hear was written with intent by the composer, each sound [...]. His remark that “[abstraction] is hugely important in livecoding where your ability to implement complex processe[s ...]” reminds me of Daren’s discussion about the advantages of programming declaratively through a combinator library. It’s almost as if the composer who implements randomness [...] livecoding but painstakingly obvious in its operations. I think this idea is neatly illustrated by the last italicized portion of Cage’s remarks above.

Another bit of Cage’s speech that I find interesting:

Whether one uses tape or writes for conventional instruments, the present musical situation has changed from what it was before tape came into being. This also need not arouse alarm, for the coming into being of something new does not by that fact deprive what was of its proper place. Each thing has its own place, never takes the place of something else; and the more things there are, as is said, the merrier.

Cage’s concept of the interplay between conventional instruments and new instruments (which for him consisted of tape, among other things) can be extended to encompass the digital realm as well. In this case, I’m reminded of N. Katherine Hayles’s idea of intermediation – the transformative process that takes place at the intersection of the analog and the digital. When overlapping Haylesian intermediation and Cage’s calmness about the transformative process that occurs through medium changes (e.g., analog to digital), I imagine the act of producing electronic music to be colored with sort of serene and peaceful emotions. Just a side note, really, but the thought has made livecoding performances all the more enjoyable for me to watch.


Reply by Jeff Nyhoff on March 7, 2010 at 6:06pm

The theatrical side of my double-background is creating some “interference” for me as I read through these wonderful posts. Some of the notions of “live performance” that seem to be operating, here, run counter to the way they tend to operate within theatrical discourses. The “liveness” of “performance” turns out to be a delightfully slippery concept when one sets out to try to pin it down …

First, “improvisational” theatrical performances are seldom as loosely structured as they seem. They are usually very, very well rehearsed. Some elements may be shaped in the moment – e.g., by suggestions from the audience – but these elements are tightly constrained: by the way the invitation for input is framed, or by the existing circumstances and “contexts,” if you will. [...] experienced performers give the impression that the “random” elements are more difficult to accommodate than they really are for the skilled, experienced, and well-prepared performers.

In fact, this illusion of being “unscripted” and “unrehearsed” (what Stanislavsky called “the illusion of the first time”) is part of most conventional and popular theatrical forms. Persons who have never done any serious acting often ask “live” theatre actors [how they remember all those] lines. Part [of the craft is] making [the performance,] night after night, [give] the impression that it is “un-scripted.” [Might] well-prepared instrumentalists do this – pretend to be doing much more composing-on-the-spot than they actually are? It is especially challenging to see this as an extensively improvisational work when it is a solo work. There are no inputs being taken from the audience. There are no other musicians’ choices to accommodate, as in the case of a “jam” session.

Similarly, when I teach several sections of the same computing course and, in each, take students through programming examples by coding on the screen in front of them, it doesn’t take long before I can do most of the code without consulting a “script/listing.” In fact, at that point, it would actually be harder to try to slavishly follow an exact pre-scription of code, rather than proceed [...]. [I take] suggestions from the students regarding parameters, function names, etc., which helps make working through the programming example seem more improvised and participatory than it actually is. (And isn’t this the “end user” experience in general? The *illusion* of large degrees of freedom and of meaningful and substantive participation, when, in fact, we are tightly constrained in terms of our understanding, our actions, our expectations – including what we come to accept as a “pleasurable” interactive experience?)

In TV news broadcasts, classroom lectures, and many other performance contexts, audience engagement is predicated in part upon ke

          How to convert Apple Music to WAV        
Waveform Audio File Format (WAVE, or more commonly known as WAV due to its filename extension) (rarely, Audio for Windows) is a Microsoft and IBM audio file format standard for storing an audio bitstream on PCs.
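Since a WAV file is essentially a small header plus raw PCM samples, Python’s standard wave module is enough to produce one. A minimal sketch (the file name and tone parameters are arbitrary choices of mine):

```python
import math
import struct
import wave

# Write one second of a 440 Hz sine tone as 16-bit mono PCM.
with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)      # mono
    w.setsampwidth(2)      # 2 bytes = 16-bit samples
    w.setframerate(44100)  # CD-quality sample rate
    frames = b"".join(
        struct.pack("<h", int(32000 * math.sin(2 * math.pi * 440 * n / 44100)))
        for n in range(44100)
    )
    w.writeframes(frames)
```

Converters like the one described below do essentially this as their final step, after first decoding the source format to PCM.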
If you want to play your Apple Music on a WAV audio player, what should you do?
To solve the problem, you need an iTunes audio converter - Macsome iTunes Converter - to help you.
Referral reading: Reviews of Top 3 Audio Converter for iTunes.
With the iTunes Music Converter, users can convert any audio files in the iTunes library - including music files, downloaded Apple Music files, iTunes Match music files, and protected and unprotected audiobooks in the M4B, M4A, AA, and AAX formats - to MP3, AAC, or WAV easily and quickly.
Steps to convert Apple Music to WAV with iTunes Converter
Step 1. Download the latest version of iTunes Music Converter, then install and run it.
Step 2. Click the Add button to import the music files from the Music library of iTunes.
Step 3. Click the setting icon to set the output audio format.

Step 4. Click the CONVERT button to start the conversion.

After the conversion is completed, you can enjoy the WAV files without limitations.
You may want to read:
How to save Apple Music to External Hard Drive
How to transfer music files from iCloud Drive to Google Drive


          Mix from me        

knocked this up a couple of weeks ago
3 and a half hrs - plenty of different stuff in there
some new
some not so new
some oldies
even a few breaks!!

enjoy

https://soundcloud.com/kevincasey/kevin … y-2013-mix

When I Rock (Piemont Remix)    M.in    The Factoria (Factomania)
Thirteen Thirtyfive (Lee Foss & MK Remix)    Dillon    Bpitch Control
Blame It On The Youth (Kerri Chandler Club Mix)    Voyeur    MadTech
Spanish Pantalones (Hot Since 82 Remix)    Los Suruba   
Higher Level (GA's Love Is The Dub)    Alex Jones    Hypercolour
Veddel For Life (Original Mix)    Steve Clash    Kolorit Digital
Ignition Key (Original Mix)    Scuba    Hotflush Recordings
The Same Thing (Huxley Dub)    Baunz    This Is Music - More Music
Create Balance (Steve Lawler Remix)    Shlomi Aber    Ovum Recordings
Surya (Reprise)    Tim Deluxe    Get Human
Surya (Club Mix)    Tim Deluxe    Get Human
Clap Back (Original Mix)    Elon    Metroline Limited
Taped & Gorgeous (Nic & Mark Fanciulli Remix)    Shlomi Aber    Ovum Recordings
Roosted (Original Mix)    Massimo Cassini    Waveform Recordings
Who Made Up The Rules (Josh Wink Remix)    Agaric    Ovum Recordings
Revolution (Original Mix)    Butch    Cecille Numbers
Let The Show Begin (Original Mix)    Pig & Dan    Bedrock Records
The Warmest (Original Mix)    Egoism, Bazu    Leap4rog Music
Into The Darkness (Original Mix)    Pirupa, Hollen    Bitten
Carpe Diem (Original Mix)    Jay Lumen    Tronic
Crawler (Original Mix)    Johnny Kaos, Mattew Jay    Amazing Records
Sunkiss (Original Mix)    Technasia    Ovum Recordings
The Calling (Original Mix)    Siwell    Great Stuff Recordings
Crystals (Guy J Remix)    Cristior    microCastle
Karma (Original Mix)    Sidney Charles
Ultraviolet (Original Mix)    Denney    Hot Creations
Oasis (Original Mix)    Solee   
Handsome (Original Mix)    Sam Ball    Intec
Ur Around Me (Original Mix)    Chaim    Bpitch Control
Slick (Original Mix)    Ramon Tapia, Sandy Huner    Remote Area Records
NE1BUTU (SCB Edit) (Original Mix)    Scuba    Hotflush Recordings
Space Shanty    Leftfield   
Salamanca (Original Mix)    Kaiserdisco    KD Music
Phuture Bound (Ame Remix)    Akabu    Z Records
Mendoza (Lank Remix)    Mariano Favre    Perspectives Digital
Give Yourself to Me (Dub Mix)    Spektre    Iboga Records
NE1BUTU (Original Mix)    Scuba    Hotflush Recordings
Real Talk (Original Mix)    Touch Sensitive, Anna Lunoe    Future Classic
Methane (Original Mix)    Colombo    iBreaks
Fuck U (Original Mix)    Kuplay    Sound Break Records
White Mouse (Original Mix)    Adam Marshall    New Kanada
Black Rhythm (Original Mix)    Franco Cinelli    Bass Culture Records
PR812 (Original Mix)    Ikonika    Hum And Buzz Records
Blue Monday (Vandalism Remix)    Kurd Maverick    Ministry Of Sound (Germany)


          Knobs, Buttons and Dials        
The phonograph, also called gramophone or record player, is a device introduced in 1877 for the recording and reproduction of sound recordings. The recordings played on such a device consist of waveforms that are engraved onto a rotating cylinder or disc. As the recorded surface rotates, a playback stylus traces the waveforms and vibrates to…



          Paterson "resonant"        

by Nichola Deane

‘All were skies falling silent.’ In this way, Don Paterson, poet, guitarist, aphorist and editor, distils the nature of his ‘revelations’ in the opening salvo of The Blind Eye, one of his three collections of aphorisms. To paraphrase Octavio Paz, it is the job of the poet to show the silence, and indeed to become a ‘master of silence.’ Paz made these comments with reference to Elizabeth Bishop, but that title, ‘master of silence,’ is one that in just over fifteen years Don Paterson can surely be said to have earned. Prizes and praises have greeted every collection to date, but the accolade ‘master of silence,’ whilst it cannot be awarded, can and should be suggested. Paterson’s poetry is the real thing: it resonates out of silence and returns the reader to silence. In his poetry collections, Nil Nil (1993), God’s Gift to Women (1997), The Eyes (1999), Landing Light (2003), and Orpheus (2006), his unique poetic voice refines and purifies itself. It is a voice that has such a fiercely independent existence that critical commentary hardly seems needed. For this reason, I have merely tried to point, in three different ways, to the resonant silence of the poems, the river of absence that flows through them.

Trying to flow with the poems, I have not stopped to analyse themes; rather, the themes emerge if you read the lists of words I have gathered from each of the collections, starting with Nil Nil. Each ‘word-hoard’ is comprised of words in the poems that have reacted on me and which, when placed together, tell a story from, if not the story of, each book. Hell; The Road; the search for a ‘you;’ trains; spent desire; God; and, of course, silence are all themes to look out for.

Difficult as these poems often are in terms of their argument and their subject-matter, they are easy to trust. Their brilliance stems from their syntax, the thread on which each ‘word-hoard’ is strung, and therefore I make some brief comments that point towards Paterson’s gifts in this area. But more than this, his poems claim us through their cadence, and so from syntax the natural progression is to a capsule essay on metre and its music. If the poems convince because of their syntax, they seduce because of the way they sing; for no matter how dazzling or intricate Paterson’s ideas, the music in his poems is never wayward, and carries all pain and delight in it. Watch the water move.

i) Word-hoards

Before you read or re-read Don Paterson’s poems, try this miniature compendium of words garnered from his various collections, not for size but for sound. Roll around a few of these gems on your tongue, against your palate, between your teeth: ‘gurry, winterbourne, rancour, Hilltown, futterin, shadows, lyre, ticktock, woodsmoke, whitewing, blackedged, gracile.’ How does that feel? Now try it at half the speed of your first attempt, making sure you say each word aloud and that there’s a slight pause between each one. For each comma read in a heartbeat. Keep track of your pulse, your breath. Let the words resolve into morphemes and almost lose their meaning: let them simply vibrate.

These poems are sounds that walk ahead of you. Their fabric is stitched with an array of threads. In Nil Nil, Paterson’s word-hoard reveals a hunger for every texture you can think of, from ‘lino’ to ‘gurry,’ and every kind of register from the obscene (‘cunt’) to the divine (‘irenicon’).

Tongue, pish, Murphy’s, black, guck, gurry, winterbourne, jism, tenement, Fomalhaut, ictus, bodhran, scything, cloaca, epicene, cunt, resonance, blind, lino, hare-lip, sick, gibbered, tenement.

Nil Nil begins with a solitary game of pool in The Ferryman’s Arms and the words you see here are little oil lamps in the gloom. Music, violence, poverty, religious mania, sickness and desire are some of the subjects explored, and the poetry feels like a sick pleasure, perhaps a spilt self.

lacuna, concatenation, Leucotomy, Origamian, pollen, junkie, golden, lifetime, sweetpea, loop-tape, weight, pricktease, hieratica, heart.

In every book of Paterson’s, the word ‘heart’ appears, and the heart of this book thuds like a bodhran played at battle speed:

glare, monochrome, half-lotus, balletic, kickabouts, Clatto, shanty-town, Tayport, Carnoustie, irenicon, wind, cloud, Venus, haar, nirvana, goodbye.

Even out of the element of their poems and forced into a morganatic union with other bedfellows, these words, in roughly the order they appear in the book, have a shuddering, debased grandeur, as though Paterson can gut a word like ‘pish’ and turn it inside out, revealing its acoustic swim-bladder, its sound-skeleton. How does he do that? Well, he both tells you and doesn’t tell you:

Thrown out in a glittering arc
As clear as the winterbourne,
The jug of Murphy’s I threw back
Goes hissing off the stone.

Whatever I do with all the black
Is my business alone.  

(‘Filter’)

Alchemy is Paterson’s business, and he delights in any and all materials. As with the ‘pish’, so with ‘cloaca’, ‘kickabout’, ‘tenement’: black to gold is always the trajectory. It is a matter of physiological process—and a mystery. The process involves fortuitous meetings of two or more words gathered together, words finding each other as lovers, disciples and congregations might, the words breaking their boundaries to belong, to sink into each line. What assurance: to begin this way, with all these dark sounds igniting and blazing! ‘Tongue’ is one of Nil Nil’s opening words and ‘Goodbye’ is its last, as if by the end of his first collection, Paterson is already doing a disappearing act. But then he starts Nil Nil at ‘The Ferryman’s Arms’ with a coin already on his tongue.

Where do you go from there, if Charon is already waiting and the meter is ticking? If you are Paterson, you go AWOL looking for what was lost before you reached the crossing-place: a brother, mother, sisters, the Horseman’s word and a Scheherezade or two. Yet, rifling through the word-hoard of God’s Gift to Women, the book begins to look like a strange affirmation of faith.

Church, butterscotch, rancour, heartburn, pray, Caird, lochan, Kemback, kiosk, saint, stalled, thorn, mother, carcass, Fetherlite, harem, sea, weight, vortex.

We begin in the ‘little church’ of poetry, a base camp for journeys backwards in time: to lost and perfect days in childhood out by the lochan; to days that a lost brother never lived; to stalled nights; and to what is lost at sea and in the storm, and then

Messiaen, Hilltown, florins, charred, Macalpine, mothers, Cocteau, kiss, black, Ladyburn, chlairsach, North British, whisky, Scheherezade, beggar, fuck, futterin, Hameseek, furrow,

Coins again, drinking, sex and music take us on a long and squalid but beautiful binge until we see

Venus, morganatic, sleekit, bleeding, death-camp, mother, cock, Cerberus, singer, engines, angels, SPONG, phthistical, spanking, breeks, wank, innocence, Wolflaw, tallow, shadows, faith.

By the close, it is morning; morning brought coughing into life with a couple of Nurofen and a pint of coffee. The same Anglo-Saxon dirt is here (‘cock,’ and ‘fuck,’), the Dundee sorrow (‘kiosk,’ ‘Hilltown,’ ‘Macalpine’) and the nightmare (‘death-camp,’ ‘Cerberus,’ ‘Wolflaw’) but there are grace-notes too. Uplift comes from ‘saint’ and ‘singer,’ ‘kiss’ and ‘angels.’ No more comforting or comfortable than the Nil Nil doctrine, God’s Gift nonetheless makes notable adjustments of texture, giving us, in a number of poems, a world to aery thinness beat. A deity is invoked, and the poet gazes up, breaking his focus on the abyss.

What he sees when his gaze shifts falls on us like a sunshower in The Eyes where everything and nothing is his. The Spanish poet Antonio Machado is his master here, his giver of breath and bread. For the first time, Paterson makes versions—not translations. In his versions of Machado’s poems, the fabric of the words is gossamer, or lighter. Words begin their return to breath.

Wait, drink, Buddha, Cain, lyre, breath, rainbow, eyes, sea, heartless, Lord, Guadarrama, obol, desire, forget, desire, knots, heart, shoreline, silence, salt-grains, honeycomb, hour, beloved, dust.

Where are we now? Although the location is the Spain inhabited by Machado, the words gathered here do not belong to any one country. Rootless, they sound like a heart or a river rising:

Andalusia, hosannas, ticktock, bells, no, anchor, work, Bergson, salto inmortal, sea, weep, quiver, nothing, woodsmoke, dream, Name, ashless, ripe, parched, you, you, lover, starless, Christ.

Clocks and bells whirr and chime, but time slips past (there is no anchor in these Machado poems) and even the sense of ‘I’ loosens. The poems hunt out an ever-receding ‘you,’ travelling far into

NIHIL, silver, black, black, heart, hinge, lilies, sing, water, rock, river, pulse, orange-trees, shore, desperate, she, evening, Heraclitean, gathering, zero, oblivion, Machado, absence.

So we ‘wait’ and our reward is ‘absence,’ and this book seems to rest in that absence. Not only rare fauna like ‘obol’ sing (note: a word for Charon’s currency inhabits every book in our journey so far) but words like ‘rock’ and ‘river.’ And not only rock and river cantillate. Even an abstract imperative like ‘wait’ is rendered weightless and airborne: ‘wait as the beached boat waits, without a thought /for either its own waiting or departure’ (‘Advice’). ‘No’ and ‘not’ arrive in you with the force of their heterographs ‘know’ and ‘knot.’ A great ‘no’ is all we know here, and what is ‘not’ is always before us as we read, a slip-knot of longing. Indeed, longing for the beautiful is all that seems to hold some of these poems on the page: the salt-grain’s ache for ‘honeycomb’ and the heart opening like a hinge for more of the lilies, orange-trees, and bells.

In the gathering, scented dusk of The Eyes, it is easy to forget the bleating ‘Mooncalf’ of Nil Nil or the female ‘drunken carcasses’ littering God’s Gift. But Paterson does not, and the shadow-river pulsing below the river returns in his next work, Landing Light, like a bradycardiac bad dream. If The Eyes felt like a Paradiso of sorts, Landing Light loops back to hell—but with moments of blissful surfacing to draw breath. 

Luing, motherland, catholicon, whitewing, work, wingspan, minuscule, boked, Strophades, she, silent, worm, shite, sons, roses, wet, knuckled, pearl, ochre-pink, hawk, cave, Leda, wolfing, wives,  

We begin in a heavenly Scottish landscape—in poems like ‘Luing,’ a remote island becomes as weightless and heavenly as Machado’s Spain— but we quickly rebound from heaven into hell. The middle of the book is largely a place of the skull:

Hindemith, Gromit, rose, worm, Scheherezade, Sodom, pisshole, bricht, luthier, skull, Padmasambhava, delete, heart, blood, lover, triste, arse, feedback-loop, oxter, ochone, malebolge,

And yet, no matter how deeply into the abyss we travel, our guide will lead us back into the upper air, after a spot of purgation:

No No, Babel, ayebydan, loins, thigh, ear, Sika, ecstatic, damn, facsimile, begging-bowl, lyre, alibi, ken, sternless, birk, alane, Mother, girl, wonder, blackedged, love. 

Hell is in the ‘malebolge,’ the ‘pisshole’ stare of the poetaster, the ‘feedback-loop’ of torture. But the bliss! The bliss is animal and sexual (‘wet’ and the mildly sadistic-sounding ‘knuckled;’ ‘pearl,’ and the light, delicious consonantal kick of ‘ochre-pink’). The bliss is also linguistic: Scots rises in Landing Light like a spring in three poems: ‘Form,’ ‘Twinflooer,’ ‘Zen Sang at Dayligaun.’ Not the invented Scots of MacDiarmid, this is something that feels both pentecostal and truly spoken: a currency that is exchanged in a place (a living, breathing Scotland) but escapes out of time, into the unbound realm of Dasein und Engeln, being and angels.

            An’ there’s nae burn or birk at aw
            But jist the sang alane

                                                            (‘Zen Sang at Dayligaun.’)

Pure Rilke in its dissolve, this is also a pure-sounding Scots, not walking softly on the land but flowing and singing across its surface like a caress. ‘Burn’ and ‘birk’ are not Scots exotica, but the plainest and arguably the loveliest metonyms you could use to stand for the Scottish landscape. But these are also literary metonyms. The burn’s fiery water fluting home brushes past Hopkins; birch trees flexing swing forever in the direction of Robert Frost: two words alliterating gently out of the soil and into song.

Ah, song. The word takes us straight to Orpheus, Paterson’s versions of Rilke’s über-poems. Rilke’s sonnets, once read, can enter the reader as if there were no other real poems in existence, so powerful and seductive is their music. After Rilke, even the word ‘tree’ becomes a song in itself (had you really known what a tree was until Rilke set the word ringing for you?); words like ‘mirror’, ‘mouth’ and ‘sigh’ go forth and fructify in entirely new ways; these hungry little words, so pure and unexpectedly vast. Or, at least, this new way of seeing and hearing occurs if you have read Rilke as Paterson evidently has:

Tree, lyre, girl, death, arose, crossroads, sigh, heavy, heavier, spaces, true, song, belong, willow, pitcher, wine, herald, lament, lyre, mouths, hesitance, spurring, reining, praise, O, reach, bestows,

Each word is rootless, as in the Machado poems, but here the tone is weightier, the nouns earthy; the Orpheus keynote of ‘lyre’ returns and returns to set every other word echoing:

apple, leaf, fugitive, juice, curse, pelt, ascent, lyre, perfected, drumbeat, blue, baptised, gracile, maenads, rocks, seas, eyes, losses, mirrors, kiss, beast, negate, invoke, torture, departure, glass, 

Hints of fecundity, of ‘fruit’ and ‘juice,’ follow Rilke into sensual celebration, brief though this is, as the lyre continues its song of departure:

shatters, meadow-brother, wind, heals, spent, balance, dancer, blur, gold, heart, grief, axe, bough, danger, star, lone, choir, one, lyre, impermanence, lyre, true, dark, crossing, I, flow, am.

Smooth as wave-worn pebbles, the Orpheus word-hoard is full of rounded sensual treasures: the tiger’s eye of ‘pelt,’ the tourmaline of ‘pitcher.’ They fall into two groups, words that spur us on like ‘shatters,’ ‘baptised,’ and ‘juice;’ and words that rein us in like ‘sigh,’ ‘grief’ and ‘willow.’ And yet, how little difference there seems to be between spurring on and reining in: in either case, the energy of the word, be it noun or verb, pulsates, a newborn image.

Only listening is necessary to catch the image and watch it melt, to say the sound and watch it run. But as Rilke takes pains to point out in the Duino Elegies, listening is no easy matter: the purest listening is a kind of emptying out in which the listener does not remain. No one hears; no one is left to hear; there is only hearing. Whatever the arguments about the nature of translation and the creation of versions, the poems Paterson resolves onto the page in Orpheus are made of that luminous hearing: never once in this collection do the individual words sound less than notes coaxed from the lyre.

ii) Syntax

But words are never individual. If Paterson’s words sound as though they come from the lyre, in Orpheus, Landing Light and elsewhere, it is because they flow and move in a particular way: it is because of their syntax. Paterson, the lover of words, is easy to find: name ‘pish’ or ‘grief’ or ‘juice’ and you feel you have him there before you. Evoke Paterson’s relationship with syntax, however, and he starts to run through your fingers.

Paterson is not a ‘lover of syntax’ or a ‘master of syntax.’ Syntax is something that our minds are in, the slipstream of our thought. Paterson is mastered by syntax, as every true poet should be. Think of that line from Hopkins, ‘Thou mastering-me-God,’ where God and poet are part of the same noun phrase; fighting, embracing, and indistinguishable, the ‘me’ subdued by a greater force.

Nil Nil’s murk is marked by attack of phrasing: ‘I’d swing for him, and every other cunt/happy to let my father know his station.’ Aggressive, ‘blunt’ and passionate but speaking, with the swing and punch of verbal combat, as he picks up tired figures of speech (‘swing for him’ ‘know his station’) and puts his lips to them. There is no violent conjunction of phrase here, and therefore no knowing wink directed at the reader. Instead, the old phrases slide into the new, and the fit is perfect.

All that happens to the syntax as Paterson progresses towards Orpheus is a process of tiny, critical refinements. Sentences flex and embrace more and more, as Paterson’s work attains ever lovelier syntactic complexity. Letting you carry idea upon idea at the same moment, his sentences nonetheless do not make you suffer under their weight. Like a gifted ballerina, these phrases know how to hold themselves so that they become light enough to lift:

            I carefully arrange a chain of nips
            in a big fairy ring; in each square glass
            the tincture of a failed geography,
            its dwindled burns and woodlands, whin fires, heather,
            the sklent of its wind and salty rain,
            the love-worn habits of its working-folk,
            the waveform of their speech, and by extension
            how they sing, make love, or take a joke.

                                                                        (‘A Private Bottling’)

One sentence, this contains a bouquet of clauses resolving in a soft bass-note rhyme in which the reader carries a place, a people and all the intangible waveforms of their existence—and doesn’t stagger. Quite the opposite: the clauses leap but you hardly hear them land, you just feel the way they earth as a kind of rightness. Dance terms such as ‘ballon’ and ‘line’ spring to mind here for the way the sentence articulates (‘ballon’ being the illusion of weightlessness given when a dancer jumps; ‘line’ being the way a dancer has of making the curve of limbs symmetrical and beguiling to look at). But in fact we are running out of metaphors that get us anywhere close to understanding the enchantment of Paterson’s syntax. There is only one place to go beyond syntax: music.

iii) Metre and music

Syntax, the language-world we inhabit, is full of patterned noise. The patterns we find ourselves in and being used by jar and loop, clank and squeal—for the most part. Perhaps because of the weight of this noise, the music of a poem can strike us so forcefully that the unintelligible world seems momentarily suspended. I could list, now, instance on instance of Paterson’s music. But list them is all I can do. The music of a line is a complex experience, the marriage of more than acoustic pattern, syntactic grace and verbal acuity. It is also what happens when that complex of ideas and harmonies rains into the waveform of the reader’s life—transforming it. The note is struck that sounds an echo. So, receive these samples and let them resonate, as Paterson has.

‘So take my hand and tell me, flesh or tallow.
Which man I am tonight I leave to you.’ 

‘then swallowed its shout

 in the cave of my breast’

‘the vanished trail of your own wake’

‘Silent comrade of the distances.’

Receiving goes many steps beyond reading; words that resonate overstep their borders. So often, these poems step off into ‘the distances,’ and reading them means going with the music, chasing the echo. If you leave for the distances, you won’t find Paterson, who disappeared from his own poems long ago, but you might find yourself better able to listen – listen completely, as Paterson’s mentor Rilke recommended.

Nichola Deane, 2009


          PlugInGuru releases OmniPulse 2 | Perspektiv – 101 BPM/ARP Waveform based Synth Multis/Patches for Omnisphere 2.1        

PlugInGuru.com has announced the release of OmniPulse 2 | Perspektiv for Omnisphere 2.1. Here’s what they say: In December of 2013, John “Skippy” Lehmkuhl released OmniPulse Vol 1 which has [Read More]

The post PlugInGuru releases OmniPulse 2 | Perspektiv – 101 BPM/ARP Waveform based Synth Multis/Patches for Omnisphere 2.1 appeared first on Pro AudioZ.


          Phaser Coding Tips 8        
Phaser Coding Tips is a free weekly email – subscribe here. Welcome! In the last issue we covered how to make a bullet pool and various shoot-em-up weapon types. This time we’re exploring a way to create waveforms, or paths for your baddies to follow. Motion paths don’t apply to just shoot-em-ups of course. They’re […]
          Traktor Certification for Rane MP2015 Mixer        

Rane is pleased to announce our collaboration with Native Instruments to bring you Traktor certification for the new Rane MP2015 rotary mixer. This incredible mixer is generating a lot of excitement on its own, but now with Traktor Scratch certification people are freaking out, as this is the first Rane mixer to receive it! Now, Traktor Scratch users can control two or more virtual decks with Traktor control vinyl or control CDs. It is also the first Traktor Scratch certified mixer with dual USB ports, allowing easy back-to-back DJing and quick changeovers.

Setup is easy with the MP2015's class-compliant Core Audio drivers for Mac. Windows users simply need to install the included ASIO driver. The MP2015's control surface is MIDI-mappable to Traktor, giving you software control directly on the mixing console. Traktor isn't bundled with the MP2015, but it is available for easy download from the Native Instruments website: www.native-instruments.com

The #futureofdjing has never been so bright!

 

 

Traktor Pro 2.8.0 Release Notes

 

TRAKTOR PRO 2.8.0 contains some substantial changes to the core of the software, along with numerous other feature enhancements, improvements, and bug fixes. The list below provides a comprehensive overview of all changes since the last public release (TRAKTOR PRO 2.7.3), including the improvements in the two Public Beta releases (TRAKTOR PRO 2.7.4 and 2.7.5):


1.1. 64-Bit Application Architecture

TRAKTOR PRO now has a 64-bit architecture. Making TRAKTOR PRO a 64-bit application allows it to access all the available RAM on computers with 64-bit operating systems; previously, TRAKTOR PRO could only access a maximum of 2GB of RAM regardless of how much RAM was actually installed and available on the computer. Giving TRAKTOR PRO access to more RAM increases the performance of the software by allowing it to manage more items (larger Track Collections, more Remix Deck samples, better playback caching, etc.).


Are you still using a 32-bit version of Windows? With this release, the 32-bit version of TRAKTOR PRO can now also access an additional 1GB of RAM (if the computer has it available) providing additional performance and stability.


ATTENTION 64-BIT WINDOWS USERS: If you are using an audio interface which only has 32-bit drivers, be sure you are also using the 32-bit version of TRAKTOR PRO in order to access the low-latency ASIO drivers for the audio interface, otherwise your audio interface won’t be selectable in the TRAKTOR PRO Preferences. If you have a 64-bit operating system, the Installer will have installed the 64-bit version of the application by default and you will need to take a few extra steps to run the 32-bit version detailed in the readme.


1.2. Multi-Processor Improvement

A significant update has just been made to TRAKTOR PRO where multi-processor support is concerned: TRAKTOR PRO’s old audio threading model has been completely updated and optimised. For users of multi-processor computers who were experiencing degraded audio performance since the release of TRAKTOR PRO 2.7.0, the new threading model should fix these issues automatically.


1.3. Automatic Deck Flavor Switching

TRAKTOR PRO now features full Automatic Deck Flavor Switching for all users; previously, this was only working when loading items via the KONTROL S8 Browser. Now, loading or dragging a Track or Remix Set onto a Deck will cause the Deck to automatically switch to the appropriate flavor to play the content.


1.4. Parallel Audio Analysis

Also new in this version is a special analysis mode called “Parallel Processing”. This option can be found at the bottom of the Analysis window which appears when you right-click on tracks and choose “Analyze (Async)” from the context menu. If you enable the Parallel Processing checkbox before clicking “OK”, TRAKTOR PRO will then use multiple threads to process many tracks simultaneously. Our tests show that processing a large collection of files can now be done three times faster with this option enabled. Be aware, however, that TRAKTOR PRO will use lots of your computer’s resources to do this and it may affect playback of tracks. We therefore only recommend using this feature in an offline situation rather than during a live performance.


1.5. Support for the New TRAKTOR KONTROL D2

TRAKTOR PRO 2.8.0 supports the new KONTROL D2 hardware controller. The D2 will be officially released on May 4th, 2015.


1.6. Spin, Scratch & Hold a Playing Deck with KONTROL S8 or D2 Touchstrip

New is a preference for the KONTROL S8 and KONTROL D2 which changes the SHIFT-behavior of the Touchstrip while a Deck is playing. Previously, holding SHIFT and touching the Touchstrip would perform an Absolute Seek (the Deck’s position would jump to the location corresponding to the touch on the Touchstrip). With this new preference enabled, this behavior is changed so holding SHIFT will allow you to perform spins, scratches, and holds with the Touchstrip while the Deck is playing.


NOTE: Backspins are enhanced by the fact that TRAKTOR PRO will stop the spin as soon as you release the SHIFT button. You can therefore perform a backspin effect for 2 beats by turning on FLUX mode, holding SHIFT, and swiping backwards on the Touchstrip. Two beats later, release the SHIFT button and the spin will stop and normal playback will resume right on the beat you desire.


1.7. KONTROL S8 and D2 Beat Grid Edit Mode Zoom

When enabling Beat Grid Edit Mode on the KONTROL S8 or KONTROL D2, the left-most Performance Button will now be active. Pressing this button will zoom in on Beat 1 allowing you to set the position of the Beat Grid with greater precision. Press the button again to exit the zoom.


1.8. KONTROL S8 and D2 Position-Aware Beat Grid Tempo Adjustment

When in Beat Grid Edit Mode on the KONTROL S8 or KONTROL D2, the two center Performance Knobs are used for adjusting the Tempo of the Beat Grid—the left knob is a coarse adjustment while the right knob is a fine adjustment. The new improvement is that these knobs are scaled based on the viewing position of Beat Grid Edit Mode so that adjustments made far away from the Grid Marker don’t result in abrupt changes to the waveform position. For example, if you are near the Grid Marker at the start of a song and change the Tempo of the Beat Grid, you will see the waveform move under the Beat Grid by a particular amount. If you then scan later into the track, adjusting the Tempo will create a similar amount of motion on the waveform (rather than a large amount of motion) thus allowing for precise setting of the Beat Grid Tempo over the length of the song.


1.9. KONTROL S8 and D2 MIDI Controls

We've added a new feature to KONTROL S8 (which is also available on KONTROL D2) that allows you to use the Performance Knobs, Performance Buttons, and the Performance Faders below the Displays as MIDI output controls. You can therefore use these controls to send MIDI messages to other software or external gear. This feature is not enabled by default and requires some configuration, which is detailed in the release notes.


1.10. Pioneer CDJ-900NXS and XDJ-1000 Integration

Full native support for these two Pioneer players, including all available new functions provided on the XDJ’s touch screen interface, is now integrated into TRAKTOR PRO.


1.11. Rane MP2015 Scratch Certification


The Rane MP2015 rotary mixer is now Scratch Certified and can be used as an audio interface for TRAKTOR PRO in conjunction with Timecode vinyl and/or CDs.

 

 


1.12. Additional Bugfixes
 

Beta #2 (2.7.5) had a problem with FX routing modes—this issue has now been fixed.

Beta #2 also had a problem with playback of long M4A (AAC) files in the 32-bit version of the application. This has been fixed.

Beta #2 sometimes exhibited crackling when loading new tracks into Decks. This version resolves this issue.

Beta #2 could have high CPU spikes when used on some low-performance systems. We have made a change which prevents this.

We fixed a problem where TRAKTOR PRO would unnecessarily update the tags of tracks which are in the Preparation List at startup.

An issue was reported where TRAKTOR PRO could hang when accessing the Explorer node of the Browser Tree. This issue has been fixed.

At the same time, we also fixed an issue that could cause TRAKTOR PRO to crash when opening an Archive folder containing over 2500 .nml files.

We fixed an issue where TRAKTOR PRO would unnecessarily update all file tags when clicking on a Playlist .nml in the Explorer node of the Browser Tree.

We also fixed an issue where TRAKTOR PRO would unnecessarily update file tags when deleting items from a Playlist.

We fixed the CPU Load spike that can sometimes occur when engaging Keylock or Filter for the first time.

There was a problem where you sometimes couldn't re-order tracks in a Playlist without first clicking the "#" column twice and this has been fixed.

The Battery Indicator in TRAKTOR PRO's header was broken on some 64-bit systems and wouldn't show the battery level. This is now fixed.

An improvement has been made to MP4 (AAC) audio handling on Windows which should remove crackling during playback of these file types.

When adjusting the Master Clock Tempo via the S8 or D2, we have removed the "Hold BACK to Reset" text since this function wasn't valid for the Master Clock.

This version contains a fix for some crashes on startup which were part of the first Beta (2.7.4).

There were also reports of crashes or hangs on shutdown on some Windows systems and we have made a fix for it.

A bug was reported in 2.7.3 where a Deck would stop when loading a track into the playing Deck regardless of preference settings. This issue has now been fixed—loading a track into a playing Deck will leave the Deck playing so you immediately hear the newly-loaded track.

We also fixed an issue where jumping out of an Active Loop via a HotCue would disable the Loop—the Loop will now remain active when doing this just like in TRAKTOR PRO 2.6.8.

Fixed a problem where memory corruption could occur when browsing and sorting the Explorer node under very specific conditions with the S8 or D2.

Lastly, we fixed two issues which occurred when making changes to tracks (such as changing the track Rating) in the Explorer node while that same track was already playing in a Deck; doing so could result in the analyzed tempo being lost (causing the track to fall out of sync) or removal of the “played” checkmarks from the track. These issues should no longer occur.


1.13. Controller Editor

Controller Editor has been upgraded to version 1.8.0.262 to support KONTROL D2 in addition to other improvements and bugfixes.

 

About Native Instruments

Native Instruments is a leading manufacturer of software and hardware for computer-based music production and DJing. The company's mission is to develop innovative, fully integrated solutions for all professions, styles and genres. The resulting products regularly push technological boundaries and open up new creative horizons for professionals and amateurs alike. www.native-instruments.com

 

 


          Re: Why Vinyl Sucks and You Know It!        

I know for a fact that vinyl records have distortion built into them to keep the needle in the groove. As the needle gets closer to the center of the record, high frequency sounds are attenuated to keep the needle from skipping.

I worked with an engineer at RCA that worked in this field. Hipsters are idiots, and audiophiles are frauds. They don't know what they are talking about.

It's easy enough to use what is called a convolution integral to make the same distortion you hear on a vinyl record with an MP3. You know why it's not done? Because nobody actually likes to hear vinyl records. It's something for vacuous idiots to be impressed by. That's all.
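For illustration only (this is not the poster's actual method, and the kernel below is made up): in the digital domain, the "convolution integral" becomes a discrete convolution of the audio samples with an impulse response. A minimal sketch, assuming NumPy and using a toy moving-average kernel to stand in for a measured vinyl playback chain:

```python
# Illustrative sketch: applying a "vinyl"-style coloration to a digital
# signal by convolving it with an impulse response.  The kernel here is
# a made-up 5-tap moving average that crudely rolls off high frequencies,
# mimicking the treble attenuation near a record's inner grooves.
import numpy as np

def apply_coloration(signal, impulse_response):
    """Convolve a 1-D audio signal with an impulse response."""
    return np.convolve(signal, impulse_response, mode="full")

kernel = np.ones(5) / 5.0                          # low-pass stand-in
tone = np.sin(2 * np.pi * 0.25 * np.arange(100))   # high test tone
colored = apply_coloration(tone, kernel)           # attenuated output
```

In practice the impulse response would be measured from a real turntable chain rather than invented; the point is only that the filtering itself is a single convolution.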

Also, analog doesn't convey more information, even in theory. All information is binary at the quantum mechanics level. Digital simply sets the bar for error, and keeps it there, forever. Analog adds error on every duplication because every duplication creates random noise.

People don't know how good they have it today. A DVD back in the 1990s had better visual fidelity than what was available in a theater in 1980. A CD used 10-bit digital (effectively) - that means the error rate at maximum was 0.1%. You can't hear that, even if you claim to be able to hear that.

What "audiophiles" are complaining about isn't that there is more distortion in digital reproductions, they are complaining there isn't enough. They are just too lazy to actually look at waveforms and understand the math, or too stupid.

There's a reason that CDs and later MP3s destroyed the market for vinyl records - it's because they are better, and despite the protests of a few stupid luddites who don't understand what they are talking about, much less what they are actually hearing, it's been a great success.

I wonder if there was a similar controversy when the move to vinyl records happened, over the cylindrical phonograph. Probably, there's always been morons.


          Power Tip 71: Synchronous Boost Labors the High-to-Low Transition        
There are a lot of papers out there describing the switching waveforms in a synchronous buck regulator. However, there are not many for the boost.
          Epson Pro Cinema 1080 UB        

by Evan Powell

Price as reviewed $3,999.99

Epson has been making LCD-based home theater projectors for over five years now. The line started with the industry's first 1280x720 resolution model, the TW100, which was released in the summer of 2002. That unit has been followed by a line of newer, better, and cheaper projectors that have appeared periodically ever since.

Epson not only makes projectors, but they also manufacture the LCD panels that go into them. That puts Epson in a unique competitive position in the marketplace, since other vendors like Panasonic, Mitsubishi, and Sanyo all use Epson LCD panels in their products as well.

Epson home theater projectors have traditionally been good and dependable, but never quite leading edge in terms of price/performance. I've always had the feeling that they were holding back a bit in the design and marketing of their own home theater projectors, perhaps so as not to undermine the wider distribution of LCD panels to their corporate clientele. If that was indeed Epson's thinking, that strategy appears to have changed with the recent release of the Pro Cinema 1080 UB, the Home Cinema 1080 UB, and the entry level Powerlite Home Cinema 720. These three units are without question the most formidable competitors ever released by Epson in the home theater projector market, and they are right there on the leading edge of price/performance.

Differences between Pro 1080 and Home 1080

This review focuses on the Pro Cinema 1080 UB and includes notes on the Home Cinema 1080 UB. For all practical purposes, these are the same physical projector internally. But they are packaged, priced, and distributed differently. We have used a sample of the Pro version for this review. The actual differences between the Pro Cinema 1080 UB and the Home Cinema 1080 UB are as follows:

• The Pro version is black, and the Home version is white.
• The Pro is priced at $3,999.99
• The Home is $2,999.99
• The Pro comes with a ceiling mount and spare lamp, whereas the Home does not.
• The Pro has a 3-year warranty, and the Home is 2 years.
• The Pro model features an Imaging Science Foundation (ISF) certification.
• The Pro model is sold by resellers who are trained to install, calibrate, and support the unit. The Home model is sold by resellers who typically do not offer this level of support.

Product Overview and Observations

The Cinema 1080 UB is a relatively small home theater projector with a form factor that is wider than it is deep (almost 16" wide and 12" deep). Fan exhaust is out the front left corner as you face the unit. The design makes it particularly convenient for mounting on a rear shelf.

The projector has manually controlled vertical and horizontal lens shift. Vertical lens shift allows a total movement range of almost three picture heights (2.9 by our measurements). This is more than ample for both rear shelf mounting and for most ceiling mount situations. It is about as much vertical shift range as we have ever seen on home theater projectors.

[NOTE: In the review as initially posted, we erroneously reported a vertical shift of two picture heights, and noted that this range was restrictive in ceiling mount situations. We also noted that the vertical lens shift range was less than the Panasonic AE2000 and JVC-RS1 in the comparisons with those units below. In fact, all of these units offer about the same vertical shift range. This review has been updated with the correct data as of 1/23/08. EP]

A key advantage is that the Cinema 1080 UB is a very bright projector with a range of brightness options, so you can adapt it to your particular room, screen size, and intended use. It is rated at 1600 ANSI lumens, and believe it or not, in its brightest operating mode ("Vivid") we measured exactly 1600 ANSI lumens with the lens set to its widest angle configuration. I can count on one hand the projectors that have measured at or above their rated lumen spec since we started reviewing projectors in 1999.

The Vivid operating mode is fine for a Super Bowl party, but as usual you trade color accuracy for extra brightness. If you want better color, opt for Cinema Day or Cinema Night modes. Cinema Day produces a whopping 800 lumens, and Cinema Night delivers a still very bright 470 lumens. These measurements are, again, with the lens in wide angle position.

The Cinema 1080 UB has a long zoom range, 2.1:1. The good news is that it gives you great flexibility in throw distance for any desired screen size: you can light up a 120" diagonal screen by placing the projector anywhere from 12 to 25 feet from the screen. The bad news is that when you move it to maximum telephoto, you sacrifice about 45% of the projector's maximum light output. For example, Vivid drops from 1600 to 870 lumens just by shifting the lens from wide angle to telephoto. That's not unusual for a 2x zoom lens, but it means that installation of the projector must be done with consideration for the screen size and operating mode that are anticipated. If you are going to be operating in Cinema Night mode, using the extreme telephoto end of the zoom will drop light output from 470 to about 260 lumens. That in turn would limit the screen size you'd want to go with, and/or it may argue for the use of a higher gain screen.
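To make the trade-off concrete, here is a back-of-envelope sketch of my own (with the simplifying assumption that light loss varies roughly linearly across the zoom range; a real lens won't be exactly linear). The endpoint figures are the review's measurements for each mode.

```python
# Rough lumen estimate between the lens's wide-angle and telephoto
# extremes, assuming (my assumption, not the review's) linear fall-off.
def estimated_lumens(wide_lumens, tele_lumens, zoom_position):
    """zoom_position: 0.0 = full wide angle, 1.0 = full telephoto."""
    if not 0.0 <= zoom_position <= 1.0:
        raise ValueError("zoom_position must be between 0 and 1")
    return wide_lumens + (tele_lumens - wide_lumens) * zoom_position

# Vivid mode: 1600 lumens wide, ~870 at full telephoto (about 45% loss)
print(estimated_lumens(1600, 870, 1.0))   # 870.0
# Cinema Night: 470 lumens wide, ~260 at full telephoto
print(estimated_lumens(470, 260, 1.0))    # 260.0
```

A mid-zoom placement would land between the extremes, which is why the installer's choice of throw distance matters as much as the mode selected.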

Therefore, despite the added complications of ceiling mounting, you may indeed wish to opt for a ceiling mount to get the projector closer to the screen rather than setting it back on a rear wall. If this sounds a bit confusing, professional installers can help sort it all out for you, which is one of the benefits of buying the Pro version from them rather than buying the Home version and doing it yourself.

Without a doubt the most sensational specification on the Cinema 1080 UB is the 50,000:1 contrast ratio; at this writing, this is the highest contrast ratio claimed for any home theater projector on the market. This is achieved with the action of an auto iris, which changes from scene to scene: in a bright scene the iris opens to boost highlights, and in a dark scene it closes to achieve deeper blacks. The native contrast spec on this unit is 4,000:1, which is the contrast range it can achieve within any given image frame.

The important question is, what does it really look like? The answer is that it looks remarkably good. The combination of the latent contrast and action of the auto iris delivers much more apparent contrast than we would have imagined possible. In Cinema Day mode, overall apparent dynamic range comes within a hair of matching that of the JVC DLA-RS1 when viewed side by side. Of course, the DLA-RS1 is a pricier projector with a much higher native contrast rating, so the fact that the Cinema 1080 UB can compete so well against it was surprising and remarkable.

On the test unit we had, the factory default settings in the various color modes were quite inaccurate, with all of them pushing green to a greater or lesser extent. None of them were acceptable out of the box. However, the system offers extensive controls for calibration, including the ability to adjust hue, saturation, and brightness on RGBCMY in each of the six preprogrammed color modes. These adjustments give you the control needed to balance out the projector. In addition, there is a skin tone control in the menu which should be used with caution. It can be set from 0 to 6, with 3 or 4 being the factory defaults depending upon the color mode you select. But use it judiciously, with the understanding that it has an effect on most colors in the spectrum, not just skin tones. The good news is that, once it is tuned up and properly calibrated, the Cinema 1080 UB is capable of delivering beautiful, natural, well balanced color.

In general, the factory defaults on our test unit for color saturation, contrast, and sharpness were overdriven for our taste. Color was simply too intense, and reducing the saturation control yielded a more naturally balanced color. Highlights had too much of an artificial glow, and pulling the contrast control down a few notches solved this problem. Finally, the picture looked a bit too sharp and too digital at the factory setting for sharpness. Reducing the sharpness control a few pegs yielded a more filmlike image without compromising image acuity. However, this is a personal preference-some people really like the appearance of the absolute sharpest possible image. If that's what you want, the Cinema 1080 UB definitely has the juice to deliver it.

As far as fan noise is concerned, in the less bright operating modes like Cinema Night, there is very little. But setting the unit in Cinema Day not only boosts light output substantially, but it raises the fan noise to a noticeable level. I wouldn't call it loud, but it is more noticeable than competing home theater projectors in their high lumen modes. If one were opting to run in Cinema Day mode on a regular basis, we would suggest positioning the projector as far from the seating area as is practical.

Epson Cinema 1080 UB vs. Panasonic PT-AE2000

This is an interesting comparison. Both projectors are extremely good, and both have distinct advantages over the other.

The Cinema 1080 UB clearly trumps the AE2000 in brightness and dynamic range. On a black screen with white credits, the Cinema 1080 shows both deeper black and whiter white. But in most film/video scenes with a lot of mid-tone values, black level on the two units is for the most part comparable, and on occasion slightly deeper on the AE2000, the differences being due, we would guess, to the different behavior of the auto irises on each unit. Highlights are invariably slightly brighter on the Cinema 1080, but overall picture luminance and snap are similar when viewing scenes with average light levels.

As far as lumen output is concerned, the Cinema 1080 UB is about 25% brighter in comparable calibration modes for dark room viewing. On the other hand, the AE2000 produces a bit more lumen output in its Normal mode (about 900 lumens) than does the Cinema 1080 in its comparable Cinema Day mode (about 800 lumens).

At factory default sharpness settings, the Cinema 1080 UB looks a bit sharper than the AE2000, and it accentuates more detail in facial features. However, as noted previously, the factory sharpness setting is somewhat overdriven on the Cinema 1080. Meanwhile, the AE2000 is factory preset at close to its minimum, so sharpness can be boosted if the user desires.

While the Cinema 1080 UB shows stronger performance in black level, dynamic range, lumen output, and perhaps a slightly sharper image, the AE2000 has advantages of its own. First, it shows less digital noise. This is true in both standard and high definition material, but it is most noticeable in SD. Even with its noise reduction filter off, the AE2000 shows less noise than the Cinema 1080 with its filters on. The result is that the image on the AE2000 has a smoother, more filmlike characteristic.

The AE2000 has no pixelation due to the SmoothScreen filter. The Cinema 1080's pixelation is a bit more apparent. However, at 1080p resolution, we do not consider the pixel structure on any 1080p projector to be an issue of concern at normal viewing distances.

The AE2000 has quieter fan noise in all operating modes. In low lumen modes, the Cinema 1080 fan noise is low and unobtrusive, but the AE2000 is virtually silent. In the brightest modes, the AE2000 is still extremely quiet, whereas the Cinema 1080 puts out some noticeable audible noise.

The AE2000 has greater connectivity, offering three HDMI ports and two component ports, compared to the Cinema 1080's two HDMIs and one component. However, the Cinema 1080 has a 12-volt trigger which the AE2000 does not have.

The AE2000 has a vertical stretch mode to accommodate an anamorphic lens, whereas the Cinema 1080 does not, at least in HDMI.

Finally, the AE2000 has several features that don't exist on the Cinema 1080: its split screen calibration is unique, it has a waveform monitor onboard to assist in calibrations, and it has a learning remote that enables you to control several devices in your theater from the one remote control.

So the bottom line is that the head to head competition between the Cinema 1080 UB and the Panasonic AE2000 is a toss up. There is no clear winner as neither outperforms the other in all ways. The decision to go with one or the other depends on which among the various features and image characteristics offered by each projector are the most important to you.

Epson Cinema 1080 UB vs. JVC DLA-RS1

This side by side shootout was quite fascinating as well. First and foremost, the question was which would show better black levels and dynamic range? The RS1 has a native contrast rating of 15,000:1, whereas the Cinema 1080's native rating is just 4,000:1, but it is assisted by an auto iris that clearly improves its actual performance.

The results of our viewing were that the RS1 has just a slight edge in performance on these characteristics, but the Cinema 1080 UB is surprisingly close. In a number of scenes there was no practical difference. We were very surprised to see the Cinema 1080 show so strongly against the RS1, considering the huge difference in their native contrast ratings.

Of course, the Pro Cinema 1080 UB has a price advantage. Not only is it selling for a thousand dollars less, but the price includes a spare lamp and ceiling mount. And the Home Cinema 1080 UB has an even more radical price advantage, selling for at least two thousand dollars less at the moment. So if you want contrast and black level performance that is almost equal to the RS1, but don't want to spend the money, the Cinema 1080 models will get you there.

We found it much easier to get to an ideal color calibration on the RS1, starting from that unit's factory defaults. The Cinema 1080 required more extensive manipulation, and for most users a professional calibration will be needed to get the most from it. (We say this based on our experience with one early test sample. Epson could alter the factory defaults at any time, so it is possible that customers will have better luck with out of the box color performance than we did.)

The RS1 is a D-ILA based projector, which is JVC's version of LCOS. One of the attractive attributes of LCOS technology is the lack of pixelation. Accordingly, the RS1 has less apparent pixel structure than the Cinema 1080 UB when viewed up close. But as noted previously, we don't find anything to complain about as far as pixel structure on the Cinema 1080 goes; it is not visible at normal viewing distances unless you have the eyes of Superman.

The Cinema 1080 is much smaller and lighter than the RS1, actually about half the size and weight. That makes it easier to shelf mount, less bulky to ceiling mount, and in general less visible in the room when not in use. If you are planning an installation in a multi-purpose room and you don't want your video system to be seen in the room when you are not using it, the Cinema 1080 is the more unobtrusive choice.

Connectivity on these two units is almost the same. Both have two HDMI ports and one component port. However, the Cinema 1080 also includes a VGA connection and a 12-volt trigger as well. The Cinema 1080 is HDMI 1.3 compatible whereas the RS1 is not.

In their brightest operating modes, there is some fan noise on both units, but the RS1 is a bit quieter than the Cinema 1080. In lower lumen modes, fan noise is a non-issue on both of them.

Neither of these models offers a vertical stretch aspect ratio to accommodate an anamorphic lens.

In short, the Epson Cinema 1080 UB competes extremely well against the RS1. Once it is tweaked up, it is capable of delivering a picture that comes very close to matching that of the RS1, and it does so for a lot less money.

Conclusion

The Epson Pro Cinema 1080 UB is a beautiful projector once it is calibrated. And in buying the Pro version you are likely to get some assistance with the calibration. The overall package is fairly priced, and a highly competitive value proposition. If you want to budget about $4,000 for your next home theater projector, it would be difficult to find a better choice than the Pro Cinema 1080 UB. We can give it our Editor's Choice Award with great enthusiasm.

Since we have not yet seen the actual Home Cinema 1080 version, we will reserve further comment on that particular model for a later date. For those who are adept at video display calibration and who prefer to do everything themselves, the Home version may be the better choice from an expense perspective. More on this to come...


          Hantek 6022BE PC-Based USB Digital Storage Oscilloscope        
Bandwidth: 20MHz (-3dB)
Maximum real-time sample rate: 48MSa/s
Memory depth: 1Mbyte/channel
Built-in Fast Fourier Transform function (FFT)
20 automatic measurements
Automatic cursor tracking measurements
Waveform storage
User-selectable fast offset calibration
Add, Subtract, Multiply and Divide mathematic functions
Adjustable waveform intensity for a more effective waveform view
User interface in [...]
          Lo-fi week day #5 : SID (does really) Matter (free C64 inspired VST)        
Quantum 64: a free Commodore 64-inspired VST instrument


There are several (good) Commodore 64 "SID" soundchip emulations out there, and you probably know them.
So I'll present a quite old but still top-of-its-class instrument for me: Quantum 64.
Quantum 64 is not really a SID emulation, but an instrument inspired by the wise and worshipped soundchip. It adds some great features, which is why I tend to prefer it.
You get the classic single VCO, LFO modulation with waveform and destination choice, and an AMP envelope, but also a nice step sequencer, a resonant filter with ADSR envelope, and the "Quantum" (a kind of lo-fizer) unit.

All in all, that makes a nice-sounding, easy-to-use and good-looking VST (you know, my top three criteria ;)

Windows only





          Deal: Wave Alchemy Spectrum vintage synth instrument for Kontakt 50% off        
Plugin Boutique has launched a sale on the Spectrum vintage synthesizer instrument for Native Instruments Kontakt. 14 iconic vintage synthesizers, 10,000 samples and 175 unique patches; each expertly programmed and creatively combined into a single unified, incredibly powerful virtual instrument – our most versatile to date… Introducing Spectrum for Kontakt 5 – the hybrid waveform synthesizer […]
          Deal: Klevgränd Produktion synth & effect plugins 20% off        
Plugin Boutique has launched a sale on Klevgränd Produktion's sleek and intuitive collection of synths and effects for a limited time only. Baervaag is a fairly simple FM synthesizer with one carrier and one modulator. The oscillator waveforms can be modified seamlessly from sine wave to pure square wave with PWM (pulse width […]
          Glossary of Audio & Video Media Terminology        
A

AC Adapter: A circuit which modifies an AC current, usually converting it to a DC current.

A/D Converter: A circuit which converts a signal from analogue to digital form; the opposite of a D/A converter.

Adobe: A software manufacturer based in San Jose, California, and traded on the Nasdaq National Market under the symbol ADBE. Adobe is a leading provider of media productivity software. More info: Adobe Tutorials, Adobe Premiere, Adobe Photoshop, Adobe ImageReady.

AGC: Automatic Gain Control. A circuit which automatically adjusts the input gain of a device, in order to provide a safe and consistent signal level. AGCs can be handy features, but professional applications often require manual gain control for optimum results.

Aliasing: Distortion of an image file or sound recording due to insufficient sampling or poor filtering. Aliased images appear as jagged edges, aliased audio produces a buzz.

Alpha Channel: A special channel in some digital images reserved for transparency information.

AM: Amplitude Modulation. A method of radio transmission which sends information as variations of the amplitude of a carrier wave.

Amperage: The amount of electrical current transferred from one component to another.

Ambient: The environmental conditions, e.g. surrounding light and sound. More info: Ambient Sound, Ambient Light.

Amplifier: A device which increases signal amplitude.

Amplify: To increase amplitude.

Amplitude: The strength or power of a wave signal. The "height" of a wave when viewed as a standard x vs y graph.

Anamorphic Lens: A special type of wide-angle lens which stretches the width of the image but not the height, creating a widescreen aspect ratio.

Analogue: Information stored or transmitted as a continuously variable signal (as opposed to digital, in which the analogue signal is represented as a series of discrete values). Analogue is often technically the more accurate representation of the original signal, but digital systems have numerous advantages which have tended to make them more popular (a classic example is vinyl records versus CDs).

Antenna: A device which radiates and/or receives electromagnetic waves.

Aperture: Literally means "opening". The camera iris; the opening which lets light through the lens. By adjusting the size of the aperture, the amount of incoming light is controlled. The aperture size is measured in f-stops. More info: Video exposure/iris.

ASF: Windows Media file format ending with the extension .asf. Used for delivering streaming video.

Aspect Ratio: The ratio of width to height of an image. Can be expressed as a number, or a relationship between two numbers. For example, the standard television screen ratio is 4:3 (4 units wide by 3 units high) or 1.33 (the width is 1.33 times the height). The new "wide screen" television ratio is 16:9 (1.78), and many new video cameras have the option to record using this format. Theatrical film aspect ratios vary, but the most common is 18.5:10 (1.85).
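The ratio-to-number conversions quoted in this entry are simple division; a minimal sketch (the helper name is just illustrative):

```python
# Aspect ratio expressed as a single number: width divided by height.
# The values match the examples given in the glossary entry above.

def aspect_ratio(width: float, height: float) -> float:
    """Return the ratio of width to height as a decimal number."""
    return width / height

print(round(aspect_ratio(4, 3), 2))      # standard TV 4:3  -> 1.33
print(round(aspect_ratio(16, 9), 2))     # widescreen 16:9  -> 1.78
print(round(aspect_ratio(185, 100), 2))  # theatrical 18.5:10 -> 1.85
```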

ASX: Windows Media file format ending with the extension .asx. This is a metafile which works in conjunction with ASF files for delivering streaming video.

Audio: Sound. Specifically, the range of frequencies which are perceptible by the human ear.

Audio Dub: The process of adding audio to a video recording without disturbing the pictures. The original audio may be replaced, or kept and combined with the new audio.

Audio Insert: A feature of some video equipment which allows audio dubbing.

Automatic functions: Functions which are performed by equipment with little or no input from the operator. Auto-functions can be very useful, but tend to have serious limitations. As a general rule, it is desirable to be able to operate audio-visual equipment manually.

B

Backlight: A light which is positioned behind the subject. Its primary purpose is to make the subject stand out from the background by highlighting the subject's outline.

Backlight Correction (BLC): A feature of some cameras which increases the apparent brightness of the subject when lit from the rear.

Back Focus: The focus between the lens and the camera. Adjusted by a ring at the rear of the lens (the closest ring to the camera body). If the camera appears focused when zoomed in, but becomes out of focus when zoomed wide, the back focus needs adjusting.

Balanced Audio: An audio signal which consists of two "hot" signals plus the shield. The hot signals are inverted relative to each other as they travel along the balanced cable. They are re-inverted when entering an audio device — this has the effect of inverting any unwanted interference, thus eliminating it.
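The inversion trick described in this entry can be illustrated with plain arithmetic; this toy sketch uses numbers in place of voltages:

```python
# Toy illustration of balanced audio noise rejection. The two "hot"
# wires carry the signal inverted relative to each other, while
# interference is induced equally on both.

signal = 1.0   # the wanted audio sample
noise = 0.3    # interference picked up equally by both wires

hot_plus = signal + noise     # first hot wire: signal plus noise
hot_minus = -signal + noise   # second hot wire: inverted signal plus the same noise

# At the receiving device the second wire is re-inverted and summed:
# the common-mode noise cancels while the signal doubles.
received = hot_plus - hot_minus   # (s + n) - (-s + n) = 2s
print(received / 2)               # recovered signal: 1.0, noise gone
```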

Bandpass Filter: A circuit which filters out all but a certain range of frequencies, ie. it allows a certain band of frequencies to pass.

Bandwidth: A range of frequencies.

Barn Doors: Metal projections attached to the front of a light, which can be positioned in various ways to control the dispersal of the light.

Batch Capture: The process of capturing multiple video clips automatically. A batch command is set up from the capture software which includes in and out points for each clip.

Baud: Unit of signal speed — the number of signal "bits" per second.

Best Boy: On a film set, the assistant to the Gaffer and Key Grip.

Beta (1): A group of video formats developed by Sony Corporation. Beta, Beta SP, Digital Beta and other variations are all professional television formats. Betamax is a failed consumer version, losing to VHS in the 1980s.

Beta (2): A pre-release version of computer software. Often distributed widely without charge, in order to obtain feedback, identify bugs, and attract customers.
Binary: The "base two number system" which computers use to represent data. It uses only two digits: 0 and 1. Binary code represents information as a series of binary digits (bits). In the table below, binary numbers are shown with their decimal equivalents.
Decimal: 0 1 2 3 4 5 6 7 8 9 10
Binary: 0 1 10 11 100 101 110 111 1000 1001 1010
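The table above can be reproduced by repeated division by two; a short sketch:

```python
# Decimal-to-binary conversion by repeated division by two,
# reproducing the decimal/binary table in the entry above.

def to_binary(n: int) -> str:
    """Return the binary representation of a non-negative integer."""
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits   # the remainder becomes the next bit
        n //= 2
    return bits

print([to_binary(n) for n in range(11)])
# ['0', '1', '10', '11', '100', '101', '110', '111', '1000', '1001', '1010']
```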


Bit: Binary digit. One piece of binary (digital) information. A description of one of two possible states, e.g. 0 or 1; off or on.

Bitmap: A series of digital image formats, which record colour information for each individual pixel. As a result, the quality is very high, as is the size of the file.

Biscuit: Square/rectangular metal part which screws to the bottom of the camera plate, and allows the plate to attach to the head. The biscuit comes as part of the head's package, whereas the plate comes with the camera. The biscuit is the "interface" between the two, and is designed to attach to any plate, and fit into a corresponding slot on the head. When the head's quick-release mechanism is activated, the biscuit, plate and camera are all released as one.

Black balance: A camera function which gives a reference to true black. When auto-black balance is activated (by a switch, positioned with the white balance switch), the iris is automatically shut, and the camera adjusts itself to absolute black.

Black burst: A composite video signal with no luminance information, but containing everything else present in a normal composite video signal.

Black noise: Usually refers to silence with occasional spikes of audio. Other definitions are also in use but this is the most common.

Blonde: A term used to describe tungsten lights in the 2Kw range.

Blue noise: Random noise similar to white noise, except the power density increases 3 dB per octave as the frequency increases.

Bluetooth: A wireless data transfer system which allows devices to communicate with each other over short distances, e.g. phones, laptops, etc.

Blu-ray: A high-definition DVD format supported by a group of manufacturers led by Sony. More info: The Blu-ray Format, Blu-ray vs HD-DVD.

BNC: A type of video connector common in television production equipment, used to transmit a composite video signal on a 75Ω cable.

Bridge: Another term for A/D converter.

Broadband: A general term to describe an internet connection faster than 56K. Broadband usually means 512K or greater.

Brown noise: Random noise similar to white noise but with more energy at lower frequencies.

Bucket: A solid coloured horizontal bar across the bottom of a colour bar test pattern. The most commonly used bucket colour in PAL patterns is red, referred to as a "bucket of blood".

Burn: The process of recording information to an optical disk (CD or DVD).

Bus: Pathway which a signal passes along. For example, the main output of an audio mixer is referred to as the master bus.

C

C: A computer programming language, with variations C+ and C++.

Cable Television: A system of television program delivery via cable networks.

Camcorder: A single unit consisting of a video camera and recording unit.

Candlepower: A measurement of light, generally that which is output from an electric lamp.

Cans: An informal term for headphones.

Capture Card: A type of computer card with video and/or audio inputs which allows the computer to import an analogue signal and convert it to a digital file.

CCD: Charged Coupled Device.

CCU : Camera Control Unit.

CD: Compact Disc. Optical storage device, capable of storing around 600-700MB of data.

Channel (1) : On audio mixers, the pathway along which each individual input travels before being mixed into the next stage (usually a sub-group or the master bus). Each channel will typically have an input socket where the source is physically plugged in, followed by a sequence of amplifiers / attenuators, equalisers, auxiliary channels, monitoring and other controls, and finally a slider to adjust the output level of the channel.

Charged Coupled Device: The image sensing device of video and television cameras -- the component which converts light from the lens into an electrical signal. Made up of pixels - the more pixels, the higher the resolution. CCDs are commonly referred to simply as "chips". They replaced previous tube technology in the 1980s. Larger CCDs can naturally accommodate more pixels, and therefore have higher resolutions. Common sizes are 1/3" (pro-sumer level), 1/2" and 2/3" (professional level). Consumer cameras generally have a single CCD which interprets all colours, whereas professional cameras have three CCDs -- one for each primary colour.
Chroma Key: The process of replacing a particular colour in an image with a different image. The most common types of chroma keys are bluescreen and greenscreen.

Chrominance: Chroma, or colour. In composite video signals, the chrominance component is separated from the luminance component, and is carried on a sub-carrier wave.

Cinematographer: AKA Director of Photography, the person on a film production responsible for photography.

Cinematography: The art and science of movie photography, including both shooting technique and film development.

Clear Scan: A video camera function which allows the camera to alter its scan rate to match that of a computer monitor. This reduces or eliminates the flicker effect of recording computer monitors.

Codec: Short for compressor/decompressor. A tool which is used to reduce the size of a digital file. Can be software, hardware or a combination of both.

Colour Bars: Click here to see an illustration of colour bars A television test pattern, displaying vertical coloured stripes (bars). Used to calibrate vision equipment. There are numerous variations for different applications.

Colour Temperature: A standard of measuring the characteristics of light, measured in units called kelvins.

Common Mode Signal: A signal which appears equally on both wires of a two wire line, usually unwanted noise. Common mode signals are eliminated with balanced audio cable.

Component Video: A type of high-quality video signal which consists of separate signal components. Usually refers to Y/Pb/Pr, which includes one channel for luminance information and two for colour.

Composite Video: A type of video signal in which all components are recorded or transmitted as one signal. Commonly used with RCA and BNC connectors.

Compression (1): A method of reducing the size of a digital file, whilst retaining acceptable quality. This may be desirable in order to save memory space or to speed up access time. In the case of digital video, large files must be processed very quickly, and compression is still essential for playback on consumer-level computers. Professional digital systems can work with uncompressed video. There are many compression techniques in common use, and digital video often uses various combinations of techniques. Compression can be divided into two types: "lossless" and "lossy". As the names imply, lossless techniques retain all of the original information in a more efficient form, whereas lossy techniques discard or approximate some information. With lossy compression, there is an art to finding a compromise between acceptable quality loss, and file size reduction.
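A lossless round-trip can be demonstrated with Python's standard zlib module; the decompressed data is bit-for-bit identical to the original, while the stored form is smaller:

```python
# Lossless compression round-trip: nothing is discarded, so
# decompressing recovers the original data exactly.
import zlib

original = b"tomato " * 1000        # highly repetitive data compresses well
packed = zlib.compress(original)

print(len(original), len(packed))   # packed size is far smaller
assert zlib.decompress(packed) == original   # lossless: identical bytes back
```

Lossy schemes (e.g. most video codecs) trade this exactness away for much larger size reductions.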

Compression (2): Audio compression is a method of "evening out" the dynamic range of a signal. Compression is very useful when a signal is prone to occasional peaks, such as a vocalist who lets out the odd unexpected scream. The compressor will not affect the dynamic range until a certain user-definable level is reached (the "threshold"), at which point the level will be reduced according to a pre-determined ratio. For example, you could set the compressor to a threshold of 0dB and a compression ratio of 3:1. In this case, all signals below 0dB will be unaffected, and all signals above 0dB will be reduced at a ratio of 3 to 1 (i.e. for every 1dB of input over 0dB, only 1/3dB will be output above 0dB). Other controls include the attack and decay time, as well as input and output levels.

Contrast Ratio: The difference in brightness between the brightest white and the darkest black within an image.

Convergence: The degree to which the electron beams in a colour CRT are aligned as they scan the raster.

CPU: Central Processing Unit; the "brain" of a computer.

Crab: Camera movement across, and parallel to, the scene.

Crossfade: A video and/or audio transition in which one shot/clip gradually fades into the next. AKA mix or dissolve. More info: The crossfade transition

Crossing the Line: A camera positioning error in which the camera crosses an imaginary line drawn through the scene, resulting in a reversal of perspective for the viewer.

Crossover: An electrical network which divides an incoming audio signal into discrete ranges of frequencies, and outputs these ranges separately.

CRT: Cathode Ray Tube.

Cut (1): An instantaneous transition from one shot to the next.

Cut (2): A location director's instruction, calling for the camera and audio operators to cease recording and all action to halt.

D

D Series Tape Formats: A series of broadcast digital formats, designated D1, D2, etc. Dx is basically a replacement for 1-inch formats. D2 and D3 combine chrominance and luminance information, whereas D1 and D5 store them separately (and are therefore higher quality).

Dailies (1): Daily raw footage shot during the production of a motion picture (AKA rushes or daily rushes).

Dailies (2): Newspapers that are published every day (or 5/6 days per week).

DAT: Digital Audio Tape.

Data Rate: The amount of data which is transferred per second. In a video file, this means the amount of data the file must transfer to be viewed at normal speed. In relation to optical disks, this means the amount of data which can be read or written per second.

DC: Direct Current. The electrical current output by batteries, etc.

Decibel (dB): Logarithmic measurement of signal strength. 1/10 of a Bel.
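As an illustration, the standard decibel formulas (assumed here; the entry itself gives only the definition) are 10·log10 for power ratios and 20·log10 for voltage or amplitude ratios:

```python
# Decibels computed from power and from voltage ratios.
import math

def db_power(p_out: float, p_in: float) -> float:
    """Gain in dB for a power ratio."""
    return 10 * math.log10(p_out / p_in)

def db_voltage(v_out: float, v_in: float) -> float:
    """Gain in dB for a voltage (amplitude) ratio."""
    return 20 * math.log10(v_out / v_in)

print(round(db_power(2.0, 1.0), 2))    # doubling power   ~ 3.01 dB
print(round(db_voltage(2.0, 1.0), 2))  # doubling voltage ~ 6.02 dB
```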

Deliverables: The final products of the filmmaking process, used to create prints and other material for distribution.

Depth of Field: The zone between the nearest and furthest points at which the camera can obtain a sharp focus.

Depth Perception: The ability to recognize three-dimensional objects and understand their relative positions, orientation, etc.

Device Control: A tool which allows you to control another device. For example, a window within a video editing package from which you can control a video camera.

Differential Amplification: Method of amplifying a signal, in which the output signal is a function of the difference between two input signals.

Digital: A signal which consists of a series of discrete values, as opposed to an analogue signal, which is made up of a continuous information stream.

Digital S: Professional digital tape format, introduced by JVC in the mid-1990s.

Digital Video Editing: Editing using digital video formats and computer software. Also known as non-linear editing.

Digital Zoom: A method of zooming which digitally crops and enlarges part of the image. This is not a true zoom and results in loss of quality.

Dissolve: A video transition in which one shot dissolves (fades) into the next. AKA mix or crossfade.

DLP: Digital Light Processing. A television technology that uses a coloured light beam which bounces across an array of hundreds of thousands of hinge-mounted microscopic mirrors attached to a single chip called a "micro mirror device".

Docutainment: From the words documentary and entertainment. A television programme which includes both news and entertainment content, or a blending of both.

Dolly: Any apparatus upon which a camera can be mounted, which can be moved around smoothly.

Dolly Zoom: A cinematography technique in which the camera moves closer or further from the subject while simultaneously adjusting the zoom angle to keep the subject the same size in the frame.

Downstage: Toward the camera.

Dropout: Loss of part of a recorded video or audio signal, showing up as glitches on playback. Can be caused by damaged record heads, dirty tapes or heads, etc.

Driver: A piece of software which enables a piece of hardware to work with a computer. Usually supplied with the hardware, but can often be downloaded from the vendor's website.

Dry Run: Rehearsal, without recording or transmitting etc.

DTMF: Dual-Tone Multi-Frequency, better known as touch-tone. The standard system of signal tones used in telecommunications.

Dutch Tilt: A camera shot which is deliberately tilted for artistic effect.

DV: Digital Video.

DVCAM: Digital tape format from Sony.

DVCPRO: Professional digital tape format from Panasonic, introduced in the mid-1990s.

DVD: (Digital Video Disc or Digital Versatile Disc). An optical disc format which provides sufficient storage space and access speeds to playback entire movies.

DVD Authoring: The process of taking video footage, adding chapter stops, menus, and encoding the footage into MPEG files ready to be burned.

DVD Burning: Taking the authored DVD files and physically writing them to a disk.

Dynamic Loudspeaker: Loudspeaker which uses conventional cone and dome drive elements.

Dynamic Microphone: A moving coil microphone, which doesn't require power.

Dynamic Range: The difference between the weakest and strongest points of a signal.

E

Earth Hum: An unwanted noise which has been induced into a video or audio signal by faulty earthing.

Edit: The process of assembling video clips, audio tracks, graphics and other source material into a presentable package.

Edit Decision List (EDL): A list of all in points and out points for an editing task. Can be stored on a removable disc (e.g. floppy disc). This enables an edit to be constructed in one edit suite, then taken to another (better) suite to make the final version.

ENG : Electronic News Gathering. This term was introduced with the evolution of video cameras for shooting news in the field (as opposed to film cameras). It is still widely used to describe mobile news crews.

Exposure : The amount of light which is passed through the iris, and which the CCD or film is exposed to.

Equalisation: The process of adjusting selected ranges of audio frequencies in order to correct or enhance the characteristics of a signal.

F

Fade: A transition to or from "nothing". In audio, to or from silence. In video, to or from a colour such as black.

Field: In interlaced video, half a video frame. A field comprises every second horizontal line of the frame, making a total of 312.5 lines in PAL and SECAM, 262.5 lines in NTSC.

Film Noir: French for "black film" or "dark film". A term used to describe a genre of film popular in America between 1940 and 1960.

Filter: A transparent or translucent optical element which alters the properties of light passing through a lens.

Flanging: In audio work, a type of phase-shifting effect which mixes the original signal with a varying, slightly delayed copy.

Floating Point Color: Available in 32-bit color digital images, floating point space allows colors to be defined as brighter than pure white or darker than pure black. This has advantages in image processing techniques.

Flying Erase Head: In video recorders, an erase head which is mounted on the drum assembly. The erase head wipes any previous recordings as new ones are made. "Normal" erase heads are stationary, and mounted to the side of the head drum. Because of their close proximity to the record heads, flying erase heads provide cleaner edits.

Floor Manager: In television production, the person in charge of the "floor", i.e. the area where the action takes place.

Focal Length: The distance from the centre of the lens to the camera CCD.

Focus: v. The process of adjusting the lens in order to obtain a sharp, clear picture.
adj. The degree to which an image is correctly focused.

FPS: Frames Per Second. The number of video or film frames which are displayed each second.
Frame (1): The edges of a television / video / film image.
Frame (2): To compose a camera shot. More info: Camera framing, Common shot types
Frame (3): One complete video, television or film picture. In video and television, each frame is divided into two interlaced fields. PAL and SECAM systems deliver 25 frames per second, with 625 horizontal scan lines. NTSC delivers 30 fps with 525 lines.

Frame Rate: The number of video or film frames displayed each second (frames per second; fps). PAL frame rate is 25 fps, NTSC is 30 fps, film is 24 fps.

Fresnel: A type of lens with concentric rings which focus the light. Pronounced fra-NELL.

Frequency Response: The sensitivity of a microphone (or other component) to various frequencies, i.e. the amount each frequency is boosted or attenuated.

F-stop: Measurement of aperture. The higher the f-stop number, the smaller the aperture.
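The f-number is conventionally the focal length divided by the aperture diameter (a standard relationship assumed here, not stated in the entry), which is why a higher f-stop number means a smaller opening:

```python
# f-number = focal length / aperture diameter. For a fixed focal
# length, a smaller opening gives a larger f-number.

def f_number(focal_length_mm: float, aperture_diameter_mm: float) -> float:
    """Return the f-stop number for a lens opening."""
    return focal_length_mm / aperture_diameter_mm

print(f_number(50, 25))     # 2.0  -> f/2, a wide opening
print(f_number(50, 3.125))  # 16.0 -> f/16, a small opening
```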

F-type : A family of cable connectors, in which the centre (hot) pin is the centre wire of the cable itself.

G

Gaffer (1): Chief electrician on a film set.

Gaffer (2): Industrial-strength sticky tape, AKA duct tape.

Gain: The volume/amplification level of an audio or video signal.

Gauss: (pronounced "gows", abbreviation "G") Unit of magnetic induction.

Gel: (pronounced "jel") Semi-transparent heat-resistant material which is placed in front of a light source in order to modify its colour temperature or other characteristics.

Geosynchronous: A satellite orbit in which the satellite remains in a fixed position above the Earth.

Graphic Equalizer: A type of audio equalizer which uses a graphical layout to represent the changes made to various frequencies.

Gray Card: A gray-coloured card which reflects a known, uniform amount of the light which falls upon it. Used as a reference to calibrate light meters and set exposure.

Gray Noise: Random noise, similar to white noise, which has been filtered to make all frequencies appear equally loud to the human ear.

Green Noise: An unofficial term referring to the background ambie