Code your own Neural Network

Code your own neural network:

A great book on coding neural networks

A step-by-step, gentle journey through the mathematics of neural networks, and making your own using the Python programming language.

Neural networks are a key element of deep learning and artificial intelligence, which today is capable of some truly impressive feats. Yet too few really understand how neural networks actually work. 

This guide will take you on a fun and unhurried journey, starting from very simple ideas, and gradually building up an understanding of how neural networks work. You won’t need any mathematics beyond secondary school, and an accessible introduction to calculus is also included. 

The ambition of this guide is to make neural networks as accessible as possible to as many readers as possible – there are enough texts for advanced readers already!

You’ll learn to code in Python and make your own neural network, teaching it to recognise human handwritten numbers, and performing as well as professionally developed networks. 

02 MA Genetic Algorithms Research

The research was based on Shiffman’s explanations of genetic algorithms as the groundwork for this field of study. This was also the basis of our MA openFrameworks journey and the code using genetic algorithms.

Overview

The basis of genetic algorithms is solving quite difficult problems which, if tackled by exhaustive sequential search, would be very hard to complete in a short space of time. (Example: how to evolve a target sentence using genetic code.)

Population – We need a population of something defined (examples: sentences, graphical objects) that we can apply our DNA to, so that it somehow changes the physical aspects of the objects in some way.

The population creation should also create its own DNA whenever a class object is created.

Perhaps the attraction force to the mouse or to blobs increases when the objects are selected. Also, the attraction force is hereditary.

 

We define a phenotype and a genotype.

Genotype: the DNA. What is the DNA? Normally a code, or a number, stored in an array list. There is no limit on the number of DNA codes. The key to the DNA is the phenotype: how we choose to physically express the DNA. We can either express the DNA as an actual defined object (e.g. types of animals) or we can be quite obtuse and define the DNA as a floating-point number between 0 and 1.

Phenotype: how do we use the DNA? In this case we use a population of spheres with a simple physics engine, using Perlin noise as the force vector driving the acceleration. What does the DNA do? We access just one gene of the DNA. What do we do with it? We use it to drive both the size and the speed of the spheres; the direction comes from the Perlin noise. The DNA gives us values between 0 and 1 and we then map them to a phenotype (a physical property).
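A minimal Processing-style sketch of this genotype/phenotype idea. The class and variable names are illustrative, not the actual MA code; it simply shows genes as floats 0-1 being expressed as size and speed, with Perlin noise steering the motion.

// Hypothetical sketch: DNA genes are floats 0-1, expressed as size and speed.
class DNA {
  float[] genes;
  DNA(int n) {
    genes = new float[n];
    for (int i = 0; i < n; i++) genes[i] = random(0, 1);   // genotype: random codes 0-1
  }
}

class Agent {
  DNA dna;
  PVector pos, vel;
  float size, maxSpeed;
  float tNoise = random(1000);                              // per-agent noise offset

  Agent(DNA dna_) {
    dna = dna_;
    pos = new PVector(random(width), random(height));
    vel = new PVector();
    size = map(dna.genes[0], 0, 1, 4, 40);                  // phenotype: gene 0 -> radius
    maxSpeed = map(dna.genes[0], 0, 1, 0.5, 4);             // the same gene also drives speed
  }

  void update() {
    // Perlin noise supplies the direction of the driving force.
    float angle = noise(tNoise) * TWO_PI * 4;
    PVector force = PVector.fromAngle(angle);
    vel.add(force);
    vel.limit(maxSpeed);
    pos.add(vel);
    tNoise += 0.01;
  }

  void show() {
    ellipse(pos.x, pos.y, size, size);
  }
}

ArrayList<Agent> population = new ArrayList<Agent>();

void setup() {
  size(640, 480);
  for (int i = 0; i < 50; i++) population.add(new Agent(new DNA(2)));
}

void draw() {
  background(255);
  for (Agent a : population) { a.update(); a.show(); }
}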

 

The fitness algorithm. 

This is probably the most important feature of genetic algorithms: it decides whether an existing member of the population survives or not. How do we decide that? We can either use a simple algorithm to make the case for a member of the population to survive, or we can use input from an outside force: the user! The third option is fascinating: decide whether an element of the population survives by partly random means. In this case we create x,y points of “food” whereby, if an element of the population comes near them, it is fed and stays alive (stays healthy). Also, if it stays alive for long enough it can create new offspring. It survives and breeds.
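A hedged sketch of this third, food-based fitness option. The numbers and names are illustrative; DNA inheritance on reproduction is omitted to keep it short.

// Hypothetical health/food fitness: health ticks down, eating restores it,
// and creatures that survive long enough have a higher chance of offspring.
class Creature {
  PVector pos = new PVector(random(width), random(height));
  float health = 100;     // dies when this reaches 0
  float age = 0;

  void update(ArrayList<PVector> food) {
    health -= 0.2;        // the clock ticks downwards
    age++;
    for (int i = food.size() - 1; i >= 0; i--) {
      if (PVector.dist(pos, food.get(i)) < 10) {   // close enough to be fed
        health += 50;
        food.remove(i);
      }
    }
  }

  boolean dead() {
    return health <= 0;
  }

  // The longer a creature has stayed alive, the higher its chance of offspring.
  Creature reproduce() {
    if (random(1) < 0.001 * age) return new Creature();
    return null;
  }
}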

Why use them? We can obtain solutions to difficult problems very fast. The classical way of using them is to run a series of sequences (either using time as a factor for finishing an event, or having a lifespan determine a measured event). However, the clever way to use them is to have a “health” factor determine their existence and then have this health factor augmented by a user event, an interactive event. Or do both!

 

Shiffman’s solution for Genetic Algorithms

 

The initial example by Shiffman just uses spheres: each sphere receives its DNA, which defines its speed and size, and its colour is defined by its “health”. The spheres are dying from their birth; their health is defined by a clock that ticks downwards. If they manage to find food (the grey rectangles) they increase their health. If they live for long enough they have a higher chance of reproduction.

 

 

 

 

Development of the ecology experiment

From the initial setup of the ecology programme, the first thing I did was to expand the DNA to accept over 20 values and start populating them with different variables as a test (a rough read-out sketch follows the list below):

// ACCESS DNA 0 SPEED
// ACCESS DNA 0 RADIUS
// ACCESS DNA 1,2,3 RGB
// ACCESS DNA 4 NO. OF POLYGONS
// ACCESS DNA 5 TYPE OF DISPLAY
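As a rough sketch, those DNA slots could be read out along these lines. This is illustrative Processing code, not the exact MA class; it assumes a DNA class holding genes as floats 0-1, like the one sketched earlier.

// Hypothetical read-out of the expanded DNA (all genes stored as floats 0-1).
void expressDNA(DNA dna) {
  float spd      = map(dna.genes[0], 0, 1, 0.5, 5);     // DNA 0: speed
  float radius   = map(dna.genes[0], 0, 1, 5, 60);      // DNA 0: radius
  color col      = color(dna.genes[1] * 255,            // DNA 1,2,3: RGB
                         dna.genes[2] * 255,
                         dna.genes[3] * 255);
  int   nSides   = int(map(dna.genes[4], 0, 1, 3, 12)); // DNA 4: no. of polygon sides
  int   dispType = int(dna.genes[5] * 3);               // DNA 5: type of display (0-2)
}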

The DNA values could also include elements to address the sound associated with each class of display elements. Something to investigate later.

The first two tests showed the DNA controlling the colour,  the  no. of polygons and the type of element displayed.

 

 

The second one shows the food following the mouse position

 

User Interaction

The next step was to add to the health if they come into contact with the mouse or the attraction-force target.

 

 

Add-on DNA attraction force

Add the ability for the elements to be attracted to the mouse if they are selected by the user (or come within a certain distance of the mouse).

 

 

 

 

 


Posted in ma

02 The Darkness Serenade MAXMSP

 

PROJECT BACKGROUND

The Darkness Serenade by Colin Higgs

medium: interactive installation
space requirements: 2.5 square metres

There is something very engaging about the unexpected. The unexpected in this generative art installation is in the direct and gestural movement the spectator can have on a graphical interactive display and how they react to or notice the sounds they are making. There is a lot for the person to learn about the work. How they fit into it. What they add to the work. How they can construct both unique and unusual sounds and graphics that they feel are part of a performance. Their performance. They are the conductors of the piece.

The project is an installation of a symbolic journey in darkness with an element of hope. The spectator embarks on a symbolic journey of interaction and creation of both sound and visuals. The performance starts when the spectator enters a small light-projected arena. They stay in a centre area until the sensors pick them up and the whole system is reset.

The graphics, which are projected onto the floor, are reset, and so is the sound score, to nothing. The person notices that as they travel around the projected arena the graphics seem to be approaching them or backing off from them depending on the type of motion they make. They also notice that the sounds they create are made by their own movements. Each time they move they create a series of sounds. The darkness side of the piece corresponds to a black generative graphic that spreads towards the spectator if she/he does not move.

The hope of the piece lies in the somewhat dissonant landscape sounds created by the movement of the person. Partly noise and partly tonal.

The balance between the graphic and sound (darkness and hope) lies between the number of movements and the distance moved, relating to the number of sound samples generated and the way the graphic moves towards or away from the spectator. The sound is also influenced by the person’s movement (motion detection algorithm).

Creative Motivation

The motivation for the piece was driven by a soundscape I made in MAXMSP using a Fourier transform (this was the sound used in MAXMSP called “S00_FOURIER”). I found the sound quite haunting and began to think about how visuals and interaction could form the basis of an artwork.

Future Development

For further development I would like to work with a contemporary dancer, using the artwork as part of a performance. I would see that as a first step with the work; I would then intend to develop a group performance (2 or 3 people).

Why make the work? When I see a person interact with my artwork I sometimes feel it elevates my work: it takes it in a new, unexpected direction. This for me is quite beautiful and very rewarding, so that’s why I wanted to make it. In terms of who it is for, I would say the basic version would be for everyone. People find it stimulating and unexpected (especially the gesture interaction). In terms of a performance (dance) piece I would use it only as the start of something and develop it further into a narrative piece of audio-visual work.

 

 

 

THE RESULT

 

DARKNESS SERENADE POPUP

 

 

This project has an overlap with my Machine Learning project. Half of this project was designed for a MAXMSP sound interactive experience and the graphic experience was designed for machine learning. The Max project was further separated by removing all the machine learning code and using only hardcoded inputs and outputs. A summary of the differences between the 2 projects is as follows:

PROJECT: MAXMSP
INPUTS: Kinect -> OSC
OUTPUTS: Processing graphics / MAXMSP

PROJECT: MACHINE LEARNING
INPUTS: Kinect -> OSC -> WEK HELP / WEK OSC
OUTPUTS: Processing graphics / MAXMSP

The machine learning installation is using 20 trained inputs. 
The MAXMSP installation is using 9 hardcoded inputs. 

See figure below:

Hardcoded inputs made in the Processing sketch for the Kinect capture:

// oscFloatx1 – average position of person in x uses depth data
// oscFloaty1 – average position of person in y uses depth data
//
// oscFloatvel1 – average velocity of person detected
// oscFloataccel1 – average acceleration of person detected
// countMinutes – count in minutes since simulation started
// oscFloatSignal – Has kinect detected a person? yes 1 no 0
// oscCorners – Has kinect detected any gestures? (0-no 1-2-3-4==n-s-w-e)
// oscStretchx1 – if any gestures have been detected send x cord
// oscStretchy1 – if any gestures have been detected send y cord

 

MAXMSP CODED INPUTS FOR SOUND:


The controller of all the sounds is called S_CONDUCTOR1.
It controls the sound events, their durations and each mapping in MAXMSP. The person’s Y position is remapped to select an event 1-7 (the mapping was zmap 0 480 1 7). The person’s X position is remapped to a time (zmap 0 680 20000 35000), i.e. 20-35 seconds for the duration of a sample.
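In Processing terms, the two zmap lines amount to the following. This is just an illustration of the mapping; the actual work happens inside the Max patch, and personX/personY are assumed stand-ins for the incoming position.

// Equivalent of "zmap 0 480 1 7": the person's Y position selects a sound event 1-7.
int soundEvent = round(map(personY, 0, 480, 1, 7));

// Equivalent of "zmap 0 680 20000 35000": the person's X position sets a sample
// duration of 20,000-35,000 ms (20-35 seconds).
float sampleDurationMs = map(personX, 0, 680, 20000, 35000);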

The selections:
S00_FOURIER: always triggered by a user’s erratic movement (acceleration > 3). This generative sound is always present in the soundscape.

The other sounds are non-permanent; they are generative and also make use of samples. They are all triggered by the S_CONDUCTOR1 patch:

sel 1- Play music S01_STARTA
sel 2- Play music S02B_ACCEL & S03_SAMLPLES
sel 3- Play music S03_SAMLPLES & S04_ACCEL
sel 4- Play music S04_ACCEL & S05_ACCEL
sel 5- Play music S05_ACCEL & S06_SAMPLES_RESTART
sel 6- Play music S06_SAMPLES_RESTART  & S02B_ACCEL
sel 7- Play music S01_STARTA & S02B_ACCEL

The detail of the sound creation will be looked at below. Here we look at the choices and what they mean. 

S00_FOURIER – A generative sound always playing in the background, triggered by the person’s acceleration. It is distinctive in its different tones and can be crescendoed by the acceleration (a series of different tones can be triggered).

S01_STARTA – A poly~ 16-voice noise patch. It creates noise with different central noise frequencies fed into a reson filter. It is a subtractive filter using pink noise and gives us a frequency-centred noise output similar to the Fourier noise transform output in patch 00. Its pitch is further modulated by a continually changing random number.

S02B_ACCEL & S03_SAMLPLES – 02B is a generative sound (a “comb” filter) that changes settings every 3 seconds and is triggered when the acceleration is greater than 5. 03 samples is triggered only by its initial call via the conductor; however, which sample is played depends on the person’s Y position. The construction of the two sounds means only one of them is heard at a time: acceleration causes 02B to be triggered, and no movement causes the samples to be called. They play off each other.

S03_SAMLPLES & S04_ACCEL – 04 is a generative sound (a “fffb~” filter), a subtractive filter. It is mapped using a person’s X position, using this to set a frequency and make harmonic packets from pink noise. Again, the use of samples and generative sounds (03 samples) makes for a great combination as they too play off each other. The samples are only heard if the generative sound is not.

S04_ACCEL & S05_ACCEL – Plays two generative sounds. 04 is a generative sound (a “fffb~” filter), a subtractive filter. 05 is also a generative sound, based on the FM_Surfer patch by John Bischoff; it makes an FM synthesiser sound. The 04 generative sound is mapped to the position of the person and the 05 generative sound is mapped to an acceleration-greater-than-8 trigger.

S05_ACCEL & S06_SAMPLES_RESTART – 05 is also a generative sound and is combined with a samples patch. Again, both play off each other. The 06 samples and 03 samples are completely different.

S06_SAMPLES_RESTART & S02B_ACCEL – Samples are played against a generative patch; they play off against 02B, a generative sound (a “comb” filter).

S01_STARTA & S02B_ACCEL – Two generative patches play together. The 01 patch changes its sound based on the x-position of the person; 02B is a generative sound (a “comb” filter) and is triggered by the person’s acceleration.



 

 

PROCESSING GRAPHICS CODED INPUTS FOR DISPLAY:

Based on the 9 inputs from the Kinect capture code:
PERSON POSITION X AND Y – obtained from the person’s depth positioning, using the average values of all the depth pixels obtained.
VELOCITY VALUES ABS – Calculates the average velocity using last reading of velocity
ACCELERATION VALUES – Calculates the average acceleration using last reading of acceleration
TIME PASSED IN MINUTES – Since the sketch started
SYSTEM ACTIVE OR PASSIVE – Do we capture a reading?
GESTURES SPLIT INTO 4 SIGNALS – We calculate the gestures based on the extreme values of the leftmost, rightmost, topmost and bottommost signals detected.
GESTURES X AND Y – The positions of the extreme gestures are calculated

These parameters are then sent on to both the graphics sketch (in processing) and the sound collective sketch (MAXMSP) using the OSC library for processing.
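A minimal sketch of how those averaged and derived values could be computed each frame. It is illustrative only; the real logic lives in sketch_15_KINECT_INPUT_SENDSOSC, and the depth array, thresholds and variable names here are assumptions.

// Hypothetical per-frame computation of the averaged position plus velocity
// and acceleration, given an array of depth values from the Kinect.
float prevX, prevY, prevVel;

void computePersonStats(int[] depth, int w, int h, int minD, int maxD) {
  float sumX = 0, sumY = 0;
  int count = 0;
  for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
      int d = depth[x + y * w];
      if (d > minD && d < maxD) {   // pixel belongs to the person
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }
  if (count == 0) return;           // nobody detected this frame

  float avgX = sumX / count;        // average position of person in x
  float avgY = sumY / count;        // average position of person in y

  float vel = dist(avgX, avgY, prevX, prevY);   // average velocity (per frame)
  float accel = abs(vel - prevVel);             // average acceleration (per frame)

  prevX = avgX;
  prevY = avgY;
  prevVel = vel;
}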

 

Re-mapped values in the graphics sketch:

Once the OSC values are received in the graphics sketch they are further changed as follows:

oscFloatx1 = map(theOscMessage.get(0).floatValue(), 0, 640, 0, 1920);
oscFloaty1 = map(theOscMessage.get(1).floatValue(), 0, 480, 0, 1080);
oscFloatvel1 =map(theOscMessage.get(2).floatValue(), 0, 100, 0, 100);
oscFloataccel1 = map(theOscMessage.get(3).floatValue(), 0, 200, 0, 200);
oscCountTime = int(map(theOscMessage.get(4).floatValue(), 0, 20, 0, 21));
oscFloatSignal = int(map(theOscMessage.get(5).floatValue(), 0, 1, 0, 1));

oscCorners = int(map(theOscMessage.get(6).floatValue(), 0, 4, 0, 4));
oscCornersx1 = map(theOscMessage.get(7).floatValue(), 0, 640, 0, 1920);
oscCornersy1 = map(theOscMessage.get(8).floatValue(), 0, 480, 0, 1080);

// Remappings based on  the above readings from the Kinect

oscAccelMax = max(oscFloatvel1, oscFloataccel1);
oscAccelStdDev = max(oscFloatvel1, oscFloataccel1);
oscposXStdDev = max(oscFloatvel1, oscFloataccel1);
oscposYStdDev = max(oscFloatvel1, oscFloataccel1);

// These were embedded  and coded directly into MAX MSP
oscConductEvents = int(map(oscFloaty1, 0, 1920, 1, 10));
oscConductTime = map(oscFloaty1, 0, 1080, 10000, 30000);

//  Remappings

oscGraphicsTrigger=map(oscAccelMax, 0, 200, 5, 12);
oscColourTrigger=map(oscAccelMax, 0, 200, 8, 12);
oscParticlemaxspeed = map(oscFloatx1, 0, 1920, 0.8, 18);
oscParticlemaxforce = map(oscFloaty1, 0, 1080, 1, 10);
oscMixGraphics = (int(theOscMessage.get(4).floatValue() ) % 5) +1;

 

Adding Variety

 

13 and 14. Graphics Trigger Values/Color Trigger Values.

These values control whether a graphic is created and whether it changes colour. They trigger different graphic counters and add all of the “bright swarms” and “black swarms”.

If the acceleration of the person was above either Graphics Trigger Values/Color Trigger Values they would trigger new graphics and change those graphics respectively.

oscAccelMax = max(oscFloatvel1, oscFloataccel1);
oscGraphicsTrigger=map(oscAccelMax, 0, 200, 5, 12);
oscColourTrigger=map(oscAccelMax, 0, 200, 8, 12);

As shown above, the graphics trigger uses “oscAccelMax” (the max of acceleration and velocity, whichever is larger at the time). New graphics are generated when the acceleration exceeds the graphics trigger value (mapped into the range 5-12) and the colours of the graphics are changed when it exceeds the colour trigger value (mapped into the range 8-12).

These were logical choices for the activation of new shapes to be produced on the floor when the graphics trigger control value was exceeded by a fast/erratic movement.

The Graphics Trigger Values limits were further constrained to produce reasonable results. (A sudden movement would push the acceleration max up dramatically, but this movement was further confined to medium changes; otherwise hardly any graphics would be made.) The Color Trigger Values limits were restricted as well, to give a good balance between non-excited and excited graphics colours.
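Put as code, the trigger logic works roughly like this. It is a sketch of the idea, not the literal display patch; spawnNewSwarm() and changeSwarmColours() are hypothetical stand-ins.

// Hypothetical use of the two trigger values inside the graphics sketch.
void applyTriggers(float accel, float vel) {
  float accelMax = max(vel, accel);
  float graphicsTrigger = constrain(map(accelMax, 0, 200, 5, 12), 5, 12);  // limits constrained
  float colourTrigger   = constrain(map(accelMax, 0, 200, 8, 12), 8, 12);

  if (accel > graphicsTrigger) spawnNewSwarm();       // fast/erratic movement creates new graphics
  if (accel > colourTrigger)   changeSwarmColours();  // even faster movement changes their colours
}

void spawnNewSwarm()      { /* hypothetical: add a new swarm to the scene */ }
void changeSwarmColours() { /* hypothetical: switch the swarm palette */ }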

 

15. and 16. Particle max speed and Particle max force.

oscParticlemaxspeed = map(oscFloatx1, 0, 1920, 0.8, 18);
oscParticlemaxforce = map(oscFloaty1, 0, 1080, 1, 10);

These parameters control all of the particle swarms’ particle speed and force of attraction to the target. These outputs were mapped to the x and y position of the person.

 

17. Mix up Graphics 1-5.

oscMixGraphics = (int(theOscMessage.get(4).floatValue() ) % 5) +1;

An arbitrary selection (the OSC value at index 4, modulo 5, plus 1) that selects 1 of 5 values. This value controls when different graphics are seen once the correct counter values for each graphic have been triggered.

 

18,19 and 20. Gestures.

 

oscCorners = int(map(theOscMessage.get(6).floatValue(), 0, 4, 0, 4));
oscCornersx1 = map(theOscMessage.get(7).floatValue(), 0, 640, 0, 1920);
oscCornersy1 = map(theOscMessage.get(8).floatValue(), 0, 480, 0, 1080);

The gestures detect extreme Kinect depth values, either top or bottom or left or right. If detected, they directly control the positions of all of the graphics.

The gestures were probably the best visual-cue feature in the work; everyone loved them. They were triggered by the distance between the extreme depth-value pixel readings and the average of the depth pixels picked up. They were immediate, and a nice further development on the graphics side would be for them to change the size of the graphics when activated. For the sound it would be nice if they changed which generative sounds were chosen; currently they just trigger generative sounds.
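A rough sketch of how such a compass-style gesture could be detected from the depth pixels. This is illustrative; the real detection lives in the Kinect input sketch, and the threshold and parameter names are assumptions.

// Hypothetical gesture test: compare the extreme person-pixels against the
// average position; a large enough offset reads as a N/S/W/E gesture (1-4).
int detectGesture(float avgX, float avgY,
                  float topY, float bottomY, float leftX, float rightX,
                  float threshold) {
  if (avgY - topY    > threshold) return 1;   // north: a pixel reaches well above the body centre
  if (bottomY - avgY > threshold) return 2;   // south
  if (avgX - leftX   > threshold) return 3;   // west
  if (rightX - avgX  > threshold) return 4;   // east
  return 0;                                   // no gesture
}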

 

 

 

 

Instructions for compiling and running your project.

The setup is as follows.

 

What software tools were used?


The data inputs: Kinect input via Processing and OSC
Data outputs: all outputs sent to both MAXMSP AND PROCESSING

The inputs that detect whether someone is there or not come from a Kinect using infrared dots. A Kinect is needed to run the project properly via Processing. However, mouse inputs can be used for testing purposes, and a Processing sketch with the mouse set up for making up outputs has also been made. The Kinect passes on positional, velocity and acceleration data, and gesture data as well.

The input Kinect processing sketches used:                            

sketch_15_KINECT_INPUT_SENDSOSC
sketch_15mouse_MAXMSP_INPUT_SENDSOSC

 

Next, the outputs from the Kinect or the mouse feed into the sound assembly in MAXMSP, with the main patch SALL.maxpat, and into the graphics, which were made in Processing with the sketch sketch_TEST33A_9inputs_MAXMSP_INPUTOSC.

The Graphics

The graphics were based upon a Shiffman swarm algorithm (https://processing.org/examples/flocking.html), then modified to have different behaviours depending on the last distance the particle has travelled and the distance from the current particle position to the target position. The particles can apply different kinematics solutions depending on the answers to these two questions. On top of the one swarming solution, multiple swarms were coded with different kinematics so that they look completely different from the original particle swarms.

 

The processing sketch consists of 5 particle swarms called:
Ameoba02Vehicle: cell like but big
AmeobaVehicle: cell like but small
BrightVehicle: tadpole like
DarkVehicle: small black particles
EraticVehicle: like a “firework” pattern that uses a lissajous pattern

These are triggered by the Graphics Trigger Values. This in turn triggers counters for all of the graphics, which keep getting updated. However, when they are seen is decided by the “Mix up Graphics” value, which keeps changing its mind about the trigger values.

If the person does not move, or the distance to the target is within a certain range set up for each swarm, the particles will be deleted. If the distance to the target is within another certain range, all the particles switch to a random-target algorithm (see the sketch below).
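A sketch of those two distance tests. The class, ranges and field names are illustrative assumptions; in the real patch each swarm has its own ranges.

// Hypothetical per-particle distance checks driving the swarm behaviour.
class Particle {
  PVector pos = new PVector(random(width), random(height));
  PVector target = new PVector();
  float deleteRange = 15, wanderRange = 120;   // illustrative per-swarm ranges
  boolean dead = false;
}

void updateParticle(Particle p, PVector personTarget) {
  float d = PVector.dist(p.pos, personTarget);

  if (d < p.deleteRange) {
    p.dead = true;                                            // too close: remove the particle
  } else if (d < p.wanderRange) {
    p.target = new PVector(random(width), random(height));    // mid range: head for a random target
  } else {
    p.target = personTarget.copy();                           // otherwise keep seeking the person
  }
}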

The patch which contained all the graphics of the swarms:

sketch_TEST33A_9inputs_MAXMSP_INPUTOSC

 

 

 

The MAXMSP Sound Control:

All the sketches are contained within the “SALL” patch.
Which consists of 8 patches:


S_CONDUCTOR1 – decides which sounds to play and for how long (from S01 to S06). Gets readings from the Kinect based on a person’s position in X and Y. All of the generative sounds and samples only play for “x” seconds if called by S_CONDUCTOR1.

The following sounds and samples can be called:
S01_STARTA – A 16-voice poly patch that uses noise with a reson filter to output random noise values.
S00_FOURIER – Uses noise in a Fourier transform to output discrete packets of noise that slowly dissipate.
S02B_ACCEL – Another filter, comb~, uses a sample input to mix the timings of the samples.
S03_SAMLPLES – Uses 9 samples in the patch and mixes them together with different timings.
S04_ACCEL – Uses a fffb~ filter, a subtractive filter, with pink noise to give discrete packets of noise similar to the S01_STARTA patch.
S05_ACCEL – Based on the fm-surfer patch; a frequency modulation synthesiser.
S06_SAMPLES_RESTART – Uses another set of 9 samples in the patch and mixes them together with different timings.

03_SAMLPLES_REV & 06_SAMLPLES_REV – Same as the other sample patches but only loads one sample at a time. Was not used.

 

Max code related origins:

The code for sound was based upon :

S00_FOURIER :  This code was partially based on The forbidden planet patch.

S01_STARTA patch:  This code was partially based on Polyphony Tutorial 1: Using the poly~ Object

S02_ACCEL patch: This code was based on MSP Delay Tutorial 6: Comb Filter

S04_ACCEL patch:  This code was based on Filter Tutorial 4: Subtractive Synthesis

S05_ACCEL patch : This code was partially based on FM-SURFER patch.

S03_SAMLPLES & S06_SAMPLES_RESTART: This code was partially based on the Sampling Tutorial 4: Variable-length Wavetable

03_SAMLPLES_REV & 06_SAMLPLES_REV: This code was partially based on the Sampling Tutorial 4: Variable-length Wavetable

S_CONDUCTOR1 patch: My code.

 

 

 

FILTER INFORMATION (mainly for my own comprehension)

 

The Fourier Transform Patch

This graphic EQ implementation is based on a frequency-domain signal processing technique. The input signal is converted to a frequency-domain signal using the Fast Fourier Transform (FFT). It is then convolved (complex multiply) with another frequency-domain signal, based on a function describing the desired spectral attenuation. The resulting signal is then converted back to a time-domain signal using the inverse FFT (IFFT). The number of “bands” in this implementation is given by the FFT’s window size. In this case, it is 1024 samples long, yielding 1024 / 2 = 512 bands, evenly covering the frequency range (sampling rate / 2). Each band’s width in Hz is (1/2 × sampling rate) / 512, around 43 Hz at a 44.1 kHz sampling rate.

Comb filter

An example of such an object is comb~, which implements a formula for comb filtering. Generally speaking, an audio filter is a frequency-dependent amplifier; it boosts the amplitude of some frequency components of a signal while reducing other frequencies. A comb filter accentuates and attenuates the input signal at regularly spaced frequency intervals — that is, at integer multiples of some fundamental frequency.

Technical detail: The fundamental frequency of a comb filter is the inverse of the delay time. For example, if the delay time is 2 milliseconds (1/500 of a second), the accentuation occurs at intervals of 500 Hz (500, 1000, 1500, etc.), and the attenuation occurs between those frequencies. The extremity of the filtering effect depends on the factor (between 0 and 1) by which the feedback is scaled. As the scaling factor approaches 1, the accentuation and attenuation become more extreme. This causes the sonic effect of resonance (a ‘ringing’ sound) at the harmonics of the fundamental frequency.

The comb~ object sends out a signal that is a combination of a) the input signal, b) the input signal it received a certain time ago, and c) the output signal it sent that same amount of time ago (which would have included prior delays). In the inlets of comb~ we can specify the desired amount of each of these three (a, b, and c), as well as the delay time (we’ll call it d).

Technical detail: At any given moment in time (call it t), comb~ uses the value of the input signal x(t) to calculate the output y(t) in the following manner:
y(t) = a·x(t) + b·x(t−d) + c·y(t−d)
The b·x(t−d) term in the equation is called the feedforward and the c·y(t−d) term is the feedback.
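For my own reference, the same difference equation written out as a tiny Processing/Java function. This is a sketch of the formula, not the internals of comb~; a, b, c correspond to the three amounts and d is the delay expressed in samples.

// Minimal comb filter: y(t) = a*x(t) + b*x(t-d) + c*y(t-d), processed offline
// over a float array of samples.
float[] combFilter(float[] x, float a, float b, float c, int d) {
  float[] y = new float[x.length];
  for (int t = 0; t < x.length; t++) {
    float xd = (t >= d) ? x[t - d] : 0;   // feedforward tap
    float yd = (t >= d) ? y[t - d] : 0;   // feedback tap (includes prior delays)
    y[t] = a * x[t] + b * xd + c * yd;
  }
  return y;
}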

The subtractive fffb~ filter

The fffb~ object stands for Fast, Fixed, Filter Bank. Unlike the cascade~ object, which implements a number of biquad~ filters in series, the fffb~ object arranges a number of reson~ objects in parallel, which is to say that the settings of one filter will not affect any of the others. The fffb~ object takes a number of arguments which set its behavior: the number of filters, the base frequency of the filter bank, the ratio between filters, and the Q of the filters. All of the parameters of the object with the exception of the number of filters can be changed with Max messages; the number is fixed because, as we can see, each filter connects to a separate outlet. This allows us to create filter banks, where we can ‘tap’ each bandpass filter individually:

FM Surfer
“The first step is to admit that no one really understands FM synthesis. There, doesn’t that make you feel better? By putting your trust in a higher power (in this case, the “random” object), you can spare yourself and your loved ones a lot of rebooting.”

This patch simulates a TX81Z-style FM synthesizer, but the parameters are randomly generated. John Cage meets John Chowning.

Poly~
The poly~ object takes as its argument the name of a patcher file, followed by a number that specifies the number of copies (or instances) of the patch to be created. You’ll want to specify the same number of copies as you would have had to duplicate manually when implementing polyphony the old-fashioned way. Here’s an example of the poly~ object.

Working with samples, I came up with 2 solutions for handling the 10 possible samples to call. The two solutions are shown in these patches:

Both of them use:

wave~ reads from a portion of a buffer~ to produce a repeating waveform, given a signal input that goes between 0 and 1 (for example, from a phasor~) to define the position in the buffer. Each sample has been given a chosen start point in the buffer and a chosen playback speed.
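The wave~/phasor~ idea in a few lines of Processing-style code. This is illustrative only; in Max the buffer read happens at signal rate, and the playback speed corresponds to the phasor~ frequency.

// Hypothetical wavetable read: a phase value between 0 and 1 (what phasor~
// provides) indexes into a chosen region of the sample buffer.
float readWavetable(float[] buffer, float phase, int startIndex, int regionLength) {
  int i = startIndex + int(phase * (regionLength - 1));   // 0-1 phase -> position in the region
  return buffer[i];
}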

Both patches have 9 different samples but how they go from one sample to another is different.

S06_SAMPLES_RESTART – Uploads all nine samples at the start but only plays one sample at a time. It sends volume messages to all nine samples, which change depending on the selected sample to be played. It’s a giant mixing desk.

06_SAMLPLES_REV – imports the chosen samples on the fly. Only one sample is ever loaded to be played, but it still uses the volume messages to control all nine samples’ volumes.

 

 

Summary of running the installation:

 

 

A reflection on how successful the project was in meeting my creative aims.

For a hard-coded solution the results were only about 80% of what I achieved with Machine Learning, yet the results were still very good. It was impossible to say which was the Machine Learning version and which was hard coded; both outcomes were very good.

The hardcoded gestures were one of the best features of the work; they made the whole piece feel so much a part of the user interacting with it. Users felt much more connected to the piece; it felt personal to them. The code was quite logical, and further code could predict which were the left and right hands, leading to even more gesture outcomes.

The sound, even though controlled mainly by the X and Y position of the person, also worked surprisingly well in using a “Conductor” to decide which sounds should be played and when. The calling of the patches through the conductor (although a very straightforward solution) also gave me an open solution: it could be endlessly expanded, as the sounds called could be endless and generative (not fixed). I was very pleased with this outcome. Using the acceleration values as triggers also worked well for the generative sounds.

The whole piece came about from the S00_FOURIER generative sound, where I kept exploring how I could trigger discrete packets of Fourier noise to create a sound-reactive generative piece of work that relied on triggering events. It was only after achieving this that I decided to explore a full body of work reacting with sound and graphics.

What works well?  

The best features were the conductor and the gestural features. They were intuitive and liked a lot by most users. The conductor played the generative music and played the samples when the person wasn’t moving much. It worked perfectly. The conductor decided what samples to play and how long they lasted for depending on the position of the person (either the x or y co-ordinate of a person). This could easily be changed to another parameter in Wekinator. The gestural features were based on collecting extreme values of the depth readings in terms of their position. From the extreme data positions like a compass (N-S-E-W) I was able to decide where the graphics should move to and when if a person made a gesture and this again worked so well for a relatively simple solution.

What challenges did you face? 

MAXMSP can be quite formidable in terms of approaching and solving problems. A lot of the results were purely empirical in nature, with occasional trawling of the net for some help. But a good trait of mine is that if I can’t do something the first time I will try 20 times, and eventually I achieve my goals or get to a solution that is acceptable.

What might you change if you had more time?

Apart from the current direct outputs, the coding could always be developed, and if the work were to carry on with two contemporary dancers it would need a more developed visual and audio story. That is something I would be keen to do in the future, and it would be great to document the results. I would also take more time to fully explore the wealth of MAXMSP; this software has amazing scope and ability, and I need far more time to explore what it can achieve.

MY MAXMSP FAILURES

03_SAMLPLES_REV & 06_SAMLPLES_REV – Even though the code was clever (it loaded the samples one by one) I didn’t like their transitions, so I decided to use the much heavier solution of loading up all the samples at once. Why? The transitions of the patch that used all the loaded sounds to control which sample was heard were gentler than the abrupt changes of the patch that only loaded one sample at a time.

S04B_ACCEL – This patch also used presets to change the sound that would be played, depending on the person’s position converted into a central frequency for the patch. However, although the presets worked fine individually, going from one preset to a new one caused loud glitches, so I decided not to use it. (The same idea, however, worked in patch 02B and so was used there.)

 

FURTHER RECORDINGS

Night-time recording:

 

Data set 01 only:

Data set 01 and 02 alternating:

References:

 

MAXMSP 

https://www.youtube.com/watch?v=9gQAHf0Sf9I

 

PROCESSING SKETCHES

https://www.youtube.com/watch?v=IoKfQrlQ7rA&t=533s

https://www.youtube.com/watch?v=MkXoQVWRDJs

https://www.youtube.com/watch?v=AaGK-fj-BAM&t=927s

 

OSC PROCESSING

https://www.youtube.com/watch?v=2FG7AszjWDc

OSC MAX
https://www.youtube.com/watch?v=bupVHvMEAz0&t=942s

OSC  OPENFRAMEWORKS
https://www.youtube.com/watch?v=TczI-tSOIpY&t=223s

 

INSPIRATION

https://vimeo.com/39332848

https://vimeo.com/69709493

https://vimeo.com/14811642

https://vimeo.com/57689391

 


Posted in max

09 The Darkness Serenade Machine Learning

 

PROJECT BACKGROUND

The Darkness Serenade by Colin Higgs

medium: interactive installation
space requirements: 2.5 square metres

There is something very engaging about the unexpected. The unexpected in this generative art installation is in the direct and gestural movement the spectator can have on a graphical interactive display and how they react to or notice the sounds they are making. There is a lot for the person to learn about the work. How they fit into it. What they add to the work. How they can construct both unique and unusual sounds and graphics that they feel are part of a performance. Their performance. They are the conductors of the piece.

The project is an installation of a symbolic journey in darkness with an element of hope. The spectator embarks on a symbolic journey of interaction and creation of both sound and visuals. The performance starts when the spectator enters a small light-projected arena. They stay in a centre area until the sensors pick them up and the whole system is reset.

The graphics, which are projected onto the floor, are reset, and so is the sound score, to nothing. The person notices that as they travel around the projected arena the graphics seem to be approaching them or backing off from them depending on the type of motion they make. They also notice that the sounds they create are made by their own movements. Each time they move they create a series of sounds. The darkness side of the piece corresponds to a black generative graphic that spreads towards the spectator if she/he does not move.

The hope of the piece lies in the somewhat dissonant landscape sounds created by the movement of the person. Partly noise and partly tonal.

The balance between the graphic and sound (darkness and hope) lies between the number of movements and the distance moved, relating to the number of sound samples generated and the way the graphic moves towards or away from the spectator. The sound is also influenced by the person’s movement (motion detection algorithm).

Creative Motivation

The motivation for the piece was driven by a soundscape I made in MAXMSP using a Fourier transform (this was the sound used in MAXMSP called “S00_FOURIER”). I found the sound quite haunting and began to think about how visuals and interaction could form the basis of an artwork.

Future Development

For further development I would like to work with a contemporary dancer, using the artwork as part of a performance. I would see that as a first step with the work; I would then intend to develop a group performance (2 or 3 people).

Why make the work? When I see a person interact with my artwork I sometimes feel it elevates my work: it takes it in a new, unexpected direction. This for me is quite beautiful and very rewarding, so that’s why I wanted to make it. In terms of who it is for, I would say the basic version would be for everyone. People find it stimulating and unexpected (especially the gesture interaction). In terms of a performance (dance) piece I would use it only as the start of something and develop it further into a narrative piece of audio-visual work.

 

 

 

THE RESULT

 

 

 

 

 

 

 

 

MACHINE LEARNING


I wanted to use Machine Learning with this work to experiment with unusual outcomes. This is something Machine Learning can do very well, as it can map non-linearly and quite arbitrarily using different features. It is very fast to train when using Wekinator, and changes are made quickly as well. An added bonus in a new version of Wekinator is the ability to use multiple training sets in one piece of work, switching between them whenever you would like; this too could be decided by Wekinator. When you see all these possibilities, Machine Learning becomes a valuable tool in all interactive installations.

What datasets did you use?
I used a mixture of straightforward mappings and trained data, based either on a logical solution or on features chosen arbitrarily. Each feature and the data sets are discussed in greater detail below.

The final project used two trained data sets:

WEK_TRAIN09C and WEK_TRAIN08D. Both of these were used, switching between each other every minute.

WEK_TRAIN08D : Training data set 01

WEK_TRAIN09C: Training data set 02

 

 

 

 

 

This project has an overlap with my MAXMSP project. Half of this project was designed for a MAXMSP sound interactive experience and the graphic experience was designed for machine learning. The Max project was further separated by removing all the machine learning code and using only hardcoded inputs and outputs. A summary of the differences between the 2 projects is as follows:

PROJECT: MAXMSP
INPUTS: Kinect -> OSC
OUTPUTS: Processing graphics / MAXMSP

PROJECT: MACHINE LEARNING
INPUTS: Kinect -> OSC -> WEK HELP / WEK OSC
OUTPUTS: Processing graphics / MAXMSP

The machine learning installation is using 20 trained inputs. 
The MAXMSP installation is using 9 hardcoded inputs. 

See figure below:

Hardcoded inputs:

PERSON POSITION X AND Y
VELOCITY VALUES ABS
ACCELERATION VALUES
TIME PASSED IN MINUTES
SYSTEM ACTIVE OR PASSIVE
GESTURES SPLIT INTO 4 SIGNALS
GESTURES X AND Y

Wekinator Helper added values:
Accel Max. Value over the last 10 readings
Acceleration Standard Deviation
Position X and Y Standard Deviation

Wekinator trained outputs:

No.                            Name

1,2                             PERSON POSITION X AND Y
3                                VELOCITY VALUES ABS
4                                ACCELERATION VALUES
5                                TIME PASSED IN MINUTES
6                                SYSTEM ACTIVE OR PASSIVE
7                                Accel Max. Value over the last 10 readings
8                                Acceleration Standard Deviation
9,10                           Position X and Y Standard Deviation
11                              Conductor Events 1-10
12                              Conductor Time milliseconds 0-60000
13                              Graphics Trigger Values
14                              Color Trigger Values
15                              Particle Maxspeed
16                              Particle Maxforce
17                              Mix up Graphics 1-5
18                              Corners 1-4
19,20                         Gesture positions X Y

 Machine Learning Development

The Machine Learning development started with a straightforward solution of direct mappings to match the hard-coded versions (outputs 1 to 6). I did this so that communication was coming from only one source rather than 2 separate sources, and also to have the ability to train non-linearly. The position of the person is a very important interaction with the installation: graphically this is fundamental to making people feel they are interacting with the piece. Wherever they move, the graphics tend to follow suit. All the data sets were set up by me using my own data.

What machine learning and/or data analysis techniques have you used, and why? 

The project uses Wekinator Helper and Wekinator main, trained to select different graphics, whether the graphics approach or move away from the user, the force of attraction to the user, when new different graphics are used, and when, what and for how long generative music and samples are used. The algorithms used were polynomial regression, neural networks, or k-nearest neighbour classification.

 

Adding Variety

11 and 12    Conductor Events 1-10.   Conductor Time milliseconds  0-60000.

These values are used for MAXMSP. They control which sounds are made and for how long. These are in turn triggered by the pos y of the person: depending on where the person is standing, two different sounds will be called, either generative or generative plus samples. This is what the Conductor Event does for the project.

The conductor events were trained with pos y co-ordinates: depending on where you are standing, a different value between 1-10 will be defined. Conductor Events used k-nearest neighbour to select events; Conductor Time used polynomial regression to select the time, also based on pos y.

The results were very good. It was not predictable which time would be chosen and which event would be chosen as the person was generally moving continuously. Very happy with this setup. 

In set 02 of training data, Conductor Events used pos x and Conductor Time was also trained on pos x. Although in theory the results should be similar, they were not: people tend to move less in X than in Y!

Also, the standard deviation of acceleration, or acceleration above a limit value, would trigger generative sounds when they were chosen to be played by the Conductor Events. This was a key aspect of the setup.

 

13 and 14. Graphics Trigger Values/Color Trigger Values.

These values controlled if a graphic is created and if it changes colour.  They trigger different graphic counters and add all of the “bright swarms”  and “black swarms”.  

If the acceleration of the person was above either Graphics Trigger Values/Color Trigger Values they would trigger new graphics and change those graphics respectively.

These values were trained in Wekinator using polynomial regression and a neural network. I had some slight issues training with zero, using values that were only available if there was an acceleration: I couldn’t get an exactly zero value. I also had a problem training to non-zero starting points; the models would not give start and end values that didn’t start at 0. However, these were minor problems, as remapping the Wekinator values in the destination program was easy.

In set 01 of training data, the Graphics Trigger Values were trained against the acceleration max over a window of 10 with polynomial regression, and the Color Trigger Values against the acceleration standard deviation over a window of 10 with a neural network. Both gave me good results.

These were logical choices for the activation of new shapes to be produced on the floor when the graphics trigger control value was exceeded by a fast/erratic movement.

The Graphics Trigger Values limits were further constrained to produce reasonable results. (A sudden movement would push the acceleration max up dramatically, but this movement was further confined to medium changes; otherwise hardly any graphics would be made.) The Color Trigger Values limits were restricted as well, to give a good balance between non-excited and excited graphics colours.

These were good parameters to get Wekinator to train with as they had a lot of variety in readings and were not constant. 

In set 02 of training data I chose the position standard deviation in x and y for both the graphics trigger and the colour trigger changes. These were more sensitive than the acceleration values and gave better results.

 

15. and 16. Particle max speed and Particle max force.

These parameters control all of the particle swarms’ particle speed and force of attraction to the target.

These outputs were trained on the x and y standard deviations respectively, with polynomial regression and a neural network. I didn’t notice any real difference in performance between the polynomial regression and the neural network. They work well but had to be further refined to less extreme outputs. However, they produced unexpected patterns of swarming behaviour, ranging from very small movements to forming circular patterns. The small movements were expected but the circular patterns were not. That was really nice and totally unexpected.

In set 02 of training data I chose to train on just the position x and y.

 

17. Mix up Graphics 1-5.

This value controls when different graphics are seen when the correct counter values for each graphics have been triggered. 

The data was trained using k-nearest neighbour classification with an input of pos y. The “Mix up Graphics” output gives values from 1-5 based on the Y position of the user. The results were good, as these outputs were changing all the time, switching to different sets of visuals constantly. It was totally unpredictable which solution would be chosen, depending on a counter trigger of accelerations reached over a certain time, with that limit value set by Wekinator and changing all the time depending on where the user is positioned.

In set 02 of training data I chose to train on just the position x.

 

18,19 and 20. Gestures.

The gestures detect extreme Kinect depth values, either top or bottom or left or right. If detected, they directly control the positions of all of the graphics.

The gestures were probably the best visual-cue feature in the work; everyone loved them. They were triggered by the distance between the extreme depth-value pixel readings and the average of the depth pixels picked up. They were immediate, and a nice further development on the graphics side would be for them to change the size of the graphics when activated. For the sound it would be nice if they changed which generative sounds were chosen; currently they just trigger generative sounds. Using Machine Learning to generate the gestures could be possible, but you would need a visual cue for the gestures, as the current data inputs are not so much time related as visually related. The current visual traits could possibly work, but not with a Kinect.

 

 

 

 

Instructions for compiling and running your project.

The setup is as follows.

 

What software tools were used?


The data inputs: Kinect input via Processing and OSC
Data outputs: all outputs sent to Wekinator Helper and Wekinator
Wekinator Outputs: Set to MAXMSP and  Processing for sound and graphics outputs

The inputs that detect whether someone is there or not come from a Kinect using infrared dots. A Kinect is needed to run the project properly via Processing. However, mouse inputs can be used for testing purposes, and a Processing sketch with the mouse set up for making up outputs has also been made. The Kinect passes on positional, velocity and acceleration data, and gesture data as well.

The Kinect processing sketches used:

sketch_14_WEK_INPUT_SENDSOSC
sketch_14mouse_WEK_INPUT_SENDSOSC

 

Next, the outputs from the Kinect or the mouse feed into Wekinator Helper, and these in turn feed into Wekinator. Wekinator Helper adds on 4 output values:

Accel Max. Value over the last 10 readings
Acceleration Standard Deviation
Position X and Y Standard Deviation

which are all used in the Wekinator main project.
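A sketch of the kind of windowed features the helper produces. This is illustrative Processing code under the assumption of a 10-reading window; the helper computes its own versions of these internally.

// Hypothetical rolling-window features over the last 10 acceleration readings.
float[] window = new float[10];
int idx = 0;

void addReading(float accel) {
  window[idx] = accel;
  idx = (idx + 1) % window.length;
}

float windowMax() {
  float m = window[0];
  for (int i = 1; i < window.length; i++) m = max(m, window[i]);
  return m;
}

float windowStdDev() {
  float mean = 0;
  for (float v : window) mean += v;
  mean /= window.length;
  float variance = 0;
  for (float v : window) variance += (v - mean) * (v - mean);
  return sqrt(variance / window.length);
}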

After that, all the Wekinator outputs (shown in the machine learning section) feed back into the graphics display sketch (in Processing) that displays all the particle swarm systems, and they further re-send the outputs as inputs to the sound control module in MAXMSP. Two training sets of data were used in the final project.

 

The Graphics

The graphics were based upon a Shiffman swarm algorithm (https://processing.org/examples/flocking.html), then modified to have different behaviours depending on the last distance the particle has travelled and the distance from the current particle position to the target position. The particles can apply different kinematics solutions depending on the answers to these two questions. On top of the one swarming solution, multiple swarms were coded with different kinematics so that they look completely different from the original particle swarms.

 

The processing sketch consists of 5 particle swarms called:
Ameoba02Vehicle: cell like but big
AmeobaVehicle: cell like but small
BrightVehicle: tadpole like
DarkVehicle: small black particles
EraticVehicle: like a “firework” pattern that uses a lissajous pattern

These are triggered by the Graphics Trigger Values. This in turn triggers counters for all of the graphics, which keep getting updated. However, when they are seen is decided by the “Mix up Graphics” value, which keeps changing its mind about the trigger values.

If the person does not move, or the distance to the target is within a certain range set up for each swarm, the particles will be deleted. If the distance to the target is within another certain range, all the particles switch to a random-target algorithm.

Every minute the Wekinator training-set data was switched between the 2 training sets from the graphics control and display sketch made in Processing (a minimal toggle sketch follows below):

sketch_TEST33A_20inputs_WEK_INPUTOSC

WEK_TRAIN08D: Training data set 01
WEK_TRAIN09C: Training data set 02
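The alternation itself is simple; something like this minute-based toggle would do it. This is illustrative only, and the actual mechanism for telling Wekinator which trained set to use is not shown here.

// Hypothetical one-minute toggle between training set 01 and 02.
int currentSet = 1;
int lastSwitch = 0;

void checkTrainingSetSwitch() {
  if (millis() - lastSwitch > 60 * 1000) {      // a minute has passed
    currentSet = (currentSet == 1) ? 2 : 1;     // flip between set 01 and set 02
    lastSwitch = millis();
    // ...then notify Wekinator / the graphics sketch that the set has changed
  }
}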

 

 

 

The MAXMSP Sound Control:

All the sketches are contained within the “SALL” patch.
Which consists of 8 patches:


S_CONDUCTOR1 – decides which sounds to play and for how long (from S01 to S06). Gets readings from Wekinator. All of the generative sounds and samples only play for “x” seconds if called by S_CONDUCTOR1.

The following sounds and samples can be called:
S01_STARTA – A 16-voice poly patch that uses noise with a reson filter to output random noise values.
S00_FOURIER – Uses noise in a Fourier transform to output discrete packets of noise that slowly dissipate.
S02_ACCEL – Another filter, comb~, uses a sample input to mix the timings of the sample.
S03_SAMLPLES – Uses 9 samples in the patch and mixes them together with different timings.
S04_ACCEL – Uses a fffb~ filter, a subtractive filter, with pink noise to give discrete packets of noise similar to the S01_STARTA patch.
S05_ACCEL – Based on the fm-surfer patch; a frequency modulation synthesiser.
S06_SAMPLES_RESTART – Uses another set of 9 samples in the patch and mixes them together with different timings.

Max code related origins:

The code for sound was based upon :

S00_FOURIER :  This code was partially based on The forbidden planet patch

S01_STARTA patch:  This code was partially based on Polyphony Tutorial 1: Using the poly~ Object

S02_ACCEL patch: This code was based on MSP Delay Tutorial 6: Comb Filter

S04_ACCEL patch:  This code was based on Filter Tutorial 4: Subtractive Synthesis

S05_ACCEL patch : This code was partially based on FM-SURFER patch.

S03_SAMLPLES & S06_SAMPLES_RESTART: This code was partially based on the Sampling Tutorial 4: Variable-length Wavetable

03_SAMLPLES_REV & 06_SAMLPLES_REV: This code was partially based on the Sampling Tutorial 4: Variable-length Wavetable

S_CONDUCTOR1 patch: My code.

Summary of running the installation:

 

 

A reflection on how successful the project was in meeting my creative aims.

The project more than matched my aims, as for a lot of the results I can’t predict the outcome: it’s too complex. The difference between a machine learning environment and a purely programmed one is fascinating: I cannot predict what will happen with machine learning, especially when I start using standard deviation outputs. Adding in this complexity has given the work a different feeling. It seems to have a nature of its own; the graphics seem more alive to me. For me the results of the sound were amazing as well for two weeks’ work. They blended brilliantly. The samples would kick in when the person stopped moving; the generative sound would kick in when acceleration values above a limit were detected. That, combined with the S_CONDUCTOR1 patch deciding when and for how long to play samples, gave a very organic feeling to the installation.

What works well?  

The best features were the conductor and the gestural features. They were intuitive and liked a lot by most users. The conductor played the generative music and played the samples when the person wasn’t moving much. It worked perfectly. The conductor decided what samples to play and how long they lasted for depending on the position of the person (either the x or y co-ordinate of a person). This could easily be changed to another parameter in Wekinator. The gestural features were based on collecting extreme values of the depth readings in terms of their position. From the extreme data positions like a compass (N-S-E-W) I was able to decide where the graphics should move to and when if a person made a gesture and this again worked so well for a relatively simple solution.

What challenges did you face? 

The initial challenge was knowing the best way to use Wekinator: it was not so much about Wekinator’s algorithms as it was about creating new outputs that were coming only from Wekinator and being created by Wekinator. It was a steep learning curve to understand that the creative outputs were the most important use of Wekinator. However, once I understood this, the creation of the outputs was relatively straightforward. Obviously, the limits to these outputs are only set by the time it takes to make them and one’s imagination. The results are great.

What might you change if you had more time?

Apart from the direct outputs: everything. I would just keep on experimenting until I could envisage as many variations as possible. The only way to really understand the best use of the tool is to keep trying out different solutions.

 

 

FURTHER RECORDINGS

Night-time recording:

 

Data set 01 only:

Data set 01 and 02 alternating:

References:

 

MAXMSP 

https://www.youtube.com/watch?v=9gQAHf0Sf9I

 

PROCESSING SKETCHES

https://www.youtube.com/watch?v=IoKfQrlQ7rA&t=533s

https://www.youtube.com/watch?v=MkXoQVWRDJs

https://www.youtube.com/watch?v=AaGK-fj-BAM&t=927s

 

OSC PROCESSING

https://www.youtube.com/watch?v=2FG7AszjWDc

OSC MAX
https://www.youtube.com/watch?v=bupVHvMEAz0&t=942s

OSC  OPENFRAMEWORKS
https://www.youtube.com/watch?v=TczI-tSOIpY&t=223s

 

INSPIRATION

https://vimeo.com/39332848

https://vimeo.com/69709493

https://vimeo.com/14811642

https://vimeo.com/57689391

 

17. Sensing Climate Change and Expressing Environmental Citizenship in Program Earth

The use of low-cost environmental sensor technology to enable citizens to become engaged in their environments. Sensors, along with practices, become environmental (smart cities that join up gadgets, sensors and smartphones). The citizen becomes a sensor node in these scenarios. Using digital technology to enact engagement. Urban sensing.

 

Chapter 4: Sensing Climate Change and Expressing Environmental Citizenship in Program Earth: Environmental Sensing Technology and the Making of a Computational Planet (Electronic Mediations) by Jennifer Gabrys

We are now in the mountains and they are in us.
—John Muir, My First Summer in the Sierra 

We are in the world and the world is in us.
—Alfred North Whitehead, Modes of Thought 

practices of climate change monitoring in the Arctic and ask: How do we tune into climate change through sensing and monitoring practices? What are the particular entities that are in-formed and sensed? How do the differing monitoring practices of arts and sciences provide distinct engagements with the experiences of measurement and data? And what role do more-than-humans have in expressing and registering the ongoing and often indirect effects of climate change, such that categories and practices of “citizenship” and citizen sensing might even be reconstituted?

Climate change becomes a recurring factor that in-forms how and why environmental monitoring takes place and the environmental data that might be generated. Sensing of temperature in air, water, and soil; inventories of organisms and pollutants; and samples of pH in lakes and streams are examples of monitoring practices that can accumulatively demonstrate how environments are changing in relation to a warming planet.

ECOLOGICAL OBSERVATORIES AND MONITORING ENVIRONMENTAL CHANGE 

persistent organic pollutants (POPs) 

to increasing temperatures and shifts in land use, the Arctic is a region undergoing considerable changes 

While environmental monitoring at observatories may not initially have been established to study climate change, the decades-long stores of data that observatories now hold have often provided useful records for understanding how environments have changed over time.

Planetary warming is taking place in much greater intensity in the Arctic regions due to the circulation of atmospheric and ocean currents toward the northern regions.7

Fifty Essential Variables 

In order to gather the data that are the basis for observed change, direct and ongoing measurements as well as historic and proxy measurements are gathered in relation to fifty “essential climate variables.” These variables include everything from air and sea surface temperatures to carbon dioxide levels, ocean acidity, soil moisture, and albedo levels (or the ability of surfaces to reflect solar radiation).

Measurements gathered in relation to more contemporary events are collected through airborne instruments, satellites, ocean vessels, and buoys, as well as terrestrial monitoring stations such as carbon flux towers that can be found dotted around the globe.

most discussions focus on the rising concentrations of CO2, which correlate to increasing global average temperatures.

The concentration of CO2 currently hovers around 400 parts per million (ppm), a level last reached in the mid-Pliocene, two to four million years ago, when sea levels were up to twenty meters higher than present-day levels.11

Climate change monitoring produces pronounced and startling encounters that unfold across environmental datasets. Rates of greenhouse gas rises in the atmosphere are referred to as “unprecedented”16 and connected to increases in air temperature in the troposphere, marine air temperature, sea surface temperature, ocean heat content, temperature over land, water vapor, and sea levels, as well as decreases in glacier volume, snow cover, and sea ice.

SENSE DATA, SENSING DATA 

environmental sensors have become a common device within ecological study. 

creative practitioners are also developing new practices in relation to computational sensors in order to gather and repurpose distinct sense data about environmental phenomena. 

including geophones and hydrophones, YSI water sensors, light sensors, and more 

it has historically had an absence of biota such as algae. But through the collecting and recording of sense data including temperature, water samples, sediment samples, oxygen measurements, and analysis of diatoms as bioindicators, evidence of increasing levels of biota has emerged. The warming of Arctic lakes, in other words, is in part expressed through the increasing numbers of organisms populating these waters.

Whether it be for meteorological, hydrological, oceanographic or climatological studies or for any other activity relating to the natural environment, measurements are vital. Knowledge of what has happened in the past and of the present situation can only be arrived at if measurements are made. Such knowledge is also a prerequisite of any attempt to predict what might happen in the future and subsequently to check whether the predictions are correct.25

Also included is the Arctic Perspective Initiative, an artists’ project that develops a DIY environmental sensor network for studying flora and fauna through computational techniques, and which focuses on installing sensors for community-oriented scientific research.

Creative-practice projects that deploy environmental sensors often focus on ways of monitoring pollution. 

We began our conversation by asking who or what is a citizen, and how different notions of “citizen” might influence the type of sensing that might take place. We also asked how citizen sensing might shift when we trouble assumptions about who or what is a citizen in these projects.

We discussed additional examples of citizen-sensing projects from Beatriz da Costa’s Pigeon Blog, to Safecast, a project for detecting radiation after the Fukushima nuclear fallout in 2011, to the dontflush.me project, which uses proximity sensors to inform New Yorkers when to avoid flushing the toilet when the sewer system may be at capacity and in danger of dispersing waste into the harbour.

Other projects, such as Vatnajökull (the sound of ), allow listeners to phone up a melting glacier in Iceland, while Pika Alarm puts mountain rodents to work as sentinel species for climate change.37 

While we had initially hoped to develop speculative practices around what other possible forms of citizen-sensing practices might look like if new formations of citizens were introduced, many discussants were concerned about the use of the term “citizen” to describe more-than-humans. Don’t citizens have free will and rights? Aren’t animals simply the props for human experiments into sensing? Are these sensing practices perhaps even exploitative? How could a tagged reindeer possibly be counted as a citizen? In this way, one discussant asked, “Is this about trying to talk with dolphins? I know of an artist who tried to do that and he went a bit mad, actually.”

One project reference, the Million Trees NYC project in New York, was cited as an example of a practice where crowdsourcing was used to identify where trees might be planted in the city.38 Once planted, the trees could be monitored in order to ensure their longevity. Such a practice of urban tree stewardship implies a relationship with the trees, and environmental citizenship might be practiced through sensing—with or without computational devices—trees and their local environment.

“We are now in the mountains and they are in us.” Included in the epigraph to this chapter, Muir’s statement seems to be a recognition of the ways in which milieus and subjects commingle.

Sensor networks are not just formed by bits of circuitry and code but also in-formed through exchanges of energy, materializations, and relations that concresce across organisms and that are brought into practices of measurement with climate-change monitoring. 

What a complicated and complicating approach to citizen sensing suggests is that we not simply consider what monitoring data makes evident but also experiment with the new subjects, experiences, relationships, and milieus that monitoring practices might set in motion. With such an approach, we might also develop ways to invent new collectives and politics relevant to the concerns of climate change.

 

 

http://www.sentientcity.net/exhibit/?p=5

 

The Living Architecture Lab at Columbia University Graduate School of Architecture, Planning and Preservation (Directors David Benjamin and Soo-in Yang) and Natalie Jeremijenko, Environmental Health Clinic at New York University

Network of floating tubes at Pier 35 in the East River.

Amphibious Architecture submerges ubiquitous computing into the water—that 90% of the Earth’s inhabitable volume that envelops New York City but remains under-explored and under-engaged. Two networks of floating interactive tubes, installed at sites in the East River and the Bronx River, house a range of sensors below water and an array of lights above water. The sensors monitor water quality, presence of fish, and human interest in the river ecosystem. The lights respond to the sensors and create feedback loops between humans, fish, and their shared environment. An SMS interface allows citizens to text-message the fish, to receive real-time information about the river, and to contribute to a display of collective interest in the environment.

Posted in art

10. non-human turn & Speculative Realism

Richard Grusin Non-Human Turn

The nonhuman turn can be traced to a variety of different developments from the last decades of the twentieth century:

• Actor-network theory, particularly Bruno Latour’s career-long project to articulate technical mediation, nonhuman agency, and the politics of things
• Affect theory, both in its philosophical and psychological manifestations and as it has been mobilized by queer theory
• Animal studies, as developed in the work of Donna Haraway and others, projects for animal rights, and a more general critique of speciesism
• The assemblage theory of Gilles Deleuze, Manuel DeLanda, Latour, and others
• New brain sciences like neuroscience, cognitive science, and artificial intelligence
• The new materialism in feminism, philosophy, and Marxism
• New media theory, especially as it has paid close attention to technical networks, material interfaces, and computational analysis
• Varieties of speculative realism including object-oriented philosophy, neovitalism, and panpsychism
• Systems theory, in its social, technical, and ecological manifestations

The posthuman entails a historical development from human to something after the human, even as it invokes the imbrication of human and nonhuman in making up the posthuman turn. The nonhuman turn, on the other hand, insists (to paraphrase Latour) that “we have never been human” but that the human has always coevolved, coexisted, or collaborated with the nonhuman—and that the human is characterized precisely by this indistinction from the nonhuman.

Cubing
1) describe it,
2) compare it,
3) associate it with something else you know,
4) analyse it (meaning break it into parts),
5) apply it to a situation you are familiar with,
6) argue for or against it.

Latour: society is a complex assemblage of human and non-human actors.
Discussions talk about the weight of non-human activities (Facebook etc.) and how they affect discourse and challenge scholars on opinion, and also on the “time” to form an authoritative opinion. The non-human exchange of opinion has a speed that outweighs traditional conversations and can be both trite and weighty at the same time.

At the heart of “speculative realism” philosophies is the question of whether reality exists outside the human mind.

Speculative realist philosophers like Quentin Meillassoux have argued that this is wrong, that reality does exist outside the human mind, but various speculative realists disagree for different reasons. For example, Meillassoux argues in After Finitude that correlationism is wrong and that the only fact that humans can be certain of is that reality can change radically without warning and for any reason. Graham Harman has offered a different kind of speculative realist philosophy called “object-oriented ontology.” Taking a different route, he proposes that everything, humans included, can be considered as objects, and that an object has an inner essence as well as what he calls “sensual” traits with which objects can interact with each other. But these are just two examples.

Posted in art

08 Morphogenesis Proposal

Art Theory Project Proposal

What is the overarching area of research?
The use of Morphogenesis in design

What are the key questions or queries you will address?
The origin of creative morphogenesis 
Different maths used in creative morphogenesis 
Unusual ways of using morphogenesis: diffusion, sociological applications, L-systems, architecture

Why are you motivated to undertake this project?
I have been using morphogenesis-based design for clients for over 10 years, mostly in sculpture and branding across the UK (L-systems, creative particle systems, flocking systems and fluids).
https://vimeo.com/255545506/76377857e9

What theoretical frameworks will you use in your work to guide you?

Mostly from the references in different research papers: Alan Turing’s chemical diffusion paper and the Gray-Scott reaction-diffusion model; Lindenmayer L-systems; in architecture, Digital Morphogenesis by Neil Leach and Towards Morphogenesis in Architecture by Stanislav Roudavski; reviewing the work of Frei Otto; looking at philosophy via DeLanda and Deleuze and the use of genetic algorithms in architecture; and reviewing the work of Andy Lomas.

https://www.amazon.co.uk/Occupying-Connecting-Territories-Particular-Settlement/dp/3932565118/ref=sr_1_10?ie=UTF8&qid=1518531215&sr=8-10&keywords=frei+otto

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.453.2327&rep=rep1&type=pdf

https://www.academia.edu/10400559/Digital_Morphogenesis

https://minerva-access.unimelb.edu.au/bitstream/handle/11343/26591/116799_Roudavski_Towards_Morphogenesis_in_Architecture_09.pdf?sequence=1

https://phylogenous.wordpress.com/2010/12/01/alan-turings-reaction-diffusion-model-simplification-of-the-complex/

http://www.andylomas.com/extra/andylomas_paper_cellular_forms_aisb50.pdf

http://karlsims.com/rd.html

http://www.dna.caltech.edu/courses/cs191/paperscs191/turing.pdf

http://www.sidefx.com/docs/houdini/nodes/sop/lsystem.html

What theoretical frameworks will you use in the analysis of your project?

Due to the complexity of the maths involved, the analysis will be founded upon readings of how other people have used and interpreted the work: from creative uses of the mathematical theories and general botanical observations to DeLanda’s essays on Deleuze’s philosophical interpretations.

Gray-Scott diffusion PAPER http://karlsims.com/rd.html

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.453.2327&rep=rep1&type=pdf

https://www.academia.edu/10400559/Digital_Morphogenesis

How will you document your project? 

On my blog at http://blog.chiggs.com/art-theory-reviews/

Timeline for project milestones

1 week ground work (overall picture reading research)
2 weeks writing different parts of the essay.
1 week on creativity.

Budget (if any) 

 
 
Posted in art

07 Morphogenesis blog

The sea: forever changing and morphing, and a good place to start a project on morphogenesis. From the impact of waves moving across a sand bed to produce a morphogenetic outcome, to the various biological surfaces that morph and grow under the ocean, from coral-like structures to shell-like structures. Biological cell division produces the shape outcome, but emergent behaviour also arises. Genetic iterative processes abound within the sea (fish colouring and pigmentation is a process of chemical diffusion algorithms).

Morphogenesis (from the Greek morphê shape and genesis creation, literally, “beginning of the shape”) is the biological process that causes an organism to develop its shape. It is one of three fundamental aspects of developmental biology along with the control of cell growth and cellular differentiation, unified in evolutionary developmental biology (evo-devo).

In terms of art practice, the artist typically mimics a biological process of some kind and applies it to a structure, as in architecture. Frei Otto used biomorphism in his work and his concepts embraced morphogenesis. His studies led him to further research into the structural and building properties of bamboo and soap bubbles. Otto observed that, given a set of fixed points, soap film will spread naturally between them to offer the smallest achievable surface area. Any child blowing bubbles can, more or less, see how this works. In 1974 the German-born civil engineer Horst Berger, working in the US, came up with the maths that allowed this process to be translated into building structure.

 

Biomorphic algorithms of interest to follow up on : 

circle packing algorithm:

 

 

MORE NOTES:


http://www.generativeart.com

Frei Otto

Otto, Frei. Occupying and Connecting: Thoughts on Territories and Spheres of Influence with Particular Reference to Human Settlement.

Background: Frei Otto, born in 1925, is a German architect and engineer, well known for his lightweight tensile and membrane structures. In 1961 he founded the research team Biologie und Bauen, and in 1964 the Institut für Flächentragwerke at the Technische Hochschule in Stuttgart.

In Occupying and Connecting: Thoughts on Territories and Spheres of Influence with Particular Reference to Human Settlement, Frei Otto does not particularly look for ideas for constructive structures but, as the very explicit title of his book says, explores very fundamental topics about space, how it is occupied and how places are connected.

But more accurately he observes that two «forces» are at stake in any process of occupation: he qualifies some occupations as «distancing» (which could have been called «repulsive»), others as «attractive», and remarks that many occupation mechanisms are both attractive and distancing. These types of «occupations» (i.e. distributions) are illustrated with sketches. Attraction and repulsion are present in two physical forces: magnetism and static electricity. Those are the forces that Otto uses in his experiments, for example his photographs of distancing using magnets.
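
As a rough illustration of distancing and attracting at work (a toy Processing sketch with made-up constants, not a reconstruction of Otto's magnet experiments), the points below repel one another while being weakly pulled to the centre, and they settle into an evenly spaced distribution:

int N = 60;
PVector[] pts = new PVector[N];

void setup() {
  size(600, 600);
  for (int i = 0; i < N; i++) {
    pts[i] = new PVector(random(width), random(height));
  }
}

void draw() {
  background(255);
  for (int i = 0; i < N; i++) {
    PVector force = new PVector(0, 0);
    // distancing: every other point pushes this one away (inverse-square falloff)
    for (int j = 0; j < N; j++) {
      if (i == j) continue;
      PVector away = PVector.sub(pts[i], pts[j]);
      float d = max(away.mag(), 1);
      away.normalize();
      away.mult(1000 / (d * d));
      force.add(away);
    }
    // attracting: a weak pull towards the centre holds the group together
    PVector toCentre = new PVector(width / 2 - pts[i].x, height / 2 - pts[i].y);
    toCentre.mult(0.01);
    force.add(toCentre);
    pts[i].add(force);
    pts[i].x = constrain(pts[i].x, 0, width);
    pts[i].y = constrain(pts[i].y, 0, height);
  }
  fill(0);
  for (PVector p : pts) {
    ellipse(p.x, p.y, 6, 6);
  }
}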


Frei Otto relates the distributions of points to «territories», whose formation he describes in these words: «one demarcates the territory by the perpendicular bisectors of the nearest points».
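
Demarcating territories by the perpendicular bisectors to the nearest points is exactly how a Voronoi diagram is constructed. A brute-force Processing sketch of the idea (my own illustration, not Otto's method): colour every pixel by its nearest seed point, and the boundaries that appear are those perpendicular bisectors.

int N = 20;
PVector[] seeds = new PVector[N];
color[] cols = new color[N];

void setup() {
  size(600, 600);
  for (int i = 0; i < N; i++) {
    seeds[i] = new PVector(random(width), random(height));
    cols[i] = color(random(255), random(255), random(255));
  }
  noLoop();   // the diagram only needs to be drawn once
}

void draw() {
  loadPixels();
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      // find the nearest seed to this pixel
      int nearest = 0;
      float best = dist(x, y, seeds[0].x, seeds[0].y);
      for (int i = 1; i < N; i++) {
        float d = dist(x, y, seeds[i].x, seeds[i].y);
        if (d < best) { best = d; nearest = i; }
      }
      pixels[y * width + x] = cols[nearest];
    }
  }
  updatePixels();
  // draw the seed points on top of their territories
  fill(0);
  for (PVector s : seeds) ellipse(s.x, s.y, 6, 6);
}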

ARCHITECTURE NOTES

In architecture, morphogenesis is understood as a group of methods that employ digital media not as representational tools for visualization but as generative tools for the derivation of form and its transformation often in an aspiration to express contextual processes in built form .


Above: Kristina Shea, Neil Leach, Spela Videcnik and Jeroen van Mechelen, eifForm structure, Academie van Bouwkunst, Amsterdam, 2002
The design of this temporary structure was generated using the eifForm program, a stochastic, non-monotonic form of simulated annealing. This was the first 1:1 prototype of a design produced using eifForm and, almost certainly, the first architectural structure built where both the form and related structure were generated by a computer via design parameters and conditions rather than by explicitly described geometry. 
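
eifForm itself is not something I have code for, but the simulated annealing idea behind it is easy to sketch (a generic illustration over a toy one-dimensional cost function standing in for a real structural evaluation): accept worse candidates with a probability that shrinks as a "temperature" cools, so the search can escape local minima early on.

// Toy cost function standing in for a structural evaluation (assumption).
float cost(float x) {
  return sq(x - 3) + 2 * sin(5 * x);
}

void setup() {
  float x = random(-10, 10);      // current candidate
  float temperature = 10.0;

  while (temperature > 0.001) {
    float candidate = x + random(-1, 1);          // small random move
    float delta = cost(candidate) - cost(x);
    // always accept improvements; accept worse moves with probability exp(-delta/T)
    if (delta < 0 || random(1) < exp(-delta / temperature)) {
      x = candidate;
    }
    temperature *= 0.999;                         // cooling schedule
  }
  println("finished near x = " + x + ", cost = " + cost(x));
}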

 


Above IwamotoScott Architecture, Voussoir Cloud installation,
SCI-Arc, Los Angeles, August 2008
Voussoir Cloud explores the structural paradigm of pure compression coupled with an ultra-light material system. The overall design draws from the work of engineer/architects such as Frei Otto and Gaudí who used hanging chain models to find efficient form. The hanging chain model was here coupled with vaulted surface form-finding to create a light, porous surface made of compressive elements. 


Above: Beijing National Stadium, officially the National Stadium (国家体育场; pinyin: Guójiā Tǐyùchǎng; literally: “State Stadium”), also known as the Bird’s Nest. Formation:
https://www.youtube.com/watch?v=16woqIJF7GM

 

https://architecture.mit.edu/faculty/mark-goulthorpe Mark Goulthorpe of dECOi Architects describes his work as a form of ‘post-Gaudían praxis’, while Mark Burry, as architectural consultant for the completion of Gaudí’s Sagrada Família church in Barcelona, has been exploring digital techniques for understanding the logic of Gaudí’s own highly sophisticated understanding of natural forces.

http://www.nox-art-architecture.com Meanwhile, Lars Spuybroek of NOX has performed a number of analogue experimentations inspired by the work of Frei Otto as a point of departure for some innovative design work, which also depends on more recent software developments within the digital realm.3  

This work points towards a new ‘performative turn’ in architecture, a renewed interest in the principles of structural performance, and in collaborating more empathetically with certain progressive structural engineers. However, this concern for performance may extend beyond structural engineering to embrace other constructional discourses, such as environmental, economic, landscaping or indeed programmatic concerns. In short, what it amounts to is a ‘folding’ of architecture into the other disciplines that define the building industry.4

Digital Computation

Not surprisingly in an age dominated by the computer, this interest in material computation has been matched by an interest in digital computation. Increasingly the performative turn that we have witnessed within architectural design culture is being explored through new digital techniques. These extend from the manipulation and use of form-generating programs from L-Systems to cellular automata, genetic algorithms and multi-agent systems that have been used by progressive designers to breed a new generation of forms, to the use of the computer to understand, test out and evaluate already designed structures.

PHILOSOPHY 

This interest in digital production has also prompted a broad shift in theoretical concerns. If the 1980s and 1990s were characterised by an interest in literary theory and continental philosophy – from the Structuralist logic that informed the early Postmodernist quest for semiological concerns in writers from Charles Jencks to Robert Venturi, to the post-Structuralist enquiries into meaning in the work of Jacques Derrida that informed the work of Peter Eisenman and others – the first decade of the 21st century can be characterised by an increasing interest in scientific discourses.

As such, one can detect a waning of interest in literary theories and literary-based philosophies, and an increase in interest in scientific thinking and in philosophies informed by scientific thinking and an understanding of material processes. So it is that just as the work of Jacques Derrida is fading in popularity, that of Gilles Deleuze is becoming increasingly popular. Indeed it has been through the work of secondary commentators on Deleuze, such as Manuel DeLanda, that the relevance of Deleuze’s material philosophies has been championed within architectural circles.(See Manuel DeLanda, War in the Age of Intelligent Machines, Zone Books (New York))

DeLanda has coined a new term for this emerging theoretical paradigm: ‘New Materialism’. This should be distinguished from Marx’s ‘Dialectical Materialism’ in that the model is extended beyond mere economic considerations to embrace the whole of culture, and yet the principle behind Marx’s thinking – what we see on the surface is the product of deeper underlying forces – remains the same. Here we might understand cultural production not in symbolic terms, but in terms of material expressions.

It is not a question of what a cultural object might ‘symbolise’ – the dominant concern in the Postmodernist quest for interpretation and meaning – but rather what it ‘expresses’. The concern, then, is to understand culture in terms of material processes – in terms of the actual ‘architecture’ of culture itself.

Within this new configuration the economist, the scientist and the engineer are among the reassessed heroes of our intellectual horizon, and figures such as Cecil Balmond have become the new ‘material philosophers’.

It has often been said that all scientific progress has involved getting rid of substances and replacing them by processes and relations. So too has this occurred in philosophy. What must not be forgotten is that the philosopher, too, is a result of a morphogenesis. The philosopher is the coagulation, a result, a product, of a series of operations populating both her own life and all of history.

REPETITION:

Generality refers to events that are connected through cycles, equalities, and laws. Most phenomena that can be directly described by science are generalities. Seemingly isolated events will occur in the same way over and over again because they are governed by the same laws. Water will flow downhill and sunlight will create warmth because of principles that apply broadly. In the human realm, behavior that accords with norms and laws counts as generality for similar reasons. Science deals mostly with generalities because it seeks to predict reality using reduction and equivalence.

Repetition, for Deleuze, can only describe a unique series of things or events.

Art is often a source of repetition because no artistic use of an element is ever truly equivalent to other uses. 

Difference in Itself

Deleuze paints a picture of philosophical history in which difference has long been subordinated to four pillars of reason: identity, opposition, analogy, and resemblance. He argues that difference has been treated as a secondary characteristic which emerges when one compares pre-existing things; these things can then be said to have differences. This network of direct relations between identities roughly overlays a much more subtle and involuted network of real differences: gradients, intensities, overlaps, and so forth.

Deleuze proposes (citing Leibniz) that difference is better understood through the use of dx, the differential. A derivative, dy/dx, determines the structure of a curve while nonetheless existing just outside the curve itself; that is, by describing a virtual tangent (46). Deleuze argues that difference should fundamentally be the object of affirmation and not negation. As per Nietzsche, negation becomes secondary and epiphenomenal in relation to this primary force.

Genetic Art Examples

United Visual Artists’ Blueprint is an installation designed to explore the relationship and parallels between natural and artificial systems. With cells literally transferring their genes to their adjoining others, colour flows like paint across the canvas.

https://player.vimeo.com/video/166428169

https://vimeo.com/188689675

Data-Masks http://sterlingcrispin.com/data-masks.html

Data-Masks are face masks which were created by reverse engineering facial recognition and detection algorithms. These algorithms were used to guide an evolving system toward the production of human-like faces. These evolved faces were then 3D printed as masks, shadows of human beings as seen by the mind’s eye of the machine-organism. This exposes the way the machine, and the surveillance state, view human identity, and it makes aspects of these invisible power structures visible.

Data-Masks are animistic deities brought out of the algorithmic-spirit-world of the machine and into our material world, ready to tell us their secrets, or warn us of what’s to come.

Cellular Forms http://www.andylomas.com/cellularFormImages.html

EMERGENCE AND BIOMIMICRY 

“We are everywhere confronted with emergence in complex adaptive systems – ant colonies, networks of neurons, the immune system, the Internet, and the global economy, to name a few – where the behavior of the whole is much more complex than the behavior of the parts.” – John Henry Holland

The concept of emergence—that the properties and functions found at a hierarchical level are not present and are irrelevant at the lower levels—is often a basic principle behind self-organizing systems. An example of self-organization in biology leading to emergence in the natural world occurs in ant colonies. The queen does not give direct orders and does not tell the ants what to do. Instead, each ant reacts to stimuli in the form of chemical scent from larvae, other ants, intruders, food and buildup of waste, and leaves behind a chemical trail, which, in turn, provides a stimulus to other ants. Here each ant is an autonomous unit that reacts depending only on its local environment and the genetically encoded rules for its variety of ant. Despite the lack of centralized decision making, ant colonies exhibit complex behaviour and have even been able to demonstrate the ability to solve geometric problems. For example, colonies routinely find the maximum distance from all colony entrances to dispose of dead bodies.
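
A compressed Processing illustration of that stimulus-and-trail loop (a toy stigmergy sketch with made-up constants, not a model of real ants): each agent deposits scent into a grid cell, prefers to step towards the strongest neighbouring scent, and the scent slowly evaporates, so shared trails emerge without any central control.

int cols = 120, rows = 120, cell = 5;
float[][] scent = new float[cols][rows];
int numAnts = 80;
int[] ax = new int[numAnts], ay = new int[numAnts];

void setup() {
  size(600, 600);
  noStroke();
  for (int i = 0; i < numAnts; i++) {
    ax[i] = int(random(cols));
    ay[i] = int(random(rows));
  }
}

void draw() {
  // each ant deposits scent on its cell, then steps to the strongest neighbouring cell
  for (int i = 0; i < numAnts; i++) {
    scent[ax[i]][ay[i]] += 1.0;
    int bestX = ax[i], bestY = ay[i];
    float best = -1;
    for (int dx = -1; dx <= 1; dx++) {
      for (int dy = -1; dy <= 1; dy++) {
        if (dx == 0 && dy == 0) continue;          // always move somewhere
        int nx = constrain(ax[i] + dx, 0, cols - 1);
        int ny = constrain(ay[i] + dy, 0, rows - 1);
        float s = scent[nx][ny] + random(0.5);     // noise keeps ants exploring
        if (s > best) { best = s; bestX = nx; bestY = ny; }
      }
    }
    ax[i] = bestX;
    ay[i] = bestY;
  }
  // evaporate and draw the scent field (darker = stronger trail)
  for (int x = 0; x < cols; x++) {
    for (int y = 0; y < rows; y++) {
      scent[x][y] *= 0.98;
      float b = constrain(scent[x][y] * 20, 0, 255);
      fill(255 - b);
      rect(x * cell, y * cell, cell, cell);
    }
  }
}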

ALAN TURING

Alan Turing was neither a biologist nor a chemist, and yet the paper he published in 1952, ‘The chemical basis of morphogenesis’, on the spontaneous formation of patterns in systems undergoing reaction and diffusion of their ingredients, has had a substantial impact on both fields. Motivated by the question of how a spherical embryo becomes a decidedly non-spherical organism such as a human being, Turing devised a mathematical model that explained how random fluctuations can drive the emergence of pattern and structure from initial uniformity.

That was the central question that Turing addressed. He presents a theoretical model in which chemicals that are diffusing and reacting may produce neither bland uniformity nor disorderly chaos but something in between: a pattern.

To suggest how chemistry alone might initiate the process that leads to a defined biological form.

Alan Turing’s 1952 paper, proposed by an author with no real professional background in the subject he was addressing, put forward an astonishingly rich idea. The formation of regular structures by the competition between an autocatalytic activating process and an inhibiting influence, both of which may diffuse through space, now appears to have possible relevance not just for developmental biology but for pure and applied chemistry, geomorphology, plant biology, ecology, sociology and perhaps even astrophysics.

A morphogen is a substance whose non-uniform distribution governs the pattern of tissue development in the process of morphogenesis or pattern formation, one of the core processes of developmental biology, establishing positions of the various specialized cell types within a tissue. More specifically, a morphogen is a signaling molecule that acts directly on cells to produce specific cellular responses depending on its local concentration.

Typically, morphogens are produced by source cells and diffuse through surrounding tissues in an embryo during early development, such that concentration gradients are set up. These gradients drive the process of differentiation of unspecialised stem cells into different cell types, ultimately forming all the tissues and organs of the body. The control of morphogenesis is a central element in evolutionary developmental biology (evo-devo).

 

ALGORITHMS 

 

Alan Turing. The Chemical Basis of Morphogenesis http://www.dna.caltech.edu/courses/cs191/paperscs191/turing.pdf

 

Greg Turk. Generating Textures on Arbitrary Surfaces Using Reaction-Diffusion https://www.google.co.uk/urlsa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwjHjZvk7b7ZAhXBa1AKHchhCcMQFggsMAA&url=https%3A%2F%2Fwww.cc.gatech.edu%2F~turk%2Fmy_papers%2Freaction_diffusion.pdf&usg=AOvVaw2zstfkKileyn37CrnysTQG

Andy Lomas. Cellular Forms: an Artistic Exploration of Morphogenesis
https://www.google.co.uk/urlsa=t&rct=j&q=&esrc=s&source=web&cd=3&cad=rja&uact=8&ved=0ahUKEwiY05eA7r7ZAhWOYlAKHaddD6cQFgg0MAI&url=http%3A%2F%2Fwww.andylomas.com%2Fextra%2Fandylomas_paper_cellular_forms_aisb50.pdf&usg=AOvVaw16p-ot1UodmI6AwjtNjeNY

Gray-Scott diffusion model
https://groups.csail.mit.edu/mac/projects/amorphous/GrayScott/
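
Since the Gray-Scott model keeps appearing in these references, here is a minimal Processing version of it (an illustration using commonly quoted feed/kill values, not code taken from the papers above): chemical A is fed in, chemical B consumes A and is killed off, both diffuse at different rates, and spots and stripes emerge from a small seeded square.

int w = 200, h = 200;
float[][] A = new float[w][h], B = new float[w][h];
float[][] A2 = new float[w][h], B2 = new float[w][h];
float dA = 1.0, dB = 0.5, feed = 0.055, k = 0.062;   // assumed, commonly used values

void setup() {
  size(200, 200);
  for (int x = 0; x < w; x++) {
    for (int y = 0; y < h; y++) {
      A[x][y] = 1; B[x][y] = 0;
      A2[x][y] = 1; B2[x][y] = 0;
    }
  }
  // seed a small square of chemical B in the middle
  for (int x = w/2 - 5; x < w/2 + 5; x++) {
    for (int y = h/2 - 5; y < h/2 + 5; y++) {
      B[x][y] = 1;
    }
  }
}

float laplace(float[][] g, int x, int y) {
  // 3x3 Laplacian with the usual Gray-Scott weights
  return -g[x][y]
    + 0.2  * (g[x+1][y] + g[x-1][y] + g[x][y+1] + g[x][y-1])
    + 0.05 * (g[x+1][y+1] + g[x-1][y+1] + g[x+1][y-1] + g[x-1][y-1]);
}

void draw() {
  // several reaction-diffusion steps per frame
  for (int n = 0; n < 10; n++) {
    for (int x = 1; x < w - 1; x++) {
      for (int y = 1; y < h - 1; y++) {
        float a = A[x][y], b = B[x][y];
        A2[x][y] = a + dA * laplace(A, x, y) - a*b*b + feed * (1 - a);
        B2[x][y] = b + dB * laplace(B, x, y) + a*b*b - (k + feed) * b;
      }
    }
    float[][] tA = A; A = A2; A2 = tA;   // swap buffers
    float[][] tB = B; B = B2; B2 = tB;
  }
  // draw the difference between the two chemicals as a grey level
  loadPixels();
  for (int x = 0; x < w; x++) {
    for (int y = 0; y < h; y++) {
      float c = constrain((A[x][y] - B[x][y]) * 255, 0, 255);
      pixels[x + y * width] = color(c);
    }
  }
  updatePixels();
}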

 

Logic of circle packing:

setup an array list of random x,y points of radius r (vector)

// A simple Circle class: position, radius, and a display method.
class Circle {
  PVector position;
  float r;

  Circle(float x, float y, float r) {
    position = new PVector(x, y);
    this.r = r;
  }

  void display() {
    noFill();
    ellipse(position.x, position.y, r * 2, r * 2);
  }
}

ArrayList<Circle> circles = new ArrayList<Circle>();
int number = 200;   // how many circles we want to place

void setup() {
  size(600, 600);

  // not a fixed for loop: use a while loop, because a circle is only
  // added when it does not overlap one that is already placed
  int attempts = 0;
  while (circles.size() < number && attempts < 100000) {
    attempts++;
    float r = random(5, 40);
    float x = random(width);
    float y = random(height);
    PVector position = new PVector(x, y);

    // loop through the current circles and check for overlap
    boolean overlapping = false;
    for (int j = 0; j < circles.size(); j++) {
      Circle other = circles.get(j);
      float d = PVector.dist(position, other.position);
      // overlap test: circles overlap when the distance between centres is
      // less than the sum of the two radii (distance > r1 + r2 means clear)
      if (d < r + other.r) {
        overlapping = true;
        break;
      }
    }

    // if the circle is not overlapping, place it into the array
    if (!overlapping) {
      circles.add(new Circle(x, y, r));
    }
  }
}

void draw() {
  background(255);
  for (int i = 0; i < circles.size(); i++) {
    Circle c = circles.get(i);
    c.display();
  }
}

 

 

 

Posted in art

02 Neural Networks

Logistic Classifier

Linear classifier: y = WX + b, where W is the weights matrix and b is the bias. All scores get turned into probabilities that add to 1 using a softmax function.
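
A minimal sketch of that step in Processing (my own illustration with made-up weights, in keeping with the sketches above): compute the scores y = Wx + b for three classes, then apply softmax so the outputs are positive and sum to 1.

// softmax: exponentiate each score and normalise so the results sum to 1
float[] softmax(float[] scores) {
  float[] p = new float[scores.length];
  float maxScore = max(scores);        // subtract the max for numerical stability
  float sum = 0;
  for (int i = 0; i < scores.length; i++) {
    p[i] = exp(scores[i] - maxScore);
    sum += p[i];
  }
  for (int i = 0; i < p.length; i++) {
    p[i] /= sum;
  }
  return p;
}

void setup() {
  // made-up weights W (3 classes x 2 inputs), bias b, and input x
  float[][] W = { {1.0, -0.5}, {0.2, 0.8}, {-1.0, 0.3} };
  float[] b = { 0.1, 0.0, -0.2 };
  float[] x = { 2.0, 1.0 };

  // y = Wx + b
  float[] y = new float[3];
  for (int i = 0; i < 3; i++) {
    y[i] = b[i];
    for (int j = 0; j < 2; j++) {
      y[i] += W[i][j] * x[j];
    }
  }

  println(softmax(y));   // three probabilities that add to 1
}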