November 2nd, 2014

We’re Live – Live TV Face Substitution

We’re Live is a project involving live HD cable TV and face substitution software. It is a hardware and software method for doing a real-time facial composite/replacement on live television. The original face substitution implementation and cloning shader were created by Kyle McDonald and Arturo Castro in 2012 (link to Github source). The face tracking algorithm that enables this kind of high quality facial substitution was developed by Jason Saragih.

One of television’s greatest powers is its ability to display very structured and edited views of reality. By watching the fabricated streams of the shows, viewers begin to wish for the interesting, exciting and impossible lives of the characters. They can subconsciously desire the smiles and trouble-free lives enabled by buying the products in the advertisements. With this software, viewers can come one step closer to truly seeing themselves on screen.

We’re Live allows a user to composite their face (or any face they choose) onto a live television stream. Essentially, anyone you watch on TV can finally look like you… or anyone you want. You could make everyone on TV look like Bill Murray if you really wanted to.

A download of the software is available in the Technical details section below.

We’re Live – Live TV Face Substitution from blair neal on Vimeo.

————
Technical details/process:

The first part of the process involves actually getting the live TV into your computer so you can process it for face substitution. I have been using the following:

Blackmagic Mini Recorder
Orei 1×2 HDMI Splitter – strips HDCP

Alternate to the Orei Splitter

The HDMI splitter is the silver bullet for actually capturing live TV off a digital set-top box. Most set-top boxes and game consoles use HDCP (High-bandwidth Digital Content Protection), which blocks you from…well…copying or recording the HD signal, which is something we could potentially do with this setup. The Blackmagic Mini Recorder does not comply with HDCP, so your set-top box signal will not even show up when you try to plug it in directly. Certain HDMI splitters (like the one listed) will comply with the HDCP handshake on their input but ignore it for their output – these splitters effectively strip HDCP so you can do whatever you like with the signal.

Once you have set-top box HDMI -> HDMI splitter -> Blackmagic Mini Recorder -> Thunderbolt -> Mac, you can start doing some face substitution!

–DOWNLOAD–
Here is a ZIP of the software with an included VDMX project file for working with it (OSX only): Syphon_Face_Sub_v1.0.zip

SOURCE CODE (Github)

Usage:
VDMX is my software of choice for actually getting the captured signal from Blackmagic into the face substitution software. I have included a VDMX project file with the software to show you how to do this all yourself. With my setup, I am able to get 1280×720 at 60Hz (or 720p@59.94, according to the actual capture settings I’m using).

Once I have the TV signal in VDMX, I actually pass the video texture to the face substitution software using Syphon.  Once it is in there, it is constantly searching for new faces to map the mask layer onto. You don’t have to use a live TV signal – you can put a movie file into VDMX onto the “camera layer” source and use that layer to apply your masks to.
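For reference, receiving the VDMX texture on the openFrameworks side only takes a few lines with the ofxSyphon addon. This is a minimal sketch rather than the released app’s code, and the server/app names are assumptions that depend on how your VDMX layer publishes its Syphon output:

#include "ofMain.h"
#include "ofxSyphon.h"

// Hypothetical sketch of a Syphon client receiving the live TV texture
class TVClientApp : public ofBaseApp {
public:
    ofxSyphonClient tvClient;

    void setup() {
        tvClient.setup();
        // the server/app name depends on how the VDMX layer publishes to Syphon
        tvClient.set("", "VDMX5");
    }

    void draw() {
        // draw the live TV frame; the face tracker reads from this texture
        tvClient.draw(0, 0, 1280, 720);
    }
};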

On another layer in VDMX, you can pass in the “mask” you want to apply to the footage – for example, your own face passed in via webcam. To save processing time and avoid running two face trackers constantly, the mask is captured on a key press: when your face is in the camera, press the ‘m’ key in the face sub software to set the new mask. You can put still images, movies or a live webcam into the “masks” layer in VDMX. Alternatively, you can put a bunch of files into the “faces” folder in the “data” folder and use the arrow keys to cycle through the different faces.

You can press ‘d’ to bring up the GUI for different settings like viewing which mask is currently loaded and things like that.
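As a rough illustration of the key handling described above (a hypothetical sketch, not the exact code in the app; the member names are placeholders):

// Members assumed on ofApp (placeholder names, not the real app's variables):
//   ofxFaceTracker maskTracker;   ofMesh maskMesh;
//   ofPixels maskLayerPixels;     vector<string> faceImages;
//   int currentFace;              bool showGui;
void ofApp::keyPressed(int key) {
    if (key == 'm') {
        // re-run the tracker on the current "masks" layer frame and store that mesh as the new mask
        maskTracker.update(ofxCv::toCv(maskLayerPixels));
        maskMesh = maskTracker.getImageMesh();
    } else if (key == OF_KEY_RIGHT || key == OF_KEY_LEFT) {
        // cycle through the pre-saved faces in data/faces
        int step = (key == OF_KEY_RIGHT) ? 1 : -1;
        currentFace = (currentFace + step + (int) faceImages.size()) % (int) faceImages.size();
        loadFace(faceImages[currentFace]);  // hypothetical helper that loads and tracks the chosen image
    } else if (key == 'd') {
        showGui = !showGui;  // toggle the settings GUI
    }
}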

Recording/Displaying output: 
You can capture the output of the openFrameworks program in Syphon as well, and use that to go fullscreen in VDMX or something if you hide those layers. I use the Syphon Recorder to record my demos.

Note about Audio:
Currently nothing in this setup fully supports audio. You can capture audio with the Blackmagic recorder and use that as your audio source in Syphon Recorder, but with a caveat:

The audio is “ahead” of the video due to the input and processing steps, meaning you will hear something about 6 frames (0.2s in my case; your results may vary) before you see it. To fix this, you could make an audio delay buffer in something like Max/MSP. If you are recording output with audio, you will need to re-sync it later in your editing software – I recommend a clapboard or hand clap at the beginning of recordings, something recognizable for syncing the video and audio.
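The Max/MSP patch mentioned above is really just a delay line. Here is the same idea as a minimal C++ sketch (the 0.2s and 48kHz numbers are taken from the example above and are assumptions about your setup):

#include <vector>
#include <cstddef>

// Simple mono ring-buffer delay: returns the sample written delaySamples ago.
class AudioDelay {
public:
    explicit AudioDelay(std::size_t delaySamples) : buffer(delaySamples, 0.0f), writePos(0) {}

    float process(float input) {
        float delayed = buffer[writePos]; // oldest sample in the buffer
        buffer[writePos] = input;         // overwrite it with the newest sample
        writePos = (writePos + 1) % buffer.size();
        return delayed;
    }

private:
    std::vector<float> buffer;
    std::size_t writePos;
};

// Usage: delay the incoming audio by ~0.2s so it lines up with the processed video.
// AudioDelay delay(static_cast<std::size_t>(0.2 * 48000));
// output[i] = delay.process(input[i]);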

Alternate version/Upgrade: There is an alternate version of the software that isn’t fully integrated into this executable/binary yet (the source is available on Github under the OriginalCycling branch), which allows you to map arbitrary textures onto the live tracked face. The current released version only allows you to put a face on a face; the other version lets you map more abstract, non-face imagery onto the same area. It works by storing the mesh points of a pre-tracked face and using those to apply an arbitrary, properly texture-mapped image to the live face mesh. This alternate version also features a way to crossfade between different faces.
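The core idea, sketched very roughly in openFrameworks (this is not the OriginalCycling branch code; referenceTexCoords and arbitraryImage are placeholder names): the texture coordinates come from the saved reference face, so any image laid out against that reference gets stretched onto the live mesh.

// Rough sketch of drawing an arbitrary image through the tracked face mesh
ofMesh mesh = tracker.getImageMesh();    // vertices follow the live tracked face
mesh.clearTexCoords();
for (std::size_t i = 0; i < mesh.getNumVertices(); i++) {
    // texture coordinates come from the previously saved reference face,
    // assumed to be scaled to arbitraryImage's pixel dimensions
    mesh.addTexCoord(referenceTexCoords[i]);
}
arbitraryImage.bind();
mesh.draw();
arbitraryImage.unbind();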

September 19th, 2013

Applescript to automatically fullscreen Madmapper for installations

This is a simple AppleScript that I used with a long-running installation that required MadMapper for doing some precise mapping. More info on keeping long-running installations going is here: http://blairneal.com/blog/installation-up-4evr/

This script is used on reboot to open your Syphon-enabled app, open your MadMapper file, select fullscreen, and then hit OK on the confirmation dialog box.

It requires you to set your own file paths in the script for your MadMapper file and application file.

To use the code below:

1. Open AppleScript Editor and paste the code into a new script

2. Change the file paths and resolution so they match your setup (e.g. the resolution may be 3840×720 instead)

3. Go to “File -> Export” and select “Application” as the file format

4. In System Preferences -> Users & Groups -> Login Items, drop your AppleScript application in there to launch automatically on boot

 

You can also add in pauses (for things to load) and other checks with AppleScript if necessary.

This script will fail if the resolution has changed on boot for some reason – if the text doesn’t exactly match how it appears in MadMapper’s Output menu, it won’t work.

NOTE: I personally do not recommend using MadMapper for long-running installations – there are occasional issues with it losing keyboard focus, and it can appear as if your machine has locked you out when accessing it remotely. It’s also typically best practice to keep everything simplified into one application so you can minimize weird occurrences. In the case where we had to use this, there was not enough development time to add the necessary mapping code into the application itself.

 

 

tell application "Finder" to open POSIX file "YourInstallationApp.app" --add your absolute file path to your application

delay 10 --wait 10 seconds while your app loads up

tell application "Finder" to open POSIX file "/Users/you/yourmadmapperfile.map" --absolute filepath to your madmapper file

do_menu("MadMapper", "Output", "Fullscreen On Mainscreen: 1920x1200") --change this line to your determined resolution

on do_menu(app_name, menu_name, menu_item)
	try
		-- bring the target application to the front
		tell application app_name
			activate
		end tell
		delay 3 --wait for it to open
		tell application "System Events"
			tell process app_name
				tell menu bar 1
					tell menu bar item menu_name
						tell menu menu_name
							click menu item menu_item
							delay 3 --wait for Is fullscreen OK? box to appear
							tell application "System Events" to keystroke return
						end tell
					end tell
				end tell
			end tell
		end tell

		return true
	on error error_message
		return false
	end try
end do_menu

July 7th, 2013

The Biggest Optical Feedback Loop in the World (Revisited)

Optical feedback is a classic visual effect that results when an image capture device (a camera) is pointed at a screen that is displaying the camera’s output. This can create an image that looks like cellular automata/reaction-diffusion or fractals and can also serve as a method of image degradation through recursion.

Many video artists have used this technique to create swirling patterns as a basis for abstract videos, installations and music videos. Feedback can also be created digitally by various means including continually reading and drawing textures in a frame buffer object (FBO) but the concept is essentially the same. In this post I’m writing up a thought experiment for a project that would create the biggest optical feedback loop in the world.
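As a point of comparison with the analog setups below, a digital feedback loop in openFrameworks is just a texture drawn back into itself each frame, with a small transform so the recursion is visible. Here is a minimal sketch (not code from any of the pieces shown here), using two FBOs so we never read and write the same texture in a single pass:

#include "ofMain.h"

// Minimal digital feedback sketch: ping-pong between two FBOs,
// feeding the previous frame back in with a slight zoom/rotate,
// plus a faint copy of the live camera.
class FeedbackApp : public ofBaseApp {
public:
    ofFbo fboA, fboB;
    ofVideoGrabber cam;
    bool drawIntoA = true;

    void setup() {
        cam.setup(640, 480);
        fboA.allocate(640, 480, GL_RGBA);
        fboB.allocate(640, 480, GL_RGBA);
        fboA.begin(); ofClear(0, 0, 0, 255); fboA.end();
        fboB.begin(); ofClear(0, 0, 0, 255); fboB.end();
    }

    void update() {
        cam.update();
        ofFbo & dst = drawIntoA ? fboA : fboB;   // write into one buffer...
        ofFbo & src = drawIntoA ? fboB : fboA;   // ...while reading last frame from the other
        dst.begin();
        ofClear(0, 0, 0, 255);
        ofPushMatrix();
        ofTranslate(320, 240);
        ofScale(1.01, 1.01);          // zoom in a touch each pass so the recursion swirls
        ofRotateDeg(0.5);
        ofTranslate(-320, -240);
        src.draw(0, 0);
        ofPopMatrix();
        ofSetColor(255, 255, 255, 40); // blend in a faint copy of the live camera
        cam.draw(0, 0);
        ofSetColor(255);
        dst.end();
        drawIntoA = !drawIntoA;
    }

    void draw() {
        (drawIntoA ? fboB : fboA).draw(0, 0); // show the buffer we just rendered
    }
};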

Sample of analog video feedback:

Optical Feedback Loop from Adam Lucas on Vimeo.

Sample of video feedback (digital rendering from software):

I really enjoy the various forms of this effect from immediate feedback loops to “slower” processes like image degradation. Years ago, I did a few projects involving video feedback and image degradation via transmission, and this thought experiment combines those two interests. Lately, I’ve also been obsessed with really unnecessarily excessive, Rube Goldberg-like uses of technology, and this fits that interest pretty well. It’s like playing a giant game of Telephone with video signals.

While in residency at the Experimental Television Center in 2010, I was surrounded by cameras, monitors and 64-channel video routers. After a few sessions of playing with feedback on the Wobbulator, I drew up a sketch for making a large video feedback loop using all of the available equipment in the lab…and a Skype feed for good measure. Here is that original sketch:

(Image: original sketch of the large feedback loop)

 

The eventual output of the large feedback loop ended up not looking the best because the setup was a little hacky, and it lost detail very quickly due to camera auto-adjustments and screens being too bright. The actual time delay through the whole system, including Skype, was still just a few frames. There were also several decades between pieces of equipment and a break between color and black & white feeds at certain points. I’ve returned to the idea a few times and I’ve wanted to push it a little further.

As a refresher, this is the most basic form of optical feedback, just a camera plugged into a screen that it is capturing.

(Diagram: single-stage feedback)

You can also add additional processors into the chain that affect the image quality (delays, blurs, color shifts, etc.). Each of these effects will be amplified as it passes through the loop.

(Diagram: feedback with processors in the chain)

 

The above are the most common and straightforward techniques of optical feedback. They will generate most of the same feedback effects as the larger systems I’m proposing, generally with a shorter delay and less degradation. But it doesn’t hurt to ask what will happen if we add another stage to the feedback system:

(Diagram: dual-stage feedback)

We’ll lose a little more image quality now that the original photons have been passed through twice as many pieces of glass and electronics. Let’s keep passing those photons around through more stages. You could put a blinking LED in front of one of the screens and have it send its photons through all the subsequent screens as they are transformed digitally and electrically. The LED’s light would arrive behind it in some warped, barely perceptible fashion, but it would really just be a sort of ghost of the original photons.

(Diagram: 6-stage feedback)

We can take the above example of a 6 stage video feedback loop and start working out what we might need to hit as many image and screen technologies as we can think of from the past 50 years. Video art’s Large Hadron Collider.

(Diagram: 6-stage feedback example spanning multiple technologies; click for detail)

By hitting so many kinds of video processing methods we would get a video output that would be just a little delayed, and that would create some interesting effects at certain points in the chain. By varying camera resolutions, camera capture methods, and analog versus digital technologies, we can bounce the same basic signal through all of these different sensor and cable types. The signal would become digital and analog at many different stages depending on the final technologies chosen. The digital form of the signal would have to squeeze and stretch to become analog again. The analog signal would need to be sampled, chopped and encoded into its digital form. Each of these stages would have its own conversions happening between:

  • Video standards/Compressions (NTSC, PAL, H.264, DV, etc.)
  • Resolutions/Drawing methods (1080p, 480p, 525 TV Lines)
  • Voltages
  • Refresh Rates
  • Scan methods (CMOS, CCD, Vidicon Tube)
  • Illumination methods (LED, Fluorescent Backlight, CRT)
  • Wire types
  • Pixel types
  • Physical transforms (passing through glass lenses, screens, etc.)

By adding in broadcast and streaming technologies like Skype, we can extend the feedback loop not only locally within one area, but also globally. One section of the chain can be sent across the globe to another studio running a similar setup with multiple technologies. This can continue being sent around to more and more stations as long as the end is always sent back to the first monitor in the chain.

A digital feedback or video processing step could also be added, with several chains of digital feedback occurring within the larger loop.

If you were able to create a large enough system, there could be so much processing happening that the signal itself would be delayed by a few seconds before it reaches the “original” start location. In this large system, you could wave your hand in between a monitor and camera, and get a warped “response” back from yourself a second or two later.

It’s interesting for me to consider what the signal would be at this point, after going through so many conversions and transforms. Is the signal a discrete moment as it passes from monitor to screen, or does it somehow keep some inherent properties as it fires around the ring?

Suggested Links:

http://softology.com.au/videofeedback/videofeedback.htm

February 21st, 2013

Guide to Camera Types for Interactive Installations

I just published an epic article over on Creative Applications detailing the use of different kinds of cameras in interactive installations. Check it out, and add any additional tips in the comments there!:

http://www.creativeapplications.net/tutorials/guide-to-camera-types-for-interactive-installations/

December 10th, 2012

Painterly Jitter

(Click for versions in their full 640 x 480 glory)

Playing around with old code, feedback loops and Andrew Benson’s always fun optical flow shaders. Sometimes stills of unusual systems are nicer than the thing in motion…

October 10th, 2012

Music video process – “Prairie School” by Lymbyc Systym

“Prairie School” by Lymbyc Systym from Western Vinyl on Vimeo.

An awesome contact at Terrorbird asked if I was interested in coming up with a music video for one of Lymbyc Systym‘s songs on their new album Symbolyst (on Western Vinyl).

Kyle McDonald and I had been wanting to work together on a music video for a long time and I knew he was a big fan of the band as well so I asked him to join forces with me to come up with something for “Prairie School”, the first track on the new album.

I had been playing a little bit with a lens I pulled off a PS3eye camera and found that it fit perfectly over the lens on my iPhone camera and gave me ridiculous magnification. When the option to pitch on the video came around, I really wanted to put this microscopic world to good use, and “Prairie School” was a perfect option for that. The song had a rapid energy, a brightness, but also a sense of smallness without bounds (if that makes any sense). We got a really strong retro-futuristic science video vibe from the song at first and offered up something that would be a mix of filming and software to provide an abstract journey from big to small, as a sort of homage to the amazing 1977 Eames short film “Powers of 10”.

                   

We also worked through a way of breaking up the song into some kind of narrative that would match the variations in the song. The song had some very clear sections that we wanted to hit with big changes in visual mood. When working on videos I like to make a chart of the song that helps me understand and visualize the entire structure and the spacing between big moments. Here is the diagram that Kyle and I worked off of when coming up with the structure for the video (click  the image for big version).

                                     

The majority of the time spent making the video was just a lot of exploration. I shot a ton of stuff up close, and it was never really easy to tell if something was going to be boring or gorgeous underneath the tiny lens. In all, I shot about 40GB of footage and about 300 individual clips for the video: roughly 80% of it on my iPhone, about 15% on my DSLR and 5% on a 500x USB microscope. In all the time spent exploring this microscopic world we realized that staying small made more sense and offered some compelling options on its own. Some of our original ideas for expanding to larger worlds ended up being a little time-prohibitive, even though they seemed like they might work out at first. We were initially going to zoom out of grass in a park, which would then somehow expand to some high resolution 3D maps of NYC. Here is a demo version from Kyle of what that would have looked like:

https://secure.flickr.com/photos/kylemcdonald/8061492247/in/photostream

Also, to cover up what might have been some odd production value in the expansion, we played around with the idea of making the video something like what an 8th grader of 2082 might make for his futuristic science class. Things would have had different graphical or textual overlays attached to them, giving bogus explanations and distance scales for what you were seeing in this abstract microscopic world. This idea got pulled in favor of a more organic direction. Here are some mockups of what we thought the eventually scrapped overlays might look like.

The footage also didn’t have the necessary internal movement to really match the energy of the song, so we experimented with overlaying different content on top of the footage I was getting. I had a lot of old material I had been recording for a couple of years, including some footage I got in 2011 at an optical illusion museum in Edinburgh. They had some awesome stuff there, and I got a ton of nice 60fps footage of the electrical arcs of a Tesla ball.

We also really liked the look of screen pixels when they were blown up to be really big. They were great punctuation marks for the drum-heavy parts of the song. Kyle wrote a couple of Processing sketches that gave me some great RGB-line microscopic motion to film off my own screen. Here are links to the source code for the sketches I filmed for the video:

http://www.openprocessing.org/sketch/74603

http://www.openprocessing.org/sketch/74602

As I worked with the footage, I realized I was getting sucked into the visuals of this familiar but alien world. All of this material exploration had also given me a really personal connection with the footage, which I feel shaped the story a little bit. Each time I filmed was a new experience with a previously familiar object, but I was experiencing it all through a screen even though it was right in front of me – the same screen I use to experience or learn about many other things I’ve never actually physically been present for, still there as a barrier or a gateway. The experience of rubbing dirt in your fingers versus seeing blown-up footage of the dirt getting into your nails and skin folds – witnessing the same action at different scales.

                                   

When working with such abstract footage it can be a challenge to shape it into something that flows together, especially when you’re not sure where you’re going (which isn’t always necessary). I didn’t want it to just be a bunch of gorgeous footage clumped together; I wanted it to have some kind of thrust or direction to it. A continuous progression like in “Powers of 10” started to make less sense because the middle section of the song really held a different world than the bookending sections.

If I had to give a description of the video’s story, it would be something like “a loose narrative about an experience learning about real physical things versus learning about them on a screen.” The video starts with these really unfamiliar but engaging materials (literally just shots of my laptop and touching the speaker grill on the laptop), and these flashes of light give an extra burst of energy to the drums and other sections. This first area still has energy, but it doesn’t have a lot of color to it. In the middle section of the song, you see a lot more interaction with the recognizable natural world, and there is more color and texture there. The flashes of light are still there in full force. In the end section, the de-saturated and more organic worlds start to mix with more shots of pixelated things on a screen, and finally you see the hand from earlier touching things on a screen instead of in real life. In the end, I wanted there to be just a little bit of ambiguity about where the world of the video occurred: in real life or on a screen. I don’t know if I really pulled that message off the right way, but it was hard to dance around it without getting too heavy-handed.

The editing process for the video was really intense. This was one of those videos where I started to figure out that I have an editing “style” by now, and I’ll have to see if I can change that around for whatever my next video is. I’ve been a fan of doing meticulous editing with music ever since I started with Final Cut (now in Premiere), and I can get into a pretty good groove with the material. It’s still a very different feeling than working with the material live, but it can be really nice to get in there and bring out certain parts of the song you really want to highlight.

Below is a super large image of my entire Premiere timeline for the video (click for full readable size to get an idea of the types of material I was actually filming; image size: 400px x 26000px).

I ended up shooting, dropping things in and seeing what worked and then going back and shooting more. I probably had 6-8 different established shooting periods where I collected the majority of the footage, and sometimes I just had my lens on me and would shoot stuff if it looked like it might be really unusual looking close up. It was a very different process than having to set up established shooting schedules…just being able to shoot on the fly for the video was an unusual experience. It definitely made the editing process a little more arduous. The whole video probably went through about 2 or 3 different versions before it settled into its final form. All in all, a really fun and tiring process, but I’m really happy with the result.

June 24th, 2012

Crayolascope

The Crayolascope – an Analog Depth Display from blair neal on Vimeo.

Using 12 toys from Crayola called “Glow Books”, I hacked together a charming prototype of what a ~1ft deep 3D display might look like. This would be a similar concept to animating some of those famous depth paintings on dozens of panes of lit glass.

Uses an Arduino Mega to drive it all.

For the animation, I traced a cube I had digitally animated and printed out, frame by frame.

You can control the speed, scrub to a position/frame, and create a fade effect.
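This isn’t the actual Crayolascope firmware, but roughly the idea looks like this (pin numbers and the speed potentiometer are assumptions): the Mega just steps through the 12 layers, lighting one frame at a time.

// Hypothetical Arduino sketch for a 12-layer sequential display
const int NUM_FRAMES = 12;
const int framePins[NUM_FRAMES] = {22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33};
const int speedPotPin = A0;   // potentiometer sets the frame rate

int currentFrame = 0;

void setup() {
    for (int i = 0; i < NUM_FRAMES; i++) {
        pinMode(framePins[i], OUTPUT);
        digitalWrite(framePins[i], LOW);
    }
}

void loop() {
    // map the pot reading to a per-frame hold time (the animation speed)
    int holdMs = map(analogRead(speedPotPin), 0, 1023, 20, 500);

    digitalWrite(framePins[currentFrame], HIGH);  // light this layer's panel
    delay(holdMs);
    digitalWrite(framePins[currentFrame], LOW);   // blank it before advancing

    currentFrame = (currentFrame + 1) % NUM_FRAMES;
}

Scrubbing would read a second pot to pick currentFrame directly, and the fade effect would need PWM-capable pins rather than plain digital writes.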

The Crayolascope has been exhibited at the NY Hall of Science in Queens, NY as part of their series that teaches kids about different aspects of animation. It has also been shown at Launchpad in Brooklyn, NY as part of the Slap Dash art series.

For the next version, I’d like to play with more powerful lighting and fuller edge lighting, as well as solve the issue of internal reflectivity between panels degrading the quality of the “image”. Once the animation gets about 14-18 frames deep, it becomes very difficult to see from one side unless it is in a very dark space. I would love to get it much deeper than that, or at least achieve a finer Z-space resolution.

Press:
Engadget: http://www.engadget.com/2012/06/25/crayolascope-hacks-toys-into-foot-thick-3d-display/
Hack-a-day: http://hackaday.com/2012/06/24/crayolascope-turns-flat-displays-into-volumetric-coolness/
Makezine Blog: http://blog.makezine.com/2012/06/25/crayolascope-an-analog-depth-display/

June 24th, 2012

Projection abstraction #1

Projection Abstraction #1 from blair neal on Vimeo.

Playing with a laser pico projector, Quartz Composer, and some colored gels.

April 15th, 2012

Crystal Eye – my first iOS app

Crystal Eye promo video from Fake Love on Vimeo.

At work I got the amazing chance to spend some of my free time developing a simple photobooth iPhone app called Crystal Eye. I wanted to try to make something that I hadn’t really seen before, and a lot of the effects on the App Store seemed to be in the same sort of “overlay” style.

I’ve been interested for a while in making effects that are influenced by the content of the image and aren’t simply overlaid with little regard for what is going on inside the image. Another goal was to create a fun, interactive tool that anyone could just pick up and use. The live tweaking aspect was also pretty important to me.

The app is still in an early development stage, with a lot of cool tweaks and extra effects to be made down the road. I also hope to soon make a variation that processes each image as a frame that can be reassembled into a weird rotoscope-style video.

Made with openFrameworks. Coded by me and Caitlin Morris. GUI design by Layne Braunstein.

Get the app, it’s free!

December 20th, 2011

Top Music Videos of 2011

(no particular order)

Bon Iver – Calgary (really awesome environment..love the reveal at the end)

Battles – My Machines (amazingly done single shot video)

Hooray For Earth – True Loves (this needs to be made into a movie)

No Age – Fever Dreaming (another good single shot video)

Battles – Ice Cream (all over the place, but the styling is pretty great)

Adele – Rolling in the Deep (some of the shots are really incredible)

Swedish House Mafia – Save the world (what a simple idea..but brilliant)

Oh what the hell:
Katy Perry – TGIF