I was recently creating a study about harmonic progressions and came across
a theory that sound ambience may evoke a sense of color,
and some underlying facts suggest that it could.
Anyway, here is a short presentation of the theory.
If the above assumptions are true, maybe they could be used to recover color data by
bouncing sounds off of materials. Just a thought.
I recently found this on my hard drive; it's a compilation of everything I was doing back at the Tuldok headquarters before it shut down.
It's a shame to let it go to waste, so I uploaded it here. Here ya go.
I swear to myself I'll continue these experiments one of these days, given the time, space, and equipment that I need.
OK, another geeky post: me tracking candies in Python.
No, they're actually refrigerator magnets stuck to my pen tablet.
This is a Python-OpenCV test that could be used to implement orientation tracking for the NI-droidcam.
I recently learned Python-OpenCV from a site worth mentioning >> www.neuroforge.co.uk
so thank you, devs, for the informative site.
It's still experimental; I'm still learning a thing or two about it.
The next step is to get the data stream into Blender and perhaps use the points for a camera rig.
Hope it all goes well, wish me luck on this one.
Now presenting the low-cost NI-droidcam v1.0,
made from PVC pipes, an old laptop cooler, a cheap China tablet, and accessories.
It has 2 buttons: one for quick animation playback and another for going back to the start frame
(in case you made a mistake and need to restart the recording),
a mini keyboard for Blender shortcuts
(I'll use this for custom shortcuts: viewing different layers, setting wireframe mode, etc.),
a hacked numpad for creating more custom buttons
(probably for the next versions),
and a low-cost China Android tablet for the display,
super cheap and fast enough to run the virtualization.
This will basically implement the Kinect virtual-cam hack I've done in my previous posts,
only this time it'll be displaying through the tablet, allowing me to move around and see
the changes being made even if I rotate 180 degrees.
Here's a video of the previous virtualization test I did with an Android phone.
I haven't actually tried this thing out with the droidcam yet, but I'll make another post after I do.
You can check out my other previous Kinect camera virtualization tests here >> Through the looking glass
And don't forget to subscribe. Thanks! :)
Here's another cool Kinect test I've done.
I had this idea of performing Johnny Lee's old head-tracking Wii hack, but with a Kinect sensor.
The head-tracking Wii hack is an ingenious way of using head tracking to simulate a virtual room inside your monitor. Lee called it a VR display, or "virtual room" display. The process involves tracking a user's head position and using the tracked data to simulate the view of a virtual room; the feel is like getting to reach into another world, or having 3D objects pop right out of your screen.
Here's how it looks:
forest scene assets modelled by Ross Tec and Ramon Del Prado
The concept behind achieving this is actually very simple: the Kinect sensor tracks the user's head
and figures out its position; the extracted data is then used to change the angle of the scene, tricking our eyes into believing that there's another room inside our monitor.
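As a rough illustration of that last step (my own simplification, not the code from my test; Johnny Lee's original demo actually builds a proper off-axis projection frustum rather than just rotating the camera), the tracked head position relative to the screen can be turned into view angles for the virtual camera:

```python
import math

def head_to_view_angles(head_x, head_y, head_z):
    """Turn a tracked head position (meters, relative to the screen center,
    with z = distance from the screen) into yaw/pitch angles in degrees for
    the virtual camera. This angle-only version is a crude stand-in for a
    real off-axis projection, but it conveys the idea: move your head,
    and the view into the 'room' shifts accordingly."""
    yaw = math.degrees(math.atan2(head_x, head_z))
    pitch = math.degrees(math.atan2(head_y, head_z))
    return yaw, pitch

print(head_to_view_angles(0.0, 0.0, 0.6))  # head centered: (0.0, 0.0)
print(head_to_view_angles(0.3, 0.0, 0.6))  # head moved right: yaw > 0
```

The closer the head is to the screen (smaller z), the larger the angle change for the same sideways movement, which matches how a real window behaves.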
Now, this is actually a really cool way to view and record animations and imitate a POV effect shot, which led me to an idea for animating a camera in a whole new way.
So I had another concept that involves using the user's hands to point the camera at certain angles.
It takes away the 3D experience, but with this setup you can actually steer the scene using hand movements.
I won't hold you guys back on this one; the video is pretty self-explanatory.
So here's what we came up with at the office. :)
Had some fun scripting recently; I finally got a chance to polish my Python skills. I'm working on a small Python addon script that imports Kinect facial animation data into Blender.
The facial data is captured with Brekel's face-tracking software, "Brekel Kinect Pro Face"; the software tracks/records facial points and movements and creates data that can be used for simulating facial animation.
The software exports float variables labeled as "Animation Units". These units are designed to drive facial shapes/morphs created in a 3D software package; in our case, Blender Shapekeys.
Each unit has a corresponding name and value, and the names are pretty much self-explanatory.
Blender users can use these to determine the type of shape keys needed for their models, as well as to drive the created shapes.
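To make that concrete, here's a tiny sketch of the remapping step. The -1..1 input range is my assumption (check your own Brekel export), since Blender shape keys default to a 0..1 value range:

```python
def au_to_shapekey(value, au_min=-1.0, au_max=1.0):
    """Remap an Animation Unit value into Blender's default shape-key
    range [0, 1], clamping anything out of range. The -1..1 input range
    is an assumption; verify it against your own exported data."""
    t = (value - au_min) / (au_max - au_min)
    return min(1.0, max(0.0, t))

# e.g. a jaw-opening style unit at its extremes and at rest
print(au_to_shapekey(-1.0), au_to_shapekey(0.0), au_to_shapekey(1.0))
# 0.0 0.5 1.0
```

Inside Blender, the remapped value would land on obj.data.shape_keys.key_blocks[name].value, or be wired up automatically via driver_add("value") once the driver script mentioned below exists.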
As of now, the creation of facial shape drivers is done by hand; I'll be writing a script soon that can apply these units as drivers to shapes effortlessly.
An additional button in the side panel, perhaps.
Apart from the Animation Units, I also managed to import the "head frame data"; this contains the tracked data for the head's rotation and location movements.
The data was applied to an "Empty" and can be used to constrain the rotation/location of the head.
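For anyone wiring this up themselves, the fiddly part is the conventions: Kinect space is Y-up while Blender is Z-up, and Blender's Euler rotations are in radians while tracking exports typically give degrees. A small conversion sketch (the exact axis mapping here is my assumption; verify it against your own export before keyframing the Empty):

```python
import math

def brekel_head_to_blender(loc, rot_deg):
    """Convert one head-frame sample (Kinect-style Y-up location in meters,
    rotation in degrees) into Blender conventions (Z-up, radians).
    The (x, -z, y) axis swap is an assumed mapping; flip signs as needed
    if the Empty moves the wrong way."""
    x, y, z = loc
    loc_b = (x, -z, y)  # assumed Y-up -> Z-up swap
    rot_b = tuple(math.radians(a) for a in rot_deg)
    return loc_b, rot_b

loc, rot = brekel_head_to_blender((0.1, 1.5, 2.0), (0.0, 90.0, 0.0))
print(loc)  # (0.1, -2.0, 1.5)
```

In Blender, each converted sample would then be assigned to the Empty's location and rotation_euler, followed by keyframe_insert on both data paths, one sample per frame.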
Here in this example I used it on Blender's most beloved test monkey, "Suzanne".
Finally, here's the import script addon >> brekel_addon.zip
The script is not done yet; there are still a few more things to do, and I still have to import the tracked facial points.
You can already import the "Animation Units" and the head movements with the script.
For those who don't have access to a Kinect sensor, I've included the exported test file I used for the monkey. More test files can be found on Brekel's site.
For more info about the Brekel Face software, please head on over to this site.
Blender can be found here.
Downloads for Brekel Face are available on Brekel's website; just search for it in the right panel. There are installers for 32- and 64-bit systems. As for me, I was lucky enough to acquire a license straight from Brekel for doing this script. Thanks, man, really appreciate it.
A full tutorial on the Blender/Brekel Face workflow is coming soon.