Archive for the ‘MA Applied Animation’ Category
June 25th, 2009 | Richard Almond
Motion capture, motion tracking, or ‘mocap’ are terms used to describe the process of recording movement and translating that movement into a digital model. Our project explores the use of analogue sensors as control input data for a digital, animated model. The aim is to define a connection between the physical world and the digital world by investigating new and intuitive methods of communication between user and computer.
Maya was used as a tool to create a virtual environment, and the MEL scripting language has allowed the development of complex and even unpredictable animation output. Arduino was used as an interface between computer software and the physical environment. It is simultaneously an input and an output device: it can capture both analogue and digital input readings from a series of sensors, and it can be used to drive motors, lights, and other actuators. Arduino is an open-source electronics prototyping platform based on flexible, easy-to-use hardware and software. It is widely used by artists, designers, hobbyists, and those interested in the creation of interactive objects and environments. Arduino’s microcontroller is programmed using the Arduino programming language (based on Wiring) and the Arduino development environment (based on Processing). Arduino projects can be stand-alone or they can communicate directly with software such as Maya, Flash, Max/MSP and Processing.
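As a rough illustration of this kind of communication, the Arduino can stream its analogue readings over serial as one comma-separated line per sample frame, which the host software then parses. A minimal Python sketch of the host side follows; the line format ("s0,s1,s2") and the function name are our illustration, not the project's actual protocol:

```python
def parse_sensor_line(line: str) -> list[int]:
    """Parse one serial line of comma-separated analogue readings.

    The Arduino is assumed (hypothetically) to print one line per
    frame, e.g. "512,300,87\n", one field per sensor.
    """
    return [int(field) for field in line.strip().split(",")]

readings = parse_sensor_line("512,300,87\n")
print(readings)  # [512, 300, 87]
```

In practice the line would arrive from a serial port rather than a string literal, but the parsing step is the same.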
As the means to extract analogue data from the physical world, the project uses Sharp proximity sensors. Each sensor consists of two elements: an LED which emits infrared light and a detector which receives the reflected beam. Using triangulation, the sensor is then able to calculate the distance of an object from itself. Their operational range is 4cm to 30cm.
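The sensor's raw output falls off roughly as the inverse of distance rather than linearly (a point that becomes important later, when recalibration is discussed). A simple sketch of an approximate conversion, assuming a hypothetical calibration constant that would need fitting against real measurements:

```python
def approximate_distance_cm(reading: float, k: float = 2500.0) -> float:
    """Convert a raw analogue reading to an approximate distance in cm.

    Sharp IR proximity sensors produce an output that falls off roughly
    as the inverse of distance. The constant k is a hypothetical
    calibration value, not taken from the project or a datasheet.
    """
    if reading <= 0:
        raise ValueError("reading must be positive")
    return k / reading

# e.g. a raw reading of 500 maps to roughly 5 cm under this model
print(approximate_distance_cm(500.0))
```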
TESTING THE SENSORS
Initial sensor testing began to ensure proper communication with Maya. A series of simple animations were created in which we mapped sensor inputs to cluster points, allowing us to manipulate a lofted surface through physical interaction with the sensors.
BUILDING THE DEVICE
After refining our initial experimentation with the simple animation, we decided to place the sensors in a more controllable environment. This would give us greater control over the animation and a more accurate set of outputs. For our first prototype we decided to create a box that housed our three proximity sensors with greater accuracy. The box consists of two sheets of plywood fixed to four battens, forming three chambers. A proximity sensor is housed at the end of each chamber. The length of the chambers corresponds to the range of the sensors (40mm-300mm). Each chamber is fitted with a plunger, and each plunger has an end pad with dimensions just smaller than the inside of its chamber. The aim is to remove as much interference from the sensor readings as possible by providing constant, flat surfaces for the infrared beams to reflect off. The box then needed to be adapted so that the wiring from the sensors could connect to the Arduino board more easily, which meant modifying the original design immediately to allow for testing.
TESTING THE DEVICE
Once communication was established between the sensors and Maya, the box underwent initial testing. A series of short videos were made demonstrating the sensor effects upon a digital model when the values were mapped to various attributes.
REFINING THE DEVICE AND CONCEPT
During discussions and testing of the initial device, it became evident that an underlying theme of modularity was rooted in the project. To increase usability and adaptability it was decided that each sensor should be encased in its own individual piston. Since the device uses only one type of sensor, its potential use could be somewhat restricted, but by allowing each individual sensor a higher degree of versatility in its own, independent operation, the device would be opened up to an array of uses.

The refined version of the device is a series of individual piston tubes, each containing a sensor. The tubes are designed to the dimensions of a grid and are free to be rearranged into the most intuitive arrangement for operation. This means that, firstly, the device is not specific to any one animation and can be rearranged to suit each, and secondly, that it is extremely user-friendly: the user decides on the arrangement of pistons that suits the specific animation they are operating. The piston casings have a series of male components along one of the longer sides and a series of female components along the other three, allowing pistons to be locked together in a chosen arrangement when in use. There is also a baseplate containing female components so that all pistons can be securely fixed down.

The device was designed using AutoCAD and the individual components laid out as a net. The design was then run through the laser cutter onto a series of plywood boards. The laser cutter can cut and etch material up to 4mm in thickness; the plywood used in the second device is 4mm thick to improve strength, sturdiness and durability. The device took around an hour in total to cut. The laser cutter was used to enhance accuracy in the device, for speed of manufacture, and for its general aesthetic appeal.
ASSEMBLING THE REFINED DEVICE
Once the components were cut out, the piston casings had to be assembled. Each piece of ply was cut with a series of teeth running along its edges, meaning that pieces could be easily and securely slotted together. Wood glue was used to hold the boxes together and to secure the sensors to the end plate. The sensors had to be modified to fit securely into the casings, and a small hole was cut into the end plate to allow the sensor wires to exit the boxes. The male components were fitted into the etched square sockets to create the interlocking modular system. We also needed to develop the physical side of the interaction between user and animation. The first part of this involved creating a board on which to mount each box. This allowed for a great deal of flexibility, as it meant that each box could be reconfigured to suit the specific animation and provide the user with the most intuitive method of interaction.
The decision to use the laser cutter proved very beneficial: the device was constructed quickly and easily. Certain initial problems did occur, however, possibly because the laser cutter is so accurate that it leaves almost no tolerance. For instance, the end plates at times became stuck inside the casing and detached from the piston handles. The solution was simply to sand down the offending edges, but in retrospect a tolerance of 1mm or so should have been allowed for the internal components. The general look and feel of the pistons proved very successful. Operation was smooth and easy after some slight modification, and the device felt sturdy when in use.
The next stage was to trial our new device. This began with the manipulation of a series of simple, two-dimensional surfaces by mapping a series of cluster points to the sensor inputs. Early animations involved simply attaching a cluster point to each corner of a lofted surface before mapping them to sensor input values. The next stage saw a surface divided into a 5×5 grid. A cluster point was assigned to each intersection point on the grid, and each cluster had its Y-translation randomly mapped to one of six sensor input pistons.
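The randomised grid mapping can be sketched as follows. This is an illustrative Python model of the idea, not our MEL script; the cluster names and update function are invented for the example:

```python
import random

random.seed(0)  # deterministic assignment, for demonstration only
GRID_SIZE = 5
NUM_PISTONS = 6

# Each of the 25 grid clusters is randomly assigned one of six pistons.
# Cluster names are illustrative, not Maya's actual node names.
assignment = {
    f"cluster_{row}_{col}": random.randrange(NUM_PISTONS)
    for row in range(GRID_SIZE)
    for col in range(GRID_SIZE)
}

def update_clusters(sensor_values, assignment):
    """Return {cluster: y_translation}, given one value per piston."""
    return {name: sensor_values[piston] for name, piston in assignment.items()}
```

Each frame, every cluster's Y-translation simply follows whichever piston it was randomly handed at setup.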
RE-CALIBRATING THE SENSORS
It quickly became obvious that simply containing the sensors within a casing did not give us the results we had hoped for. The initial idea was that a cluster point would be selected depending on how far the piston was pushed in. The obvious way of interacting would be that pushing the piston halfway in would select the cluster point halfway along an edge. The problem, however, was that the sensors do not give a reading proportional to the distance of an object from them. That is to say, if the value reads 0 when the piston is at its fully open position and 100 when it is fully closed, it will not necessarily read 50 when the piston is at the halfway position. Plotting sensor reading values against the distance the piston is pushed into the casing produces a steep, non-linear curve. This proved a major problem and led to a radical rethink of the method of scripting we used to achieve what we wanted.
In order to recalibrate each sensor, we needed to adjust the corresponding sensor value graph in Maya. The graphs on the left are attempts to adjust how the sensor input is remapped to the output value across the range. This required a lot of experimentation with the graphs to counteract any anomalies from the sensors and produce a consistent set of results proportional to the actual distance of the plunger along the piston. Each sensor produced different anomalies, and so a different graph was needed for each.
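A sensor value graph of this kind is essentially a piecewise-linear remap curve. A minimal Python sketch of the idea, with an invented calibration curve (the actual graphs were tuned by hand per sensor in Maya):

```python
from bisect import bisect_right

def remap(value, curve):
    """Piecewise-linear remap of a raw sensor value through a
    calibration curve given as sorted (raw, output) points.
    Values outside the curve are clamped to its endpoints."""
    xs = [p[0] for p in curve]
    ys = [p[1] for p in curve]
    if value <= xs[0]:
        return ys[0]
    if value >= xs[-1]:
        return ys[-1]
    i = bisect_right(xs, value)
    t = (value - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Hypothetical curve that flattens the steep lower end of the range:
curve = [(0, 0.0), (20, 10.0), (50, 30.0), (100, 100.0)]
```

Because each sensor misbehaved differently, each would get its own `curve` table rather than sharing one.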
After much experimenting with the sensor value graphs, we achieved the best set of results possible, the selection of cluster points relating fairly well to the amount the piston was pushed. The graphs to the left show a series of curves that produced a reasonably proportional set of input and output values. It became apparent that the main problems were at the lower end of the sensors’ range. This is evident in the graphs, as they all needed to be significantly lowered at one end to counteract inaccuracies in the sensors.
By recalibrating the sensors we now had far greater control over our animations. This allowed us to begin developing a series of animations which better reflected the full potential of our device. We introduced a digital representation of our piston into an animation to better understand the relationship between the physical and the digital models. We continued to use the theme of the lofted surface, which we deformed with cluster points. The pistons allow the user to interact with and deform a surface in a virtual world via physical instruments in an intuitive manner. This aims to replicate how a physical surface could be deformed in the real world.
In order to progress our animation we had to take a different approach. Unforeseen problems had begun to develop with the calibration of the sensors, and we eventually had to settle on a revised method of mapping the device to our digital model. Moving away from the sensor graph, we introduced ‘if’ statements: we split the range of each sensor into slots which could be mapped to certain points of selection. The script consisted of a series of ‘if’ statements, for example ‘if the sensor reading is between X and Y, select point Z’. If statements allowed us to define where on the surface the point of manipulation takes place. By assigning a sensor to each of the axes in the animation we were able to control which cluster was selected, allowing for greater flexibility in subsequent animations.
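The slot scheme can be sketched compactly. This Python version generalises the chain of ‘if’ statements into a single banding calculation; the range and slot count are illustrative:

```python
def select_point(reading, lo=0, hi=100, slots=5):
    """Map a sensor reading onto one of `slots` selection points by
    dividing the sensor range into equal bands: the if-statement
    scheme 'if the reading is between X and Y, select point Z'."""
    reading = max(lo, min(hi, reading))  # clamp stray readings
    width = (hi - lo) / slots
    index = int((reading - lo) / width)
    return min(index, slots - 1)  # the top edge belongs to the last slot
```

This trades the smooth (but unreliable) proportional mapping for a small number of discrete, dependable selections.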
For our final animation we took what we had learnt from the previous work and applied it to a more refined method of interaction. Initially, we worked hard to harness the accuracy of our device into an animation which could be precisely and intuitively controlled.
A modular physical interface which is adaptable and changeable in order to suit the requirements of the user and the nature of the animation. The natural tendency for this type of project would be to develop an animation and then design the physical device to operate and interact with it. However, we considered this approach quite restrictive and therefore decided to create a device able to adapt to various animations, meaning the physical interface is not confined to one particular animation type. The practical approach to this concept was to devise a modular system that allowed for maximum flexibility through simple physical transformation. To allow for uniform results across all the physical devices, we opted to use a single sensor type: proximity sensors. Due to the nature of the sensors’ reading range, it was essential to be able to control each input, which is why we made the box and plunger devices. As the next logical step in our development of the cluster-deformed surfaces, and in keeping with the form of a modular physical device, we developed an animation based on a series of cubes restricted to a 3D grid. The initial state of the animation is a 4 x 4 grid, all with ‘0’ Y-translation. Plungers are assigned to select a cube in the X and Z axes, and a further plunger is used to translate the selected cube in the Y axis. A change of colour from blue to red denotes which cube is active. After testing we introduced a further plunger to prevent unwanted readings being assigned to cubes whilst navigating the space.
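The control logic of this cube-grid animation can be sketched as follows. This is our Python reading of the description, not the project's MEL; the extra guard plunger is modelled as a simple clutch that must be engaged before a cube is written to:

```python
GRID = 4  # 4 x 4 grid of cubes

def slot(reading, slots=GRID, lo=0, hi=100):
    """Divide the sensor range into equal bands; return a band index."""
    reading = max(lo, min(hi, reading))
    return min(int((reading - lo) * slots / (hi - lo)), slots - 1)

def step(state, x_reading, z_reading, y_reading, clutch_engaged):
    """Update the grid of Y-translations; return (state, active_cube).

    One plunger selects along X, one along Z, one sets Y-translation,
    and the 'clutch' plunger gates writes so that navigating the grid
    does not scribble values onto cubes along the way.
    """
    active = (slot(x_reading), slot(z_reading))
    if clutch_engaged:
        state[active] = y_reading
    return state, active

state = {(x, z): 0 for x in range(GRID) for z in range(GRID)}
state, active = step(state, 60, 10, 42, clutch_engaged=True)
```

With the clutch released, moving the X and Z plungers only moves the (red) active-cube highlight without modifying anything.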
Experimentation with the intuitiveness of the device and introducing randomness. The initial animation proved to be very effective. The blocks could be accurately controlled and translated, with the red indicator cube allowing the user to easily see the specific cube they were modifying. The problem was that the animation was rather mundane and monotonous. Our device, although incredibly accurate and intuitive to operate, was also rather complex, and there was an obvious interest in this complex system being used to control a relatively simple animation. We decided to remove some of the predictability of the device. To create this second animation, which is clearly more abstract than the first, we removed the red selector box and randomly positioned the pistons, making it very hard for the operator to know which box they were manipulating with each piston. The result sees the operator either naturally struggle to learn which piston is associated with which manipulation through a process of trial and error, or else become completely content randomly pushing pistons and enjoying the unexpected results that occur. We also began to introduce transparency and reflection into our animation, as well as a panning camera, all of which allow a greater understanding of how the animation is developing in 3D and increase the general aesthetic appeal. Giving others the chance to try our device alerted us to the joy of its use as a tool to create, rather than simply a tool to control.
Turning our concept on its head: removing the predictability of the device. This new line of thinking proved to be exciting, and we explored the idea of turning our concept on its head yet further. Our initial animations were incredibly predictable; we had calibrated and smoothed all the life out of our sensors. We were basically using our analogue sensors as digital ones, giving us 1s or 0s, yes or no. We were essentially restricting our device and not exploring its full potential. The theme began to move towards abstraction: not how precisely our device could control an animation, but how the device could be used within a seemingly random and counter-intuitive system. In animation 3 we have introduced further modification options such as rotate and scale. These modifications, as well as the translate function, are mapped to various boxes in a complex system which is almost impossible to understand while the animation is in use. Modifications of boxes are now dependent on different modifications of different boxes. For instance, pushing piston 1 may rotate box A by X amount, and in turn trigger the scaling of box B by Y amount and the translation of box C by Z amount. The results are wholly more abstract and certainly more interesting. Embracing the analogue nature of our sensors, down to removing their value-smoothing capacitors, results in a more entropic outcome than our previous attempts, where we enforced digital precision onto our device.
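The cross-coupled scheme can be modelled as a small coupling table. A Python sketch, with an invented table (cube names, attributes and gains are illustrative only):

```python
# piston -> list of (cube, attribute, gain) side effects it triggers.
# This table is invented for illustration; the real system's couplings
# were deliberately opaque to the operator.
COUPLING = {
    0: [("A", "rotate", 1.0), ("B", "scale", 0.5), ("C", "translate", -0.25)],
    1: [("B", "rotate", 2.0), ("C", "scale", 0.1)],
}

def apply_piston(values, piston, amount, coupling=COUPLING):
    """Accumulate every attribute change triggered by one piston push."""
    for cube, attr, gain in coupling.get(piston, []):
        values[(cube, attr)] = values.get((cube, attr), 0.0) + gain * amount
    return values

# Pushing piston 0 by 10 units rotates A, scales B and translates C at once.
values = apply_piston({}, 0, 10.0)
```

Because a single push fans out to several cubes and attributes, the operator can no longer trace cause to effect, which is exactly the point.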
Enhancing the animation and further liberating the sensors. Further experimentation was done with the animation: the scale factor was reduced to help add some visual density, and the rotational factor was increased. The camera angle was reduced to enhance the abstract nature of the animation.
Introducing colour. Colour mapping has been introduced to the animation so that when each piston generates a certain value, the respective box has a blinn shader applied in one of the primary colours [100% red, 100% blue or 100% green]. Whereas in previous animations each modification [scale, rotate, translate] applied to only a single axis of each cube, here the cubes can be manipulated in all three axes.
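The colour rule itself is a simple banding of each piston's value onto the three primaries. A Python sketch with illustrative thresholds (the project's actual bands are not recorded here):

```python
PRIMARIES = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]  # R, G, B

def colour_for(value, lo=0, hi=100):
    """Pick a primary shader colour by splitting the sensor range
    into three equal bands. Band edges are assumptions."""
    value = max(lo, min(hi, value))
    band = min(int((value - lo) * 3 / (hi - lo)), 2)
    return PRIMARIES[band]
```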
The device becomes a tool to create and enjoy rather than to control. In the final incarnation of the animation our device is entirely free to create. The user still has precise control over the device, yet there is almost no sense of predictability, and so every action generates a new, unexpected result. The exact role of each piston is now unimportant; all six operate as part of a coherent system, although different results are achieved depending on the number of pistons in use. The transparency of each cube has been reduced to 50%, which brings the added bonus of colour theory: at the points where primary colours overlap, secondary colours are generated, and where secondary colours overlap with primary colours, tertiary colours are generated.
The project was a series of ups and downs: whereas the process of building and refining a physical device was extremely rewarding, the process of interacting with Maya was slow and painfully laborious. We found Maya to be an unintuitive, difficult piece of software to operate, with the most completely ridiculous naming conventions for tools! Despite the numerous difficulties we experienced throughout the project, we are very happy with the results.
June 16th, 2009 | Richard Almond
Further experimentation with the Jitter Mean patch. As well as manipulation of a live stream, the patch can also work on a pre-recorded video file. Here I ran a rendering from our Motion Capture Device through the patch. Ghostly forms of the boxes’ previous states fade to a pale mist. A trail of a past incarnation is captured. Is this digital memory acting in a poetic, human way?
March 11th, 2009 | Richard Almond
This project is to build a physical device with which to interface with Maya in order to create an animation. We are using infrared proximity sensors which give an analogue output reading for distances between 4cm and 30cm.
A short video demonstrating the sensor values being successfully read in Maya. A simple curve is defined by three cluster points, and the X-translation of each cluster point is mapped to a corresponding sensor value. The sensor output values can be remapped as desired within Maya to enhance or limit the degree of movement of the cluster points:
After experimentation, we found that the sensors work best in a dark, enclosed environment, and so the concept of using a series of piston-like devices was born. This will hopefully enable us to create a user-friendly, intuitive, and accurate interfacing system between the user and Maya. We built a rough mock up:
A shot of the first version of the device during assembly. The device accommodates 3 sensors, and here the battens are visible which separate each compartment containing a sensor:
The next stage is to refine the device, both physically and in terms of what it controls within Maya.