The magic of animation.

The magic of animations (prototyping today’s user experiences) (2013-10-31)

The design process of a product can be seen as a conversation designers have with stakeholders, peers and testers. The more fluent this conversation is, the more likely it is that the resulting product will be flawless and user-centred.
Prototypes are consequently essential enablers for this process, useful at different stages and at different levels of fidelity: from communicating and co-designing the initial concept (for instance with sketches on a piece of paper) to validating the solution with existing users (for instance by simulating features on a branch of the production code base).

Because the execution of an idea matters, and because everything about the execution counts (look, behavior and performance), low-fidelity prototypes quickly become inadequate for taking final decisions. This is why companies like Apple are known to carry solution proposals very far into the development process, ending up comparing almost-final products.

So, once you are done with sketches and wireframes, what is the best way to prototype today’s user experiences with high fidelity?

Before trying to answer this question, there is another aspect I think is worth considering. Today’s software behavior can be very complex, and animations, whether they result from direct manipulation of UI elements (e.g. dragging) or from a transition between states, are now a big part of UI design. As Apple put it: “Although animation enhances the user experience, it is far from mere ‘eye candy.’ Animations give users feedback or context for what is happening in the user interface.”
The extra information they provide lets you optimize what is presented at any given time, sometimes even removing the need for some UI elements.
For this reason my initial answer could be: the best tool to prototype today’s user experiences is a tool where it is easy to create ad-hoc animations.
Standard transitions, like the ones applications such as Briefs let you use, suffice, in my opinion, only as long as your designs are not yet rendered (i.e. while they are still wireframes). After that point, chances are the pixel-perfect UI won’t get enough support from those transitions.

There are different ways you can create custom animations, but to get a better feel for the final behavior the prototype should have a minimum of interactivity. For this reason I would be careful about tools like After Effects, which are made to export rendered videos. While this type of output can simulate powerful hardware-accelerated effects, the playback of a pre-rendered video, being heavily compressed, is simply not meant to be easily controlled. Of course you could convert it to a sequence of PNG images, but this is definitely not practical for longer animations.
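To make that last point concrete, here is a rough sketch of the PNG-sequence workaround in plain JavaScript (the frame count, file names and element are invented for illustration): playback becomes trivially controllable, simply by choosing which frame to show, but every frame has to be shipped as a full image.

    // Sketch of the PNG-sequence workaround: each exported frame is an image,
    // and "playback" is just choosing which frame to display.
    const FRAMES = 90;                              // e.g. 1.5 s at 60 fps (assumed)
    const img = document.getElementById('frame');   // hypothetical <img> element

    function showFrame(i) {
      const n = String(Math.min(Math.max(i, 0), FRAMES - 1)).padStart(3, '0');
      img.src = `frames/transition_${n}.png`;       // transition_000.png, 001, ...
    }

    showFrame(45);  // jump straight to the middle of the animation

The control is perfect, but ninety full-screen PNGs already weigh far more than the compressed video they were exported from, which is why this stops being practical for longer animations.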
A newer solution that has recently been gaining traction is Framer. With Framer you can easily script animations of individual UI elements using JavaScript, which of course also lets you add any logic you need, and it even has an official Photoshop exporter. But while scripts are very easy to tweak, and text files are easy to version and collaborate on, it is hard to design a complex animation with independent elements without any kind of preview. This is very far from what you get with a WYSIWYG approach.
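For reference, the scripted approach looks roughly like this. The sketch below is plain JavaScript using the browser’s built-in animation API rather than Framer’s actual API, and the element and values are invented; the point is that the animation and the prototype logic live in the same small script.

    // A card slides up when tapped; the same script can hold any extra logic.
    const card = document.getElementById('card');    // hypothetical element
    let revealed = false;

    card.addEventListener('click', () => {
      const from = revealed ? -240 : 0;               // pixels, made-up values
      const to   = revealed ? 0 : -240;
      revealed = !revealed;                           // prototype state sits next to the animation
      card.animate(
        [{ transform: `translateY(${from}px)` }, { transform: `translateY(${to}px)` }],
        { duration: 300, easing: 'ease-in-out', fill: 'forwards' }
      );
    });

Easy to tweak and to version, but you only see the result by running it, which is exactly the lack of preview mentioned above.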
I think this leaves us with a couple of options: applications born first to animate, to which an interactive layer was later added, opening a new world of possibilities. I am talking about software like Adobe Flash, Adobe Edge Animate or Tumult Hype (Mac only). My final answer is therefore: the best tool to prototype today’s user experiences is a tool designed to create ad-hoc animations into which you can add logic (not the other way round).

Unfortunately I don’t have much knowledge of Hype, but from what I can see it seems very well designed, and I guess that, starting with a fresh canvas, it had the benefit of learning from the mistakes the very mature Flash made. Edge Animate should be very similar to Flash, although its logic layer is very likely not as solid. Flash still offers more scalability, since you can build proper apps with it, but if all you need is quick, throwaway prototypes, the simplicity of Hype may be enough, and Edge Animate could let you champion the oh-so-trendy HTML5.

Because I started using Macromedia Flash 14 years ago, it is obviously my weapon of choice: I don’t really have any reason to try something similar that is less mature and less featured. But if you would be starting from zero with ActionScript, and you don’t need to build a tailored framework from scratch to better integrate prototyping into your workflow, I would probably suggest learning basic JavaScript programming and going for the other two options. Or, if you really want to stick to Flash, use the older version of its programming language (ActionScript 2.0), which is far more scripting-friendly.

I have recently been focused on mobile apps, and my process generally goes as follows. If I have remote stakeholders, I start by making linear animations; these can be as minimally interactive as a click-through, and I illustrate the interactivity with a trace of the finger on the screen (a bit like this animated walkthrough made with Hype). The next level, suitable for instance for shortlisted concepts, should be able to run on a device, and at least some interactivity should be simulated (e.g. a pan triggered by a tap). For this purpose I programmed some draggable components which, when placed on the editor timeline, allow me to control the playback of the linear animation using standard gestures (tap, pan or pinch), as sketched below. I then package the Flash movie as an Adobe AIR app so that it can be installed on the mobile device (this process can easily be automated). It’s all very fake, but it feels so real.
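A minimal web-flavoured sketch of that gesture-driven playback idea (my real components are ActionScript on the Flash timeline; the element, distances and duration below are made up): a canned linear animation is paused and its playhead is scrubbed by a horizontal pan, so the pre-built motion feels like direct manipulation.

    // A pre-built slide animation whose playhead follows the finger.
    const photo = document.getElementById('photo');  // hypothetical element
    const DURATION = 1000;                            // ms, length of the canned animation

    const anim = photo.animate(
      [{ transform: 'translateX(0px)' }, { transform: 'translateX(-320px)' }],
      { duration: DURATION, fill: 'both' }
    );
    anim.pause();                                     // we drive the playhead manually

    let startX = 0;
    photo.addEventListener('pointerdown', e => { startX = e.clientX; });
    photo.addEventListener('pointermove', e => {
      if (e.buttons === 0) return;                    // only while the finger/button is down
      const progress = Math.min(Math.max((startX - e.clientX) / 320, 0), 1);
      anim.currentTime = progress * DURATION;         // pan distance -> playback position
    });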
A few tips for Flash+AIR: set the movie speed to 60 frames per second, use the GPU renderer, and use Penner’s custom easing curves (so that the animation properties can easily be communicated to and implemented by the developers).
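Penner’s equations are published formulas, so the prototype and the native implementation can share exactly the same curve. Here is the normalized form of two common ones as a small sketch (t runs from 0 to 1 and the result is the eased progress):

    // Two classic Penner-style easing equations in normalized form.
    function easeOutCubic(t) {
      return 1 - Math.pow(1 - t, 3);
    }

    function easeInOutQuad(t) {
      return t < 0.5 ? 2 * t * t : 1 - Math.pow(-2 * t + 2, 2) / 2;
    }

    // Example: position at 40% of the animation time for a 320 px slide.
    const x = 320 * easeOutCubic(0.4);   // ≈ 250.9 px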

A practical example I can show you is the deletion confirmation process for my app Instants: it took me 20 minutes to build the prototype, while implementing the solution natively in the app, once I was happy with how it felt, took another 6 hours.
Try to delete the fullscreen photo in the Flash applet below (in case you get lost, double-tapping the stage temporarily reveals the active areas).

If you have any questions or want a live demonstration of my workflow, get in touch!

 

Computers use images to learn common sense

Computer uses images to teach itself common sense

Image caption: Computers at Carnegie Mellon University are running a program that analyses images in a bid to learn common sense.

A computer program is trying to learn common sense by analysing images 24 hours a day.

The aim is to see if computers can learn, in the same way a human would, what links images, to help them better understand the visual world.

The Never Ending Image Learner (NEIL) program is being run at Carnegie Mellon University in the United States.

The work is being funded by the US Department of Defense’s Office of Naval Research and Google.

Since July, the NEIL program has looked at three million images. As a result it has managed to identify 1,500 objects in half a million images and 1,200 scenes in hundreds of thousands of images as well as making 2,500 associations.

The team working on the project hopes that NEIL will learn relationships between different items without being taught.

Computer programs can already identify and label objects using computer vision, which models what humans can see using hardware and software, but the researchers hope that NEIL can bring extra analysis to the data.


Common sense facts that NEIL has learned

  • “Airbus_330” can be a kind of / look similar to “airplane”.
  • “Deer” can be a kind of / look similar to “antelope”.
  • “Car” can have a part, “wheel”.
  • “Leaning_tower” can be found in “Pisa”.
  • “Zebra” can be found in “Savanna”.
  • “Bamboo_forest” can be/can have “vertical_lines”

“Images are the best way to learn visual properties,” said Abhinav Gupta, assistant research professor in Carnegie Mellon’s Robotics Institute.

“[They] also include a lot of common sense information about the world. People learn this by themselves and, with NEIL, we hope that computers will do so as well.”

Examples of the links that NEIL has made include the facts that cars are found on roads and that ducks can resemble geese.

The program can also make mistakes, say the research team. It may think that the search term “pink” relates to the pop star rather than the colour because an image search would be more likely to return this result.

To prevent errors like this, humans will still need to be part of the program’s learning process, according to Abhinav Shrivastava, a PhD student working on the project.

“People don’t always know how or what to teach computers,” he said. “But humans are good at telling computers when they are wrong.”

Another reason for NEIL to run is to create the world’s largest visual knowledge database where objects, scenes, actions, attributes and contextual relationships can be labelled and catalogued.

“What we have learned in the last five to 10 years of computer vision research is that the more data you have, the better computer vision becomes,” Mr Gupta said.

The program requires a vast amount of computer power to operate and is being run on two clusters of computers that include 200 processing cores.

The team plans to let NEIL run indefinitely.


3D printers help Japanese children search the Internet

Yahoo Japan’s 3D Printer Helps Blind Children Search the Web


 

BY COLIN DAILEDA, SEP 30, 2013

At a school for the blind in Japan, the Internet is no longer just a visual tool. Yahoo Japan has made it possible for children who are blind to search the web.

In collaboration with Japanese creative agency Hakuhodo Kettle, Yahoo developed a machine called Hands On Search that is part 3D printer, part computer — and will build just about anything at your request.


For example, children at the Special Needs Education School for the Visually Impaired can walk up to Hands On Search and say “giraffe” or “Tokyo Skytree Building.” Then, the machine searches Yahoo for an image and prints a miniaturized version of the object. The machine will make an online request for more information if an image cannot be found, according to Yahoo Japan’s website.

Yahoo Japan says it has no plans to commercialize the device, which it has loaned to the school for free through mid-October.

“After that, we are planning to donate the machine to some organization that will utilize it, like the School for the Visually Impaired,” Kazuaki Hashida, the creative director at Hakuhodo Kettle, told Mashable. “We are considering which organization is the best. We will decide it by the end of October.”

What kind of impact do you think Hands On Search could have if it’s expanded? Leave your thoughts in the comments below.

Image: Flickr, kakissel