Code News

Integrating Animation into a Design System

  • Keeping animation choreography cohesive from the outset of a project can be challenging, especially for small companies. Without a dedicated motion specialist on the team, it can be difficult to prioritize guidelines and patterns early in the design process. What’s more likely to happen is that animations will be added as the product develops.

    Unsurprisingly, the ad-hoc approach can lead to inconsistencies, duplications, and rework in the long run. But it also provides space for creative explorations and discoveries of what works and what doesn’t. As useful as it is to be able to establish system foundations early, it is also ok to let the patterns emerge organically as your team experiments and finds their own voice in motion.

    Once there are enough animations, you might start thinking about how to ensure some consistency, and how to reuse existing patterns rather than recreate them from scratch every time. How do you transition a few odd animations to a cohesive system? I find it helpful to start by thinking about the purpose of animations and the feel they’re designed to evoke.

    Start with purpose and feel

    Purpose

    Like any other element in a design system, animations must have a purpose. To integrate animation, start by looking through your interface and noting how and why you use animations in your particular product and brand.

    For example, at FutureLearn we noticed that we primarily use animation in three ways: to indicate a state change, to add an emphasis, or to reveal extra information.

    A state change shows that an object has changed state due to user interaction. For example, a state can change on hover or on click. Animation here is used to soften the transition between states. Emphasis animations are used to draw attention to specific information or an action, for example a nudge to encourage users to progress to the next step in the course. Reveal animations are used to hide and reveal extra information, such as a menu being hidden to the side, a drop down, or a popover.

    There are no “standard” categories for the purposes of animation. Some products use a lot of standalone animations, such as animated tutorials. Some use screen transitions; others don’t. For example, in the Salesforce Lightning Design System, personality and branding animations are grouped into a separate category.

    Animation types in the Salesforce Lightning Design System are categorized differently from those at FutureLearn.

    The categories are specific to your interface and brand, and to how you use animation. They shouldn’t be prescriptive; their main value is to articulate why your team should use animation in your specific project.

    Feel

    As well as having a purpose in helping the user understand how the product works, animation also helps to express brand personality. So another aspect to consider is how animation should feel. In “Designing Interface Animation,” Val Head explains how adjectives describing brand qualities can be used for defining motion. For example, a quick soft bouncy motion can be perceived as lively and energetic, whereas steady ease-in-outs feel certain and decisive.

    Brand qualities translated to motion (brand feel / animation feel / effect examples):

    • Lively and energetic / Quick and soft: Soft bounce, Anticipation, Soft overshoot
    • Playful and friendly / Elastic or springy: Squash and stretch, Bouncy easing, Wiggle
    • Decisive and certain / Balanced and stable: Ease-in, Ease-out, Ease-in-out
    • Calm and soft / Small soft movements or no movement at all: Opacity, color or blur changes, scale changes

    As you look through the animation examples in your interface, list how the animation should feel, and note particularly effective examples. For example, take a look at the two animations below. While they’re both animating the entrance and exit of a popover, the animations feel different. The Marvel example (top) feels brisk through its use of bouncy easing, whereas the small movement combined with opacity and blur changes in the FutureLearn example (bottom) makes it feel calm and subtle.

    Popover animation on Marvel (top) and FutureLearn (bottom).

    There’s probably no right or wrong way to animate a popover; it depends on your brand and how you choose to communicate through motion. In your interface you might begin to notice animations that have the same purpose but entirely different feels. Take note of the ones that feel right for your brand, so that you can align the other animations with them later on.

    Audit existing animations

    Once you have a rough idea of the role animation plays in your interface and how it should feel, the next step is to standardize the existing animations. As with an interface inventory, you can conduct an inventory focused specifically on animations. Start by collecting all the existing animations; they can be captured with QuickTime or another screen-recording application. At the same time, keep a record of them in a Google Doc, Keynote, or an Excel file—whatever suits you.

    Based on the purposes you defined earlier, enter categories, and then add the animations to them as you go. As you work through the audit, you might adjust those categories or add new ones, but it helps not to have to start from a blank page.

    Example of initial categories for collecting animations in a Google Doc.

    For each animation add:

    • Effect: The effect might be difficult to describe at first (should it be “grow” or “scale,” “wiggle” or “jiggle”?). Don’t worry about finding the right words yet; just describe what you see. You can refine it later.
    • Example: This could be a screenshot of the animated element with a link to a video clip, or an embedded gif.
    • Timing and easing: Write down the values for each example, such as 2 seconds ease.
    • Properties: Write down the exact values that change, such as color or size.
    • Feel: Finally, add the feel of the animation—is it calm or energetic, sophisticated and balanced, or surprising and playful?

    After the inventory of animations at FutureLearn, we ended up with a document with about 22 animations, grouped into four categories. Here’s the state change category.

    The “State Change” page from FutureLearn’s animation audit, conducted in a Google Doc.

    Define patterns of usage

    Once you’ve collected all the animations, you can define patterns of usage, based on the purpose and feel. For example, you might notice that your emphasis animations typically feel energetic and playful, and that your state change transitions are more subtle and calm.

    If these are the tones you want to strike throughout the system, try aligning all the animations to them. To do that, take the examples that work well (i.e. achieve the purpose effectively and have the right feel) and try out their properties with other animations from the same category. You’ll end up with a handful of patterns.

    Animation patterns on FutureLearn, grouped by purpose and feel:

    • Interactive state change (calm, soft): Color, 2s ease; Opacity, in 0.3s / out 1.1s, ease; Scale, 0.4s ease
    • Emphasis (energetic, playful): Energetic pulse, 0.3s ease-in; Subtle pulse; Wiggle, 0.5s ease-in-out
    • Info reveal (certain, decisive, balanced): Slide down, 0.4s swing; Slide up, 0.7s ease; FadeInUp, 0.3s ease; Rotate, 0.3s ease

    Develop vocabulary to describe effects

    Animation effects can be hard to capture in words. As Rachel Nabors noted in “Communicating Animations,” sometimes people would start with “friendly onomatopoeias: swoosh, zoom, plonk, boom,” which can be used as a starting point to construct shared animation vocabularies.

    Some effects are common and can be named after the classic animation principles (squash and stretch, anticipation, follow-through, slow in and out) or even borrowed from Keynote (fade in, flip, slide down, etc.); others will be specific to your product.

    Vocabulary of animations in Salesforce Lightning Design System.

    Movement types in IBM Design Language.

    There might also be animation effects unique to your brand that would require a distinctive name. For example, TED’s “ripple” animation in the play button is named after the ripple effect of their intro videos.

    The ripple effect in the intro video on TED (left) mirrored in the play button interaction (right).

    Specify building blocks

    For designers and developers, it is useful to specify the precise building blocks they can mix and match to create new animations. Once you have the patterns and effects, you can extract precise values—timing, easing, and properties—and turn them into palettes. Animation palettes are similar to color swatches or a typographic scale.

    Timing

    Timing is crucial in animation. Getting the timing right is not so much about perfect technical consistency as making sure that the timing feels consistent. Two elements animated with the same speed can feel completely different if they are different sizes or travel different distances.

    The idea of “dynamic duration” in Material Design focuses on how fast something needs to move versus how long it should take to get there:

    Rather than using a single duration for all animations, adjust each duration to accommodate the distance travelled, an element’s velocity, and surface changes.

    Sarah Drasner, the author of SVG Animations, suggested that we should deal with timing in animation like we deal with headings in typography. Instead of having a single value, you’d start with a “base” and provide several incremental steps. So instead of h1, h2 and h3, you’d have t1, t2, t3.

    Depending on the scale of the project, the timing palette might be simple, or it might be more elaborate. Most of the animations on FutureLearn use a base timing of 0.4s. If this timing doesn’t feel right, most likely your object is traveling a shorter distance (in which case use “Shorter time”) or a longer distance (in which case use “Longer time”).

    • Shorter time: 0.3s: Shorter travel distance
    • Base: 0.4s: Base timing
    • Longer time: 0.6s: Longer distance traveled
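
    If your team also wants to reference these steps from code, a palette like this can be captured as a small set of named constants. The sketch below assumes a Java codebase and uses hypothetical names; the values are simply the shorter/base/longer steps listed above.

    // Hypothetical duration palette mirroring the timing steps above.
    public final class MotionDurations {

        private MotionDurations() {}

        public static final long SHORT_MS = 300; // shorter travel distance
        public static final long BASE_MS  = 400; // base timing
        public static final long LONG_MS  = 600; // longer distance traveled
    }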

    The duration guidelines for the Carbon Design System apply a similar idea, relating duration to the “magnitude of change”:

    Duration guidelines in Carbon Design System.

    Easing

    Different easing values can give an animation a distinctive feel, so it’s important to specify when to use each one. The easing palette in the Marvel Styleguide provides a handy guide to each value’s intended use, e.g. “Springy feel. Good for drawing focus.”

    Easing palette in the Marvel Styleguide.

    An easing palette can also be more generic and written as a set of guidelines, as is done in the Salesforce Lightning Design System.

    For FutureLearn, we kept it even simpler and limited it to two types of easing: ease out “for things that move” (such as scale changes and slide up/down) and linear “for things that don’t move” (such as color or opacity changes).
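
    If you need to express such an easing palette in code, each guideline can be mapped to a concrete curve. The snippet below is a rough sketch, assuming an Android implementation; the class and constant names are hypothetical, and the interpolators are only approximations of “ease out” and “linear.”

    import android.view.animation.DecelerateInterpolator;
    import android.view.animation.LinearInterpolator;
    import android.view.animation.TimeInterpolator;

    // Hypothetical easing palette based on the two FutureLearn rules above.
    public final class MotionEasing {

        private MotionEasing() {}

        // Ease out, "for things that move" (scale changes, slide up/down).
        public static final TimeInterpolator MOVING = new DecelerateInterpolator();

        // Linear, "for things that don't move" (color or opacity changes).
        public static final TimeInterpolator STATIC = new LinearInterpolator();
    }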

    Properties

    In addition to timing and easing values, it is useful to specify the properties that typically change in your animations, such as:

    • Opacity
    • Color
    • Scale
    • Distance
    • Rotation
    • Blur
    • Elevation

    Again, you can specify those properties as palettes with a base number and incremental steps to support various use cases. For example, when specifying scaling animations at FutureLearn, we noticed that the smaller an object is, the more it needs to scale in proportion to its size for the change to be visible. A palette for scaling objects reflects that:

    • Small: ×0.025: Large objects, e.g. an image thumbnail
    • Base: ×0.1: Medium objects, e.g. a button
    • Large: ×0.25: Small objects, e.g. an icon

    Although there’s no perfect precision to how these properties are set up, they provide a starting point for the team and help us reduce inconsistencies in our motion language.
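
    As a sketch, the scale palette above could be recorded in code the same way. The names below are hypothetical, and how the multipliers are applied (for example, as an increase relative to the element’s normal size) is an assumption to match to your own implementation.

    // Hypothetical scale palette: the smaller the object, the larger the
    // step needs to be for the change to remain visible.
    public final class MotionScale {

        private MotionScale() {}

        public static final float SMALL = 0.025f; // large objects, e.g. an image thumbnail
        public static final float BASE  = 0.1f;   // medium objects, e.g. a button
        public static final float LARGE = 0.25f;  // small objects, e.g. an icon
    }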

    Agree on the guiding principles

    If you have guiding principles, it’s easier to point to them when something doesn’t fit. Some of the principles may be specific to how your team approaches animation. For example:

    Guiding principles for motion in Salesforce Lightning Design System are kept short and simple.

    If your team is not yet confident with animation, it may be worth including some of the more general principles, such as “reserve it for the most important moments of the interaction” and “don’t let it get in the way of completing a task.”

    The guiding principles section can also include the rationale for using animation in your product, as well as the general feel of your animations and how it connects with your brand. For example, the IBM Design Language uses the physical movement of machines to extract the qualities it wants to convey through animation, such as precision and accuracy.

    “From the powerful strike of a printing arm to the smooth slide of a typewriter carriage, each machine movement serves a purpose and every function responded to a need.” (IBM Design Language)

    In IBM’s Design Language, the rhythmic oscillation of tape reels in motion is used in a metaphorical way to support the user’s waiting experience.

    Guiding principles can also include spatial metaphors, which can provide a helpful mental model to people trying to create animations. Google’s Material Design is a great example of how thinking of the interface as physical “material” can provide a common reference for designers and developers when thinking about motion in their applications.

    In Material Design, “Material can push other material out of the way.”

    To sum up

    When integrating animation in design systems, try viewing it in relation to three things: guiding principles, patterns of usage, and building blocks. Guiding principles provide general direction, patterns of usage specify when and how to apply the effects, and building blocks aid the creation of new animations. Even if your animations were initially created without a plan, bringing them together in a cohesive, documented system can help you update and build on what you have in an intentional and brand-supporting way.

    Further reading:
    Creating Usability with Motion: The UX in Motion Manifesto
    Web Animation Past, Present, and Future
    Designing Interface Animation
    Animation in Responsive Design


Create Chatbots on Android with IBM Watson

  • If you've ever spoken to voice-based personal assistants such as Siri or Google Now, or chatted with one of the many text-based bots active on messaging platforms such as Facebook Messenger and Kik, you probably realize how fun, intuitive, and powerful conversational user interfaces can be. However, because most natural languages are extremely complex, creating such interfaces from scratch tends to be hard. Fortunately, there's IBM Watson.

    By using the IBM Watson Conversation service, you can create AI-powered conversational user interfaces in minutes, often with just a few lines of code. In this tutorial, I'll introduce you to the service and show you how to use it in Android apps.

    Prerequisites

    To make the most of this tutorial, you'll need an IBM Bluemix account and a recent version of Android Studio.

    1. Creating a Conversation Service

    Before you can use the IBM Watson Conversation API, you must create a Conversation service on the IBM Bluemix platform and acquire login credentials for it. To do so, sign in to the Bluemix console, navigate to Services > Watson, and press the Create Watson service button. In the next screen, choose Conversation from the catalog of available services.

    In the configuration form that's displayed next, type in an appropriate name for the service and press the Create button.

    2. Creating a Conversation Workspace

    A Conversation service can work only if it has at least one Conversation workspace associated with it. For now, you can think of a workspace as a collection of rules and configuration details, which defines the capabilities and personality of your conversational UI.

    The Bluemix console has an easy-to-use tool that allows you to create and manage workspaces. To launch it, press the Launch tool button.

    In the next screen, press the Create button to create a new workspace. In the dialog that pops up, give a meaningful name to the workspace and choose a language for it.

    Once the workspace has been created, you are expected to add intents, entities, and dialog details to it.

    While intents define actions a user can perform using your conversational UI, entities define objects that are relevant to those actions. For example, in the sentence "book me a ticket from New York to Chicago", "book a ticket" would be an intent, and "New York" and "Chicago" would be entities. Dialog details define the actual responses the conversational UI generates, and how its conversations flow.

    Step 1: Create Intents

    In this tutorial, we'll be creating a very simple Android chatbot capable of performing the following actions:

    • greet the user
    • introduce itself
    • recite inspirational quotes

    Accordingly, our chatbot needs three intents.

    Press the Create New button to create the first intent. In the form that shows up, name the intent #Greeting, provide a few sample words or sentences the user might use for the intent, such as "hi" and "hello", and press the Done button.

    The best thing about the Watson Conversation service is that it intelligently trains itself using the sample user inputs you provide to the intent. Consequently, it will be able to respond to several variations of those sample inputs. For example, it will be able to correctly match words and phrases such as "howdy", "good morning", and "yo!" to the #Greeting intent.

    Press the Create New button again to create the next intent. Name it #Name, and provide the following user examples.

    Similarly, name the third intent #RequestQuote, and provide the following user examples.

    Step 2: Create a Dialog

    Our chatbot is so simple that we don't need to define any entities for it. Therefore, we can now directly start specifying how it responds to each intent we created.

    Start by going to the Dialog tab and pressing the Create button. In the next screen, you'll see that two dialog nodes are created for you automatically: one named Welcome, which greets the user, and one named Anything else, which catches inputs the bot doesn't understand.

    For now, let's leave the Anything else node as it is, and configure the Welcome node. In the dialog that pops up, type in #Greeting in the If bot recognizes field, and then add a few responses. Obviously, the more responses you add, the more human-like your chatbot will be.

    Next, create a new node for the #Name intent by pressing the Add Node button. Again, fill in the form appropriately.

    The node for the #RequestQuote intent is going to be slightly different. We won't be manually typing in a few inspirational quotes as the responses of this node because doing so would make our bot too static and uninteresting. Instead, our Android chatbot should be able to fetch quotes from an external API. Therefore, the responses of this node should be sentences that ask the user to wait while the bot searches for a new quote.

    At this point, our workspace is ready. You can test it right away by clicking the speech balloon icon. Feel free to test it with a variety of sentences to make sure that it associates the right intents with them.

    Step 3: Determine Credentials

    To be able to use the Conversation service in an Android app, you'll need its username and password. Additionally, you'll need the ID of the Conversation workspace. Therefore, go to the Deploy section and switch to the Credentials tab.

    You should now be able to see the credentials you need. After noting them all down, you can close the Bluemix console.

    3. Android Studio Project Setup

    Although it is possible to interact with the Conversation service using any Android networking library, using the Watson Java SDK is a better idea because it offers a very intuitive and high-level API. To add it to your Android Studio project, add the following compile dependency in the app module's build.gradle file:

    compile 'com.ibm.watson.developer_cloud:java-sdk:3.7.2'

    Additionally, we'll be needing the Fuel networking library to fetch inspirational quotes from a remote server, and the Design support library to be able to work with a few Material Design widgets.

    compile 'com.android.support:design:23.4.0'
    compile 'com.github.kittinunf.fuel:fuel-android:1.9.0'

    Both Fuel and the Watson Java SDK require your app to have the INTERNET permission, so don't forget to ask for it in your project's manifest file:

    <uses-permission android:name="android.permission.INTERNET"/>

    Lastly, open the res/values/strings.xml file and add the Conversation service's username and password, and the Conversation workspace's ID to it as <string> tags:

    <string name="username">1234567890-abde-12349-abdef</string>
    <string name="password">ABCD123456</string>
    <string name="workspace">abdefg1234567890-abcdef</string>

    You can now press the Sync Now button to complete the project setup.

    4. Defining a Layout

    We will be creating a text-based bot in this tutorial. Therefore, our app's layout should contain an EditText widget where users can type in their messages, and a TextView widget where the user-bot conversation can be shown. Optionally, you can place the EditText widget inside a TextInputLayout container to make sure that it follows the Material Design guidelines.

    It's also a good idea to place the TextView widget inside a ScrollView container to make sure that long conversations aren't truncated.

    <android.support.design.widget.TextInputLayout
        android:id="@+id/user_input_container"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true">

        <EditText
            android:id="@+id/user_input"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:hint="Message"
            android:imeOptions="actionDone"
            android:inputType="textShortMessage"/>

    </android.support.design.widget.TextInputLayout>

    <ScrollView
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_above="@+id/user_input_container">

        <TextView
            android:id="@+id/conversation"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:textSize="16sp"/>

    </ScrollView>

    Note that we've set the value of the EditText widget's imeOptions attribute to actionDone. This allows users to press a Done button on their virtual keyboards when they've finished typing their messages.

    5. Using the Conversation Service

    The ConversationService class of the Watson SDK has all the methods you'll need to communicate with the Conversation service. Therefore, the first thing you need to do in your Activity class is create an instance of it. Its constructor expects a version date, the service's username, and its password.

    final ConversationService myConversationService =
            new ConversationService(
                    "2017-05-26",
                    getString(R.string.username),
                    getString(R.string.password)
            );

    Next, to be able to work with the widgets present in the layout XML file, you must get references to them using the findViewById() method.

    final TextView conversation = (TextView) findViewById(R.id.conversation);
    final EditText userInput = (EditText) findViewById(R.id.user_input);

    When the users have finished typing their input messages, they will be pressing the Done button on their virtual keyboards. To be able to listen to that button-press event, you must add an OnEditorActionListener to the EditText widget.

    userInput.setOnEditorActionListener(new TextView.OnEditorActionListener() {
        @Override
        public boolean onEditorAction(TextView tv, int action, KeyEvent keyEvent) {
            if (action == EditorInfo.IME_ACTION_DONE) {
                // More code here
            }
            return false;
        }
    });

    Inside the listener, you can call the getText() method of the EditText widget to fetch the user's message.

    The TextView widget will be displaying both the messages of the user and the replies of the bot. Therefore, append the message to the TextView widget using its append() method.

    final String inputText = userInput.getText().toString();
    conversation.append(
            Html.fromHtml("<p><b>You:</b> " + inputText + "</p>")
    );

    // Optionally, clear edittext
    userInput.setText("");

    The user's message must be sent to the Conversation service wrapped in a MessageRequest object. You can create one easily using the MessageRequest.Builder class.

    MessageRequest request = new MessageRequest.Builder()
            .inputText(inputText)
            .build();

    Once the request is ready, you must pass it to the message() method of the ConversationService object, along with the workspace's ID. Finally, to actually send the message to the Conversation service, you must call the enqueue() method.

    Because the enqueue() method runs asynchronously, you will also need a ServiceCallback object to get the service's response.

    myConversationService
            .message(getString(R.string.workspace), request)
            .enqueue(new ServiceCallback<MessageResponse>() {
                @Override
                public void onResponse(MessageResponse response) {
                    // More code here
                }

                @Override
                public void onFailure(Exception e) {}
            });

    Inside the onResponse() method, you can call the getText() method of the MessageResponse object to get the Conversation service's response.

    final String outputText = response.getText().get(0);

    You can now append the response to the TextView widget again using its append() method. However, make sure you do so inside the runOnUiThread() method because you are currently on a different thread.

    runOnUiThread(new Runnable() {
        @Override
        public void run() {
            conversation.append(
                    Html.fromHtml("<p><b>Bot:</b> " + outputText + "</p>")
            );
        }
    });

    Our bot is almost ready. If you try running the app, you'll be able to get correct responses from it for the #Greeting and #Name intents. It still can't recite inspirational quotes though. Therefore, we must now add code to explicitly look for the #RequestQuote intent and generate a response manually.

    To extract the name of the detected intent from the MessageResponse object, you must call its getIntents() method, which returns a list of MessageResponse.Intent objects, pick the first item, and call its getIntent() method.

    if (response.getIntents().get(0).getIntent().endsWith("RequestQuote")) {
        // More code here
    }

    There are many websites with free APIs you can use to fetch inspirational quotes. Forismatic is one of them. Its REST API provides quotes as plain text, which you can directly use in your app.

    To make an HTTP request to the Forismatic API's URL, all you need to do is call the get() method of the Fuel class. Because the method runs asynchronously, you must handle the HTTP response by calling the responseString() method and passing a Handler object to it.

    Inside the success() method of the handler, you can simply append the quote to the TextView widget. The following code shows you how:

    String quotesURL = "https://api.forismatic.com/api/1.0/"
            + "?method=getQuote&format=text&lang=en";

    Fuel.get(quotesURL).responseString(new Handler<String>() {
        @Override
        public void success(Request request, Response response, String quote) {
            conversation.append(
                    Html.fromHtml("<p><b>Bot:</b> " + quote + "</p>")
            );
        }

        @Override
        public void failure(Request request, Response response, FuelError fuelError) {}
    });

    The bot is now complete, and will be able to generate the right responses for all the intents we added to the workspace.
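
    For reference, here is one way the snippets above could fit together inside the OnEditorActionListener. This is a sketch rather than a verbatim listing from the tutorial: it only combines the calls already shown, and it assumes it sits inside the Activity's onCreate() method, after conversation, userInput, and myConversationService have been initialized as described earlier.

    // Sketch: assembling the earlier snippets into one listener.
    userInput.setOnEditorActionListener(new TextView.OnEditorActionListener() {
        @Override
        public boolean onEditorAction(TextView tv, int action, KeyEvent keyEvent) {
            if (action == EditorInfo.IME_ACTION_DONE) {
                // Show the user's message and clear the input field.
                final String inputText = userInput.getText().toString();
                conversation.append(
                        Html.fromHtml("<p><b>You:</b> " + inputText + "</p>"));
                userInput.setText("");

                // Wrap the message and send it to the Conversation workspace.
                MessageRequest request = new MessageRequest.Builder()
                        .inputText(inputText)
                        .build();

                myConversationService
                        .message(getString(R.string.workspace), request)
                        .enqueue(new ServiceCallback<MessageResponse>() {
                            @Override
                            public void onResponse(MessageResponse response) {
                                // Show the bot's reply on the UI thread.
                                final String outputText = response.getText().get(0);
                                runOnUiThread(new Runnable() {
                                    @Override
                                    public void run() {
                                        conversation.append(Html.fromHtml(
                                                "<p><b>Bot:</b> " + outputText + "</p>"));
                                    }
                                });

                                // For the #RequestQuote intent, fetch a quote with Fuel.
                                if (response.getIntents().get(0).getIntent()
                                        .endsWith("RequestQuote")) {
                                    String quotesURL = "https://api.forismatic.com/api/1.0/"
                                            + "?method=getQuote&format=text&lang=en";
                                    Fuel.get(quotesURL).responseString(new Handler<String>() {
                                        @Override
                                        public void success(Request req, Response res,
                                                            String quote) {
                                            conversation.append(Html.fromHtml(
                                                    "<p><b>Bot:</b> " + quote + "</p>"));
                                        }

                                        @Override
                                        public void failure(Request req, Response res,
                                                            FuelError fuelError) {}
                                    });
                                }
                            }

                            @Override
                            public void onFailure(Exception e) {}
                        });
            }
            return false;
        }
    });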

    Conclusion

    Conversational user interfaces are all the rage today. They are so easy to use that everybody loves them. In this tutorial, you learned the basics of creating such interfaces on the Android platform using the IBM Watson Conversation service.

    There's a lot more the service can do. To learn more about it, you can refer to the official documentation.

    And be sure to check out some of our other posts on using machine learning for your Android apps!

