News

RxJS Observable – forkJoin with nested observables

This is really just for archive's sake, but since it took a bit of digging and I hadn't actually found the answer anywhere, I thought it was worth posting.

If you want to fire off several observables asynchronously but need to wait until they've all completed before moving on, give the static forkJoin method a look. forkJoin takes an array of observables, executes them, and gives you back an array of their results once all of them have completed.

Unfortunately, if you're using a version of RxJS prior to 5.0, nested observables will complete their work, but forkJoin will not see it without a subscriber.complete() call from the observables that wrap a nested observable. This is because RxJS does not have a complete() method on the Observable object until 5.0.0-alpha.1 (as of this writing, I've tested successfully with 5.0.0-alpha.1 and 5.0.0-beta.11). Once I was able to call complete() from the observable that was originally passed to forkJoin, the results were returned successfully.

Conclusion (because that sounded confusing): If you have observables that rely on other observables (nested), they must call “complete()” on the subscriber for forkJoin to realize the jobs are complete.
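The requirement can be sketched with a toy forkJoin. This is NOT the RxJS implementation — just a minimal, synchronous model to show why every source must signal completion:

```javascript
// Toy model of forkJoin's contract: results are only delivered once
// EVERY source observable has called complete().
function forkJoin(sources) {
  return {
    subscribe: function (onResults) {
      var results = new Array(sources.length);
      var completed = 0;
      sources.forEach(function (source, i) {
        source.subscribe({
          next: function (value) { results[i] = value; }, // keep the latest value
          complete: function () {                          // count completions
            completed += 1;
            if (completed === sources.length) onResults(results);
          }
        });
      });
    }
  };
}

// A well-behaved source: emits a value, then signals completion.
function of(value) {
  return {
    subscribe: function (observer) {
      observer.next(value);
      observer.complete(); // without this, the forkJoin above never fires
    }
  };
}

// The failure mode from this post: the nested work finishes, but
// complete() is never forwarded, so forkJoin waits forever.
function neverCompletes(value) {
  return {
    subscribe: function (observer) {
      observer.next(value);
      // missing observer.complete()
    }
  };
}
```

forkJoin([of(1), of(2)]) fires with [1, 2]; swap in neverCompletes(2) and the callback never runs — exactly the symptom of a nested observable that forgets to complete.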

See the Pen RxJS Observable – forkJoin Example by John Grden (@neoRiley) on CodePen.

Thanks to Brian Troncone for the original JSFiddle

Have a bandit day!

Measuring Text in HTML5

For a while now, I've been doing quite a bit of work in HTML5/JavaScript/CreateJS. While the work is enjoyable, getting the bounds of a text field is a problem with seemingly endless solutions.

I've read through just about every one I could find (and believe me, there are many), looked through examples, and ran many tests. After all that, I'm sharing my final solution for getting the bounds of a string:

Example call:

Fiddle: https://jsfiddle.net/neoRiley/qcphL0g4/
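As a rough sketch of the technique (browser-only; getBoundsOfText here is illustrative, not the exact code from the fiddle):

```javascript
// Browser-only sketch: measure a string's rendered bounds by inserting
// a text node into a temporary, absolutely positioned, hidden element.
function getBoundsOfText(text, fontFamily, fontSize) {
  var el = document.createElement("span");
  el.style.font = fontSize + "px " + fontFamily;
  el.style.position = "absolute";   // keep it out of the page flow
  el.style.whiteSpace = "nowrap";
  el.style.visibility = "hidden";   // rendered (so it has size) but not seen
  el.appendChild(document.createTextNode(text));
  document.body.appendChild(el);
  var bounds = { width: el.offsetWidth, height: el.offsetHeight };
  document.body.removeChild(el);
  return bounds;
}
```

The key is document.createTextNode(): the string is measured as real rendered text rather than estimated from font metrics.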

I give some credit to schickling on stackoverflow (only 6 votes?!) for the main inspiration as he aptly identified what I think is the key component: document.createTextNode()

I'd been using jQuery and wanted a solution that worked without a 3rd-party library. The only approach that reliably gave me the same bounds as jQuery is the one above. I've added debug support should you want to verify that it is indeed measuring your string correctly.

GetBoundsOfText

Have a bandit day!

Parceler: Say good-bye to all that boilerplate code

On Android, if you like typing tons of lines of boilerplate for every property in your Parcelable classes, then this article is not for you. So, go about your business, citizen.

However, if you would like to create your class with 2 tiny additions and call it a day, you’re gonna love this post!

UPDATE – thanks to the author, John Ericksen, for pointing out that Parceler does not actually create equals and hashCode methods. I have updated the examples and the post to properly reflect this.

Parceling is one technique for passing objects from one context to another in Android apps. To pass objects along, they must implement the Parcelable interface, and you're forced to implement its methods and type all of the boilerplate code in between. In many cases, your objects contain dozens of properties, and that equates to a lot of typing. To cope with this boilerplate, we follow rules at Dreamsocket like keeping properties in alphabetical order (since you must write to and read from the parcel in the same order), and I'm sure you've developed your own methods of drudging through copy/pasting properties as quickly and efficiently as possible.

While Android Studio does a great job of helping generate the necessary methods, this approach is still a massive waste of valuable time and prone to mistakes.

Introducing Parceler.

Parceler is a code generation library that generates the Android Parcelable boilerplate source code.

In a nutshell, add the @Parcel annotation to your POJO and a blank constructor (UPDATE: a blank constructor is necessary ONLY when another constructor with parameters exists – otherwise, you can drop the constructor completely), and you're in business. What's that you say?? Impossible? Take a look at a comparison between these 2 versions of the same object called "Jedi". The first version implements Parcelable along with all of its methods, and the other is a Parceler version of the same object structure:

Parcelable version (64 lines of code):
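An abbreviated sketch of the hand-rolled version (the fields here are illustrative; the original embedded example ran 64 lines):

```java
import android.os.Parcel;
import android.os.Parcelable;

public class Jedi implements Parcelable {
    private String name;
    private int age;

    public Jedi() {}

    protected Jedi(Parcel in) {
        // read order must match the write order below
        this.name = in.readString();
        this.age = in.readInt();
    }

    @Override
    public void writeToParcel(Parcel dest, int flags) {
        dest.writeString(this.name);
        dest.writeInt(this.age);
    }

    @Override
    public int describeContents() {
        return 0;
    }

    public static final Parcelable.Creator<Jedi> CREATOR = new Parcelable.Creator<Jedi>() {
        public Jedi createFromParcel(Parcel in) { return new Jedi(in); }
        public Jedi[] newArray(int size) { return new Jedi[size]; }
    };
}
```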

Parceler version (16 lines of code):
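With Parceler, the same structure collapses to roughly this (an abbreviated sketch; public fields are shown for brevity, getters/setters work too):

```java
import org.parceler.Parcel;

// One annotation, no boilerplate: Parceler generates the
// Parcelable wrapper for this class at compile time.
@Parcel
public class Jedi {
    public String name;
    public int age;
}
```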

Essentially, Parceler creates a Parcelable wrapper class for use with the Parcels.wrap() and Parcels.unwrap() static methods at runtime, while your original class is left as is.

Now, when you’re ready to use Parceler in production, create your Parcelable with Parcels.wrap() method:
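A sketch of the call (the DetailActivity class and the "jedi" extra key here are hypothetical):

```java
// Wrap the POJO in its generated Parcelable and pass it along.
Jedi jedi = new Jedi();
jedi.name = "Obi-Wan";

Intent intent = new Intent(this, DetailActivity.class);
intent.putExtra("jedi", Parcels.wrap(jedi));
startActivity(intent);
```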

and then use Parcels.unwrap() to retrieve your object:
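In the receiving activity, it might look like this (using the same hypothetical "jedi" extra key as when wrapping):

```java
// Unwrap the extra back into the original POJO type.
Jedi jedi = Parcels.unwrap(getIntent().getParcelableExtra("jedi"));
```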

Parceler also offers a level of consistency between developers. For example, a seemingly harmless choice of how to read/write booleans can be made in several different ways. This stackoverflow example shows how easily a team of developers can approach something this simple and still come up with several different implementations. Parceler eliminates these kinds of inconsistencies.

For more information on Parceler and its many features, head on over to its GitHub repo and check out the readme.

Have a bandit day!

Automated Testing for Android Development

Here at Dreamsocket, we’ve been doing Android development for years. And we’re always trying to improve our process and add to our tool chain, allowing us to deliver higher quality apps in less time. A big push we’re currently on is getting unit and UI testing integrated into our build process.

There are two types of testing for Android:

  1. Unit tests using JUnit.
  2. UI tests using Espresso.

Unit Testing with JUnit

Unit tests verify individual units of logic: does this method return what it should when it is passed specific parameters? What happens when incorrect or senseless parameters are passed?

Unit tests are not run within an Android app, so there is no Context, which means it's not really possible to test UI objects (anything that extends View). In some instances, you may be able to create a mock Context (see Mockito), create an instance of a custom UI object, and test some of its non-UI logic, but in practice, something in that class is eventually going to call some non-trivial method on the mock Context and crash.

So unit tests are really for testing data objects or classes that manipulate data objects or perform the business logic of the app. Note that you can unit test a class that accesses UI objects. You’d just need to mock those objects, so that you aren’t attempting to create actual UI objects.
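For instance, here is a rough sketch of mocking a Context with Mockito (SessionTracker is a made-up example class, not part of any library, and this assumes JUnit and Mockito are on the test classpath):

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import android.content.Context;
import org.junit.Test;

// A tiny example class whose only Context use is getPackageName(),
// so it can be unit tested with a mock -- no device required.
class SessionTracker {
    private final String id;
    SessionTracker(Context context) { this.id = context.getPackageName(); }
    String id() { return id; }
}

public class SessionTrackerTest {
    @Test
    public void trackerUsesPackageName() {
        Context context = mock(Context.class);
        when(context.getPackageName()).thenReturn("com.example.app");

        assertEquals("com.example.app", new SessionTracker(context).id());
    }
}
```

The moment SessionTracker called anything non-trivial on the Context, the mock would need stubbing for that too — which is exactly why deep UI logic belongs in Espresso tests instead.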

UI testing using Espresso

UI tests actually create a context (they actually instantiate an activity or service), thus they can instantiate and run View-based UI objects. Espresso has methods to locate specific views and perform user actions such as clicks, presses, gestures, text entry, etc. on individual components of those views and verify the state of views after these actions have been performed.

Setup

Gradle

For JUnit unit tests, you’ll need to add some “testCompile” dependencies in your app’s gradle build file. These go in the “dependencies” section of the build file.

testCompile 'junit:junit:4.12'
testCompile 'org.mockito:mockito-core:1.10.19'

Mockito is optional, but is useful for creating mock objects to use in your tests.

For Espresso UI tests, you’ll need to add some “androidTestCompile” dependencies.

androidTestCompile 'com.android.support.test.espresso:espresso-core:2.2.2'
androidTestCompile 'com.android.support:support-annotations:23.0.1'
androidTestCompile 'com.android.support.test:runner:0.5'

The app itself possibly has a dependency for the appcompat library like so.

compile 'com.android.support:appcompat-v7:23.2.1'

But this version of the appcompat library may be in conflict with the version of appcompat that Espresso is using. So if you run into an error stating something along those lines, you can force Espresso to use the same version as the app:

androidTestCompile 'com.android.support:appcompat-v7:23.2.1'

Then, in the android / defaultConfig section of the gradle build file, add this line:

testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"

This lets the app know how to run your tests.

Test Packages

The test classes for the two types of tests go in specific places.

JUnit unit tests should go in a directory named test under src. And Espresso UI tests should go in an androidTest directory under src.

So your project should look something like this:

app
└── src
    ├── androidTest
    ├── main
    └── test

Note that under app/src, there are androidTest, main, and test directories.

Test Classes

Within the test directories, you'd have your normal java/com/dreamsocket/etc. paths and eventually your test classes. In most cases, these should match the package and class name of the class you are testing, e.g., if you are testing

com.dreamsocket.widgets.video.UIVideoPlayer.java

The test class would probably be

com.dreamsocket.widgets.video.UIVideoPlayerTest.java

Within src/androidTest/java

This way, you can always find the tests that test a particular class, because they have the same package name.

JUnit Test Classes

For JUnit unit tests, you just need a very basic class. It doesn't need to extend or implement anything. Each test method should be public, take no parameters, and return void.

It’s useful to do static imports for the JUnit Assert methods and Mockito mock methods:

import static org.junit.Assert.*;
import static org.mockito.Mockito.*;

You'll be using them a lot, and now you can just write assertTrue(something) or mock(something) directly.

Test methods are annotated with @Test. These usually perform some action on an object and then make an assertion about that object’s state.

@Test
public void testDataObject() {
    Foo foo = new Foo();
    assertNotNull(foo);

    foo.setBar(99);
    assertEquals(99, foo.getBar());
}

You can also add a message as the first parameter. This can be helpful when a test fails, as the message you entered will be displayed.

@Test
public void testDataObject() {
    Foo foo = new Foo();
    assertNotNull("foo should exist.", foo);

    foo.setBar(99);
    assertEquals("bar should be equal to 99", 99, foo.getBar());
}

Obviously, those are rather useless messages, but in some cases it can be very helpful to document the intention.

You can also add methods with @Before and @After annotations. The @Before method runs before every single @Test method, and the @After method runs after every single @Test method completes. These can be used to create and destroy objects or mock objects for tests, so the objects are always in a fresh state for each test, not in some changed state from the previous test.

Espresso Test Classes

UI testing classes are a bit more complex. First of all, the class itself needs a couple of annotations:

@RunWith(AndroidJUnit4.class)
@LargeTest
public class UIVideoTest {

}

There are also a bunch of static imports that make coding UI tests easier. Here are some:

import static android.support.test.espresso.Espresso.*;
import static android.support.test.espresso.action.ViewActions.*;
import static android.support.test.espresso.assertion.ViewAssertions.*;
import static android.support.test.espresso.matcher.ViewMatchers.*;

Then you need an ActivityTestRule. This specifies which activity you are testing. For each @Test method, Espresso will launch that activity, then run the @Before methods, then run the @Test method, then the @After method, then terminate the activity. So each @Test method gets the activity in its pristine, just launched state.

@Rule
public ActivityTestRule<MainActivity> m_ActivityRule = new ActivityTestRule<>(MainActivity.class);

There is also a ServiceTestRule if you want to test services.

Then you set up @Before, @After and @Test methods the same as in JUnit.

The flow for Espresso tests is:

  1. Locate a view.
  2. Perform an action on that view.
  3. Do an assertion on that, or some other view.

For example, say you have a button that displays a particular view. You can locate the button, perform a click action on that button, and then check that the other view is now visible.

Locating views is done with onView(). You pass this a view matcher. A view matcher finds a view with specific attributes. You can almost think of it like jQuery for Android views. For example, to locate a view with a particular id:

onView(withId(R.id.play_button))

Or a view that has particular text:

onView(withText("Play"))

There are other matchers as well. It’s important to make your matcher specific enough so that it finds a single view. If your matcher finds multiple views, you will get an error.

Once you have a view, perform a view action on it, such as a click:

onView(withId(R.id.play_button))
.perform(click());

There are other types of view actions – double clicks, long presses, back button, key presses, text input, gestures, etc.

Then you would perform a view assertion. You could assert something on the same view:

onView(withId(R.id.play_button))
.perform(click())
.check(matches(withText("Pause")));

In other words, when the button is clicked, its text should change to “Pause”. So you look for a view that contains the text, “Pause” and if that exists and matches the view you just found, the test passes.

Or you can check another view at that point.

onView(withId(R.id.cc_button))
.perform(click());
onView(withId(R.id.captions))
.check(matches(isDisplayed()));

Click the closed caption button, then check if the captions view is displayed.

There’s also an onData method that is used for adapter views.

There are many other types of matchers, actions, assertions. This cheat sheet is useful.

https://google.github.io/android-testing-support-library/docs/espresso/cheatsheet/index.html

Running Tests

Once you have a test class with some test methods, you can run tests with Control-Shift-R. If your cursor is within a specific test method, it will run that one method only. If your cursor is outside of any methods, it will run the whole class.

Once you’ve run a particular test method or class, it will show up in the configuration menu at the top of Android Studio and you can rerun it by selecting that and running as usual. It’s also possible to edit the configuration to change what is run, or create new configs.

Automating Tests

You can run automated tests of both types with the command line:

./gradlew cAT

cAT is camel-case shorthand for the connectedAndroidTest task, which runs the instrumented (Espresso) tests; the plain test task runs the JUnit unit tests. If you are running UI tests, a device or emulator needs to be connected to the machine where the testing is being done. The output for the tests will be HTML documents in the project directory under

app/build/reports/androidTests

And

app/build/reports/tests

Or you can access an xml version of the test results at

app/build/test-results/

And

app/build/outputs/androidTest-results/

Which will look something like this:
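A JUnit-style XML report has roughly this shape (the class names, counts, and times here are illustrative):

```xml
<testsuite name="com.dreamsocket.widgets.FooTest" tests="3" failures="1" errors="0" time="0.012">
  <testcase name="testDataObject" classname="com.dreamsocket.widgets.FooTest" time="0.004"/>
  <testcase name="testBadInput" classname="com.dreamsocket.widgets.FooTest" time="0.003">
    <failure message="bar should be equal to 99">...</failure>
  </testcase>
</testsuite>
```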

So you can programmatically process the failures and errors attributes within an automated build system.

Links

Overall Android Testing link

http://developer.android.com/training/testing/index.html

Mockito (mock objects)

http://site.mockito.org/mockito/docs/current/org/mockito/Mockito.html

Espresso

https://google.github.io/android-testing-support-library/docs/espresso/basics/index.html

Understanding the evolution of the Apple TV

It has been a few weeks since the marketing machine we all know as Apple announced their plans for the TV. Given that space of time, I think that people can now look at the announcement a little bit more objectively and try to make true judgement calls on it. Everyone is asking, what can we expect from this product?

I think to truly have insight into any product announcement, you have to ask a few other questions first:

  • Is this what they set out to build?
  • What are their future plans?

If you answer these questions, you can begin to answer the larger question of what to expect.

As someone who builds his business around platforms like Apple’s, my mind has been guessing and evaluating the possibilities for quite some time. With the new announcement, we now have a tangible picture of what they’ve done for the Apple TV and can draw some conclusions about where it’s going. Since I have pondered the what ifs a lot longer than most, I’m going to give you a little dive into how my brain has deconstructed it all and you can take from that what you will. A forewarning, I think about this a lot. I have a lot to say, but I think it will help you draw valuable insight into Apple’s new product and how they operate.

So let's jump into our first question, "Is this what Apple set out to build?". It is important to note that products typically don't pop up overnight. They are planned well in advance and evolve over time. Inside discoveries and outside market forces tend to shape the direction. I like to think that Apple has product visions that stretch over multiple years rather than operating on quarterly reactions. Other forces affect it, but I believe that, more than other companies, they carry out very strategic efforts. Let's take a trip back in Apple's timeline to see what I mean.

Here are some key dates that we can use to potentially understand Apple’s TV initiatives.

[Timeline image: key dates in Apple's TV-related initiatives]

Each of the dates above play a role in my mind to the evolution of the product. Some were initial inspirations and others were planned initiatives around the TV itself. If you look at what happened on each of the dates and focus on how they relate to video, the timeline starts to tell a story.


The iPod and iTunes

There have been many influences over the years, but what can we point back to as the original spark? I'm going to pick the iPod and iTunes. It is common knowledge that iTunes and the iPod changed Apple and the entire music industry forever. Apple hit a sector of the entertainment industry that was falling apart and desperately needed a solution. Apple offered it, and in turn became the main broker of music entertainment. This introduced a completely new outlet to their business, selling content. This obviously opened their eyes and filled their pocketbooks. I don't think many people saw it coming, not even them. It was the beginning of a shift in their thinking.


Media Centers and the Mac Mini

Over the course of the next few years, Apple really prospered. Obviously anyone at the helm of a ship like that starts asking what other things can we apply this winning formula to. It seems like a no brainer, TV shows and movies right? They are consumable media just like music. At the time of iTunes, video content wasn’t very digital yet, but people were exploring its potential. In 2002 Microsoft introduced a Media Center Edition of XP that was a play into the space. Their OS sought to provide access to movies, TV, pictures, music and more in a “leanback” experience. This really got the conversation going about a computer for the living room that brought all media in digitally.

Fast-forward to January 22, 2005, and Apple introduces the Mac mini, an extremely compact version of their desktop computer. At the time, the market was saturated with monster computer towers that took up enormous amounts of space and had 50 different components thrown into them. This was a great move by Apple on many levels. It started to remove a lot of excess in desktops, reducing both cost and size. At the time, people had become accustomed to desktop beasts they were forced to live with, so it didn't innovate that market overnight. It was good for it, but it wasn't a game changer. However, if you look at it from a different perspective, you can see a bit more genius in it. As people began exploring "media centers" in their living rooms, the giant towers weren't accepted there. Consumers weren't going to be OK with putting some massive brick beside their TV. The Mac mini was the perfect form factor for that room. If you compare it to the modern Apple TV, the two look almost exactly the same. I'm sure a lot of the components, tooling, and manufacturing capabilities used to create the mini were able to be leveraged to produce the TV.

At this time, I believe Apple had a vision of getting into the living room, but they weren't sure what it was going to look like yet. They knew video content still hadn't made a huge shift to digital, but it was gaining a lot more traction than ever before. Flash video had been introduced in 2003 and was updated in 2005 to actually be more viable. People were starting to explore the potential of this technology, both large media companies and startups. The biggest player in the movement, YouTube, went live on February 14 of that same year. This time period provided the biggest spark for the digital video movement.


Front Row and Purchasable video content

Apple was reading all the signs, and the mini provided them a vessel to explore with early adopters. They were playing it safe, because they still needed to determine consumer behaviors around the space before they jumped into it. It was still evolving. Unlike Microsoft and Google, who are quick to get their ideas out, Apple tries to make sure they don't make a play at something until they feel the play is solid. They do explore, but are calculated in their explorations. Jumping to October of that same year, you can see this: Apple announced new software, Front Row, and introduced TV show purchases to iTunes. For those not aware of what Front Row was, it could be seen as Apple's first "media center". When it was announced, their computers started shipping with small remotes. Used with the Front Row software, the remote turned your desktop computer into a leanback experience. You were able to browse photos, listen to music, and watch videos. The introduction of TV show purchases to iTunes, along with movies the next year in September 2006, marked their desire to bring the same model they had applied to music to video. Even if it was an exploration play, it showed Apple had interest in the living room.


Apple TV (v1)

The original Front Row, remote, and iTunes purchases probably got very small usage, but I believe this taught Apple a lot. It illustrated that the living room was a different experience. Consumers weren't going to just move their desktops into that room. They weren't going to spend a fortune for that experience. They didn't need a powerful beast just to consume things. They just needed the simple things that Front Row provided on a less expensive device. It could be very focused and lean. Fine-tuning their software and hardware, on January 9, 2007, Apple brought out their answer, the original Apple TV. With its launch and for years to come, they also made a very strategic note that it was a pet project for them, an experiment. They knew the market wasn't completely there yet and they didn't want the product to be viewed as a failure. This type of marketing definitely buffered them from criticism over the years as their other products had more stellar success stories. I also truly believe that they understood how complex the TV and movie business was. Unlike the music industry, the TV and movie business wasn't in dire distress. The relationships and how it was run made for a huge hurdle as well. It has many layers, which I've mentioned before. Apple wasn't going to be able to take that market as easily as they had with music. Knowing it was eventually going to happen, their approach was genius.


The iPhone distraction

The next 8 years, I consider years full of learning and distraction. The distraction came from Apple’s second modern blockbuster, the iPhone released on June 29, 2007. In a world full of people looking at their smartphones every 5 minutes, it is hard to imagine a world that existed without them. Being in the industry, I remember the promise for years of how insane the market could be. The number of people who owned phones vs computers, the rate at which they acquired new devices, everything showed promise. It wasn’t until the delivery of the iPhone that anything ever delivered on it.

Personally, my guess is Jobs was more interested in the space of personal computing than small handhelds. He was a creator. I think what we saw the iPhone become, he had originally planned for the laptop/tablet. In a way, the phone was a distraction for him: rather than a device to create with, it was a device to communicate and consume. I tend to wonder if it went against his grain a bit, given that I feel he valued one-on-one communication.

If you back up to September 7, 2005, 2 years prior to the iPhone launch, Apple announced an attempt with Motorola to put iTunes on one of their phones. It made total sense, given the closeness in form factor and the ability to have the iPod morph into a connected device. I see the Motorola venture as another experiment. Apple was looking for what that experience might be. They knew the potential for smartphones, but had to figure out how to approach the industry first. I think the experiment showed them how underserved the market was. It probably also ate Jobs up that it was so bad.


The App Store

A year passed after the launch of the phone, and in July 2008, Apple announced the App Store on the phone. This was a monumental game changer for computers in many ways. Up to this point, software was either shrink-wrapped or purchased digitally, direct from the creator. The phone posed interesting issues of "how do you get apps on it?" and "how do you ensure they don't wreak havoc on the device?" Since iTunes and music were the original motivating factors to move to the phone, Apple took a play from its own book. They created the App Store to be iTunes for apps. Not only were they able to sandbox the things being deployed to the phone, they became the curators and brokers of what went on the device. It wasn't a new concept, but it was the first time someone was able to pull it off. With the ability to have an instant market for what you create, this brought developers in swarms to the platform. It transformed the device into something entirely new. It became a brain and game console in your pocket.

Even more so than the iPod, I don’t know that Apple saw how big this would become. Their company focus shifted again. The phone was their golden goose and would hold the company’s main focus for years. Everything that hadn’t been fully developed got pushed down in priority.


iPad

The one thing that didn't lose priority was the device that I believe Jobs originally wanted to create, the iPad. On April 3, 2010, the first version was released. Watching Jobs present it, I feel it was very evident that this was one device he had envisioned for a long time. In many ways, the iPad was just a big iPhone. Some people mocked it as such and couldn't see the need for it. What they weren't seeing was that it lent itself to consuming content that wasn't as suitable for the phone, from both a connection and form factor perspective. One of those was video. If you are going to be in a place that has wifi, sitting instead of standing, would you rather watch a movie on an iPhone or an iPad? The larger form factor also gives you more room for control. It filled a gap that the phone and laptops didn't.


Purchase vs Rental

As I mentioned, while the iPhone revolution was going on, Apple was still learning. The TV took a backseat, but I think it needed to. The market was still too complicated. Apple originally approached it with a purchasing model, applying the same logic to video content as they had to music. The problem with that approach was people don’t consume TV shows and movies in the same way they do music. Typically unless you have kids and you just throw a movie in and hit repeat, you aren’t going to watch the same thing over and over again. With music, you do. Coming from a behavior of purchasing albums and having the desire to listen to the content over and over again, people were willing to buy. They wanted the ability to own their content and take it with them. The iPhone and apps like Spotify and Pandora have since changed this behavior, but coming off of CD sales and Napster, it was what people expected.

If you look at the video market, it was dominated by rentals, VHS and later DVDs. DVD sales existed, but purchasing wasn't the dominant choice. Apple later realized this and, 3 years after introducing purchasing, announced rentals on January 15, 2008. The market for the TV was still very niche though.


AirPlay and one device to rule them all

With the massive success of the iPhone and iPad, I bet Apple's thinking shifted. Apple started asking the question, what if the iPhone is the center of everything? If I have access to all my content through that device and I carry it with me everywhere, why not make it the brain? On September 1, 2010, Apple started with another experiment: AirPlay. AirPlay is the ability to mirror or broadcast content from one device (iPhone, iPad, or Mac computer) to another device like the Apple TV. This is a powerful concept: you really only need one smart device, and anything it broadcasts to merely needs to be a receiver. Bill Gates outlined it in his book "The Road Ahead", written in 1995. I explain this point because many people don't know what it is or that it even exists. That is part of its issue: it requires multiple unified elements and an understanding of how they all tie together.

I've longed for this concept to become true. In presentations I gave back in 2005, I talked about the iPod becoming just that. Even back then, it was evident that it could become a handheld supercomputer. Google made a go at this too with Chromecast, a simple TV receiver/dongle priced under 40 dollars that was introduced on July 24, 2013. More recently there have been attempts on Android and Windows to take the extra step and make the device change its UI based on the context of the receiver. Microsoft labels it Continuum. It will come, but the complexity of that market will take time to evolve.


The complexity of the TV market

AirPlay illustrated Apple was trying to look at different approaches to the TV market. In addition, I think they were trying to figure out a much harder issue, how to enter it from a content perspective. Apple tried both purchase and rental models with content. Traditional media companies were fine with this approach since it was more of a secondary market. What I think Apple came to realize is that consumption behavior for video is all about first run and being the original distributor. This made their job much harder.

Like I mentioned in the Apple TV (v1) section, there have been huge barriers if you were going after the premium content tier (movies, TV shows). We know from 2014 FCC filings by Comcast and Time Warner that Apple had approached them about jointly developing a set-top box. The relationships between content providers and MSOs (AKA multiple-system operators like Comcast, DirecTV, etc.) are so hard to break that I believe Apple realized a strong play would be to work directly with the MSOs. The problem with that is that the MSOs knew they still had a good thing and saw what happened to the music industry. I think they may have strung Apple along. Why let them in if you are doing well and your wall is high?


TV starts to break down

If you read my article "What is happening to the entertainment industry?", you'll see that the walls that were once so high have broken down. It didn't happen overnight like the music industry, but the entertainment industry is slowly entering an initial state of distress. One of the largest catalysts of this movement is Netflix. They originally copied the cable companies' subscription model and applied it to DVDs, then transitioned that success to online distribution in 2007. One of the keys was convenience and paying a small monthly fee. Consumers were used to subscriptions; it was expected and welcomed. The cost played a huge role too. In comparison to DVD rentals or cable subscriptions, it was more approachable. Netflix could arguably be labeled the first online cable company.


What’s the cheapest way to get Netflix on TV?

As Netflix became more popular online, people began to have the desire to watch it on their TV. This fueled what I like to call the “how can I get Netflix on my TV” phase. Netflix was incredibly smart about trying to get it on anything and everything that could land them on the TV and in your living room. Whether it was a smart TV, Xbox, Apple TV, or some random device like a Boxee the public started to seek out the cheapest way to get Netflix on their TV.

In the spring of 2014, both Google and Amazon introduced versions of their OS for the TV. However, I think the real tide turned on November 19, 2014, when Amazon introduced a small dongle for the TV known as the Fire Stick that ran their TV-based OS. The kicker was that they originally sold it for only $19 to Prime members. From a cost perspective, it was a no-brainer. It was so cheap, why would you not buy one? What was surprising was that the device and OS were actually nice. For most people it entered as the cheap way to get Netflix (and Prime) on the TV, but the man behind the curtain knew it had an app store and was running Android. Jump forward to today, and a lot of content companies like HBO, Showtime and others have applications running on the device.


Is the new Apple TV what Apple wanted?

Finally, let’s talk about the new Apple TV (v2) that was announced on September 9, 2015. I think we can look at all of the points mentioned in this article and see how this product evolved. We know Apple has had an interest in the TV space all along; they knew its promise. I think they have been cautious about how they approach it, both from a device and a content perspective. They experimented with different approaches, but I think very early on they realized they needed a relationship with content providers in the same way the MSOs had. I believe they also realized that because the industry was relatively stable, that was going to be hard to accomplish. Their attempts to get an insider advantage by working directly with MSOs failed. However, the industry has also taken a significant turn. Netflix and Amazon have started to illustrate that there are approaches that work.

Another key point that we haven’t discussed is games. Games have been a staple of the living room since the days of the Atari 2600. With the iPhone, Apple owns the handheld gaming market. They have instant distribution. Why would they not take on consoles like the Wii? I suspect this was one of the other pushing points for the Apple TV.

I feel Apple really wants to be a distributor like the cable company. From a revenue perspective, it is a market they aren’t capturing like they could. I think they’ve always wanted to be there. How they approached it has changed and evolved, but their underlying goal has been consistent.

With the TV market shifting how it has, other devices seeing success, Apple’s desire to compete with consoles, and, most importantly, consumers’ awareness of and desire for interactivity in the TV experience, the time was perfect for Apple to make a play.

The product isn’t exactly what they wanted, but it holds to the essence of what they’ve been striving for. Time will tell, but I believe it is missing a big component they desire: a subscription/distribution outlet.


What are Apple’s plans?

So, if we consider that Apple’s original plans are incomplete, do we think they are still going after them? Absolutely.

Apple does hardware and OS-level software extremely well, but they tend to trail a bit behind companies like Google or Amazon, who excel at cloud-based software. Netflix’s success has come from their cloud-based subscription/distribution platform for video content. To compete and do well in this market, you have to have that. On May 28, 2014, Apple acquired Beats, which did just that. Those who only see things at a surface level might expect the acquisition was done in order to get Beats’ line of physical headphone products. I think that was just an added bonus. In buying Beats, they got multiple elements that could play critical roles in their business. The most important one was the subscription service. We have already seen this take form with Apple Music. Their once dominant position in the music industry has faded with Pandora and Spotify. The service helps them address that AND it sets them up for a future play in TV. They will have an app, just like Netflix or their own Music app, that is a subscription service. My guess is they are still trying to work thru content relationships and they want it to be done right. In addition, having the device in the market will help drive discussions around those relationships.


Talk to me like “Her”

You may be asking, what about the software on the device itself? Is that what Apple intended? For years I’ve felt that one of the hardest pieces to tackle with the TV is the interaction model. Directional navigation controls, ugh! Apple stayed the course there, with the small addition of a swipe surface for a little more control. Why do you think that is? Apple showed you why: it is voice. They understand that living room interaction is limited (outside of games). Really, you just want to find and consume something. The goal isn’t interaction as much as it is getting to the content and consuming it with as little effort as possible.

Siri has become more and more powerful over the years. It is nowhere near the level we see in the movie Her, but that reality is coming. Amazon, Google, Microsoft and Apple all understand that, for consuming information and media, the best interface is a non-tangible one. Think about how much easier and more efficient a natural language conversation can be. What’s the weather today? What was the score for last night’s Braves game? Can you find that movie with Robert Redford about baseball? This is Apple’s true play for the future of consumption-based interaction with computers. The TV environment lends itself better than any other to a voice interaction model.

The problem with basing interaction on voice is exposing content in applications that by their very nature hide it. How do you say “I want to watch Orange is the New Black” when the only place you can consume it is in the Netflix app? Unlike the web, which was indexed by Google and liberated by its search, mobile apps have been silos. Google and Apple have both approached this problem recently with their App Indexing and Universal Links solutions. We finally have indexing of applications. Just as search transformed the web, this level of content awareness will take OSs to an entirely new level.


How important are applications?

If you paid attention to the keynote and how content was structured within the OS, something else stuck out. Not only was Apple providing an easy way to jump into content from search, they curated the results in a fashion similar to how Google does on the web. To see what I mean, do a search for Big Bang Theory. Rather than simply listing the various apps that might provide the show, they created a buffer page with show details too. If you are using voice as your interaction model and your goal is just to consume content, applications start to become less important to an extent. It will be very interesting to see what impact this has.


The Apple Gaming console

My particular interest in the TV obviously is how it relates to media. As I mentioned, the other driver for Apple is games. They are the highest revenue stream in the App Store. Games and the living room have been like milk and cookies, a perfect match. It boggles my mind that Amazon and Google haven’t put more emphasis on growing this on their TV-based platforms. I know it is in their sights, but it seems they haven’t put a large focus on it yet. Obviously the big draw for them has been being the cheapest way to get Netflix on the TV, but man, there’s a huge opportunity there. Apple consumers will expect games and developers will see the potential. It is going to happen. If you jump back to Apple’s WWDC conference in the spring, you will see they have big plans. As they showed off new games being developed in Metal (their answer for taking game graphics and processing to the next level), it was obvious the games being shown were on a level that didn’t make sense for the casual expectations of handhelds. Apple is looking to take on the Xbox and PlayStation.

Should we be excited?

Apple has always been great about entering a market at the right time and using their marketing and approach to transform it. When Apple makes a play, the general public accepts it more than any other company’s attempts, because they perceive it as groundbreaking and are made more aware of it thru stellar marketing. Apple isn’t doing that much more than the others out there, but the public’s attention to it and developers’ willingness to get behind it will open this market up.

I have a feeling that it probably scares the MSOs. I would not be surprised if, as content companies put their TV Everywhere apps on the platform, you see some choice providers missing from the list. Once the general public starts to understand that they can have the same on-demand experience on their TV as they have had with their other devices, it changes everything. It exists now, but for the mass public it’s not there yet. That perception shift will make people question cable boxes. Change is inevitable.

I personally am excited. I’m not an Apple Fanboy, but a fan of progression and the ability to create great things. Apple tends to open up doors and that makes me happy. I’ve been waiting for a long time for the TV to change. In the background, it already has. It has died little by little, but there is hope.

Just like Apple, at Dreamsocket we’ve experimented thru the years. In November 2006, we worked with PlayStation prior to the PS3 launch to dream of what a TV and game experience might look like merged. When asked why the project was important to me, I used the opportunity to throw my ideas at them. I noted that, as a developer, I wanted Sony to create a rapid development model that could allow anyone to publish to their device. I also mentioned that there was no need for discs; digital distribution over the internet direct to the device made a lot more sense. I just wanted the opportunity to be in the living room. I’ve always wanted it. There is something about having your app on a big screen.

Over the course of the next 9 years, we took other projects here and there to test the waters. When the Google TV came out, we built apps for it. Getting an opportunity to help create the actual interface for a gaming device itself, we jumped on it. Exploring how a phone could interact with TV, we made a play at it.

TV has been waiting to be modernized for years. Knowing that the new Apple TV was coming out, we developed new applications for some of our media clients on the Android and Amazon Fire TV devices. They were actually strategically targeted to come out the week after the Apple announcement and a few days before the Amazon one. They haven’t released yet, but they are coming down the pipe.

Why did we do this? Knowing that Apple has gone from experimentation to going after the TV market, I knew that, unlike anyone else, they have the capability to be a catalyst for the space. I’ve hated to see TV living in legacy for years. Even though the living room is less dominant now, it still is the first and final frontier for entertainment. Most importantly, developing for the TV is fun.

If you managed to read all of this, congrats and thank you ;). I could go into much deeper levels on many different points, and I missed a lot of things that I also feel are relevant, but it gives you a peek into my brain. That peek may be right or wrong on things, but it shows the importance of trying to understand the big picture. If you can do that, you can see how things emerge and you can spot opportunity. I’m excited about the Apple TV.

The Rise of the Superhuman

Foreword
To preface my ramblings to follow, I’m in a love/hate relationship with modern technology. With every year, we become more empowered, yet we become more isolated and socially inept. I have faith though. I believe that at the core of every human exists a desire to do good and an ability to achieve greatness. Therefore, due to my faith in humanity, I have faith in technology. If advanced and used correctly, technology can make us better; it can bring us closer. There is a dark side, but we must look to lead ourselves into the light.

So where are we?
We are in the age of evolution. We are rising as humans, we are augmented. Our intelligence is no longer limited to our internal minds or our direct surroundings. We are able to bring in knowledge from social collectives, from endless repositories. This knowledge travels with us, guides us. We don’t think about it, but it exists today. How? We walk around with small gateways into super computers. We walk around with the “smart phone”. These little devices have transformed our capabilities, extended what we can do, and in short made us super human.

What do you mean super human?
If you are thinking that it’s just a phone, you are sorely mistaken. Whenever I talk to people about transformation, I always cite my move to San Francisco in the early 90s. Bear with me as I take you on a little story down memory lane. As a teenager, I lived in a very different era: the PRE-INTERNET era. There was this mystique about California growing up. It came thru magazines, movies, and word of mouth. These mediums were my gateway into the world of skateboarding. I longed for the day I could explore the world I could only imagine. My mediums were very limited; my gateway was a mere peephole. After months of saving up, my only insight into a city I had never been to was a few archaic books from a local library and a very small map to get me across the United States. YES, that is all I had to go on. I drove night and day across this great land of ours, from the small rural town of Kings Mountain, North Carolina to San Francisco, California. I didn’t know how to get around the city, where to stay, what part of town was good or bad, where to eat; I was basically clueless. I just knew one thing: I wanted to be there, I wanted to experience it. When I arrived I fell into a complete state of shock. I really didn’t know what I was going to do. I had to figure it out on the fly. Luckily for me, when put in a state of survival I can quickly figure things out (which I must say is in stark contrast to my usual dwelling on trying to make the right decisions).

So now jump forward to the present day. A few years ago I revisited my beloved city for business. This time I had my superhuman capabilities in tow. Before I even got there, I knew how the city had changed. I knew that certain areas I wouldn’t dare step foot in before now boasted some of the best restaurants. I knew exactly where to stay in relation to where I needed to be. I knew who else would be there while I was there, where they were staying, and was able to arrange places to meet that none of us had ever been to. The kicker is how it augmented my intelligence in real time. My little brain in my pocket gave me step-by-step voice instructions for how to get where I needed to be. It told me the status and location of my friends. I knew everything that I wanted to know.

The internet has expanded our knowledge. We can research to the nth degree any subject we want to know about. We can have conversations with and meet people from any place on the planet. We can sit on the floor in a small room in central Kazakhstan and work on some of the core technology components to the largest media companies in the world (YES, I’ve done that). It is really amazing how the world has changed in the past decade. We are more informed and we are more connected.

So where are we going?
We’ve augmented our intelligence. We’ve put no limit on our communication and social boundaries. Our non-physical nature has been changed forever. The next step… our physical being. I’m not talking about putting computers within us. I’m talking about using computers to watch us, to inform us, to make us better physically. Ever heard of the Fitbit? A heart rate monitor? These are very simple augmentations. We can do better, much better. I think we are about to see just how much better we can go. Wearable technology has become the in thing to go after recently. No one has hit the nail on the head. Everyone is pretty much fumbling the ball. However, everyone knows what it can be. It reminds me of pre-smartphone talk. During the years leading up to the iPhone, there was always talk of how mobile was going to change everything. Yet nothing delivered. It took someone thinking about it from the user’s perspective, molding it to that concept, and doing it right. That changed everything. We are about to see that happen again.

What in the world am I talking about?
I’ve been doing a lot of research over the past year. On top of just looking things up, I’ve tried out several apps, bought various accessories, and really tried to see what was happening as a whole in the industry. I believe it is still in its infancy, but biometric engineering is probably going to be one of the fastest growing industries in modern times. All the elements you need to really make something revolutionary exist today. They are fragmented or partial, but they exist. Biometric and motion sensors are probably the most important.

Imagine for a moment, if you will, that you could analyze your blood in real time. What if you could check glucose levels, determine oxygen levels and more, all without being physically invasive? If you could analyze all these biometric readings, you could probably in turn create a biometric signature tied specifically to a particular person. You could digitally ID them. In addition, more finite motion sensors attached to you could determine the difference between you bending over to touch your toes and bending over to deadlift a few hundred pounds. Now take those readings and tie them into an extremely intelligent system. Are you seeing it at all? YES, you could become superhuman. You could remove questions, you could get intelligent feedback to make yourself better, physically better.

If you could read all these things, you could know in real time how food affected you. Ever known someone’s kid who went nuts if they didn’t eat, but never really knew when they were going to go over that edge? You could be informed in real time. Ever exercised and felt like you weren’t improving? You just didn’t know how far to push. Did I lift enough, did I push hard enough? Ever known anyone that was on medical watch? Remove all the devices, let them go free, let them live, but let them live knowing that if something came up they weren’t alone.

Now also imagine a device that could store all this info, or at least retrieve it, and be digitally tied to you. Have you ever gotten hurt and tried to get your digital records to take to another doctor? What if it were all available on your wrist? To that note, what if you got in a car wreck but had your entire medical history with the flick of a scan? How many ID cards do you carry? Drivers license, passport, credit cards, airline ticket, rental car info, hotel info, the list goes on forever. Imagine walking into an airport with nothing in your pockets, scanning your wrist to buy something, scanning your wrist to go thru security, scanning it to check in for a flight.

This future isn’t that far off. It really just takes someone putting all the pieces together. It takes someone doing it right. It takes someone that can get the masses to adopt it. Over the past year, my research has led me to believe that Apple has been thinking about all of these things. My guess is that they may have been thinking about it right. My hope is that they have, and that they create something that can make us better, that can genuinely improve the physical nature of people’s lives. If they haven’t, OR they fall short of my visions, I hope that someone does.

Smile 🙂
Technology can be full of light or it can be dark. I’m optimistic. I’m a dreamer. Here’s to the light in all of us and all the little things that can help it shine.

Adapting Dreamsocket

As we head into the future, it always seems habit to reflect on the past. This year we silently redid our company site reflecting the past, showing the present, and opening up to the future. It was appropriate given so much has changed since our inception. What is even more interesting is that these dramatic shifts in our business happened almost overnight. However, it wasn’t that we just spun the wheel in a different direction and completely retooled and refocused when we woke up the next day. Luckily for us, we set ourselves up for a transition and it paid off.

What am I blabbering about? How does it apply to me? If you are reading this, then you have some level of interest in who we are and what we do. You either want to know how we tick or what prompts some of our success. The reason I’m blabbering is that to be successful in anything you have to be willing to adapt and either be smart enough or lucky enough to adapt to the things that will bring you the most success. We managed to do a little of both and land up in a really good position.

In our case, we originally found extreme success in fulfilling a very niche need. That need was building custom online Flash video products for large media organizations. Knowing their business, needs, systems, and being able to deliver on that has made us a valuable asset in that space. Over time that space has matured, slowed down, and shifted. As this was happening, we had already planted seeds as a company that grew more rapidly than we could ever have imagined.

The first seed was an internal project codenamed Poor Bear, which eventually became our first iOS game Bear on a Wire. It was our foray into a new language, platform, and online business. We viewed it as an awesome way to just play around with technology and see what we could learn from it. We didn’t hold it up as a golden egg or anything beyond that. iOS was an interesting platform and after a completely non technical friend called up raving about his iPhone and how incredible email was, we knew we had to do something with it. I mean if someone’s intro to email is thru an iPhone, then there is something to be said about that. So we poured our hearts into the game, working on it in spare moments and exclusively at others. A lot was learned, fun was had, and in the end we had something great.

The Bear wasn’t your runaway Angry Birds, the golden goose wasn’t sitting in our office, and we don’t have islands YET (emphasize, YET because one day we’ll be having pirate wars). An interesting thing happened though, clients were interested in what we did. You see, we weren’t the only ones wanting to build things for our pocket toys. The problem was budgets sometimes have to wait to follow interests. Everyone was becoming interested, but not everyone had shifted their dollars yet. So as the year closed out, with a few dollars left on the table we were given a few chances to play around for our clients. Even though the Bear only pulled in enough for a nice dinner every month if we were lucky, it was our key into the next room. Our game showed our clients that not only could we play in the space, but we were quite good at it.

With a key in hand, we first peeked thru the door, then someone swung it wide open. The budgets came in. We went from developing our first mobile app to dozens of iOS and Android apps overnight. As the online video space cooled, the mobile app space became red hot. That is great and all, but if you put a chicken in an oven without knowing how to cook it, no one is going to want to eat it after you’ve burnt it 5 times in a row. The quality just isn’t there. No matter what we are working on, we CARE and we put 200% effort into it. Details matter. This was no exception and helped us win a lot of trust from our clients in this space. We are viewed as partners rather than simply providers and continue to deliver new versions of flagship apps as time goes on. In addition, a lot of clients unsatisfied with their original vendors have brought us apps that we’ve been able to take over and really improve upon. Having a relationship like that is probably the best thing we could ask for, because it shows us that our work is really appreciated. It is acknowledgement that our clients can see how much we care and how much we put forth.

The moral to this story is that things are always changing and that you should always play it smart and look for new and interesting things to explore. If you play your cards right or just luck out (maybe a bit of both), new opportunities will come to you that are bigger and brighter than you would have ever imagined.

The new Dreamsocket.com reflects this. It shows a little of what we’ve traditionally done, some newer things that we’ve gotten good at, and a glimpse of future things to come. We have had the great fortune of working on just about every digital medium that we could hope to. That is what turns work into FUN for us and keeps us coming in! We’ve been very lucky. Check out our portfolio to get a small glimpse into the fun times that our clients have given us.

Bear on a Wire (previously Poor Bear) iPhone game released

Bear on a Wire Trailer from bearzo on Vimeo.

The Game

Our first iPhone game, previously code-named Poor Bear, is officially available in the App Store today under the name Bear on a Wire. Those of you who followed the progression of the game on our site (1, 2, 3, 4) know that this game didn’t start with designs, requirements, deadlines, or the promise of gold bars. Instead it was built on the premise that we could make something fun that we molded just how we wanted it. That mold shifted and turned over time. Even at the starting gate, we didn’t know what type of game we were making. The game really grew organically and took on a life of its own. I’m personally blown away by the outcome, especially considering this was Chad’s (the developer) first game and he went into it not knowing Objective-C. The design is a work of art as well. For those of you who know Trevor (the designer), you could expect nothing less. Words can’t do justice to what 1 designer and 1 developer have done with this game. It is simply amazing, and even though it is our own game, none of us can stop playing. That was the point though. We built something we loved. We hope you will too!

Support Us

We appreciate any support you can give us. For those with an iPhone: grab the game now, rate it, and review it!
APP STORE: http://itunes.com/app/bearonawire

For those wanting to get the word out, here are some links to blog, tweet, AIM, or tell someone on a subway. We will have flyers too that you can print and post on bathroom walls, telephone poles, and anywhere in eyeshot.
SHARE THE BEAR!

APP STORE: http://itunes.com/app/bearonawire
SITE: http://bearonawire.com
TWITTER: http://twitter.com/bearonawire
VIMEO: http://www.vimeo.com/6367707
YOUTUBE: http://www.youtube.com/dreamsocket

Press Release

Dreamsocket & TVM Studio are excited to announce they have just released Bear On A Wire.

URL: http://bearonawire.com/

Apple app store link: http://itunes.com/app/bearonawire

About the game:
Our green hero, Bearzo, has had it! No more performing for “THE MAN” day in and day out. What! Do you think he is some kind of dancing bear? NO… he is a high wire bear, and it’s time for him to make his great escape from the Big Top. He loves his fans and his work, but he just wants to be free and feel his scarf blow in the wind as he shreds wire with the most insane moves ever attempted… on a moped… on top of high-voltage power lines. Get ready to feel the power of the 49cc, two-stroke, single-cylinder stallion!

As you tear off on the wire, try to balance Bearzo and keep him from falling down into the 1.21 gigawatts that alternate through the wires below him (Ah, the smell of burnt bear hair). While balancing on the wire, acquire crazy mad points by using the different stunt key combinations to generate some MOPED MAYHEM (ECO..ECo..eco). Bearzo’s stunts include no hands, half twist, full twist, bear buck, back roll, front roll, jump roll, grinder, spin roll, spin buck, spin buck grinder, coat tail, coat tail kick, and the next to impossible coat tail kick spin grinder. Combine these stunts with full flips, double flips… triple flips…? Now you are just being crazy! Collect coins and rack up even more points. I know… you never saw collectable coins coming. Don’t get caught hibernating b/c it’s about to get all GRIZZLY up in here!

Get pumped for BEAR ON A WIRE.

Poor Bear Update 4: Collision Detection

I have been working on adding tricks to PoorBear over the last week. Trevor has sent us a ton of crazy animations for tricks (I will try to throw up a video preview of some of them soon). As a result, I was in desperate need of a way to generate collision verts other than plotting them by hand (yeah, I plotted and translated the verts for one animation by hand and it took about an hour). I will go over the method I used to solve this problem.


The problem

I am using the Chipmunk physics engine on PoorBear and very basic collision shapes for all objects to try and keep it as fast as possible. You can make very complex objects with Chipmunk, combining many circles and polygons, but I feel that a single convex poly will provide accurate enough collision for this particular game. So, PoorBear’s body and scooter are represented with seven verts which are depicted by the orange dots below:

Up until we decided to add tricks to the game these vertices were all that was needed to provide collision detection for PoorBear’s body and scooter. The only variation in the animation was the movement of his scarf which didn’t need any collision detection so the verts remained static. Since we want to be able to do some pretty crazy tricks, these verts are no longer sufficient because the trick animations aren’t close to the shape of these verts. For example, this is a frame from one of the tricks:

As you can see, not only is it a totally different shape, but the image is a different size and the wheels and shocks of the scooter are also included. This particular animation has ten frames, and each frame is different enough to warrant unique collision verts. We have around 15 different trick animations with as many as 40 frames each, which is why it became unrealistic to plot the verts by hand.

I browsed the internet for advice on how to solve this problem, but being new to game development I wasn’t even sure what to search for. Someone mentioned to me that I could use a single-color image to generate terrain when I was working on the level editor for PoorBear, so I started thinking about how I could implement something like that and apply it to this problem. The solution turned out to be simpler than I had imagined and has saved me a ton of time.

The solution

My solution was to open each frame in Photoshop, create a new blank layer, and draw a 1px black (could be any color) dot where I wanted each vert to be. Then simply hide the original layer and save the layer with the dots to a file. Then I wrote a little script in my scripting language of choice, Python, to grab each pixel and convert them to coordinates that I can use with the physics engine. I have since extended the script so that I can generate the verts for multiple frames and animations at once, because all the animations for PoorBear are compiled into 1024×1024 images to cut down on the number of textures being swapped. However, I will just go over the basic steps of generating verts for a single image.

Below is a sample of the previous image with several points drawn to form a loose polygon for collision detection. The points are overlaid on the image so that it is obvious what they represent, and they are drawn large for the sake of clarity; in the real source image, the red dots are only 1px.

 

Once the verts are pulled out of the image and converted to something the physics engine can understand, they will represent something like the following in the game:

 

The script

There is a great image handling module for Python called the Python Imaging Library (PIL, whose actively maintained fork is Pillow), which is required for this code to work.

First we need to open the image saved from Photoshop and pull out the pixels that were drawn. This can be done with the following code:

This code opens the image, loads the pixel data into memory, and steps through each pixel, adding the ones with a red channel value of 255 to the points list. We could also have used a list comprehension, which most likely runs more efficiently but is considerably less readable. This is what it would look like:

Now that we have the location of all the pixels we drew in Photoshop, we need to convert them to something the physics engine can understand. Getting the pixel data was very simple thanks to PIL, and at this step the points could be used with any physics engine, given the right translations. These next steps will be more and more specific to my situation (Chipmunk physics and the iPhone) but can be adjusted to most any project.

Chipmunk expects the verts to be in clockwise order and to form a convex poly. Currently, the verts are ordered by their x value. Given the image below, we need the verts in the order ABCDE but they are in the order ABECD right now.

I developed a simple algorithm which arranges the verts in the correct order; it has four basic steps:

  1. Iterate over all verts, excluding the first and last
  2. Remove the verts with a y value less than half the height of the image, saving them in a temporary list
  3. Reverse the order of the temporary list
  4. Append the temporary list onto the original list

This is the code that does that:
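A sketch of those four steps, assuming the verts arrive sorted by their x value as described earlier (the function name is an assumption):

```python
def sort_clockwise(points, image_height):
    """Reorder verts that arrive sorted by x so they walk around the
    polygon's outline: skip the first and last verts, pull the verts in
    the top half of the image out into a temporary list, reverse it,
    and append it onto the original list."""
    half = image_height / 2.0
    ordered = [points[0]]          # leftmost vert stays first
    top = []
    for point in points[1:-1]:
        if point[1] < half:        # y less than half the height: top half
            top.append(point)
        else:
            ordered.append(point)
    ordered.append(points[-1])     # rightmost vert stays in place
    top.reverse()
    return ordered + top
```

After the y-flip into the physics engine's coordinate system, this bottom-left-to-right, then top-right-to-left walk comes out clockwise.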

At this point, we have pulled the pixel data out of the original image and sorted the points in an order that the physics engine will understand. Now we just need to translate the points to the coordinate system used by the physics engine. The pixels were stored linearly in the pixels list, where the first pixel in the list represented the top-left pixel of the image and the last represented the bottom-right pixel. This can logically be thought of as a coordinate system with the origin in the top left and the positive y-axis growing downwards. Chipmunk uses the traditional coordinate system, with the origin located in the center. We just need to loop back over every point and transform each one into coordinates Chipmunk understands. This code will do that:
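The translation is a one-liner per point: shift the origin to the center of the image and flip the y-axis. A sketch (names are assumptions):

```python
def to_chipmunk(points, width, height):
    """Translate points from image coordinates (origin top-left, y growing
    downwards) to Chipmunk's (origin at the image center, y growing up)."""
    return [(x - width / 2.0, height / 2.0 - y) for (x, y) in points]
```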

Now we have the list in an order and format that Chipmunk can use. Since PoorBear runs on the iPhone, I just format this data to resemble a multidimensional C array and copy/paste it into the code for the game. There are better ways to get the data over, but copy/pasting is good enough for now.
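A sketch of that formatting step; the array name, type, and float layout here are assumptions, not the game's actual declaration:

```python
def format_c_array(points, name="verts"):
    """Render the verts as a multidimensional C array literal that can be
    copy/pasted into the game code."""
    rows = ",\n    ".join("{%.1ff, %.1ff}" % (x, y) for (x, y) in points)
    return "static const float %s[][2] = {\n    %s\n};" % (name, rows)
```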

The full script is below; I just chained each function together for simplicity.

Poor Bear Update 3: Development Progress

From our last post, you got a glimpse into Trevor’s mind and where things were headed from a design standpoint. This update shows the concrete transition of those elements into the game. The title screen and elements have been incorporated, along with stunt recognition (flips, wheelies), item collection, and finer game controls.

 

Help make Poor Happy!

As you can tell, Poor Bear’s life is starting to hype up, but we have him running in different directions. He is a little confused and we’d like your input. How? Well, we’ve played with different goals for Poor, from beat-the-buzzer to collect-and-score. What do you think would make Poor fun? Either of those? A combination? Or something else? We have tons of ideas, but would like to know what you think in terms of the fundamental game play. Please share your thoughts in the comments or with us directly!